Use a GenAI app to prototype (turn PDF notes into clinical insight)
Brainstorm the end goal
Who is the audience? (non-technical people)
What does the app look like?
Features (drag and drop, export to CSV, etc.)
Be specific (do not give open-ended tasks to GenAI apps)
⚠️ NOTE
Give the AI a role, e.g. "Act as a product manager"
Google Colab
Streamlit
ngrok account and auth token
Sarvam AI API key
Optional
Python
Local files
Click on New app
Here is a deployed app
Push your code to GitHub and deploy
Set up a repository in GitHub
Or run in Google Colab
Run Streamlit using:

```shell
streamlit run script.py
```
```python
import streamlit as st
import pandas as pd
import numpy as np

st.write("Streamlit supports a wide range of data visualizations")

all_users = ["Alice", "Bob", "Charly"]
with st.container(border=True):
    users = st.multiselect("Users", all_users, default=all_users)
    rolling_average = st.toggle("Rolling average")

np.random.seed(19)
data = pd.DataFrame(np.random.randn(20, len(users)), columns=users)
if rolling_average:
    data = data.rolling(7).mean().dropna()

tab1, tab2 = st.tabs(["Chart", "Dataframe"])
tab1.line_chart(data, height=250)
tab2.dataframe(data, height=250, use_container_width=True)
```
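The rolling-average toggle above relies on pandas' rolling window; a quick standalone check (no Streamlit needed) of how it behaves:

```python
import numpy as np
import pandas as pd

# rolling(7).mean() needs 7 points before it produces a value,
# so the first 6 rows are NaN and dropna() trims them off.
s = pd.Series(np.arange(10, dtype=float))
smoothed = s.rolling(7).mean().dropna()
print(len(smoothed))      # 4 full windows from 10 points
print(smoothed.iloc[0])   # 3.0, the mean of 0..6
```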
To save your ngrok authtoken securely in Google Colab, use Colab's Secrets feature (the key icon in the left sidebar). It stores sensitive values like API keys and tokens without embedding them directly in your code, which matters for security and when sharing notebooks.
Add a new secret and give it a descriptive name, e.g. YOUR_AUTHTOKEN (any name works, but remember it for later), then paste your ngrok authtoken into the "Value" field. Once saved, you can access the secret in Python using from google.colab import userdata and then userdata.get('YOUR_AUTHTOKEN') (replacing 'YOUR_AUTHTOKEN' with the name you gave your secret).
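With the secret saved, a minimal sketch of wiring it into a tunnel for Streamlit; this assumes the pyngrok package is installed and a secret named YOUR_AUTHTOKEN exists, and it only runs inside Colab:

```python
# Runs only in Google Colab: userdata reads the Secrets panel.
from google.colab import userdata
from pyngrok import ngrok  # assumes: !pip install pyngrok

# Register the authtoken stored under the (example) name YOUR_AUTHTOKEN
ngrok.set_auth_token(userdata.get('YOUR_AUTHTOKEN'))

# Open a public tunnel to Streamlit's default port (8501)
public_url = ngrok.connect(8501)
print(public_url)
```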
Common input widgets: st.slider(), st.button(), st.text_input(), st.selectbox(), st.checkbox(), st.file_uploader()
Code from deeplearning.ai
```python
# import packages
from dotenv import load_dotenv
import openai
import streamlit as st

# load environment variables from .env file
load_dotenv()

# Initialize OpenAI client
client = openai.OpenAI()

st.title("Hello, GenAI!")
st.write("This is your first Streamlit app.")

response = client.responses.create(
    model="gpt-4o",
    input=[
        {"role": "user", "content": "Explain generative AI in one sentence."}  # Prompt
    ],
    temperature=0.7,        # A bit of creativity
    max_output_tokens=100,  # Limit response length
)

# print the response from OpenAI
st.write(response.output[0].content[0].text)
```
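load_dotenv() above looks for a .env file in the working directory; a sketch of what it would contain (keep this file out of version control):

```text
# .env — never commit this file
OPENAI_API_KEY=sk-...
```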
st.bar_chart()
st.scatter_chart(df), works with a pandas DataFrame
Use with matplotlib: st.pyplot(fig)
Use with Plotly: st.plotly_chart(fig, use_container_width=True)
Streamlit + GitHub
Create a new account on GitHub
Create a new app on Streamlit and connect it to GitHub
Create a new repository on GitHub
Upload your code to GitHub (code below)
```python
import streamlit as st
import pandas as pd
import numpy as np
import folium
from folium.plugins import HeatMap
from streamlit_folium import st_folium

st.title("Outbreak Investigator")
st.write("Adjust the settings in the sidebar, then try to identify the source of the outbreak from the map.")

# --- SIDEBAR CONTROLS ---
st.sidebar.header("Settings")
total_cases = st.sidebar.slider("Total cases", 100, 1000, 500)
cluster_pct = st.sidebar.slider("% of cases near the source", 10, 90, 70)
show_source = st.sidebar.checkbox("Reveal the true source")

# --- GENERATE SYNTHETIC DATA ---
market_lat, market_lon = 30.6195, 114.2577
cluster_count = int(total_cases * cluster_pct / 100)
noise_count = total_cases - cluster_count

np.random.seed(420)
cluster_lats = np.random.normal(market_lat, 0.005, cluster_count)
cluster_lons = np.random.normal(market_lon, 0.005, cluster_count)
noise_lats = np.random.uniform(30.50, 30.70, noise_count)
noise_lons = np.random.uniform(114.20, 114.40, noise_count)

cases = pd.DataFrame({
    'lat': np.concatenate([cluster_lats, noise_lats]),
    'lon': np.concatenate([cluster_lons, noise_lons]),
})

pois = pd.DataFrame({
    'name': ['Wuhan International Plaza', 'Huanan Seafood Market', 'Hankou Railway Station', 'Wuhan CDC'],
    'lat': [30.584, 30.6195, 30.618, 30.612],
    'lon': [114.271, 114.2577, 114.250, 114.265],
    'is_source': [False, True, False, False],
})

# --- SHOW STATS ---
st.write(f"**Total cases:** {total_cases} — **Clustered:** {cluster_count} — **Scattered:** {noise_count}")

# --- BUILD MAP ---
m = folium.Map(location=[30.61, 114.28], zoom_start=13, tiles='cartodbpositron')
HeatMap(cases[['lat', 'lon']].values.tolist(), radius=12, blur=15).add_to(m)

for _, poi in pois.iterrows():
    if poi['is_source'] and show_source:
        color = 'red'
        label = f"TRUE SOURCE: {poi['name']}"
    else:
        color = 'black'
        label = poi['name']
    folium.Marker(
        location=[poi['lat'], poi['lon']],
        popup=label,
        tooltip=label,
        icon=folium.Icon(color=color, icon='question-sign'),
    ).add_to(m)

st_folium(m, width=900, height=550)
```
requirements.txt is here:

```text
streamlit>=1.35.0
streamlit-folium>=0.20.0
folium>=0.17.0
pandas>=2.0.0
numpy>=1.26.0
```
Go to Streamlit and connect your GitHub repository
Can also deploy on Hugging Face Spaces
st.session_state (a dict-like object, not a function call; it persists values across reruns)
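A minimal counter sketch showing why session state matters: each interaction reruns the whole script, so ordinary variables reset, while st.session_state survives across reruns (run with streamlit run to try it):

```python
import streamlit as st

# Initialize once; the value survives every rerun of the script
if "count" not in st.session_state:
    st.session_state.count = 0

if st.button("Increment"):
    st.session_state.count += 1

st.write("Count:", st.session_state.count)
```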