This guide covers four strands: (A) a quick framing tied to the UK Professional Standards Framework (so you can map evidence to SFHEA); (B) teaching/design ideas you can run as modules or activities; (C) assessment ideas (with rubrics and anti-cheating/process suggestions); and (D) the kinds of artefacts and narrative structure that make a strong Senior Fellowship submission. Throughout, the aim is to demonstrate strategic leadership, scholarship, and mentoring (the heart of Senior Fellowship).
When writing your evidence, explicitly link activities to UKPSF dimensions:
For Senior Fellowship emphasise: strategic influence beyond a single module (mentoring colleagues, leading policy or curriculum change), evidence of sustained impact, and demonstrable leadership in developing others.
Foundations: “AI Literacy for Academics & Students” (short course/workshop)
“Designing Learning with AI” (staff CPD series)
Applied module: “AI in Practice” (project-based, discipline-specific)
Assessment-focused: “Assessing in the Age of AI” (seminar & toolkit)
Critical AI seminar series (ethics, historiography, policy)
Provide exemplary descriptors for each band (high/medium/low) in the rubric.
Make a deliberate evidence plan — collect these:
For each claim of influence/impact use this micro-structure (keep linking A/K/V):
Repeat across 3–5 major claims showing sustained leadership and influence.
(Do not quote percentages unless you have the data—replace X with real figures or descriptive terms.)
README that shows expected outputs.

A focused, practical strategy for teaching AI to biomedical students emphasizes clinical relevance, interpretability, reproducibility, and ethics. Blend conceptual teaching, hands-on data labs, and authentic assessments to prepare students for real-world biomedical AI tasks.
Week 1 — Intro & Foundations: probability, bias, overview of biomedical AI successes/failures.
Week 2 — Data Wrangling: missing data, normalization, labeling challenges in clinical datasets.
Week 3 — Supervised Learning: linear models, decision trees; biomedical performance metrics (sensitivity/specificity, ROC/PR).
Week 4 — Model Interpretation: SHAP/LIME, saliency for images, case explanations for clinicians.
Week 5 — Imaging & Signals: basics of convolutional ideas and time-series handling; pitfalls in imaging datasets.
Week 6 — NLP in Biomedicine: clinical notes, entity extraction, de-identification concerns.
Week 7 — Reproducibility & Deployment: containers, notebooks, CI for data pipelines, documentation (model cards).
Week 8 — Ethics & Final Presentations: harm analysis, bias audit, student project presentations.
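For Week 3, sensitivity and specificity can be computed directly from confusion-matrix counts. A minimal numpy sketch with toy labels (the patient data here is invented for illustration):

```python
import numpy as np

def clinical_metrics(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 6 patients, binary "deterioration" labels vs. model predictions
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
sens, spec = clinical_metrics(y_true, y_pred)
print(f"Sensitivity: {sens:.2f}, Specificity: {spec:.2f}")
```

A useful classroom discussion point: in deterioration prediction, a false negative (missed deterioration) usually carries a higher clinical cost than a false positive, so students should justify their operating threshold, not just report accuracy.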
Task: Using the provided de-identified dataset of patient vitals, build a model to predict 48-hour deterioration risk. Deliverables:
Grading rubric highlights: technical correctness (30%), reproducibility (25%), interpretability & clinical framing (25%), ethics & limitations (20%).
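A sensible starting point for the deterioration task is a threshold-style early-warning baseline before any machine learning. The column names, cut-offs, and synthetic data below are illustrative assumptions, not properties of the provided dataset:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200

# Hypothetical de-identified vitals (synthetic, for illustration only)
vitals = pd.DataFrame({
    "heart_rate": rng.normal(85, 15, n),
    "resp_rate": rng.normal(18, 4, n),
    "spo2": rng.normal(96, 2, n),
})

# Simple threshold baseline (early-warning-score style): count abnormal vitals
score = (
    (vitals["heart_rate"] > 110).astype(int)
    + (vitals["resp_rate"] > 24).astype(int)
    + (vitals["spo2"] < 92).astype(int)
)
vitals["high_risk"] = score >= 2
print(f"Flagged high-risk: {vitals['high_risk'].mean():.1%}")
```

Students can then compare their trained model against this transparent baseline, which also gives the "interpretability & clinical framing" rubric criterion something concrete to bite on.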
nbformat/Binder/Colab for reproducibility; Docker for deployment exercises.

Appendix: example rubric, sample notebook structure, and links to templates can be provided on request.
This is a compelling narrative for a Senior Fellowship (SFHEA) application. It demonstrates leadership in digital innovation, the design of inclusive learning environments, and a scholarly approach to mapping historical theory onto modern global challenges.
Below is a draft “Short Paper” and a curated reflection designed to align with the Professional Standards Framework (PSF 2023).
Author: [Your Name]
Subject: Data Visualization & Pedagogical Innovation
In 1854, Dr. John Snow’s map of the Soho cholera outbreak did more than find a broken pump; it invented a new way of seeing. By translating abstract death tolls into spatial points, Snow moved public health from superstition (miasma) to spatial evidence.
In today’s “Poly-crisis” world—defined by climate change, pandemics, and urban inequality—spatial visualization remains our most potent tool for sense-making. Whether it is tracking the spread of a respiratory virus in Wuhan or mapping “food deserts” in modern London, the ability to overlay disparate datasets onto a physical landscape allows us to identify systemic failures that statistics alone would hide.
The “Syntax Wall” is the greatest barrier for undergraduates learning data science. Students often get bogged down in Python errors before they ever reach the “Aha!” moment of discovery. AI tools (LLMs) act as a pedagogical catalyst in three distinct ways:
When writing the Fellowship application, adapt these paragraphs to address specific Dimensions of Practice:
“In designing my data visualization curriculum, I moved beyond traditional ‘click-along’ tutorials. By integrating AI-generated synthetic datasets—such as a simulated COVID-19 outbreak in Wuhan—I created a high-stakes, inquiry-based ‘scavenger hunt.’ This approach bridges historical theory (John Snow) with contemporary relevance, ensuring that students from diverse technical backgrounds can engage with complex spatial analysis without being sidelined by initial coding barriers.”
“I leverage generative AI to provide ‘just-in-time’ support for students. By providing AI-scaffolded boilerplate code for tools like Folium and Pandas, I empower students to move rapidly from data ingestion to spatial critique. This fosters a sense of ‘Digital Fluency,’ where the technology becomes a transparent medium for epidemiological storytelling rather than a hurdle. This methodology acknowledges that my students are future leaders who must interpret data rapidly in professional settings.”
“My leadership in the department involves modeling how AI can be ethically integrated into the classroom. Rather than banning AI, I use it to generate ‘noisy’ data that requires students to apply human judgment. For instance, in the ‘Wuhan Scavenger Hunt’ exercise, students must distinguish between the ‘Source’ and ‘Secondary Transmission’—a task that requires critical thinking that AI cannot yet automate. This develops their evaluative judgment, a core requirement for graduates in an AI-driven workforce.”
To further strengthen your Fellowship claim, you might want to collect Small-Scale Evaluation Data. Draft a 3-question “Student Perception Survey” to measure how much this “Scavenger Hunt” approach improved their confidence in spatial analysis.
This is an excellent expansion for your Senior Fellowship (SFHEA) application. By integrating Assessment & Feedback and Academic Tutoring, you move from “how I teach” to “how I ensure quality and support the whole student.”
Here is the reflective paper, augmented with these critical pillars of higher education practice.
In the context of the John Snow or Wuhan mapping exercises, the assessment must go beyond whether a student can “make a map.” A descriptive approach asks, “Where is the pump?”; a critical approach asks, “How does the choice of spatial parameters influence the public health narrative?”
Feedback should not be a post-mortem; it should be a roadmap. Using the Sandwich Model, we ensure that even the most rigorous critique remains encouraging and personalized.
| Layer | Component | Description |
|---|---|---|
| Top Bun | Positive Affirmation | Start with what worked (e.g., “Excellent use of the Folium library to handle the synthetic dataset”). |
| The Filling | Constructive Critique | Shift from descriptive to critical (e.g., “While the map is accurate, your analysis of ‘community spread’ versus ‘the source’ lacked depth”). |
| The Sauce | Actionable Intelligence | Explicitly tell the student how to improve (e.g., “To reach the next grade tier, try integrating a time-series element to show the evolution of the cluster”). |
| Bottom Bun | Encouraging Close | Reiterate potential (e.g., “You have a clear talent for spatial storytelling; keep pushing the boundary of your interpretations”). |
AI and Pedagogy Tip: AI can assist educators in “tone-checking” their feedback to ensure it is forward-looking and justified, helping to turn a one-sentence comment into a personalized growth plan.
As a Wellbeing Champion, the role of the tutor has evolved. In an era where students can get technical answers from AI 24/7, the human mentor must provide something AI cannot: Empathy, Privacy, and Ethical Signposting.
Will: What are the specific steps you will take this week?
If you are using this for your Senior Fellowship, here is how to map these specific sections to the Dimensions of Practice:
“My assessment strategy intentionally shifts students from descriptive to critical engagement. By providing transparent rubrics and using the Sandwich Model of feedback, I ensure that my comments are not just evaluative but provide Actionable Intelligence. This is supported by a robust moderation process that ensures consistency and quality (V4).”
“As an academic tutor, I employ the GROW framework to mentor students, particularly in navigating the ethical complexities of AI in data science. By acting as a Wellbeing Champion, I bridge the gap between academic pressure and student support, ensuring a safe, inclusive environment that respects student privacy while signposting them to broader institutional resources (V1, V2).”
“I integrate External Feedback and moderation cycles into my teaching practice. This not only ensures the reliability of my marking but also allows me to reflect on how my pedagogical tools—like the Python ‘Scavenger Hunt’—align with the evolving needs of the sector.”
Create a one-page “Student Success Guide” that explains the GROW framework and Feedback Sandwich to your students so they understand how they are being supported.
This expanded curriculum is designed for a 2-part workshop series or a short lecture course. Since your audience is biomedical scientists and clinicians, the focus remains on browser-based tools that require zero installation and zero code.
Goal: Transition staff from “AI-curious” to “AI-competent” by providing a safe, governed toolkit for research, practice, and data.
Focus: Solving “Information Overload.”
NotebookLM (by Google): Upload 20+ PDFs of your own research or a specific niche. It creates a “private brain” you can query without the AI “hallucinating” from the open web.
Focus: Reclaiming the “Administrative Burden.”
DeepL / GPT-4o: For generating patient-facing summaries in simplified language or different languages for non-English speaking patients.
Focus: “Conversation-to-Graph.”
Claude 3.5/4 (Analysis Mode): Excellent for cleaning “dirty” data (e.g., fixing date formats or inconsistent naming in a trial spreadsheet).
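The "dirty data" use case can be demonstrated without any AI tool at all, which helps staff verify what the AI is doing for them. A pandas sketch (the spreadsheet values below are invented for illustration):

```python
import pandas as pd

# Illustrative "dirty" trial spreadsheet: mixed date formats, inconsistent site names
df = pd.DataFrame({
    "visit_date": ["2024-01-05", "05/02/2024", "March 3, 2024"],
    "site": ["addenbrookes", "Addenbrooke's", "ADDENBROOKES"],
})

# Parse each date individually (dayfirst=True for UK-style day/month/year strings)
df["visit_date"] = df["visit_date"].apply(lambda s: pd.to_datetime(s, dayfirst=True))

# Normalise site naming: strip apostrophes and whitespace, then title-case
df["site"] = df["site"].str.replace("'", "").str.strip().str.title()
print(df)
```

A good workshop exercise is to have an LLM generate this clean-up code, then have participants check it line by line against the original spreadsheet, which reinforces the "human-in-the-loop" principle below.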
Crucial for UK Medical Schools and NHS Staff
Before they touch a tool, you must cover the “Human-in-the-Loop” principle:
| Time | Activity | Key Takeaway |
|---|---|---|
| 00:00 | Intro: The AI Revolution in UK Med | AI is a co-pilot, not a replacement. |
| 00:30 | Session 1: Literature & Discovery | Finding the “Needle in the Haystack.” |
| 01:15 | Coffee Break | |
| 01:30 | Session 2: Workflow & Admin | Ending “Death by Documentation.” |
| 02:15 | Session 3: Data & Visualization | From Spreadsheet to Insight. |
| 03:00 | Ethics & Compliance Panel | Staying safe within NHS/Uni guidelines. |
Provide a “Data Safety Handout” or a “Prompt Cheat Sheet” specifically tailored for these medical tools.
Lovable
Replit
Cursor
Google AI Studio
Base44
NotebookLM
claude.ai
Excel/Google Sheets :-)
Basics of unsupervised machine learning, taught with in-browser animations and widgets: visual explanations of PCA and a visualization of the Swiss-roll dataset.
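A numpy-only sketch of PCA applied to a Swiss-roll point cloud (the roll parameterisation below follows the standard construction; no sklearn required):

```python
import numpy as np

# Generate a Swiss-roll point cloud: 500 points on a rolled-up 2D sheet
rng = np.random.default_rng(1)
t = 1.5 * np.pi * (1 + 2 * rng.random(500))   # angle along the roll
X = np.column_stack([t * np.cos(t),            # x: rolled coordinate
                     21 * rng.random(500),     # y: depth of the sheet
                     t * np.sin(t)])           # z: rolled coordinate

# PCA via SVD on centred data
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)                # explained-variance ratios
X2 = Xc @ Vt[:2].T                             # projection onto first 2 PCs
print("Explained variance ratio:", np.round(explained, 3))
```

This makes the teaching point visually: PCA, being linear, flattens the roll rather than unrolling it, which motivates the later discussion of nonlinear dimensionality reduction.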
We show how to create interactive visualizations
How do we bridge the gap between abstract technical concepts and data-driven storytelling? Below is a structured exercise designed for a Python-based data visualization class.
Paper and resources co-created with students. Link forthcoming.
Interested in learning more? See the Cambridge AI Safety Hub
In this exercise, you will use Python to model a speculative future. You will generate synthetic data representing the growth of AI benefits versus AI harms, constrained by “Institutional Inertia.” Your goal is to create a compelling visualization and a 300-word narrative that explains the “Velocity Gap” to a non-technical audience.
Add one slide here
Create a narrative around AGI and superintelligence using the code below.
Open the following code in a Google Colab notebook, or run it as a standalone Python script. It uses ipywidgets, so the interactive sliders require a notebook environment (Colab or Jupyter) rather than a plain script.
```bash
python -m venv venv_viz
source venv_viz/bin/activate   # on Windows: venv_viz\Scripts\activate
pip install matplotlib numpy ipywidgets plotly
```
```python
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interact, widgets

def plot_velocity_gap(harm_growth, benefit_ceiling, inst_speed, policy_lag):
    time = np.linspace(0, 25, 250)

    # Models
    y_harm = 0.5 * np.exp(harm_growth * time)
    midpoint = 10 + policy_lag
    y_benefit = benefit_ceiling / (1 + np.exp(-inst_speed * (time - midpoint)))

    # Calculate risk score (area between curves)
    risk_score = np.trapz(np.maximum(0, y_harm - y_benefit), time)

    # Plotting
    plt.figure(figsize=(10, 6))
    plt.plot(time, y_harm, color='red', lw=2, label='Harmful AI Potential')
    plt.plot(time, y_benefit, color='blue', lw=2, label='Realized AI Benefits')

    # Fill the gap
    plt.fill_between(time, y_benefit, y_harm, where=(y_harm > y_benefit),
                     color='red', alpha=0.1, label='The Velocity Gap')

    # Formatting
    plt.title(f"AI Velocity Gap | Cumulative Risk: {risk_score:.2f}", fontsize=14)
    plt.xlabel("Years from AGI Emergence")
    plt.ylabel("Impact Magnitude")
    plt.ylim(0, min(y_harm.max() * 1.1, 300))
    plt.grid(True, linestyle='--', alpha=0.6)
    plt.legend()
    plt.show()

# Interactive sliders
interact(
    plot_velocity_gap,
    harm_growth=widgets.FloatSlider(value=0.25, min=0.1, max=0.4, step=0.01),
    benefit_ceiling=widgets.IntSlider(value=50, min=10, max=100, step=5),
    inst_speed=widgets.FloatSlider(value=0.4, min=0.1, max=1.0, step=0.05),
    policy_lag=widgets.IntSlider(value=0, min=-5, max=10, step=1),
);
```
Use the following Python script to generate your dataset. This script simulates two trajectories:
```python
import pandas as pd
import numpy as np

def generate_ai_narrative_data(years=20, seed=42):
    np.random.seed(seed)
    time = np.linspace(0, years, 100)

    # Scenario A: Exponential harm (unregulated)
    # Continuous growth rate of 0.25 (~28% per year)
    harm_trajectory = 0.5 * np.exp(0.25 * time) + np.random.normal(0, 1, 100).cumsum() * 0.2

    # Scenario B: Sluggish benefits (institutional friction)
    # Logistic growth: starts strong, but hits the 'Consensus Ceiling'
    L = 15   # Maximum realized benefit
    k = 0.4  # Growth rate
    x0 = 10  # Midpoint of adoption
    benefit_trajectory = L / (1 + np.exp(-k * (time - x0))) + np.random.normal(0, 0.2, 100)

    df = pd.DataFrame({
        'Year': 2024 + time,
        'Harmful_Potential': np.maximum(0, harm_trajectory),
        'Realized_Benefits': np.maximum(0, benefit_trajectory)
    })
    return df

# Students: Start your analysis here
df = generate_ai_narrative_data()
print(df.head())
```
Choose one of the following “Institutional Environments” to model. Adjust the parameters in the code (or manually perturb the data) to reflect your chosen story:
- The Realized_Benefits curve plateaus early (at $L=5$), while Harmful_Potential accelerates.
- The Harmful_Potential curve shows a sudden “kink” or drop, while Realized_Benefits continues its slow climb.

Your submission must include a single, publication-quality plot created with Matplotlib, Seaborn, or Plotly that adheres to the following storytelling principles:
Write a short “news from the future” article (dated 2040) based on your plot.
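The "plateaus early" environment, for example, can be modelled by lowering the ceiling parameter L in the logistic benefits curve. A self-contained sketch (benefit_curve simply re-states the formula from the generator script, exposed for scenario tweaks):

```python
import numpy as np

def benefit_curve(time, L=15, k=0.4, x0=10):
    """Logistic benefits curve with an adjustable ceiling L, rate k, midpoint x0."""
    return L / (1 + np.exp(-k * (time - x0)))

time = np.linspace(0, 20, 100)
baseline = benefit_curve(time)        # default environment (L=15)
stagnant = benefit_curve(time, L=5)   # "plateaus early" environment
print(f"Baseline peak: {baseline.max():.1f}, Stagnant peak: {stagnant.max():.1f}")
```

Overlaying both curves on one plot makes the narrative contrast immediate: the stagnant environment saturates at a third of the baseline ceiling while harms keep compounding.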
| Criteria | Excellent (5/5) | Developing (3/5) |
|---|---|---|
| Technical Execution | Clean, bug-free Python code; effective use of libraries. | Code runs but has redundant steps or poor formatting. |
| Data Storytelling | Annotations and colors guide the eye to the “Velocity Gap.” | Plot is technically correct but lacks context or narrative. |
| Insight & Narrative | The story explains why the curves diverge based on Michael Nielsen’s theories. | The story is generic and doesn’t connect to the data. |
| Aesthetics | Professional styling (no default settings), clear labels, and high contrast. | Default Matplotlib colors; overlapping text or unreadable labels. |
This is a complete Python structure designed to be copied directly into a Jupyter Notebook or Google Colab. It uses Plotly to create an interactive experience where students can hover over data points to see the “Institutional Bottlenecks” and “Unregulated Risks” at specific moments in time.
This notebook explores the “Velocity Gap” — the divergence between the exponential growth of AI risks and the linear/logistic growth of AI benefits. You will generate synthetic data, visualize it interactively, and annotate the “friction points” where human institutions struggle to keep pace.
We will generate a dataset spanning from 2024 to 2044.
```python
import pandas as pd
import numpy as np
import plotly.graph_objects as go

def generate_velocity_data(years=20):
    np.random.seed(42)
    time = np.linspace(0, years, 200)

    # 1. Exponential harm (speed of code)
    harm = 0.8 * np.exp(0.22 * time) + np.random.normal(0, 0.5, 200).cumsum() * 0.1

    # 2. Logistic benefits (institutional speed)
    # L = max benefit, k = growth rate, x0 = midpoint
    L, k, x0 = 18, 0.35, 10
    benefits = L / (1 + np.exp(-k * (time - x0))) + np.random.normal(0, 0.1, 200)

    df = pd.DataFrame({
        'Year': 2024 + time,
        'Harmful_Potential': np.maximum(0, harm),
        'Realized_Benefits': np.maximum(0, benefits),
        'Gap': np.maximum(0, harm - benefits)
    })

    # Adding 'Event' labels for interactivity
    df['Event'] = ""
    df.loc[30, 'Event'] = "First Major AI-Driven Bank Run"
    df.loc[100, 'Event'] = "UN Global Consensus Summit (Deadlocked)"
    df.loc[150, 'Event'] = "Institutional Stagnation Peak"
    return df

df = generate_velocity_data()
df.head()
```
In this step, we use graph_objects to create a dual-layered story. Hover over the lines to see the widening “Velocity Gap.”
```python
# Create the figure
fig = go.Figure()

# Add the Harmful Potential trace
fig.add_trace(go.Scatter(
    x=df['Year'], y=df['Harmful_Potential'],
    mode='lines',
    name='Harmful AI Potential',
    line=dict(color='#ef4444', width=4),
    hovertemplate='<b>Year %{x:.1f}</b><br>Harm Level: %{y:.2f}<extra></extra>'
))

# Add the Realized Benefits trace
fig.add_trace(go.Scatter(
    x=df['Year'], y=df['Realized_Benefits'],
    mode='lines',
    name='Realized AI Benefits (Institutional)',
    line=dict(color='#3b82f6', width=4),
    fill='tonexty',  # Fills the "Gap" between the two lines
    fillcolor='rgba(239, 68, 68, 0.1)',
    hovertemplate='<b>Year %{x:.1f}</b><br>Benefit Level: %{y:.2f}<extra></extra>'
))

# Add markers for specific "Historical Events"
events = df[df['Event'] != ""]
fig.add_trace(go.Scatter(
    x=events['Year'], y=events['Harmful_Potential'],
    mode='markers+text',
    name='Critical Milestones',
    text=events['Event'],
    textposition="top left",
    marker=dict(color='black', size=10, symbol='x')
))

# Update layout for storytelling
fig.update_layout(
    title={
        'text': "<b>The Velocity Gap: Why AI Risk Outpaces Policy</b><br><span style='font-size:14px; color:gray'>Exponential technical harms vs. Logistic institutional benefits</span>",
        'y': 0.95, 'x': 0.5, 'xanchor': 'center', 'yanchor': 'top'
    },
    xaxis_title="Timeline (Years)",
    yaxis_title="Impact Magnitude",
    hovermode="x unified",
    template="plotly_white",
    legend=dict(orientation="h", yanchor="bottom", y=1.02, xanchor="right", x=1),
    shapes=[
        # Highlight the "Divergence Point"
        dict(type="rect", xref="x", yref="paper",
             x0=2034, y0=0, x1=2044, y1=1,
             fillcolor="LightSalmon", opacity=0.1, layer="below", line_width=0),
    ]
)

# Add a narrative annotation
fig.add_annotation(
    x=2038, y=35,
    text="<b>The Velocity Gap</b><br>Risk grows permissionless;<br>Benefits require consensus.",
    showarrow=True, arrowhead=2,
    ax=40, ay=-40,
    bordercolor="#c7c7c7", borderwidth=2, borderpad=4, bgcolor="#ffffff", opacity=0.8
)

fig.show()
```
Modify the code above to simulate a “Policy Breakthrough” Scenario.
- Adjust the benefit_trajectory parameters so that the curve doesn’t plateau at 18, but continues to grow linearly after year 10.
- Change k (growth rate) in the logistic function to see how “faster bureaucracy” changes the outcome.
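One way to sketch the “Policy Breakthrough” variant is to add a linear term that kicks in after year 10, so benefits escape the old ceiling (the slope value is an assumption; tune it to your narrative):

```python
import numpy as np

def breakthrough_benefits(time, L=18, k=0.35, x0=10, slope=1.5):
    """Logistic benefits plus a linear 'policy breakthrough' term after year 10."""
    logistic = L / (1 + np.exp(-k * (time - x0)))
    linear_boost = np.where(time > 10, slope * (time - 10), 0.0)
    return logistic + linear_boost

time = np.linspace(0, 20, 200)
benefits = breakthrough_benefits(time)
print(f"Benefit at year 20: {benefits[-1]:.1f}")  # exceeds the old ceiling of 18
```

Swapping this in for the logistic-only curve in the notebook above shows students how a single institutional assumption reshapes the entire Velocity Gap.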