Preface
Why This Handbook Exists
September 2020. I was working as a medical epidemiologist for the Nebraska Department of Health and Human Services when COVID-19 arrived. Like thousands of public health professionals, I spent the next year doing outbreak investigations, building contact tracing systems, analyzing case data, and supporting overwhelmed local health departments.
But I was also running genomic sequences. SARS-CoV-2 surveillance wasn’t just about case counts. It was about understanding which variants were circulating, where they came from, how they were spreading. When we identified one of the first Omicron clusters in the United States in late 2021, it drove home something I’d been thinking about: pathogen genomics is biosecurity infrastructure.
Around the same time, I was supporting the Africa CDC's genomic surveillance network: training local teams, establishing sequencing workflows, and analyzing sequence data. I watched how quickly insights from genomic surveillance translated into public health action (or didn't, when infrastructure gaps existed). Two Science publications came from that work, documenting how the pandemic unfolded across Africa through a genomic lens.
Then in 2023, I started noticing AI capabilities in biology advancing fast. AlphaFold solving protein structures. Large language models explaining laboratory protocols. Tools that made genomic surveillance more powerful were also lowering barriers to biological misuse. The AI safety community was developing biosecurity evaluation frameworks, but most lacked deep biological context. The biosecurity community was still debating gain-of-function research while foundation models were quietly changing what was possible.
Nobody was connecting these fields.
What This Is (and Isn’t)
This is:
- A practical guide pulling together classical biosecurity frameworks and emerging AI-biosecurity challenges
- Grounded in real research, published work, and evaluated capabilities
- Written from experience in pathogen genomics and public health surveillance
- Designed to be updated as the field evolves
This is not:
- Speculative threat scenarios disconnected from capabilities
- A technical ML engineering guide (biological context comes first)
- The final word on anything (the field is moving fast)
The material is structured for graduate seminars, professional development, and institutional reference. Most current biosecurity curricula do not address AI-biological convergence.
How to Use This Handbook
If you’re new to biosecurity: Start with Part I (Foundations). Read the TL;DR summaries first to gauge relevance, then dive into chapters that match your interests.
If you’re an AI safety researcher: Jump to Part IV (AI and Biosecurity). The chapter Red-Teaming AI Systems for Biosecurity Risks, on red-teaming methodologies, is particularly relevant for building evaluation frameworks.
If you’re a policymaker: Read Part II (Operational Biosecurity) for historical context, then Part V (Governance and Futures) for emerging policy challenges.
If you’re here for a specific topic: Use the search function. Each chapter stands alone.
Choose Your Path
Select the pathway that matches your role and immediate needs:
Public Health / Epidemiologists
“I work in infectious disease surveillance or pandemic preparedness”
Start here:

- What Is Biosecurity? - Core concepts
- Outbreak Detection and Surveillance - Surveillance systems
- AI for Biosecurity Defense - AI applications
Your focus: Genomic surveillance, outbreak detection, AI-enhanced early warning systems
AI Safety Researchers
“I evaluate biosecurity risks from AI/ML systems”
Start here:

- AI as a Biosecurity Risk Amplifier - Threat modeling
- LLMs and Information Hazards - LLM evaluations
- Red-Teaming AI Systems - Evaluation frameworks
Your focus: Model evaluations, red-teaming methods, assessing what AI shouldn’t reveal
Policymakers / Governance
“I develop policy frameworks for biosecurity or AI governance”
Start here:

- Executive Summary - Key findings and recommendations
- International Governance and the BWC - Classical frameworks
- Dual-Use Research of Concern - DURC governance
- Policy Frameworks for AI-Bio Convergence - Emerging governance
- The Future of Biosecurity - Scenarios and trajectories
Your focus: Regulatory frameworks, international coordination, governance gaps
Laboratory Personnel
“I work in BSL-3/BSL-4 labs or manage biosafety programs”
Start here:

- Laboratory Biosafety and Biosecurity - BSL protocols
- Dual-Use Research of Concern - DURC oversight
- Case Studies - Laboratory incidents
Your focus: Physical security, personnel reliability, incident response
Students / Career Seekers
“I want to enter the biosecurity field”
Start here:

- What Is Biosecurity? - Foundation
- Read Part I sequentially - Core concepts
- Building a Biosecurity Career - Pathways and institutions
Your focus: Academic pathways, key institutions, emerging career opportunities
Synthetic Biologists / Researchers
“I work in synthetic biology or biotechnology R&D”
Start here:

- Synthetic Biology and Democratization - Dual-use implications
- Gain-of-Function Research - GOF governance
- DURC - Dual-use oversight
Your focus: Responsible research practices, screening frameworks, governance
New to biosecurity entirely: Read Part I (Foundations) sequentially.
A Note on Dual Audiences
This handbook serves two communities that don’t always speak the same language but need the same information.
Public health and biosecurity professionals often approach AI with skepticism. They’ve seen technology hype cycles before. They know that laboratory accidents, not AI-designed pathogens, remain the demonstrated risk. When AI safety researchers talk about “biological uplift,” practitioners may hear speculation disconnected from the messy realities of pathogen work.
AI safety researchers often approach biosecurity with urgency. They see capabilities advancing rapidly. They worry that evaluations lag behind deployments. When biosecurity experts emphasize that “the barriers are still high,” AI researchers may hear complacency about a narrowing window.
Both perspectives have validity. Both deserve serious treatment: AI risk discussions grounded in biological reality, and capability advancement taken at its actual pace. Where current evidence supports concern, I say so. Where it suggests caution against overstatement, I say that too.
Readers from either community may find some sections too cautious and others too alarmist. That tension is intentional. The goal is calibrated assessment, not advocacy for either “AI is fine” or “AI is catastrophic.”
A Note on Information Hazards
This handbook addresses dual-use research and biological security risks. Consistent with responsible communication practices, technical details that could enable misuse are cited from peer-reviewed literature but not expanded upon. Readers requiring operational protocols should consult institutional biosafety committees and regulatory guidance.
The material here is educational, aimed at public health professionals, AI safety researchers, policymakers, and students. It focuses on risk frameworks, governance mechanisms, and policy analysis, not actionable step-by-step instructions for hazardous activities.
A Note on References
Almost every factual claim in this handbook has a citation. I’ve prioritized peer-reviewed research, government documents, and technical reports from established biosecurity organizations. Where evidence is limited or contested, I’ve tried to be explicit about uncertainty. Biosecurity governance often operates where perfect information doesn’t exist.
Acknowledgments
This handbook wouldn’t exist without the work of biosecurity researchers, AI safety practitioners, and public health professionals who’ve been doing this work for decades.
Thanks to colleagues at the Nebraska Department of Health and Human Services, the Centers for Disease Control and Prevention, the World Health Organization, the Africa CDC, and the Nebraska Public Health Laboratory. Thanks to state and local health departments whose work shapes what effective public health response looks like in practice.
Thanks to the CDC’s Advanced Molecular Detection (AMD) program, the SPHERES consortium, StaPH-B (State Public Health Bioinformatics), and the Association of Public Health Laboratories (APHL) for building the pathogen genomics and bioinformatics infrastructure that powers modern surveillance.
Thanks to Nextstrain for the phylogenetic tools that enable real-time genomic epidemiology worldwide.
Thanks to the National Academy of Medicine and the University of Nebraska Medical Center’s Global Center for Health Security for frameworks that shape how we approach health security and biodefense.
Thanks to the biological security research community for thoughtful frameworks on dual-use challenges, and to AI safety researchers working on biosecurity evaluations. The Center for AI Safety’s Introduction to AI Safety, Ethics, and Society course (Hendrycks, 2024) provided foundational frameworks for thinking about catastrophic risk that shaped this handbook’s approach.
And thanks to the open-source community for tools like Quarto that make projects like this possible.
Feedback
If you spot errors, have case studies to contribute, or work in areas I’ve under-covered, feedback is welcome.
Particularly valuable:
- Real-world biosecurity implementation experiences
- AI model evaluation methodologies and red-teaming results
- Corrections to technical or factual errors