Preface

Why This Handbook Exists

September 2020. I was working as a medical epidemiologist for the Nebraska Department of Health and Human Services when COVID-19 arrived. Like thousands of public health professionals, I spent the next year doing outbreak investigations, building contact tracing systems, analyzing case data, and supporting overwhelmed local health departments.

But I was also running genomic sequences. SARS-CoV-2 surveillance wasn’t just about case counts. It was about understanding which variants were circulating, where they came from, how they were spreading. When we identified one of the first Omicron clusters in the United States in late 2021, it drove home something I’d been thinking about: pathogen genomics is biosecurity infrastructure.

Around the same time, I was supporting the Africa CDC’s genomic surveillance network: training local teams, establishing sequencing workflows, and analyzing sequence data. Watching how quickly insights from genomic surveillance translated into public health action (or didn’t, when infrastructure gaps existed). Two Science publications came from that work, documenting how the pandemic unfolded across Africa through a genomic lens.

Then in 2023, I started noticing AI capabilities in biology advancing fast. AlphaFold solving protein structures. Large language models explaining laboratory protocols. Tools that made genomic surveillance more powerful were also lowering barriers to biological misuse. The AI safety community was developing biosecurity evaluation frameworks, but most lacked deep biological context. The biosecurity community was still debating gain-of-function research while foundation models were quietly changing what was possible.

Nobody was connecting these fields.

This handbook addresses that missing connection.

What This Is (and Isn’t)

This is:

  • A practical guide pulling together classical biosecurity frameworks and emerging AI-biosecurity challenges
  • Grounded in real research, published work, and evaluated capabilities
  • Written from experience in pathogen genomics and public health surveillance
  • Designed to be updated as the field evolves

This is not:

  • A complete biosecurity textbook (there are excellent ones already)
  • Speculative threat scenarios disconnected from capabilities
  • A technical ML engineering guide (biological context comes first)
  • The final word on anything (the field is moving fast)

How to Use This Handbook

If you’re new to biosecurity: Start with Part I (Foundations). Read the TL;DR summaries first to gauge relevance, then dive into chapters that match your interests.

If you’re an AI safety researcher: Jump to Part IV (AI and Biosecurity). The chapter on red-teaming methodologies, Red-Teaming AI Systems for Biosecurity Risks, is particularly relevant for building evaluation frameworks.

If you’re a policymaker: Read Part II (Operational Biosecurity) for historical context, then Part V (Governance and Futures) for emerging policy challenges.

If you’re here for a specific topic: Use the search function. Each chapter stands alone.

A Note on Dual Audiences

This handbook serves two communities that don’t always speak the same language.

Public health and biosecurity professionals often approach AI with skepticism. They’ve seen technology hype cycles before. They know that laboratory accidents, not AI-designed pathogens, remain the demonstrated risk. When AI safety researchers talk about “biological uplift,” practitioners may hear speculation disconnected from the messy realities of pathogen work.

AI safety researchers often approach biosecurity with urgency. They see capabilities advancing rapidly. They worry that evaluations lag behind deployments. When biosecurity experts emphasize that “the barriers are still high,” AI researchers may hear complacency about a narrowing window.

Both perspectives have validity. This handbook tries to hold them together: grounding AI risk discussions in biological reality while taking seriously the pace of capability advancement. Where current evidence supports concern, I say so. Where it suggests caution against overstatement, I say that too.

Readers from either community may find some sections too cautious and others too alarmist. That tension is intentional. The goal is calibrated assessment, not advocacy for either “AI is fine” or “AI is catastrophic.”

A Note on Information Hazards

This handbook addresses dual-use research and biological security risks. Consistent with responsible communication practices, technical details that could enable misuse are cited from peer-reviewed literature but not expanded upon. Readers requiring operational protocols should consult institutional biosafety committees and regulatory guidance.

The material here is educational, aimed at public health professionals, AI safety researchers, policymakers, and students. It focuses on risk frameworks, governance mechanisms, and policy analysis, not actionable step-by-step instructions for hazardous activities.

A Note on References

Almost every factual claim in this handbook has a citation. I’ve prioritized:

  • Peer-reviewed research from Nature, Science, The Lancet, and biosecurity-focused publications
  • Government and intergovernmental documents (CDC, WHO, BWC Implementation Support Unit)
  • Technical reports from NTI, RAND, Gryphon Scientific, and similar organizations
  • AI lab research (Anthropic, OpenAI) on biosecurity evaluations

Where evidence is limited or contested, I’ve tried to be explicit about uncertainty. Biosecurity governance often operates where perfect information doesn’t exist.

Acknowledgments

This handbook wouldn’t exist without the biosecurity researchers, AI safety practitioners, and public health professionals who’ve been doing this work for decades.

Thanks to colleagues at the Nebraska Department of Health and Human Services, the Africa CDC, the Nebraska Public Health Laboratory, and state/local health departments who taught me what real-world public health response looks like.

Thanks to the biological security research community for thoughtful frameworks on dual-use challenges, and to AI safety researchers working on biosecurity evaluations.

And thanks to the open-source community for tools like Quarto that make projects like this possible.

Feedback

This is a living document. If you spot errors, have case studies to contribute, or work in areas I’ve under-covered, feedback is welcome.

Particularly valuable:

  • Real-world biosecurity implementation experiences
  • AI model evaluation methodologies and red-teaming results
  • Corrections to technical or factual errors