Cloud Labs and Automated Biology

A biotech startup can now design experiments in Python on Monday, submit them via API to a robotic laboratory in another country on Tuesday, and receive experimental results on Wednesday without anyone physically entering a lab. This remote experiment execution model (cloud laboratories) disrupts four foundational biosecurity assumptions: that trained personnel are physically present, that institutional biosafety committees review protocols, that physical access controls limit who can conduct experiments, and that lab colleagues might notice suspicious work. When combined with AI systems capable of experimental design, these platforms could dissolve the tacit knowledge barrier that has historically protected against biological threats.

Learning Objectives
  • Explain what cloud laboratories are and how they enable remote experiment execution
  • Identify the biosecurity assumptions that cloud labs disrupt
  • Analyze the convergence risk when AI systems gain access to automated lab platforms
  • Evaluate current governance approaches and their limitations
  • Apply a risk-proportionate framework for balancing research access with security

What Are Cloud Labs? Facilities where users design experiments remotely and robotic systems execute them automatically. Major platforms include Emerald Cloud Lab and specialized biofoundries.

The Biosecurity Gap: Traditional biosecurity assumes a trained human is physically present, institutional biosafety review occurs, physical access controls exist, and lab workers might notice suspicious activity. Cloud labs disrupt all four assumptions. RAND describes them as “potential loopholes” in existing frameworks.

The Convergence Concern: Previous chapters established that:
  • LLMs can provide troubleshooting guidance (LLMs and Information Hazards)
  • BDTs (biological design tools) can design novel sequences (AI-Enabled Pathogen Design)

Cloud labs could provide the execution layer - allowing AI-designed experiments to be conducted without human wet-lab skills. This is the “tacit knowledge dissolution” scenario.

Reality Check: Automation is hard. Robots jam, reagents expire, and complex biology still requires human troubleshooting. We have time to fix governance before the technology becomes fully automated - but that time should not be wasted.

Action Needed: Establish a Cloud Lab Security Consortium, implement “Know Your Customer” standards, require human-in-the-loop for high-risk protocols, and coordinate internationally before capabilities outpace governance.

Introduction

Imagine designing a biological experiment on your laptop in one country while robots in another country execute it automatically, with results streamed back in real-time.

This is not science fiction. This is cloud laboratories - and they are already operational.

Throughout this Part, we have traced how AI affects biosecurity across multiple dimensions - from information hazards and pathogen design to defensive applications and biosurveillance.

This chapter addresses what happens when these capabilities converge with automated laboratory execution. Cloud labs represent the potential bridging of the final gap - the “tacit knowledge” barrier that has historically limited who could actually conduct biological experiments.

The Tacit Knowledge Barrier - Revisited

Throughout this handbook, we have noted that the tacit knowledge barrier - the hands-on experimental skills required to successfully conduct biological work - remains one of the most important practical barriers to misuse.

You can read every protocol in the world, but making experiments work requires practice, troubleshooting, and physical technique. This is why the biosecurity community has, until now, focused primarily on information and materials rather than laboratory access.

Cloud labs change this calculus. If experiments can be designed remotely and executed automatically, the tacit knowledge requirement may begin to dissolve.

Early evidence of AI-directed experimental iteration has already appeared. In December 2025, OpenAI demonstrated GPT-5 proposing protocol modifications, analyzing results, and refining approaches across multiple experimental rounds, achieving a 79-fold improvement in molecular cloning efficiency. Human scientists executed the physical work, but AI drove the experimental logic. Connecting such capabilities to cloud lab infrastructure would remove even that human execution requirement.

The Cloud Lab Ecosystem

What Are Cloud Labs?

Cloud laboratories are facilities where robotic systems execute biological or chemical experiments designed remotely by users. The user specifies what they want done - through a web interface, API, or specialized software - and automation handles the physical execution.

The model parallels cloud computing: instead of maintaining your own servers (wet lab), you rent access to shared infrastructure (cloud lab) and pay for what you use.

Core Capabilities:
  • Liquid handling and sample preparation
  • PCR, cloning, and molecular biology workflows
  • Cell culture and fermentation
  • Analytical chemistry and quality control
  • High-throughput screening

Major Platforms

Platform                 Focus                    Model
Emerald Cloud Lab        General life sciences    Full-service cloud lab
Culture Biosciences      Bioprocess optimization  Fermentation-as-a-service
Synthego                 CRISPR workflows         Gene editing services
University biofoundries  Academic research        Institutional access

These platforms serve legitimate and valuable purposes: enabling startups without wet lab space, improving reproducibility through standardized protocols, reducing costs through shared infrastructure, and expanding research access.

How Cloud Labs Work in Practice

A typical workflow:

  1. Design: User specifies experiment through interface or API
  2. Review: Platform may screen for hazardous materials or protocols
  3. Scheduling: Experiment enters queue for robotic execution
  4. Execution: Automated systems perform procedures
  5. Results: Data and samples returned to user

The entire process can occur without the user ever entering a laboratory or handling biological materials directly.
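The five-step workflow above can be sketched as a minimal pipeline. Every name here - `Protocol`, `review`, `run_workflow`, the restricted-materials set - is invented for illustration; no real platform exposes this API, and real screening is far more involved:

```python
from dataclasses import dataclass

# Hypothetical protocol object; real platforms define their own schemas.
@dataclass
class Protocol:
    user_id: str
    steps: list[str]
    materials: list[str]

# Materials a provider might refuse outright (placeholder names).
RESTRICTED_MATERIALS = {"restricted_agent_x", "restricted_toxin_y"}

def review(protocol: Protocol) -> bool:
    """Step 2: automated screen; flagged work goes to human review."""
    return not any(m in RESTRICTED_MATERIALS for m in protocol.materials)

def run_workflow(protocol: Protocol, queue: list[Protocol]) -> str:
    """Steps 2-5 in miniature (step 1, design, is the caller's job):
    review, schedule, 'execute', and return a result handle."""
    if not review(protocol):
        return "rejected: flagged for human review"
    queue.append(protocol)   # step 3: experiment enters the queue
    queue.pop(0)             # step 4: robotic execution (stubbed out)
    return f"results for {protocol.user_id}"  # step 5: data returned
```

The point of the sketch is structural: the only gate between a remote user and robotic execution is the `review` function, which is why so much of the governance discussion below concentrates on what that screening step should contain.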

The Biosecurity Gap

Traditional biosecurity frameworks rest on four assumptions that cloud labs disrupt:

Assumption 1: Physical Presence of Trained Personnel

Traditional model: A human with laboratory training physically performs experiments. This person has been vetted, trained, and is accountable (their institution knows who they are).

Cloud lab disruption: The user may never enter a laboratory. Their only interaction is through a computer interface. The verification that occurs is customer onboarding, not laboratory training.

Assumption 2: Institutional Biosafety Review

Traditional model: Experiments involving biological materials go through institutional biosafety committees (IBCs). The IBC reviews protocols, ensures compliance with regulations, and provides oversight.

Cloud lab disruption: Who is the “institution” when a user in Country A submits an experiment to a robotic lab in Country B? The cloud lab company may have its own review processes, but these may not be equivalent to traditional IBC oversight and may vary significantly between providers.

Assumption 3: Physical Access Controls

Traditional model: Laboratories have physical security - badge access, locked doors, restricted areas. These controls limit who can enter and what they can access.

Cloud lab disruption: Access is now mediated through software credentials - usernames, passwords, API keys. These can be shared, stolen, or misrepresented more easily than physical presence.

Assumption 4: Human Observation

Traditional model: Lab colleagues might notice suspicious activity. A lab manager might question an unusual request. Tacit social controls exist because experiments happen around other people.

Cloud lab disruption: When experiments are executed by robots and monitored remotely, there are fewer opportunities for human observation to flag concerning patterns. The “social” aspect of laboratory oversight disappears.

The “Loophole” Framing

Some biosecurity analysts describe cloud labs as a potential “loophole” - a path around traditional controls rather than through them.

This framing can be misleading. Cloud lab providers are not indifferent to biosecurity; most implement screening and verification measures. The concern is that existing regulations were not designed with this model in mind, creating ambiguity about requirements and accountability.

A RAND Corporation analysis describes cloud labs as “potential loopholes in existing chemical and biological weapons nonproliferation frameworks,” precisely because users can outsource experimental work to facilities not clearly covered by existing regulations. Similarly, SIPRI’s 2025 analysis identifies cloud labs as “new actors” that sit uneasily within traditional export control regimes.

The risk is not that cloud labs are uncontrolled, but that controls may be inconsistent across providers, and the regulatory framework has not caught up to the technology.

AI + Cloud Lab Convergence: The Concerning Scenario

The biosecurity community has increasingly focused on what happens when AI capabilities gain access to automated laboratory execution.

The Scenario

Consider a progression:

  1. Today: An AI can suggest experiments, provide troubleshooting guidance, and design biological sequences
  2. Near-future: An AI agent with cloud lab API access could design protocols, submit them, interpret results, and iterate experiments autonomously - all without human intervention

This “autonomous research agent” scenario is not purely hypothetical. Demonstrations of AI agents conducting chemistry experiments have already been published. Extending this to cloud biology laboratories is technically straightforward.

What Would Need to Happen

For the concerning scenario to materialize:

  1. API Access: The AI would need credentials to submit work to a cloud lab
  2. Protocol Capability: The AI would need to generate valid experimental protocols
  3. Iteration Capability: The AI would need to interpret results and design follow-up experiments
  4. Minimal Human Oversight: The human “owner” would need to be absent, negligent, or complicit

Current cloud labs require human accounts and some level of verification. But as AI agents become more capable, the distinction between “human using AI assistance” and “AI acting autonomously” may become harder to detect.

The “Google Plus Cloud Lab” Concern

In LLMs and Information Hazards, we described the current era as “Google Plus” - LLMs are essentially better search engines that synthesize information more efficiently.

The day an LLM is directly connected to a cloud laboratory - a robotic lab controllable over the internet - is the day the tacit knowledge barrier may begin to dissolve.

This is not a prediction that disaster follows immediately. But it represents a qualitative shift in the threat landscape that warrants proactive governance rather than reactive response.

Near-Term Realistic Concerns

Before the fully autonomous scenario, more immediate concerns include:

Novice uplift: Cloud labs could enable people with theoretical knowledge but no lab skills to conduct experiments, reducing the practical barrier to entry.

Distributed bad actors: A threat actor could submit seemingly innocuous work from multiple accounts, with concerning elements only apparent in aggregate.

Screening evasion: Novel AI-designed sequences (per AI-Enabled Pathogen Design) that evade traditional screening could be synthesized through cloud lab platforms if those platforms rely on the same screening approaches as DNA synthesis providers.

What We Know vs. What Remains Uncertain

Demonstrated (supported by published evidence):

  • Cloud laboratories are operational and commercially available (Emerald Cloud Lab and others)
  • Remote users can design and execute experiments without physical lab access
  • Current platforms implement customer verification and protocol screening
  • Automation remains imperfect - complex biology still requires human troubleshooting
  • No documented cases of cloud lab biosecurity incidents to date

Theoretical (plausible but not yet demonstrated):

  • AI agents autonomously using cloud labs to execute dangerous experiments
  • Cloud labs substantially lowering barriers for novice threat actors
  • Distributed attacks across multiple accounts evading aggregate detection
  • AI-designed sequences evading cloud lab screening systems

Unknown (insufficient evidence to assess):

  • How quickly automation will mature to reduce human oversight requirements
  • Whether existing screening approaches are adequate for emerging threats
  • The effectiveness of current Know Your Customer protocols against sophisticated adversaries
  • How governance frameworks will adapt as capabilities advance

The window for proactive governance exists now - before capabilities become fully integrated. Current limitations provide time to establish appropriate safeguards.

Current Governance and Gaps

How Cloud Labs Screen Today

Responsible cloud lab providers implement various controls:

Customer verification:
  • Identity verification during account creation
  • Institutional affiliation checks for certain capabilities
  • Export control compliance (ITAR, EAR) where applicable

Protocol review:
  • Automated screening for select agents and regulated materials
  • Human review for flagged protocols
  • Restrictions on certain experiment types

Activity monitoring:
  • Logging of all submitted work
  • Anomaly detection for unusual patterns
  • Audit trails for regulatory compliance

Material controls:
  • DNA synthesis screening (for labs that synthesize DNA)
  • Sourcing restrictions for biological materials
  • No handling of select agents or Tier 1 pathogens

Regulatory Ambiguity

The challenge is that existing regulations were designed for traditional laboratory settings:

FSAP (Federal Select Agent Program): Regulates possession and use of select agents but focuses on physical facilities and registered entities. Cloud labs that do not handle select agents may fall outside this framework.

IBC Requirements: NIH Guidelines require IBC review for certain research, but applicability to cloud lab work may be unclear, especially for non-NIH-funded users.

Export Controls: ITAR and EAR apply to certain biological materials and technologies, but enforcement for remote access services across jurisdictions is complex.

State and Local Regulations: Laboratory regulations vary by jurisdiction, but cloud labs may operate in different jurisdictions than their users.

Industry Self-Governance Efforts

The cloud lab industry recognizes these challenges. Trade associations and individual companies have developed voluntary guidelines, including:

  • Customer screening procedures
  • Protocol review processes
  • Information sharing on concerning requests
  • Engagement with government on regulatory frameworks

Whether voluntary measures are sufficient, or whether formal regulation is needed, remains an open question. RAND has called for a “Cloud Lab Security Consortium” - modeled on the International Gene Synthesis Consortium (IGSC) - to standardize screening across providers and share threat intelligence.

Proposed Safeguards

Several frameworks have been proposed for managing cloud lab biosecurity risks:

Enhanced Screening

Protocol-level review: Screen not just DNA sequences but experimental protocols for concerning combinations or outcomes.

Aggregate monitoring: Look for patterns across experiments from the same user that might indicate stepwise progression toward concerning capabilities.
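A minimal sketch of the aggregate-monitoring idea: track the distinct risk categories a screener assigns across a user's submissions, and flag the account once the cumulative set crosses a threshold, even though no single submission was concerning on its own. The category labels and threshold are invented for illustration:

```python
from collections import defaultdict

# Illustrative threshold: flag once a user has touched this many
# distinct risk categories across all of their submissions.
STEPWISE_THRESHOLD = 3

def flag_aggregate(history, user, categories):
    """Record one submission's risk categories for `user`; return True
    once the user's *cumulative* distinct categories cross the threshold."""
    history[user] |= set(categories)
    return len(history[user]) >= STEPWISE_THRESHOLD

history = defaultdict(set)
flag_aggregate(history, "u1", {"category_a"})   # benign in isolation
flag_aggregate(history, "u1", {"category_b"})   # still benign in isolation
flag_aggregate(history, "u1", {"category_c"})   # pattern now flagged
```

The design choice worth noting: the state lives per-account and accumulates indefinitely, which is exactly what per-submission screening lacks - and also why the distributed-actor concern (splitting work across accounts) defeats this check unless providers share signals.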

AI-assisted screening: Use AI to identify concerning experimental designs, not just sequence matches.

Human-in-the-Loop Requirements

Verification for high-risk work: Require video verification, institutional endorsement, or other enhanced authentication for experiments above certain risk thresholds.

Mandatory review points: Certain protocol types trigger human review before execution proceeds.

Rate limiting: Constrain how quickly new users can access sensitive capabilities.
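The rate-limiting idea reduces to a small policy function. The specific numbers (30 days, 2 per week) are invented for this sketch; any real policy would be set by the provider:

```python
# Illustrative policy constants - not any real platform's values.
MIN_ACCOUNT_AGE_DAYS = 30
MAX_SENSITIVE_PER_WEEK = 2

def allow_sensitive(account_age_days: float, sensitive_this_week: int) -> bool:
    """Gate access to sensitive capabilities on account age and a
    weekly quota, so a brand-new account cannot immediately run
    high-risk work at volume."""
    if account_age_days < MIN_ACCOUNT_AGE_DAYS:
        return False
    return sensitive_this_week < MAX_SENSITIVE_PER_WEEK
```

Even a check this simple changes the economics of abuse: it forces an adversary to maintain aged, verified accounts, which are exactly the accounts that KYC and aggregate monitoring can see.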

Technical Controls

API restrictions: Limit automated submissions for new accounts or sensitive experiment types.

Anomaly detection: Flag unusual patterns of usage for human review.

Audit trails: Maintain detailed logs for regulatory and law enforcement access.
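One common way to make such logs trustworthy is hash chaining: each entry's hash covers the previous entry's hash, so later tampering breaks the chain. A minimal sketch (the event fields are illustrative):

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event whose hash covers both the event itself and the
    previous entry's hash, forming a tamper-evident chain."""
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(event, sort_keys=True) + prev
    log.append({"event": event,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list) -> bool:
    """Recompute every hash; any edited entry invalidates the chain."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True) + prev
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"user": "acct-001", "action": "submit", "protocol": "pcr-01"})
append_entry(log, {"user": "acct-001", "action": "execute", "protocol": "pcr-01"})
```

After these two appends, `verify(log)` passes; editing any earlier event makes it fail, which is the property regulators and investigators need from an audit trail.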

Governance Frameworks

Clear regulatory jurisdiction: Establish which agency has authority over cloud lab biosecurity.

Provider standards: Develop industry-wide minimum screening requirements.

International coordination: Work with international partners to prevent jurisdiction shopping.

“Know Your Customer” (KYC) Standards

Just as banks must verify client identity to prevent money laundering, proposed KYC standards would require cloud labs to verify the identity and intent of their users. The financial-sector parallel suggests:

  • Verified institutional affiliation
  • No anonymous accounts
  • Clear audit trails for all transactions
  • Shared “no-fly lists” of banned users across providers
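The four elements above compose naturally into an onboarding gate. Everything in this sketch is hypothetical - real KYC involves document checks, institutional contacts, and export-control databases, not set lookups:

```python
from typing import Optional, Tuple

# Placeholder data - a real deny list would be shared across providers,
# and institution verification would be an external process.
SHARED_DENY_LIST = {"acct-banned-007"}
VERIFIED_INSTITUTIONS = {"example-university", "example-biotech"}

def kyc_check(account_id: str, institution: Optional[str]) -> Tuple[bool, str]:
    """Apply the KYC elements in order: shared deny list, no anonymous
    accounts, verified institutional affiliation. (Audit trails, the
    fourth element, are a logging concern handled elsewhere.)"""
    if account_id in SHARED_DENY_LIST:
        return False, "on shared deny list"
    if institution is None:
        return False, "anonymous accounts not permitted"
    if institution not in VERIFIED_INSTITUTIONS:
        return False, "institution not verified"
    return True, "verified"
```

Returning a reason string alongside the decision matters in practice: the rejection reason is what feeds the audit trail and cross-provider information sharing.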

Checklist: Evaluating Cloud Lab Biosecurity

If you are reviewing a cloud lab platform (as a funder, regulator, or potential user), ask:

1. Customer Verification
  • What identity verification occurs during onboarding?
  • Is institutional affiliation required for certain capabilities?
  • How are export control requirements addressed?

2. Protocol Screening
  • What screening occurs before experiment execution?
  • Is DNA synthesis screening applied?
  • Are protocols reviewed for dual-use concerns?

3. Activity Monitoring
  • What logs are maintained?
  • Is anomaly detection implemented?
  • How are concerning patterns reported?

4. Governance
  • Which regulations apply to this platform?
  • What voluntary standards are followed?
  • How is the platform engaging with regulators?

Benefits and Legitimate Uses

It is important to emphasize: cloud labs are not primarily a biosecurity problem. They offer genuine value for legitimate science.

Why Cloud Labs Matter for Science

Access expansion: Researchers at institutions without wet lab space can conduct experimental work.

Reproducibility: Standardized, robotic execution can reduce variability between experiments.

Cost reduction: Shared infrastructure reduces the capital investment needed for experimental research.

Speed: Parallel execution and optimized scheduling can accelerate research timelines.

Training: Some platforms are used for education, allowing students to “conduct” experiments remotely.

Who Uses Cloud Labs Legitimately

  • Biotech startups without dedicated lab space
  • Academic researchers accessing specialized equipment
  • Pharmaceutical companies for high-throughput screening
  • Synthetic biology teams for standardized DNA assembly
  • Educational institutions for remote learning

Balancing Access and Security

The goal should not be to eliminate cloud labs but to implement risk-proportionate controls. The framework should:

  • Preserve access for legitimate research
  • Screen effectively for concerning requests
  • Monitor actively for misuse patterns
  • Respond rapidly when concerns arise

This is analogous to the approach for DNA synthesis: not to ban the technology, but to implement screening that catches concerning orders while allowing the vast majority of legitimate work to proceed.

International Dimensions

Cloud labs operate globally, creating coordination challenges:

Multi-jurisdictional operation: A company may be headquartered in one country, operate facilities in another, and serve customers worldwide.

Regulatory arbitrage: If one jurisdiction implements strict controls while another does not, users might shop for the least restrictive option.

Export control complexity: The remote nature of cloud labs complicates traditional export control enforcement.

Dual-use considerations: The Biological Weapons Convention prohibits development of biological weapons but relies on national implementation that may not address cloud lab access.

Effective governance will require international coordination - likely through existing frameworks like the Australia Group or new mechanisms specifically addressing cloud biology.

Reality Check: The “Broken Robot” Factor

A necessary dose of skepticism is warranted here.

If you read discussions among biotech engineers who work with these systems, a common theme emerges: automation is hard.

The failure rate: Liquid handling robots jam. Tips fall off. Reagents expire. Incubators drift. Contamination happens.

The tacit barrier persists: Even with robots, biology requires troubleshooting. “The cells look weird today” is an observation a robot cannot easily make or act upon. The experienced lab scientist who notices something is off remains essential.

The iteration problem: In practice, most cloud lab experiments require human interpretation between iterations. Fully autonomous closed-loop systems work for narrow optimization problems, but complex biology still requires human judgment.

Conclusion: While the security risks are real and warrant proactive governance, the operational capability of cloud labs to autonomously generate pandemic agents is currently limited by the sheer complexity of wet-lab automation. We have time to fix governance before the technology becomes fully automated - but that time should not be wasted.

Future Trajectory

Increasing Automation

The trend is toward greater automation and AI integration:

  • More experimental capabilities becoming automated
  • Better APIs enabling programmatic access
  • AI systems increasingly capable of experimental design
  • Integration of design-build-test-learn cycles

Governance Keeping Pace?

The question is whether governance can keep pace with these capabilities. The window for proactive action may be limited:

Today: Cloud labs are a niche technology used primarily by sophisticated researchers. Controls can be implemented before widespread adoption.

Tomorrow: Cloud labs become a standard part of the research infrastructure. Implementing controls becomes harder as incumbents resist and alternatives proliferate.

The Window for Action

The biosecurity community has an opportunity to shape cloud lab governance before the technology becomes ubiquitous.

The lesson from previous technologies (DNA synthesis, gain-of-function research) is that it is easier to establish norms and requirements early than to retrofit them onto mature industries. That lesson argues against waiting until a concerning incident occurs.

Frequently Asked Questions

What are cloud laboratories and how do they work?

Cloud laboratories are facilities where users design experiments remotely through web interfaces or APIs, and robotic systems execute the physical work automatically. Users specify their experimental protocols, the platform screens for hazardous materials, and automated systems perform liquid handling, PCR, cell culture, and other procedures without requiring physical lab access. Results are returned electronically.

How do cloud labs differ from traditional laboratory biosecurity?

Traditional biosecurity assumes trained personnel physically present in laboratories, institutional biosafety committee oversight, physical access controls like badge systems, and human observation of suspicious activities. Cloud labs disrupt all four assumptions: users never enter the lab, institutional oversight may be ambiguous across jurisdictions, access is controlled through software credentials that can be shared or stolen, and robotic execution eliminates social observation of concerning work.

What is the AI convergence risk with cloud laboratories?

The convergence concern arises when AI systems capable of designing experiments gain API access to cloud labs for autonomous execution. AI could potentially design protocols, submit them remotely, interpret results, and iterate new experiments without human intervention. This would dissolve the “tacit knowledge barrier” that currently limits who can conduct biological experiments, potentially enabling novices to execute dangerous work.

Are cloud labs currently being misused for biosecurity threats?

No documented biosecurity incidents involving cloud labs exist. Current platforms implement customer verification, protocol screening, and activity monitoring. More importantly, wet-lab automation remains imperfect: robots jam, reagents fail, and complex biology still requires human troubleshooting. The fully autonomous threat scenario is not yet operational, providing time to strengthen governance before capabilities mature.


This chapter completes Part IV: AI and Biosecurity. For the full picture, see earlier chapters on AI as a Biosecurity Risk Amplifier, LLMs and Information Hazards, AI-Enabled Pathogen Design, AI for Biosecurity Defense, and Digital Biosurveillance.