Co-founder, CEO
Ginkgo Bioworks
Most people still picture science as a person in a white coat, pipette in hand. That image is stubbornly accurate. Even in 2025—after decades of lab robots, high‑throughput screening, and cloud computing—the overwhelming majority of experimental work is still done by hand at the lab bench. The result is that the “limiting reagent” for progress on everything from climate‑friendly fuels to cancer therapeutics is not ideas or data analysis. It is the slow, manual generation of high‑quality experimental data.
That bottleneck is why autonomous laboratories should matter to everyone, not just to scientists. When we talk about “autonomous labs,” we are really talking about a new way to do science: one where scientists can order experiments just by asking for them. The U.S. Department of Energy’s new Genesis Mission, which tasks the national labs with building an AI‑driven discovery platform linking supercomputers, scientific datasets, and autonomous labs, is a clear signal that this shift is now a national strategy, not science fiction. Action is already underway: Pacific Northwest National Laboratory ordered a $47M, 97-robot autonomous lab this past December.
So, what exactly is an autonomous lab, and why has it taken so long to arrive?
The limitations of lab robotics
Lab automation has been around since the 1980s, and the robotics literature from that era reads as strikingly contemporary. Papers with titles such as “The role of robotics in the laboratory of the 80s”1 and “Laboratory automation: a challenge for the 1990s”2 anticipated many of today’s themes: pressure to increase throughput, reduce costs, and integrate instruments and information systems. Yet outside of large clinical and high-throughput screening facilities, the routine work of discovery science remains largely manual.
Scientists themselves are asking to shift away from the lab bench. In a recent community survey on automation and autonomy in materials discovery, Hung and colleagues found that a majority of experimental researchers wanted to automate the rate‑limiting steps in their workflows, and more than a quarter said they would be comfortable automating their entire scientific workflow. Reservations were driven less by fear of “robots taking jobs” and more by practical concerns: the difficulty of integrating equipment, worries about flexibility, and the effort required to build and maintain custom systems.3
If scientists want to leave the bench, and we have been working on lab robotics for 40 years, why is most experimental work still done by hand?
The core reason is that the industry built automation, not autonomy.
Traditional lab automation is excellent at running a pre‑defined protocol repeatedly. A high‑throughput screening system that can test millions of compounds against a well‑specified assay is a triumph of automation. But it is also narrow: if you change the assay, the plate layout, the container types, or the upstream sample preparation, you often need to re‑engineer the system. Each new workflow becomes an integration project.
Autonomy, by contrast, is about decision‑making and adaptation. The leap from automation to autonomy is not just more actuators; it is a system that can perceive its environment, reason about options, and change its behavior on the fly—with humans delegating more of the “driving task” to software.
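To make that loop concrete in a laboratory setting, here is a deliberately simplified Python sketch of a perceive‑reason‑adapt cycle. Every name in it (propose_next_experiments, execute_and_measure, the hard-coded temperature sweep) is a placeholder invented for illustration, not the interface of any real system.

```python
# Illustrative only: stub functions standing in for an autonomy loop.

def propose_next_experiments(goal, results):
    """Reason about options: choose the next condition to test, or stop."""
    tested = {r["temperature_c"] for r in results}
    remaining = [t for t in (25, 30, 37, 42) if t not in tested]
    return {"goal": goal, "temperature_c": remaining[0]} if remaining else None

def execute_and_measure(plan):
    """Perceive: run the plan on instruments and return a measurement (stubbed here)."""
    return {"temperature_c": plan["temperature_c"], "activity_au": 0.0}

def autonomous_run(goal, max_rounds=10):
    """Adapt: feed each new result back into the next round of planning."""
    results = []
    for _ in range(max_rounds):
        plan = propose_next_experiments(goal, results)
        if plan is None:  # nothing left worth testing
            break
        results.append(execute_and_measure(plan))
    return results

print(autonomous_run("map enzyme activity across temperature"))
```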
Laboratories present a similar challenge to driving: most research labs are “high mix” environments, where the combination of instruments, sample types, and protocols may change daily. Focusing purely on throughput works well for a single, standardized assay, but it does not solve the real problem for scientists, which is flexibility and adaptability as they deal with the messy reality of experimental science. Surveys of researchers highlight exactly this tension: they want to accelerate their rate‑limiting steps, but they also worry that existing automation is brittle, hard to reconfigure, and expensive to maintain as science evolves.3
This is why so many automated systems now sit underutilized in the corner of a lab. They were optimized to run one or two workflows at high throughput, not to serve as a general‑purpose platform. To truly make experimental science easier to do, we need laboratories that are programmable at the level of scientific intent (e.g., “measure how this enzyme behaves across these conditions”) rather than at the level of low‑level commands (e.g., “move this pipette tip from A1 to B3”).
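As a rough illustration of the gap between those two levels of description, compare the two hypothetical snippets below. Both are made-up Python data structures written for this article; neither corresponds to any real instrument command set or scheduling API.

```python
# Command-level description: every transfer is spelled out, so changing the
# assay, plate layout, or labware means rewriting the script.
low_level_protocol = [
    {"action": "aspirate", "volume_ul": 50, "source": "reagent_trough/A1"},
    {"action": "dispense", "volume_ul": 50, "destination": "assay_plate/B3"},
    {"action": "incubate", "minutes": 30, "temperature_c": 37},
    {"action": "read_absorbance", "wavelength_nm": 600},
]

# Intent-level description: the scientist states what should be measured and
# leaves instrument choice, plate layout, and scheduling to the lab's software.
intent_request = {
    "goal": "measure enzyme activity",
    "sample": "enzyme_batch_042",
    "conditions": {"temperature_c": [25, 30, 37, 42], "ph": [6.0, 7.0, 8.0]},
    "readout": "absorbance_600nm",
    "replicates": 3,
}
```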
That is the shift from automation to autonomy.
Importantly, none of this is about replacing scientists. It is about freeing them from the bench so they can spend more time doing what humans are uniquely good at: defining important problems, inventing new concepts, interpreting ambiguous data, and making consequential judgments about risk, safety, and ethics. An autonomous lab is best thought of as a new kind of scientific instrument—one that happens to be the whole lab.
Six Levels of Laboratory Autonomy
Getting to a world where scientists can retire their pipettes will not happen overnight. As with autonomous vehicles, it helps to think in stages. Inspired by both the automotive levels of autonomy and emerging academic frameworks for “self‑driving labs,” we can talk about six Levels of Laboratory Autonomy (LoLA, for short), outlined below and sketched in code after the list.
Level 0 (Manual Lab Work): A human scientist analyzes data, makes an experiment plan, and conducts lab protocols by hand at the lab bench.
Level 1 (Narrow Lab Automation): A human scientist analyzes data, makes an experimental plan, programs software to execute a single protocol on lab automation, and debugs the protocol after transfer to robotics.
Key technical gates: Automation hardware and software capable of conducting a single protocol end-to-end without human monitoring.
Level 2 (General Lab Automation): Multiple human scientists analyze data and make experimental plans, and each scientist programs software to define their protocol. A lab automation system executes all of the protocols simultaneously, and scientists debug their protocols after transferring them to robotics.
Key technical gates: Automation hardware and software capable of conducting tens of different protocols simultaneously without human monitoring.
Level 3 (Conditional Autonomy): Human scientists analyze data and make experimental plans, then submit these plans in human language to an AI agent to have the work conducted in the Autonomous Lab. Minimal programming or protocol debugging is needed.
Key technical gates: Human-language AI agents and protocol debugging software allow the transfer of experimental plans to the Autonomous Lab without human programming or experimental debugging of protocols.
Level 4 (High Autonomy): Human scientists set a research direction or key hypothesis, keeping in mind the capabilities of the Autonomous Lab, and then submit that direction to an AI agent. The AI agent makes an experimental plan and submits it to the Autonomous Lab, all without any human intervention.
Key technical gates: An AI agent that can analyze scientific literature, evaluate previous data, and make an experimental plan to submit to the Autonomous Lab.
Level 5 (Full Autonomy): Human scientists set a research direction or key hypothesis, then submit that direction to an AI agent, knowing it can conduct any needed experiment in their scientific domain. The AI agent makes an experimental plan and submits it to the Autonomous Lab, all without any human intervention.
Key technical gates: An Autonomous Lab that can handle a large majority of possible experimental plans in a scientific domain (such as biotechnology). Requires more than 100 pieces of lab equipment in a single lab setup.
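For readers who think in code, the framework can be summarized as a simple data structure. The sketch below is a hypothetical Python encoding that mirrors the definitions above; the names and the helper function are invented for illustration, not part of any existing standard.

```python
from enum import IntEnum

class LoLA(IntEnum):
    """Hypothetical encoding of the six Levels of Laboratory Autonomy."""
    MANUAL_LAB_WORK = 0         # scientist plans and pipettes by hand
    NARROW_LAB_AUTOMATION = 1   # one scripted protocol runs unattended
    GENERAL_LAB_AUTOMATION = 2  # tens of protocols run simultaneously
    CONDITIONAL_AUTONOMY = 3    # plans submitted in human language to an AI agent
    HIGH_AUTONOMY = 4           # AI agent plans experiments within the lab's known capabilities
    FULL_AUTONOMY = 5           # AI agent covers nearly any experiment in the domain

def humans_still_program_protocols(level: LoLA) -> bool:
    """At Levels 0-2, a human still writes and debugs each protocol by hand."""
    return level <= LoLA.GENERAL_LAB_AUTOMATION
```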
Enormous value will be created well before we reach Level 5. Moving from Level 0 to Level 2 can eliminate large fractions of repetitive bench work. Level 3 lets scientists “speak science” instead of “speaking robot.” Levels 4 and 5 have the potential to compress multi‑year projects into months, as early self‑driving labs in chemistry and materials science are already beginning to demonstrate.4
Ginkgo Bioworks is fully focused on the future of the autonomous lab. Our mission reflects a simple belief: the most powerful way to speed science is to make the lab itself programmable. If we get this right, the average person may never see the robots, AI agents, or cloud infrastructure doing the work. What they will see are its consequences: cleaner energy, better medicines, more resilient supply chains, and a generation of scientists whose time is spent less on moving liquids and more on moving ideas forward.
References
- Little JN. The role of robotics in the laboratory of the 80s. J Res Natl Bur Stand. 1988;93(3):191. doi:10.6028/jres.093.016
- Mordini C. Laboratory automation: a challenge for the 1990s. J Automat Chem. 1994;16(4):125-129. doi:10.1155/S146392469400012X
- Hung L, Yager JA, Monteverde D, Baiocchi D, Kwon H-K, Sun S, Suram S. Autonomous laboratories for accelerated materials discovery: a community survey and practical insights. Digital Discovery. 2024;3:1273-1279. doi:10.1039/D4DD00059E
- Canty RB, Bennett JA, Brown KA, Buonassisi T, Kalinin SV, Kitchin JR, Maruyama B, Moore RG, Schrier J, Seifrid M, Sun S, Vegge T, Abolhasani M. Science acceleration and accessibility with self-driving labs. Nat Commun. 2025;16(1):3856. doi:10.1038/s41467-025-59231-1
Jason Kelly, PhD, is a co-founder and CEO of Ginkgo Bioworks.
