Opentrons is using Nvidia’s Isaac and Cosmos artificial intelligence (AI) platforms to generate the training data needed to develop and deploy physical AI-enabled laboratory robotics. The company’s goal is to help current and future customers of its systems use physical AI to improve everyday laboratory workflows. Opentrons made the announcement last week ahead of this year’s international meeting of the Society for Laboratory Automation and Screening, held in Boston from February 7–11.
To date, Opentrons has installed more than 10,000 robotic systems at research universities and biopharmaceutical companies and ships about 800 robots per year. That kind of volume has allowed the company to cultivate a wealth of experience “execut[ing] lab experiments at scale,” according to James Atwood, Opentrons CEO, and sheds light on what the company believes is the next phase in the evolution of lab automation. “True autonomous science is going to require addressing what I like to call the in-between steps,” he told GEN. That includes tasks such as moving samples from liquid handlers or fetching samples from storage or reagents from the shelf. “We have been supporting that in-between step for a few years now through some of our integration partners.”
Traditionally, such integrations tend to provide a very structured platform for performing “that in-between [step],” where a “fixed system arm uses path planning to move [a sample] from field point A to point B,” Atwood explained. Integration with Nvidia builds on that functionality by making it possible to use vision-language-action models, or VLAs, something Atwood is very enthusiastic about.
“The promise of that is that you can use physical video data and robotic arm trajectories coupled with synthetic data to train a VLA to allow a mobile robotic system to operate in a lab,” he told GEN. It’s an emerging technology, one that is starting to make inroads into some consumer markets and will likely continue to spread, according to Atwood. “Where we think we can fundamentally help is in prepar[ing] both physical and synthetic data purpose-built in a laboratory setting, which we know really well, to allow those physical AI models to be trained so that when they are mature, they could be deployed on robotic systems to execute autonomous software.”
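To make that training data concrete: a single VLA example generally pairs what the robot saw, what it was asked to do, and the motion it executed. The sketch below is an illustrative schema only, an assumption for this article rather than Opentrons’ or Nvidia’s actual data format.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class VLASample:
    """One training example for a vision-language-action (VLA) model.

    Hypothetical schema for illustration only; real pipelines differ, but a
    VLA example generally pairs camera video, a language instruction, and the
    arm trajectory that accomplished the task.
    """
    frames: np.ndarray            # camera video, shape (T, H, W, 3)
    instruction: str              # e.g., "move the plate from the Flex deck to the shaker"
    joint_trajectory: np.ndarray  # recorded arm joint positions, shape (T, num_joints)
    gripper_state: np.ndarray     # gripper open/close commands over time, shape (T,)
    is_synthetic: bool = False    # True when generated in simulation rather than recorded


def build_training_pool(real_episodes: List[VLASample],
                        synthetic_episodes: List[VLASample]) -> List[VLASample]:
    """Combine physically recorded episodes with simulator-generated ones,
    as the article describes, into a single training pool."""
    for episode in synthetic_episodes:
        episode.is_synthetic = True
    return real_episodes + synthetic_episodes
```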
The data streams behind physical AI
Achieving that vision of physical AI in the lab requires two types of visual training data. The first is video that captures the physical movements and trajectories of robotic systems at work as the mechanical arms move samples, devices, and labware throughout the lab environment. Opentrons is already capturing thousands of hours of this type of data using its own lab equipment, Atwood said.
The second type of data will come from Nvidia’s solutions. Isaac is simulation software for generating digital twins of the lab environment. With this solution, Opentrons will be able to generate “a digital representation of the [robotic] arms” and use it to simulate the movements of labware around the digital lab environment. For its part, Cosmos makes it possible to simulate and test various environmental conditions in the lab, including different lighting levels and other changes. Both systems generate synthetic data that will be combined with the recorded video and used to train the AI models.
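As a rough sketch of what simulating “different lighting levels and other changes” amounts to in practice, domain randomization varies environmental parameters for each synthetic episode. The snippet below is plain Python for illustration; it does not use the actual Isaac or Cosmos APIs, and the parameter names are assumptions.

```python
import random

# Hypothetical knobs a simulator might expose; Isaac and Cosmos have their own
# APIs, so treat this as a schematic of domain randomization, not real calls.
LIGHTING_LEVELS = ["dim", "standard", "bright", "mixed_overhead"]
DECK_JITTER_MM = 5.0  # how far labware placement is perturbed per episode


def randomize_episode(seed: int) -> dict:
    """Sample one set of environmental conditions for a synthetic episode."""
    rng = random.Random(seed)
    return {
        "lighting": rng.choice(LIGHTING_LEVELS),
        "labware_offset_mm": (
            rng.uniform(-DECK_JITTER_MM, DECK_JITTER_MM),
            rng.uniform(-DECK_JITTER_MM, DECK_JITTER_MM),
        ),
        "camera_exposure": rng.uniform(0.8, 1.2),
    }


# Generate conditions for a batch of synthetic training episodes.
conditions = [randomize_episode(seed) for seed in range(1000)]
```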
“The fundamental advantage of a VLA, and why we’re making this investment, is because you can have a robotic system that does not need to know the location of devices it needs to interact with,” Atwood said. The reality is that most labs have a diverse range of instruments and equipment, and installing large, fixed infrastructure, which is the current paradigm, does not fully address the complexity of the environment. “What labs really want is the ability to have a mobile system that uses camera vision to go from point A to point B to point C to execute the experiment to scale. And the way we think we’re going to unlock that is through physical AI.”
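In other words, the robot runs a closed perception-action loop rather than following pre-programmed paths to known coordinates. The sketch below is schematic, with placeholder interfaces rather than any specific product’s API.

```python
from typing import Protocol

import numpy as np


class Robot(Protocol):
    """Placeholder robot interface: provides camera images, accepts actions."""
    def get_camera_image(self) -> np.ndarray: ...
    def apply_action(self, action: np.ndarray) -> None: ...


class VLAModel(Protocol):
    """Placeholder model interface: maps (image, instruction) to an action."""
    def predict(self, image: np.ndarray, instruction: str) -> np.ndarray: ...
    def task_complete(self, image: np.ndarray, instruction: str) -> bool: ...


def run_task(robot: Robot, model: VLAModel, instruction: str, max_steps: int = 500) -> None:
    """Closed-loop control: observe, predict an action from vision plus language, act.

    No fixed device coordinates or pre-planned paths are required; the model
    decides the next motion from what the camera currently sees.
    """
    for _ in range(max_steps):
        image = robot.get_camera_image()
        action = model.predict(image, instruction)
        robot.apply_action(action)
        if model.task_complete(image, instruction):
            break
```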
Pilot workflows and continuous learning
If the goal is robots that can use camera vision effectively in different contexts, then training the AI models must account for differences between lab environments and in how customers choose to run their workflows. It is certainly something that Opentrons has considered and is prepared for. “Opentrons is very fortunate because we happen to operate a 20,000 square foot lab in Long Island City” that has “about 250 pieces of different analytical equipment,” Atwood said. “From an operational perspective, … we are the most well-equipped in terms of instrument diversity of any of our competitors. So that allows us to be able to be very selective about owning not only the experimental cycle, because we perform these experiments, but having all of the equipment necessary to train these models.”
For its first steps, Opentrons is developing training data from what Atwood called “bench-scale primitives,” essentially workflows that use the Opentrons Flex® platform plus one or more pieces of equipment. “We are moving things between that equipment, recording the physical videos, making the synthetic data, training the models on that. Then we’re going to be scaling up to multi-instrument integrations” and then on to “multi bench settings.” Current estimates are that the training process will continue throughout 2026. The company is also enlisting potential partners working on models for autonomous labs to join its efforts.
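For context, the liquid-handling core of such a bench-scale primitive is an ordinary Opentrons Python protocol; the physical-AI layer would add the movements between instruments around it. The sketch below uses illustrative labware and instrument names, so exact identifiers should be checked against Opentrons’ labware library for a given deck.

```python
from opentrons import protocol_api

# Requirements and metadata for an Opentrons Flex protocol (Python API v2).
requirements = {"robotType": "Flex", "apiLevel": "2.19"}
metadata = {"protocolName": "Bench-scale primitive: plate transfer step"}


def run(protocol: protocol_api.ProtocolContext):
    # Labware and instrument names are illustrative; they may differ on a given deck.
    tips = protocol.load_labware("opentrons_flex_96_tiprack_200ul", "C1")
    source = protocol.load_labware("nest_96_wellplate_200ul_flat", "D1")
    dest = protocol.load_labware("nest_96_wellplate_200ul_flat", "D2")
    protocol.load_trash_bin("A3")

    pipette = protocol.load_instrument("flex_1channel_1000", "left", tip_racks=[tips])

    # The liquid-handling step itself; the "in-between" move to an adjacent
    # instrument (for example, a shaker or reader) is what a physical-AI arm would add.
    pipette.transfer(50, source["A1"], dest["A1"])
```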
If all goes according to plan, Opentrons will make the models accessible to a select group of customers in 2027. The marker of success that triggers this step will be “reproducibility at the multi-instrument level,” Atwood said. And the opportunity to train on other people’s data enables “that virtuous cycle of multi-environment training data, repeated training of models that’s going to improve the performance of these [models] over time.” The company has specific use cases that it is interested in working on and these interests will drive its selection of customers for partnership next year. Atwood declined to share details on what those use cases are at this time.
“We’ve seen artificial intelligence fundamentally accelerate hypothesis generation. It’s now faster than ever to design an experiment, and data analysis is massively accelerated with AI,” he said. “Simultaneously we have this increasing demand for new therapeutics and new discoveries,” all of which “is driving the need [for] automation.” The bottleneck right now is doing the experiment, and physical AI could change that. Fundamentally, “we want to see a world … where a scientist can use AI to design a hypothesis and experiment and then push those instructions to a fleet of robots to do the experimental work. That’s the long-term view.”
Besides the collaboration with Nvidia, Opentrons also announced a strategic partnership with HighRes last week to co-develop and demonstrate what they believe is the first AI agent-to-agent laboratory workflow. The collaboration brings together Opentrons’ Flex platforms and OpentronsAI with HighRes’ FlexPod™ Configurable Lab Automation platform and Cellario®, its scheduling and orchestration software.
