Artificial intelligence (AI) is no longer a futuristic add-on in the life sciences. It is rapidly becoming infrastructure. Across regulatory affairs and biomanufacturing, AI systems are moving from experimental pilots into validated, production-grade tools that directly influence how drugs are designed, manufactured, submitted, and approved.
Yet adoption has been cautious, and for good reason. Life sciences operate under some of the most stringent regulatory expectations of any industry. Accuracy, traceability, explainability, and validation are not optional. AI must earn trust not only from internal teams but also from regulators who ultimately decide whether a therapy reaches patients.
To understand how AI is being deployed responsibly across the drug lifecycle, GEN spoke with leaders from Aizon, Clarivate, Ginkgo Bioworks, and IQVIA. While their applications vary—from regulatory intelligence to factory-floor analytics to antibody design—a common theme emerges: AI’s value depends not on autonomy, but on how well it augments human expertise within regulated systems.
Making regulatory affairs proactive
For IQVIA, the regulatory challenge is one of scale and fragmentation. Regulatory teams must manage expanding global requirements, frequent guideline updates, and massive volumes of documentation—all while maintaining precision and compliance.
“Regulatory information often resides in fragmented legacy systems, making analysis difficult,” says Rachel Mercado, principal of AI consulting, clinical AI, and technology innovation at IQVIA. “At the same time, regulators require transparent, validated methods for confirming accuracy and reproducibility.”
IQVIA’s approach uses multiple forms of AI, each aligned to a specific regulatory function. Natural language processing (NLP)—a technology that allows computers to work with human language—extracts and classifies information from lengthy regulatory documents, identifying patterns, gaps, and anomalies that might otherwise be missed. Models based on machine learning (ML)—a branch of artificial intelligence where computers improve their performance by learning patterns from data instead of following fixed, hand-coded rules—add predictive capabilities, helping teams forecast approval timelines or flag risks for potential submission rejection.
On top of this foundation sit large language models (LLMs), which are advanced AI systems trained on vast amounts of text to understand context, generate coherent language, and perform a wide range of tasks, including predicting and reasoning. IQVIA uses LLMs to assist with summarization, drafting submission-ready narratives, and generating responses to regulatory queries. Knowledge graphs—structured networks of interconnected facts that represent entities and their relationships, enabling machines to reason about and retrieve information in a more human-like way—map relationships between regulatory requirements, products, and therapeutic areas. This enables semantic search and rapid impact analysis when guidelines change, supporting a more proactive approach to regulatory submission strategy.
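To make the impact-analysis idea concrete, here is a minimal sketch of how a knowledge graph can answer "what is affected when this guideline changes?" The entity names (a guideline, a requirement, two products) are illustrative, and production systems use graph databases and curated ontologies rather than an in-memory structure like this.

```python
# Minimal sketch of a regulatory knowledge graph. Entity names are
# illustrative; real systems use graph databases and curated ontologies.
from collections import defaultdict

class RegulatoryGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # node -> set of related nodes

    def relate(self, a, b):
        # Store the relationship in both directions for simple traversal.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def impacted(self, changed_node):
        # Traversal: everything reachable from the changed guideline is
        # potentially affected by the update.
        seen, frontier = {changed_node}, [changed_node]
        while frontier:
            node = frontier.pop()
            for nbr in self.edges[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append(nbr)
        seen.discard(changed_node)
        return seen

g = RegulatoryGraph()
g.relate("ICH Q12 guideline", "post-approval change requirement")
g.relate("post-approval change requirement", "Product A")
g.relate("post-approval change requirement", "Product B")

affected = g.impacted("ICH Q12 guideline")
```

The same traversal, run over thousands of linked requirements and products, is what turns a guideline update into an immediate, queryable list of affected submissions.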
The most significant advance, however, is agentic AI, which consists of intelligent systems that can independently plan, make decisions, and take actions toward goals, adapting their behavior based on feedback from their environment. “Agentic AI enables dynamic, goal-driven coordination,” says Raja Shankar, vice president, machine learning, clinical AI, and technology innovation, IQVIA. “It can gather information, draft submissions, review them for consistency and quality, monitor regulatory changes, and learn from past submissions.”
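The gather-draft-review-revise cycle Shankar describes can be sketched as a simple feedback loop. The `draft_fn` and `review_fn` below are toy stand-ins for LLM calls and quality checks; the names and the "missing section" check are invented for illustration.

```python
# Hedged sketch of an agentic loop: draft, review, revise on feedback.
def run_agent(goal, draft_fn, review_fn, max_rounds=3):
    draft = draft_fn(goal, feedback=None)
    history = [draft]
    for _ in range(max_rounds):
        issues = review_fn(draft)        # e.g., consistency/quality checks
        if not issues:
            return draft, history        # review passed: return final draft
        draft = draft_fn(goal, feedback=issues)  # revise using feedback
        history.append(draft)
    return draft, history

# Toy stand-ins: the "review" flags drafts missing a required section.
def draft_fn(goal, feedback=None):
    text = f"Submission draft for {goal}"
    if feedback:
        text += " + " + ", ".join(feedback)
    return text

def review_fn(draft):
    return [] if "safety summary" in draft else ["safety summary"]

final, history = run_agent("Product X variation", draft_fn, review_fn)
```

The key property is that the loop adapts its output to feedback rather than executing a fixed script — which is what distinguishes agentic coordination from simple automation.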

Crucially, IQVIA emphasizes a human-in-the-loop model. “The promise of these solutions is not about replacing human expertise,” Mercado notes, “but about strengthening their impact.” Regulatory professionals remain accountable, with AI acting as a force multiplier rather than an autonomous decision-maker.
Looking ahead, IQVIA envisions agentic systems that augment the regulatory affairs function—maintaining regulatory intelligence, identifying potential risks early, and dynamically updating submissions as global requirements evolve. The result could be faster approvals across regions.
Building trustworthy AI
An additional challenge in applying AI to regulatory affairs is trust. For Yuval Kiselstein, vice president, R&D, life sciences, and healthcare at Clarivate, trust is the gating factor for adoption.
“One of the primary challenges is ensuring accuracy and trustworthiness,” he says. “AI hallucinations or unverified outputs can introduce significant compliance risks in regulated filings.”
Clarivate addresses this through a tightly controlled architecture that combines commercially pretrained LLMs with domain-specific regulatory layers. A crucial component is retrieval augmented generation (RAG), which ensures that AI-generated responses are grounded exclusively in curated regulatory content. “This significantly reduces the risk of hallucinations,” Kiselstein explains, “and ensures alignment with current guidance and requirements.”
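In outline, a RAG pipeline retrieves passages from a curated corpus and instructs the model to answer only from them, with each passage carrying a source ID. The sketch below is illustrative: the corpus entries and the word-overlap scoring are toys, and production systems use vector search plus an actual LLM for generation.

```python
# Minimal RAG sketch: answers are grounded only in a curated corpus, and
# every retrieved passage carries a source ID for traceability.
CORPUS = {
    "EMA-GL-001": "Stability data must cover the proposed shelf life.",
    "FDA-GDN-042": "Electronic submissions must follow the eCTD format.",
}

def retrieve(question, k=1):
    # Score each passage by word overlap with the question (toy ranking).
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question):
    # The model is told to answer ONLY from the retrieved passages, which
    # is what keeps generation grounded and citable.
    passages = retrieve(question)
    context = "\n".join(f"[{src}] {text}" for src, text in passages)
    return f"Answer using only these sources:\n{context}\nQ: {question}"

prompt = build_prompt("What format must electronic submissions follow?")
```

Because every passage in the prompt is labeled with its source, each generated answer can be traced back to the documents it was grounded in.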
Transparency is equally important. Regulatory professionals must be able to trace every answer back to authoritative source documents. Without that traceability, AI outputs cannot be confidently used in submissions, audits, or interactions with health authorities.
Clarivate’s latest advance is an agentic AI assistant designed specifically for regulatory workflows. Rather than simply retrieving documents, the assistant supports conversational search, comparative analysis of guideline versions, and alerts related to regulatory changes.
“The assistant helps users identify what’s relevant, understand the implications, and determine next steps,” Kiselstein says. “It operates exclusively on high-quality, trusted regulatory content.”
The benefits include faster research, improved confidence in decision-making, and a clearer understanding of how regulatory changes affect active or planned submissions. Multilingual capabilities further enable global teams to access regulatory intelligence without delays.
Clarivate envisions its multi-agent AI solution as a trusted partner to regulatory experts, enhancing decision-making by identifying gaps, anticipating challenges, preparing critical submission materials, and strengthening inspection readiness.
Making manufacturing AI-compliant
If regulatory AI lives in documents and databases, manufacturing AI lives on the shop floor. For Aizon, the challenge is not a lack of data but a lack of usable, contextualized, and unsiloed data.
“Artificial intelligence runs on data, and biomanufacturing generates massive amounts of it,” says Toni Manzano, PhD, co-founder and chief scientific officer of Aizon. “But it’s often siloed and lacks context.”

A temperature spike, for example, is meaningless unless the AI knows which batch, which phase, and which equipment state was active. Compounding the problem, many manufacturing processes are still partially manual, requiring digitization before AI can even be applied.
Aizon Predict, the company’s AI governance platform, blends several AI disciplines. Supervised and unsupervised ML models predict critical quality attributes and yield outcomes before a batch is finished, enabling real-time process adjustments. Digital twins interact with biological processes, such as upstream and downstream operations, allowing the process to adapt to the inherent variability of biomanufacturing and consistently deliver in-specification batches.
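The core of such in-process prediction can be sketched very simply: fit a model on historical batches, then forecast the final quality attribute from an early reading while the batch is still running. The data, the single predictor, and the one-variable least-squares fit below are all illustrative stand-ins for validated multi-sensor ML pipelines.

```python
# Sketch of predicting a critical quality attribute (final titer) from an
# in-process reading before the batch finishes. Data and the one-variable
# least-squares fit are toys, not a validated model.
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Historical batches: day-3 viable cell density (toy units) vs final titer.
density = [2.0, 2.5, 3.0, 3.5, 4.0]
titer   = [1.1, 1.3, 1.5, 1.7, 1.9]

a, b = fit_line(density, titer)

def predict_titer(day3_density):
    return a * day3_density + b

early_estimate = predict_titer(3.2)  # forecast while the batch is running
```

An early forecast like this is what makes mid-batch process adjustments possible, rather than discovering a problem only at release testing.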
Aizon has also embraced agentic AI, integrating LLMs into its applications and analytics while introducing the control mechanisms recommended by good manufacturing practices (GMP). The result is a conversational interface that replaces static dashboards.
“A quality director can ask, ‘Show me the yield trend for product X over the last six months and correlate it with pH deviations,’” Manzano says. “The system generates the analysis, charts, and report, which are then ready to be reviewed by the subject-matter expert.”
This shift dramatically accelerates insight generation, democratizes advanced analytics, and speeds creation of regulatory documentation such as product quality reviews and root cause analyses. Importantly, explainability and validation remain central—black-box AI has no place in GMP environments.
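Behind a request like Manzano's sits a straightforward statistical step: correlate the monthly yield trend with the count of pH deviations. The figures below are invented for illustration; a real system would pull validated batch records and present the result with charts and a report.

```python
# Sketch of the analysis behind "correlate yield with pH deviations".
# All numbers are invented for illustration.
def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Six months of data for a hypothetical product X.
monthly_yield = [92.0, 91.5, 89.0, 88.2, 90.5, 87.9]  # percent
ph_deviations = [1, 1, 4, 5, 2, 6]                    # count per month

r = pearson(monthly_yield, ph_deviations)
# A strongly negative r suggests months with more pH deviations also
# tended to have lower yield -- a lead for root cause analysis.
```

The value of the conversational layer is that a quality director gets this computation, plus the charts and narrative, without writing any of it by hand.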
Manzano’s long-term vision is real-time release, where AI provides sufficient statistical assurance to release a batch immediately upon completion. For personalized therapies like cell and gene treatments, such capabilities could be transformative.
Developability by design
At Ginkgo Bioworks, AI is applied even earlier—during molecule design. The focus is on antibody developability: the likelihood that a promising molecule can be manufactured, formulated, and delivered successfully at scale.
“The central challenge is data,” say Rich Cohen, PhD, senior director, Ginkgo Datapoints, and Ammar Arsiwala, PhD, director of antibody development, Ginkgo Bioworks. Public developability datasets are limited in size, standardization, and metadata quality, making it difficult to build generalizable predictive models.

Ginkgo addresses this by generating high-throughput datasets for pharma customers, spanning hundreds to thousands of antibodies across multiple assays. Those customers, often in collaboration with Ginkgo, build ML models to predict developability properties directly from sequence for future drug discovery campaigns. Ginkgo also applies generative AI, a class of artificial intelligence that learns patterns from existing data to create new content such as text, images, code, or audio. Its scientists use it to design new antibody variants with targeted developability properties. “In our most recent wet-lab tested designs, we’re seeing good ability to tune properties in both directions,” Cohen and Arsiwala say.
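Predicting developability "directly from sequence" starts with turning an amino acid string into numeric features. The sketch below computes two such features — hydrophobic fraction and a crude net charge at neutral pH — as an illustration of the kind of inputs such models consume; the feature set, thresholds, and toy fragment are simplifications and not Ginkgo's actual pipeline.

```python
# Illustrative sequence-derived features of the kind a developability
# model might use. Simplified; not any company's actual feature set.
HYDROPHOBIC = set("AVILMFWY")          # hydrophobic residues
POSITIVE, NEGATIVE = set("KR"), set("DE")  # charged at ~neutral pH

def sequence_features(seq):
    n = len(seq)
    return {
        "hydrophobic_fraction": sum(aa in HYDROPHOBIC for aa in seq) / n,
        "net_charge": sum(aa in POSITIVE for aa in seq)
                      - sum(aa in NEGATIVE for aa in seq),
    }

# A short toy fragment, not a full antibody sequence.
feats = sequence_features("EVQLVESGGGLVQPGGSLRL")
```

Features like these, computed across thousands of assayed antibodies, are what let a model relate sequence choices to measured outcomes such as aggregation or viscosity.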
Beyond internal modeling, Ginkgo is pushing industry-wide standardization. Through the Antibody Developability Competition, which was hosted by Ginkgo, and the Antibody Developability Consortium, which was created through a partnership between computing-platform maker Apheris and Ginkgo Datapoints, Ginkgo aims to establish shared benchmarks, standardized assays, and federated models trained across partners.
The goal is to improve the predictability of real-world outcomes, from manufacturability and stability to device interactions and clinical performance. As therapies become more complex—particularly multispecific antibodies—AI-driven design may be essential to making them manufacturable at all.
Converging on smarter and faster
Across regulatory affairs, manufacturing, and developability, a clear pattern is emerging. AI is not replacing human judgment—it is compressing timelines, surfacing insights, and connecting data across silos that once slowed drug development.
The future of AI in life sciences will not be defined by autonomy, but by trust. Systems that are explainable, validated, and grounded in high-quality data will move from support tools to strategic infrastructure. And as these capabilities mature, the ultimate beneficiaries will be patients—who gain faster access to safe, effective medicines built for approval from the very beginning.
