
We don’t fully understand human biology. Not proteins, cells, or tissues—and certainly not how they all interact in the dynamic systems that make up our body.

I believe AI is the answer to that problem. It offers the promise of a step-change in data analysis and eventually will understand our bodies’ processes at a fundamental level. It will “solve” biology.

But it can’t be done by generalist large language models (LLMs) like ChatGPT. We’re going to need domain-specific agentic software that plans, acts, and adapts. The sort of AI that can support us across messy, multimodal workflows inherent to biological research. This is how we unlock the medicines and treatments that society needs to tackle the most urgent diseases on our doorstep.

But how far does the pharmaceutical industry agree with this? How does it see agentic AI?

We commissioned a report surveying 202 members of the pharmaceutical industry, drawn from a range of roles across the U.S. and Europe, on what they expect from agentic AI.

The data is clear, and sometimes surprising: When it comes to agentic AI, the industry is convinced, but cautious. Success will hinge on fixing data fundamentals, building trust, and meeting people where they work.

WHAT IS AGENTIC AI GOOD FOR?

Here are the two most important areas we identified where pharma believes agentic AI can add value.

First is in handling data. The unsexy stuff: harmonizing, cleaning, and stitching together data from different modalities. If an agent can make siloed patient data analysis-ready in a secure manner, that’s the bedrock for further advances.

The second is in early target discovery. Agentic AI can autonomously scan literature and datasets to form hypotheses, then test them in robotic lab settings. This will speed up drug pipelines and improve the probability of success of clinical trials.

But there’s a divide in enthusiasm for agentic AI. Executives love it (79.4% of C-level executives and vice presidents rated it “very important” or “top priority”). But on the front lines, scientists and analysts are more reserved.

I read this as a demand signal. AI agents must deliver measurable gains for enthusiasm at the top to translate into adoption at the bench. It’s a classic pattern when a new platform hits the enterprise: Vision sells the first pilot, but only reductions in time‑to‑insight and improvements in insight quality scale it.

And of course pharma’s appetite will depend on the cost of the meal. Perhaps surprisingly, we found a meaningful slice of enterprises allocating eight‑figure budgets to agentic AI implementation. But others haven’t even named a line item yet.

I think that will give us a two‑speed market: Fast movers with a budget to match will standardize on an agentic backbone; cautious adopters will pilot targeted use cases with clear ROI. Agentic AI providers that offer on‑ramps, letting customers start small and scale to the enterprise, will win.

TRUST NEEDS TO BE EARNED

But none of this matters if users can’t trust what their AI is telling them. And there is still work to do to convince the industry. Only half of the respondents would trust an AI to give them consistently correct answers, and that drops to 40% when it comes to making decisions about a drug pipeline or protecting intellectual property.

There are different ways to read this. ChatGPT has a reputation for hallucinating responses. Biotech has so far failed to bring a completely novel, AI-discovered target to market. Perhaps the technology is just not mature enough to be trusted with the big decisions? Even if that is true now, the AI industry is like a French cheese—it matures quickly. For example, standard large language models (the basis of agentic systems) fail at complex biological reasoning. But recent research shows they can be dramatically improved through specific reinforcement training.

My take is that agentic AI has a communication problem rather than a technical one. The technology has moved so quickly that the details of what it can and can’t do appear fuzzy. Pharmaceutical executives are masters of decision making based on data analysis, and there simply isn’t enough well-articulated information out there for them to make a firm decision on agentic AI yet. Even enthusiastic early adopters may flinch at being asked to trust an unproven, poorly understood agentic system with their crown jewels.

WALK THE WALK, THEN TALK ABOUT IT

What our report says to me is that we need to put more work into explaining and demonstrating what agentic AI can do, and what it can’t do (yet). We need to show clear proof points, minus the hype—but that must be in the real world, not confined to academic publications.

For pharmaceutical companies to truly buy into what we believe, the products need to speak for themselves. We’re on the cusp of a great shift in the way the pharmaceutical industry works. Those that can show that agentic technology works will reap the rewards.

Thomas Clozel is cofounder and CEO of Owkin.
