
Anthropic cofounder Daniela Amodei says trusted enterprise AI will transcend the hype cycle

While Silicon Valley argues over bubbles, benchmarks, and who has the smartest model, Anthropic has been focused on solving problems that rarely generate hype but ultimately determine adoption: whether AI can be trusted to operate inside the world’s most sensitive systems.

Known for its safety-first posture and the Claude family of large language models (LLMs), Anthropic is placing its biggest strategic bets where AI optimism tends to collapse fastest: regulated industries. Rather than framing Claude as a consumer product, the company has positioned its models as core enterprise infrastructure—software expected to run for hours, sometimes days, inside healthcare systems, insurance platforms, and regulatory pipelines.

“Trust is what unlocks deployment at scale,” Daniela Amodei, Anthropic cofounder and president, tells Fast Company in an exclusive interview. “In regulated industries, the question isn’t just which model is smartest—it’s which model you can actually rely on, and whether the company behind it will be a responsible long-term partner.”

That philosophy took concrete form on January 11, when Anthropic launched Claude for Healthcare and Life Sciences. The release expanded earlier life sciences tools designed for clinical trials, adding support for such requirements as HIPAA-ready infrastructure and human-in-the-loop escalation, making its models better suited to regulated workflows involving protected health information.

“We go where the work is hard and the stakes are real,” Amodei says. “What excites us is augmenting expertise—a clinician thinking through a difficult case, a researcher stress-testing a hypothesis. Those are moments where a thoughtful AI partner can genuinely accelerate the work. But that only works if the model understands nuance, not just pattern matches on surface-level inputs.”

That same thinking carried into Claude Cowork, a new agentic AI capability released by Anthropic on January 12. Designed for general knowledge workers and usable without coding expertise, Claude Cowork can autonomously perform multistep tasks on a user’s computer—organizing files, generating expense reports from receipt images, or drafting documents from scattered notes. According to reports, the launch unintentionally intensified investor anxiety about the durability of software-as-a-service businesses, with many questioning the resilience of recurring software revenue in a world where general-purpose AI agents can generate bespoke tools on demand.

Anthropic’s most viral product, Claude Code, has amplified that unease. The agentic tool can help write, debug, and manage code faster using natural-language prompts, and it has seen substantial uptake among engineers and hobbyists. Users report building everything from custom MRI viewers to automation systems entirely with Claude.

Over the past three years, the company’s run-rate revenue has grown from $87 million at the end of 2023 to just under $1 billion by the end of 2024 and to $9 billion-plus by the end of 2025. “That growth reflects enterprises, startups, developers, and power users integrating Claude more deeply into how they actually work. And we’ve done this with a fraction of the compute our competitors have,” Amodei says. 

Building for Trust in the Most Demanding Enterprise Environments

According to a mid-2025 report by venture capital firm Menlo Ventures, AI spending across healthcare reached $1.4 billion in 2025, nearly tripling the total from 2024. The report also found that healthcare organizations are adopting AI 2.2 times faster than the broader economy. The largest spending categories include ambient clinical documentation, which accounted for $600 million, and coding and billing automation, at $450 million. 

The fastest-growing segments, however, reflect where operational pressure is most acute: patient engagement, where spending is up 20 times year over year, and prior authorization, which grew 10 times over the same period. Claude for Healthcare is being embedded directly into prior-authorization workflows, attempting to take on time-consuming and error-prone tasks such as claims review, care coordination, and regulatory documentation.

Claude for Life Sciences has followed a similar pattern. Anthropic has expanded integrations with Medidata, ClinicalTrials.gov, Benchling, and bioRxiv, enabling Claude to operate inside clinical trial management and scientific literature synthesis. The company has also introduced agent skills for protocol drafting, bioinformatics pipelines, and regulatory gap analysis.

Customers include Novo Nordisk, Banner Health, Sanofi, Stanford Healthcare, and Eli Lilly. According to Anthropic, more than 85% of Banner Health’s 22,000 providers reported working faster with higher accuracy using Claude-assisted workflows. Anthropic also reports that internal teams at Novo Nordisk have reduced clinical documentation timelines from more than 12 weeks to just minutes.

Amodei adds that what surprised her most was how quickly practitioners defined their relationship with the company’s AI models on their own terms.

“They’re not handing decisions off to Claude,” she says. “They’re pulling it into their workflow in really specific ways—synthesizing literature, drafting patient communications, pressure-testing their reasoning—and then applying their own judgment. That’s exactly the kind of collaboration we hoped for. But honestly, they got there faster than I expected.”

Industry experts say the appeal extends beyond raw performance. Anthropic’s deliberate emphasis on trust, restraint, and long-horizon reliability is emerging as a genuine competitive moat in regulated enterprise sectors.

“This approach aligns with bounded autonomy and sandboxed execution, which are essential for safe adoption where raw speed often introduces unacceptable risk,” says Cobus Greyling, chief evangelist at Kore.ai, a vendor of enterprise AI platforms. He adds that Anthropic’s “universal agent” concept introduced a third architectural model for AI agents, expanding how autonomy can be safely deployed.

Other AI competitors are also moving aggressively into the healthcare sector, though with different priorities. OpenAI debuted its healthcare offering, ChatGPT Health, in January 2026. The product is aimed primarily at broad consumer and primary care use cases such as symptom triage and health navigation outside clinic hours. It benefits from massive consumer-scale adoption, handling more than 230 million health-related queries globally each week. 

While ChatGPT Health has proven effective in generalist tasks such as documentation support and patient engagement, Claude is gaining traction in more specialized domains that demand structured reasoning and regulatory rigor—including drug discovery and clinical trial design.

Greyling cautions, however, that slow procurement cycles, entrenched organizational politics, and rigid compliance requirements can delay AI adoption across healthcare, life sciences, and insurance.

“Even with strong technical performance in models like Claude 4.5, enterprise reality demands extensive validation, custom integrations, and risk-averse stakeholders,” he says. “The strategy could stall if deployment timelines stretch beyond economic justification or if cost and latency concerns outweigh reliability gains in production.”

In January, Travelers announced it would deploy Claude AI assistants and Claude Code to nearly 10,000 engineers, analysts, and product owners—one of the largest enterprise AI rollouts in insurance to date. Each assistant is personalized to employee roles and connected to internal data and tools in real time. Likewise, Snowflake committed $200 million to joint development. Salesforce integrated Claude into regulated-industry workflows, while Accenture expanded multiyear agreements to scale enterprise deployments.

AI Bubble or Inflection Point?

Skeptics argue that today’s agent hype resembles past automation cycles—big promises followed by slow institutional uptake. If valuations reflect speculation rather than substance, regulated industries should expose weaknesses quickly, and Anthropic appears willing to accept that test. Its capital posture reflects that confidence: a $13 billion Series F in 2025 at a $183 billion valuation, followed by reports of a significantly larger round under discussion. Anthropic is betting that the AI race will ultimately favor those who design for trust and responsibility first.

“We built a company where research, product, and policy are integrated—the people building our models work deeply with the people studying how to make them safer. That lets us move fast without cutting corners,” Amodei says. “Countless industries are putting Claude at the center of their most critical work. That trust doesn’t happen unless you’ve earned it.”

