AI in healthcare is entering a new era of accountability

Almost 10 years ago, physician and data scientist Dr. Ruben Amarasingham founded Pieces Technologies in Dallas with a clear goal: use artificial intelligence to make clinical work lighter, not heavier. At a time when much of healthcare AI focused on prediction and automation, Pieces concentrated on something harder to quantify but more consequential—how clinicians actually think, document, and make decisions inside busy hospital workflows.

That focus helped Pieces gain traction with health systems looking for AI that could assist with documentation, coordination, and decision-making without disrupting care. But as hospitals began relying more heavily on AI for diagnosis, triage, and daily operations, the expectations placed on these tools changed. It was no longer enough for AI to sound impressive or move fast. It had to be trustworthy under real clinical pressure.

Pieces did not set out to become a case study in healthcare AI accountability. But over the past two years, that is effectively what it became. In 2024, a regulatory investigation by the Texas Attorney General’s office into the accuracy and safety of its systems forced the company to examine how its models behaved in real-world settings, how clearly their reasoning could be explained, and how quickly problems could be identified and corrected.

Rather than retreat, the company reexamined its models, documentation practices, and safeguards. Those efforts later became central to its acquisition in September 2025 by Smarter Technologies, a private equity-backed healthcare automation platform formed earlier this year through the combination of SmarterDx, Thoughtful.ai, and Access Healthcare. The purchase price was not disclosed.

Pieces’ journey captures a defining truth about healthcare AI today: the technology is no longer judged by ambition alone, but by whether it can withstand scrutiny, explain itself under pressure, earn clinician trust, and operate safely in environments where the cost of error is measured in human outcomes.

FROM PROMISE TO PROOF

AI arrived in healthcare with big promises. It would ease physician workloads, speed decisions in emergencies, and cut through the complexity of modern care. Some of those promises materialized early. But as adoption spread, hospitals began to see the limits of systems that were impressive in theory but fragile in practice.

In early 2025, the U.S. Food and Drug Administration published updated guidance on AI- and machine learning-enabled medical devices, calling for stronger post-market monitoring, clearer audit trails, and safeguards against model drift in high-stakes settings. The Federal Trade Commission reinforced that message through enforcement actions targeting exaggerated AI claims and misuse of sensitive health data.

Those signals changed the conversation, forcing many hospitals to ask vendors harder questions: How does your system reach its conclusions? Can clinicians understand and override its recommendations? And does the model behave consistently as conditions change?

For many AI companies, the excitement of the last decade no longer buys time. Proof does.

A REAL-LIFE TEST

Pieces encountered those expectations earlier than most. The regulatory scrutiny forced the company to confront how its models reasoned through patient data and how clearly that reasoning could be explained to clinicians and regulators alike.

But Amarasingham says the company’s mission never shifted. “Our team is focused on building the tools to make life easier for physicians, nurses, and case managers who are carrying the weight of the health system every day,” he tells Fast Company.

That focus meant publishing method papers, sharing documentation with health systems, and creating processes that exposed when models struggled, drifted, or required recalibration. Those practices became foundational to the company’s next chapter.

Shekhar Natarajan, founder and CEO of Orchestro.ai and a longtime observer of healthcare regulation, sees this as part of a larger reckoning. Many AI companies, he says, relied on what he calls “emergent safety,” assuming ethical outcomes would arise naturally from good intentions and culture.

“That approach no longer holds,” Natarajan explains. Regulators now expect safety and accountability to be engineered into systems themselves, with reproducible reasoning, documented controls, and safeguards that hold up even when teams are stretched thin.

BUILDING TRUST

Trust in healthcare does not come from branding or inspiration. It comes from repeated proof that technology understands clinical work and behaves consistently under changing conditions. Clinicians want AI that respects the pace of the workday, adapts to the unpredictable rhythm of patient care, and reduces cognitive burden rather than adding to it. Above all, they want systems that behave predictably.

Pieces shaped its approach around these realities, building tools that work alongside clinicians rather than ahead of them and creating ways for care teams to question the system’s conclusions. It also designed its internal processes to document when the model was correct, struggled, drifted, or needed recalibration. For Amarasingham, that kind of thinking was essential to the company’s progress.

“Innovation, to us, had to serve the care team first. The goal was to reduce cognitive load rather than to add to it,” he says, a view that aligns with a growing consensus in healthcare AI research.

Independent clinicians point to the same issues when they describe what is holding healthcare AI back.

Dr. Ruth Kagwima, an internist at Catalyst Physician Group in Texas, says AI adoption stalls when tools disrupt already overloaded clinical workflows or fail to earn trust through clarity and validation.

“AI systems that succeed in hospitals are easy to understand, fit naturally into daily work, and show clear proof of safety and accuracy,” she says. “They have to protect patient data, respect clinical judgment, and improve care without adding friction.”

Dr. Patience Onuoha, an internist affiliated with multiple hospitals in Indiana, points to the practical constraints that still slow adoption at the bedside. “Data is often messy and siloed, and new tools can disrupt already busy clinical workflows,” she says. “There are also real concerns around safety, bias, legal risk, and trusting algorithms that are not easy to understand.”

Natarajan believes this will be the defining standard of the next decade. In his view, companies survive regulatory pressure when they transform their internal principles into systems that can be inspected. They build clear chains of accountability, create evidence trails that reveal where bias may appear, and show clinicians not only how a model works but also why it does.

IMPACT ON THE FUTURE

Healthcare AI is moving toward a world where oversight is a design requirement rather than an afterthought, especially with regulators demanding documentation that spans the full lifecycle of a system. They want performance data segmented across race, age, and medical conditions; assurances that the system cannot infer sensitive traits that patients never disclosed; and demonstrations of how quickly companies can detect and correct model drift.
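To make those expectations concrete, here is a minimal, hypothetical sketch in Python of two such checks: reporting accuracy segmented by patient group, and flagging drift when recent accuracy slips below a validated baseline. The data fields, function names, and threshold are illustrative assumptions, not any vendor’s actual implementation.

    # Hypothetical post-market monitoring sketch; not any vendor's actual code.
    # Assumes each model prediction is logged with a demographic segment and a
    # clinician-confirmed outcome.
    from dataclasses import dataclass

    @dataclass
    class LoggedPrediction:
        segment: str      # e.g. an age band or other reporting group
        predicted: bool   # model output
        actual: bool      # confirmed outcome

    def accuracy_by_segment(log: list[LoggedPrediction]) -> dict[str, float]:
        """Accuracy per segment, the kind of breakdown regulators now ask for."""
        correct: dict[str, int] = {}
        seen: dict[str, int] = {}
        for p in log:
            seen[p.segment] = seen.get(p.segment, 0) + 1
            correct[p.segment] = correct.get(p.segment, 0) + (p.predicted == p.actual)
        return {s: correct[s] / seen[s] for s in seen}

    def drift_alert(baseline: float, recent: list[LoggedPrediction],
                    tolerance: float = 0.05) -> bool:
        """Flag drift when recent accuracy falls more than tolerance below baseline."""
        if not recent:
            return False
        recent_accuracy = sum(p.predicted == p.actual for p in recent) / len(recent)
        return recent_accuracy < baseline - tolerance

    # Example: segmented reporting plus a drift check against a 0.92 validated baseline.
    log = [LoggedPrediction("18-40", True, True), LoggedPrediction("65+", True, False)]
    print(accuracy_by_segment(log))                # {'18-40': 1.0, '65+': 0.0}
    print(drift_alert(baseline=0.92, recent=log))  # True: 0.5 < 0.87

Even a simple check like this turns drift from an anecdote into an auditable event; a real deployment would draw its segments, baselines, and tolerances from the system’s validation plan.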

Some of this momentum comes from damage that has surfaced over time. For example, recent research reported by the Financial Times found that some AI medical tools tended to understate the symptoms of women and ethnic minority patients, potentially worsening disparities in care because the models weren’t trained or evaluated for fairness and transparency.

Companies that adapt to this new reality will shape the next generation of clinical AI. Pieces now operates within this landscape. As part of Smarter Technologies, it is working to bring its governance practices to a wider network of hospitals. That means integrating safety frameworks across larger datasets, more diverse populations, and broader distribution environments. It is difficult work, but also the kind of work that defines leadership in a field where the cost of failure is measured in human outcomes.

A NEW CHAPTER

Healthcare AI is entering a consequential phase of growth, where the safety of AI systems is far more important than headline-grabbing breakthroughs.

As hospitals sharpen their expectations for AI, Amarasingham believes the industry will need to adopt a different mindset. “In healthcare and AI, you’re not playing to win once and for all; you’re playing to keep playing, keep learning, and keep improving outcomes for patients,” he says.

The work, he adds, will never be finished, because the rules shift and the needs evolve. What matters is whether companies choose to design for that reality. In other words, AI in healthcare will advance only as fast as it earns trust. And that means healthcare AI vendors and buyers must now, more than ever, be committed to steady, transparent work that stands up under pressure.
