Experts Call for Major Reforms to Make AI in Healthcare Safer and Fairer

While AI tools are reshaping diagnosis, patient care, and hospital operations, experts warn that real progress hinges on better evaluation, regulation, and collaboration across every stage of AI’s life cycle.

Study: AI, Health, and Health Care Today and Tomorrow. Image Credit: witsarut sakorn/Shutterstock.com

In a paper published in the Journal of the American Medical Association (JAMA), researchers discussed the opportunities and risks of artificial intelligence (AI) in healthcare. They argued that while AI has massive potential, a proper ecosystem is needed to ensure it improves health outcomes. This requires progress in four areas: stakeholder engagement, better evaluation methods, a national data infrastructure, and incentives for robust, equitable implementation.

If developed responsibly, AI’s disruption of healthcare could represent an incredible opportunity to address long-standing issues in access, cost, and quality of care.

Background

AI is rapidly becoming part of everyday healthcare, but its integration has reached a crucial crossroads. Despite the technology’s promise, many early efforts have been scattered, often zeroing in on technical accuracy or isolated safety checks, rather than whether these tools actually improve patient outcomes.

What’s missing is a strong, coordinated system to make sure AI in healthcare is effective, fair, and used responsibly.

Right now, regulatory systems haven’t kept pace. Evaluation methods are often too clunky for real-world use, and oversight is uneven. This means a lot of AI tools are being adopted without solid proof that they actually help patients. The paper points out that several US agencies, like the FDA (Food and Drug Administration), FTC (Federal Trade Commission), CMS (Centers for Medicare & Medicaid Services), and ONC (Office of the National Coordinator for Health Information Technology), each have a hand in regulating AI, but none has full authority, leaving important gaps.

To move forward, the authors lay out a practical four-pillar roadmap to build the infrastructure needed for responsible AI use in healthcare. The goal isn’t just to adopt new tools; it’s to make sure they truly benefit patients in meaningful, measurable ways.

AI Tools in Health and Health Care

AI tools are becoming more common across different areas of healthcare, but each type comes with its own set of challenges. Clinical tools like medical imaging AI and EHR alerts are widely used, yet their adoption is often slowed by high costs and lingering concerns about accuracy and real-world value.

Direct-to-consumer (DTC) tools, such as health apps and chatbots, make up a huge and fast-growing market. However, many of these tools operate with little oversight and lack solid evidence of actual health benefits. On the operational side, AI tools are being used to streamline tasks like scheduling and resource management, but their impact on patient outcomes is rarely tracked.

Then there are hybrid tools, like AI scribes, which serve both clinical and administrative functions. These are gaining traction quickly and, according to the study, are especially well received by both clinicians and patients - particularly when integrated into existing EHR systems.

Each category of tool brings its own complexities when it comes to implementation, evaluation, and regulation - highlighting the need for more targeted oversight and clearer standards.

Evaluation, Regulation, and Responsible Use of AI

A key debate in the AI-in-healthcare space centers on which tools actually require evaluation. Clinical tools typically face more scrutiny, especially those that need FDA clearance, but many DTC and business operations tools are rolled out with little to no evidence of their impact on health outcomes.

Evaluating AI tools is inherently complex. First, defining the intervention itself can be tricky. The performance of an AI tool isn’t just about the algorithm - it also depends on how users interact with it, how well they’re trained, and the specific clinical environment where it’s used.

On top of that, tracking how a tool affects long-term health outcomes takes time, money, and sustained effort. While randomized controlled trials (RCTs) are the gold standard for proving causality, they’re often impractical given how fast AI tools evolve. This has led to growing interest in newer trial designs and real-world observational methods that can keep pace with rapid development.

One major issue is that no single group is clearly responsible for conducting these evaluations. Developers, healthcare systems, and government bodies all have limited incentives, and sometimes limited capacity, to carry out the rigorous testing needed to ensure safety and effectiveness.

Regulatory oversight is also patchy. The FDA covers AI tools classified as medical devices, but many others, especially those used for administrative support or general wellness, are exempt. This is due in part to the 21st Century Cures Act, which excludes certain types of software (like those for billing, scheduling, or fitness tracking) from the definition of a medical device. Other agencies, such as the FTC, focus on narrower issues like misleading marketing claims, leaving many areas of AI use in healthcare without meaningful oversight.

Beyond evaluation, responsible implementation and monitoring pose ongoing challenges. An AI tool’s success often hinges on local conditions - how it’s integrated, who uses it, and how well they understand it. Most healthcare organizations simply don’t have the infrastructure, technical expertise, or bandwidth to validate AI tools in their own settings or to monitor them over time. This lack of local testing and follow-up creates real risks when deploying AI in patient care.

Potential Solutions and Implications

To address the challenges AI has brought front and center, the authors proposed four solutions. First, a shift to total product life cycle management is needed, requiring continuous collaboration between developers, clinicians, patients, health systems, and regulators from design through deployment. Second, new measurement and evaluation tools must be developed to enable fast, efficient, and robust evaluations of AI's real-world effectiveness on health outcomes, not just safety.

Third, a proper data infrastructure and learning environment should be built. This would facilitate the large-scale, representative data sharing necessary to generalize findings about an AI tool's performance across diverse settings. Finally, creating the right incentive structure through policy and market forces is crucial to drive this change, potentially modeled on past initiatives like the Health Information Technology for Economic and Clinical Health (HITECH) Act, which successfully spurred EHR adoption.

The authors also suggested that successful implementation will require significant workforce education to improve AI literacy and adapt to shifts in professional roles and responsibilities, as well as the resolution of complex ethical and legal issues surrounding data rights and liability. They also stressed the importance of ensuring equitable access to AI tools so that technological progress does not widen existing healthcare disparities.

Conclusion

In conclusion, unlocking the full potential of AI in healthcare calls for an entirely new support system.

Today’s challenges, including fragmented evaluation processes, regulatory blind spots, and implementation hurdles, all underscore the need for a coordinated, long-term approach. The path forward depends on sustained collaboration among stakeholders throughout an AI tool’s life cycle, the development of practical methods for assessing real-world effectiveness, investment in a national data infrastructure, and the creation of financial and policy incentives to support responsible use.

The authors also pointed to ongoing ethical questions that remain unresolved and can impact oversight and accountability. Without clear frameworks and aligned efforts, there’s a risk that AI’s expansion could deepen existing inequities rather than improve outcomes across the board.

Still, the researchers remain optimistic.

With thoughtful governance, strong partnerships, and robust data systems in place, AI’s growing role in healthcare has the potential to drive meaningful improvements in access, efficiency, and patient outcomes.

Journal Reference

Angus et al. (2025). AI, Health, and Health Care Today and Tomorrow. JAMA. DOI:10.1001/jama.2025.18490. https://jamanetwork.com/journals/jama/fullarticle/2840175

Citations

Please use the following format to cite this article in your essay, paper or report:

  • APA

    Nandi, Soham. (2025, October 24). Experts Call for Major Reforms to Make AI in Healthcare Safer and Fairer. AZoRobotics. Retrieved on October 24, 2025 from https://www.azorobotics.com/News.aspx?newsID=16219.
