Clinical Trials and Informed Consent: An Emerging Role for Artificial Intelligence?

In a recent episode of HBO Max’s The Pitt, an emergency-room attending physician shows medical students the time savings that can come from using a generative artificial intelligence (AI) tool to listen to a doctor–patient conversation and populate a medical record in seconds. But the same storyline highlights the downside: the tool invents details, confuses medications, and even swaps “neurology” for “urology”—mistakes that could put a patient at risk. The scene is fictional, but the tension it captures is real: AI can improve how efficiently healthcare teams document, analyze, and act on information—yet the stakes in medicine leave little room for error.
A similar debate is taking shape in the clinical trial setting. Some industry estimates suggest that AI-enabled tools could reduce certain clinical development costs and compress timelines, particularly when they help automate repetitive, document-heavy work. Meanwhile, informed consent (the process of explaining a trial and documenting a participant’s agreement) remains one of the most operationally demanding parts of study start-up and conduct. For that reason, sponsors, contract research organizations (CROs), and vendors are actively exploring AI-supported methods to streamline how informed consent forms (ICFs) are drafted, localized, version-controlled, and delivered to participants. The opportunity is meaningful, but so is the risk: integrating AI into informed consent must not compromise participant safety, fairness, or compliance. For life science companies and the insurers that cover them, the stakes are considerable.
The considerations in this article are provided as general risk-management questions and are not a substitute for study-specific regulatory or legal guidance.
The Importance of Informed Consent in Clinical Trials
Informed consent is a cornerstone of ethical research. At its core, the process is designed to ensure that participants understand a study’s purpose and design, its reasonably foreseeable risks and potential benefits, available alternatives, and their right to withdraw. ICFs are the primary vehicle for communicating and documenting these disclosures.
Traditionally, developing and maintaining ICFs can be painstaking. Drafting, legal and medical review, readability edits, translation, country-specific localization, and ethics committee feedback often require multiple cycles. And each protocol amendment can trigger the need to revise and re-approve the ICF, requiring sponsors to revisit the basic elements of informed consent (information, comprehension, and voluntariness) and to document the updated process appropriately.
How AI Is Enhancing the Informed Consent Process
AI is already reshaping clinical operations beyond consent, from digital twins that simulate patient outcomes to machine-learning tools that support site selection and recruitment. Within informed consent specifically, AI can offer practical efficiencies when used with clear governance and human oversight.
- Drafting, localization, and content management: Using natural language processing (NLP) and structured content workflows, teams can create “master” ICF language and manage local variants more efficiently, taking into account jurisdictional requirements and patient literacy targets. Automated quality checks can flag inconsistencies, missing required elements, and formatting issues, reducing review cycles and version-control errors.
- Audit trails and change control: Well-designed AI-enabled document platforms can strengthen audit trails, maintain clean version histories, and help sponsors demonstrate disciplined change management across sites and regions.
- Participant engagement and comprehension support: AI-enabled interactive formats, including well-scoped chat-style tools, may help participants engage with trial information at their own pace, ask clarifying questions, and reinforce comprehension. Importantly, these tools should be positioned as supplements, not substitutes, for conversations with qualified study personnel.
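To make the "automated quality checks" idea above concrete, the sketch below flags draft ICF text that lacks phrases associated with a handful of consent elements. This is a minimal illustration under stated assumptions: the element names and keyword lists are hypothetical, not a regulatory checklist, and production tools would rely on far more robust NLP than simple phrase matching.

```python
# Illustrative sketch only: a toy quality check that flags ICF drafts
# missing language tied to common consent elements. The element-to-phrase
# mapping below is an assumption for demonstration, not a compliance list.

REQUIRED_ELEMENTS = {
    "purpose": ["purpose of the study", "purpose of this research"],
    "risks": ["foreseeable risks", "risks and discomforts"],
    "benefits": ["potential benefits", "benefits to you"],
    "alternatives": ["alternative procedures", "alternative treatments"],
    "voluntary withdrawal": ["withdraw at any time", "participation is voluntary"],
}

def flag_missing_elements(icf_text: str) -> list[str]:
    """Return the consent elements with no matching phrase in the draft."""
    text = icf_text.lower()
    return [
        element
        for element, phrases in REQUIRED_ELEMENTS.items()
        if not any(phrase in text for phrase in phrases)
    ]

# Hypothetical draft excerpt: mentions purpose, risks, and withdrawal,
# but omits benefits and alternatives.
draft = (
    "The purpose of this research is to evaluate a new therapy. "
    "Foreseeable risks include headache and nausea. "
    "Participation is voluntary and you may withdraw at any time."
)
print(flag_missing_elements(draft))  # ['benefits', 'alternatives']
```

In practice, a flag like this would route the draft back to human reviewers rather than trigger any automated correction, consistent with the human-oversight theme throughout this article.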
As former FDA Commissioner Robert Califf has argued in his writing on modernizing consent approaches, digital tools may help free site personnel to focus on higher-value interactions, such as discussing uncertainties and helping participants digest key information in a more individualized way.
Challenges and Risks
AI-assisted informed consent introduces several practical risks that sponsors should address up front:
- Erroneous or misleading output (including “hallucinations”) that inserts incorrect study procedures, risks, or eligibility requirements.
- Oversimplification that improves readability but removes material information needed for a valid decision.
- Bias and uneven performance across languages, literacy levels, or cultural contexts.
- Inappropriate “persuasion” dynamics if an interactive tool nudges participants toward enrollment rather than supporting an informed, voluntary choice.
- Documentation and accountability gaps if teams cannot show what content participants actually saw, asked, and understood.
Data Privacy Deserves Particular Attention
Clinical trial sponsors and sites are already attractive targets because trial data and protected health information (PHI) move across multiple parties and systems, expanding the attack surface. Add AI tools and systems to the data ecosystem, and the target becomes that much more complex and difficult to defend.
AI tools vary widely in how they handle data. Some products store prompts, transcripts, or user interactions by default; others may use customer data to improve models unless the vendor contract and technical settings prohibit it. In the clinical research context, these design choices can create friction with HIPAA obligations, state privacy requirements, institutional policies, and sponsor commitments made in protocols and privacy notices.
Just as importantly, the regulatory framework places responsibility on sponsors to ensure proper oversight of trial conduct, including the systems and vendors used to support it, with the ability to transfer certain obligations to a CRO by written agreement. Sponsors using third-party AI tools for informed consent may want to understand and document (at minimum) how the tool handles data flows, ingestion, storage, retention, and permitted use, and ensure the approach is consistent with applicable privacy and human subject protection requirements.
An Evolving Regulatory and Legal Landscape
The regulatory framework governing AI in clinical research is developing rapidly. The Food and Drug Administration (FDA) and the Office for Human Research Protections (OHRP) issued guidance in December 2016 on electronic informed consent, emphasizing that electronic consent processes must satisfy applicable informed consent and electronic records requirements.
In January 2025, the FDA issued draft guidance specifically addressing AI tools and systems (Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products), introducing a risk-based credibility assessment framework. The draft guidance emphasizes defining an AI model’s “context of use,” assessing risk, and establishing credibility through validation and appropriate governance. (As draft guidance, it is nonbinding, but it is an important signal of FDA expectations.)
The FDA has also shown a willingness to address software-related trial risks when safeguards are inadequate. In August 2024, the agency issued a warning letter to a clinical investigator following a serious dosing error involving an electronic dispensing algorithm that lacked safety guards, resulting in a 15-year-old subject receiving approximately ten times the protocol’s maximum daily dose for a period of time and being exposed to increased risk of serious adverse events.
The legal landscape is evolving in parallel. A recent npj Digital Medicine perspective reviewing nine court decisions involving informed consent and novel medical technologies found that courts can impose heightened disclosure obligations when experimental or transition-phase technologies are involved. The authors’ core takeaway for emerging tools is intuitive: when a new technology introduces material known or unknown risks, the consent process may need to be more explicit about the technology’s role, limitations, and uncertainties.
The Path Forward
As AI becomes more embedded in clinical trial workflows, sponsors and investigators should pressure-test their informed consent processes with practical questions such as:
- Accuracy and completeness: Is there a robust process to verify that AI-assisted ICF language captures all material study risks, benefits, and procedures?
- Bias and accessibility: What steps mitigate bias and support diverse linguistic, cultural, and literacy needs?
- Transparency: Will participants be told when AI tools were used (and for what), and will they have ready access to human study personnel for clarifying questions?
- Governance and accountability: Who is responsible for final review and sign-off, and how are discrepancies identified and corrected?
- Security and privacy: Are HIPAA, state privacy requirements, and vendor data-retention practices properly addressed and contractually controlled?
If these risks are addressed thoughtfully, AI can help sponsors enroll participants more efficiently while strengthening consistency and documentation. But AI should not replace the interpersonal interactions that maintain participant trust, particularly for populations that have historically been underrepresented in research.
Insurance Considerations
The liability implications of AI-assisted informed consent are real and multifaceted. Errors—whether inaccurate disclosures, biased content, oversimplified risk descriptions, or privacy/security missteps—can lead to regulatory scrutiny, disputes about consent validity, litigation, and reputational harm. As legal standards continue to develop, sponsors, CROs, and trial sites may face allegations sounding in regulatory noncompliance, negligence, or failure to obtain legally adequate consent.
Life science companies should work with their insurance advisors to evaluate whether their liability programs (including clinical trial liability, products liability, and professional liability coverages) appropriately contemplate AI-related operational risk. As AI’s role in clinical trials expands, aligning insurance protections with this evolving risk landscape will become increasingly important.
Viewed together, the operational, regulatory, and insurance considerations outlined in this article point to a single theme: AI can add efficiency, but it does not diminish accountability. In the scene from The Pitt described at the outset, the chart is only as reliable as the clinician who reviews it. In clinical trials, consent is only as defensible as the processes used to generate, explain, and document it. The path forward in a world with AI tools and systems is not “AI or humans,” but AI with guardrails that preserve participant understanding, privacy, and trust.
Authored by Phillip Skaggs, Berkley Life Sciences, Vice President & Chief Legal & Regulatory Affairs Officer
This post is for general informational purposes only and is not intended as legal or other professional advice.