Artificial intelligence in health technology assessment: Moving from hype to responsible integration
Artificial intelligence (AI) is rapidly reshaping healthcare research, and health economics and outcomes research (HEOR) is no exception. From systematic literature reviews to economic modeling, AI-enabled tools are increasingly being explored to improve efficiency and expand analytical capabilities.
A recent symposium hosted with University College London (UCL) brought together experts from academia, industry and the National Institute for Health and Care Excellence (NICE). Their insights suggest that the future of AI in health technology assessment (HTA) will depend less on automation and more on how effectively these tools are integrated into established methodological frameworks.
For the HEOR community, the implication is clear: AI is not a shortcut around HTA principles; rather, it must operate within them.
AI in HTA: a question of fit, not capability
Despite rapid advances in large language models and automation tools, HTA remains a challenging environment for AI deployment. The issue is not a lack of capability, but a lack of alignment between AI outputs and HTA requirements.
HTA is inherently complex and does not follow a straightforward path. Activities such as evidence synthesis, economic model development and reporting are deeply interconnected and involve multiple sequential layers of judgment. These processes are not easily reduced to discrete automatable tasks without risking loss of context, transparency and interpretability.
Experts at the symposium emphasized that the primary bottleneck in HTA is not technology, but expert capacity. AI may help address this constraint, but only if applied in ways that respect the structure and rigor of HTA workflows.
In addition, AI systems are typically optimized to produce singular, confident outputs, whereas HTA requires explicit characterization of uncertainty, including the exploration of alternative assumptions and edge cases. This fundamental mismatch underscores the need for careful design of AI-human interaction to facilitate human oversight.
Defining the role of AI: augmentation over automation
A consistent theme across discussions was that AI is most valuable as an assistive tool rather than a decision-maker.
Current high-value applications include:
- Supporting systematic literature reviews through de-duplication, screening prioritization and data extraction.
- Assisting with technical tasks such as code generation, model structuring and scenario analysis.
- Drafting documentation and enabling faster iteration of analyses.
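As an illustration of the first use case, reference de-duplication can be reduced to matching records on a normalized key. The sketch below is a minimal, hypothetical version (record fields and the title/year key are illustrative assumptions, not any specific SLR tool's logic):

```python
import re

def normalize_title(title: str) -> str:
    """Lowercase, strip punctuation and collapse whitespace for comparison."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first record seen for each (normalized title, year) key."""
    seen: set[tuple[str, int]] = set()
    unique = []
    for rec in records:
        key = (normalize_title(rec["title"]), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

In practice, production de-duplication also compares DOIs, authors and fuzzy title similarity; the point here is only that the task is well-bounded and its output is easy for a human to audit.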
These use cases can improve efficiency and allow researchers to focus on higher-value activities. However, they also introduce risks, particularly if AI outputs are accepted without appropriate validation.
Crucially, tasks requiring interpretation, contextualization and value judgment (such as defining decision problems, assessing bias and interpreting uncertainty) must remain human-led.
This has led to a shift from “human-in-the-loop” systems to a “human-at-the-helm” model, where AI supports workflows, but accountability remains with the analyst.
Implications for HTA submissions and transparency
NICE’s evolving approach to AI provides an important signal for HEOR practitioners. Rather than adapting HTA standards to accommodate AI, NICE emphasizes that AI must be used transparently and in line with existing principles of rigor and independence.
This includes growing expectations for:
- Clear disclosure of whether and how AI was used in evidence generation and analysis.
- Documentation of associated risks, including bias and uncertainty.
- Explicit description of human oversight and validation processes.
For HEOR professionals, this means that AI-enabled analyses must be both methodologically robust and clearly explainable to reviewers and decision-makers. Transparency is becoming a core requirement, not an optional consideration.
Opportunities and practical considerations toward “living HTA”
One of the most promising future applications of AI in HTA is the development of “living HTA,” a model in which evidence synthesis and economic evaluation are continuously updated as new data emerge.
AI may support this transition by:
- Automating evidence identification and updates to systematic reviews.
- Enabling dynamic economic models that evolve over time.
- Supporting more granular analyses of subpopulations and scenarios.
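A dynamic economic model in this sense can be as simple as an evaluation that is re-run whenever its inputs are revised. The sketch below uses a bare incremental cost-effectiveness ratio (ICER) with hypothetical parameter names and values, purely to show the re-evaluation pattern, not any specific HTA model:

```python
def icer(params: dict) -> float:
    """Incremental cost-effectiveness ratio: incremental cost per QALY gained."""
    d_cost = params["cost_new"] - params["cost_old"]
    d_qaly = params["qaly_new"] - params["qaly_old"]
    return d_cost / d_qaly

# Baseline evaluation with illustrative (hypothetical) inputs.
params = {"cost_new": 52_000.0, "cost_old": 40_000.0,
          "qaly_new": 6.2, "qaly_old": 5.8}
baseline = icer(params)  # 12_000 / 0.4 = 30_000 per QALY

# "Living" update: new trial data revise effectiveness; re-run the model.
params["qaly_new"] = 6.4
updated = icer(params)   # 12_000 / 0.6 = 20_000 per QALY
```

The value of the living approach lies in automating the trigger for such re-runs, with version control recording which evidence update produced which result.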
For the HEOR community, this represents an opportunity to enhance both the timeliness and relevance of HTA outputs. However, it also introduces practical challenges.
Implementing living HTA will require:
- Interoperable systems that integrate multiple tools and data sources.
- Agreed standards for validation, version control and reporting.
- Collaboration across stakeholders to ensure methodological consistency.
Importantly, the value of AI in this context extends beyond efficiency. By enabling more comprehensive exploration of uncertainty, AI has the potential to improve the quality of HTA decision-making.
Managing risk by maintaining methodological rigor
While AI offers clear opportunities, its use in HTA also introduces new risks that must be actively managed.
A key concern is that AI-related errors are often not random. Instead, they tend to occur in areas most critical to decision-making, such as complex assumptions or edge cases. These errors may be difficult to detect and can create significant downstream consequences.
Additional challenges include:
- Limited standardization in model validation and performance assessment.
- Sparse evidence on the actual impact of AI on efficiency and quality.
- Inconsistent reporting of AI methods.
- Legal and copyright considerations related to the use of published literature.
Addressing these issues will require the development of best practices that integrate principles from both HTA and machine learning, ensuring that innovation does not come at the expense of rigor.
Designing AI-enabled workflows for HEOR practice
For HEOR practitioners, the focus should be on designing workflows that integrate AI while maintaining transparency and control.
Emerging best practices include:
- Structuring tasks into smaller, reviewable components.
- Clearly documenting where and how AI is used.
- Ensuring that assumptions, limitations and uncertainties are explicitly communicated.
- Training analysts to critically evaluate AI-generated outputs.
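One way to make the documentation practices above concrete is a structured usage log kept alongside the analysis. The schema below is an illustrative assumption on our part, not a NICE-mandated format:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AIUsageRecord:
    """One documented point of AI assistance in an analysis workflow."""
    task: str                 # e.g. "abstract screening prioritization"
    tool: str                 # model or software used
    human_validation: str     # how the output was checked
    limitations: list[str] = field(default_factory=list)

log: list[AIUsageRecord] = []
log.append(AIUsageRecord(
    task="data extraction from included trials",
    tool="large language model (hypothetical)",
    human_validation="dual independent review of all extracted fields",
    limitations=["possible omission of outcomes reported only in figures"],
))

# Export the log for inclusion in a submission appendix.
disclosure = [asdict(r) for r in log]
```

Keeping such a record task by task makes disclosure, validation status and known limitations reviewable in one place rather than reconstructed after the fact.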
In this evolving landscape, the analyst's role shifts toward validation, interpretation and judgment, a change that reinforces rather than diminishes the importance of domain expertise.
AI offers a pragmatic path forward
AI has the potential to enhance HTA by improving efficiency, enabling more comprehensive analyses and supporting new approaches such as living HTA. However, its successful integration depends on maintaining the core principles of HEOR: rigor, transparency and accountability.
For HEOR professionals, the path forward is not about replacing expertise but augmenting it. AI is a tool that supports better decision-making, provided it is applied thoughtfully, transparently and within established methodological frameworks.
Learn more about the ways our team is using human expertise to harness advanced technology in HEOR