The figure illustrates the four key components of the benchmark: (1) task synthesis through clinical needs induction and task distillation from prior research; (2) taxonomy construction based on clinical scenarios and reasoning levels; (3) task-specific sample extraction from real and synthetic EHR data; and (4) the model evaluation pipeline, including table input, format conversion, model inference, and answer evaluation.
Abstract
Structured Electronic Health Record (EHR) data stores patient information in relational tables and plays a central role in clinical decision-making. Recent advances have explored the use of large language models (LLMs) to process such data, showing promise across various clinical tasks. However, the absence of standardized evaluation frameworks and clearly defined tasks makes it difficult to systematically assess and compare LLM performance on structured EHR data. To address these evaluation challenges, we introduce EHRStruct, a benchmark specifically designed to evaluate LLMs on structured EHR tasks. EHRStruct defines 11 representative tasks spanning diverse clinical needs and includes 2,200 task-specific evaluation samples derived from two widely used EHR datasets. We use EHRStruct to evaluate 20 advanced and representative LLMs, covering both general and medical models. We further analyze key factors influencing model performance, including input formats, few-shot generalization, and fine-tuning strategies, and compare results with 11 state-of-the-art LLM-based enhancement methods for structured data reasoning. Our results indicate that many structured EHR tasks place high demands on the understanding and reasoning capabilities of LLMs. In response, we propose EHRMaster, a code-augmented method that achieves state-of-the-art performance and offers practical insights to guide future research.
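The abstract describes EHRMaster only at a high level. As a rough illustration of what a code-augmented approach can look like, the sketch below prompts an LLM to emit a single pandas expression over the structured table and executes it to obtain the answer. The `query_llm` helper, the prompt wording, and the use of `eval` are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a code-augmented answering loop (not the authors' EHRMaster code).
# Assumption: the LLM is asked to emit a single pandas expression over a DataFrame `df`.
import pandas as pd

def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call; returns the model's text output."""
    raise NotImplementedError("plug in your LLM client here")

def answer_with_code(df: pd.DataFrame, question: str) -> str:
    schema = ", ".join(f"{col} ({dtype})" for col, dtype in df.dtypes.astype(str).items())
    prompt = (
        "You are given a pandas DataFrame `df` with columns: "
        f"{schema}.\nWrite one pandas expression that answers: {question}\n"
        "Return only the expression."
    )
    expr = query_llm(prompt).strip().strip("`")
    result = eval(expr, {"df": df, "pd": pd})  # sandboxing omitted for brevity
    return str(result)
```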
Task Description
| Task Scenarios | Task Levels | Task Categories | Task IDs | Metrics |
|---|---|---|---|---|
| Data-Driven | Understanding | Information retrieval | D-U1 / D-U2 | Accuracy |
| Data-Driven | Reasoning | Data aggregation | D-R1 / D-R2 / D-R3 | Accuracy |
| Data-Driven | Reasoning | Arithmetic computation | D-R4 / D-R5 | Accuracy |
| Knowledge-Driven | Understanding | Clinical identification | K-U1 | AUC |
| Knowledge-Driven | Reasoning | Diagnostic assessment | K-R1 / K-R2 | AUC |
| Knowledge-Driven | Reasoning | Treatment planning | K-R3 | AUC |
Data-Driven tasks include: D-U1/D-U2 for data filtering based on field conditions; D-R1/D-R2/D-R3 for value aggregation such as count, average, and sum; and D-R4/D-R5 for arithmetic reasoning over numeric field trends.
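For concreteness, the snippet below shows how answers to such Data-Driven items can be computed directly from a table. The `admissions` schema and values are hypothetical and only meant to mirror the filtering (D-U1/D-U2), aggregation (D-R1/D-R2/D-R3), and trend-style arithmetic (D-R4/D-R5) operations described above.

```python
# Hypothetical structured-EHR table; column names and values are illustrative only.
import pandas as pd

admissions = pd.DataFrame({
    "patient_id":   [1, 1, 2, 3],
    "admit_year":   [2019, 2021, 2020, 2021],
    "los_days":     [3, 7, 5, 2],            # length of stay
    "glucose_mgdl": [110, 145, 98, 160],
})

# D-U-style filtering: which patients were admitted in 2021?
filtered = admissions.loc[admissions["admit_year"] == 2021, "patient_id"].tolist()

# D-R-style aggregation: count, average, and sum over a numeric field.
n_admissions = len(admissions)
avg_los = admissions["los_days"].mean()
total_los = admissions["los_days"].sum()

# D-R4/D-R5-style arithmetic over a trend: change in glucose for patient 1.
p1 = admissions[admissions["patient_id"] == 1].sort_values("admit_year")
glucose_delta = p1["glucose_mgdl"].iloc[-1] - p1["glucose_mgdl"].iloc[0]
```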
Knowledge-Driven tasks include: K-U1 for clinical code identification; K-R1 for mortality prediction; K-R2 for disease prediction based on clinical profiles; and K-R3 for personalized medication recommendation.
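Knowledge-Driven tasks are scored with AUC (see the table above), which requires a probability or graded score per sample rather than an exact answer string. A minimal scoring sketch, assuming model outputs for a task such as K-R1 (mortality prediction) have already been parsed into risk scores in [0, 1]:

```python
# Minimal AUC scoring sketch for a Knowledge-Driven task such as K-R1.
# Assumption: model outputs have been parsed into per-patient risk scores in [0, 1].
from sklearn.metrics import roc_auc_score

y_true  = [0, 1, 1, 0, 1]             # gold labels (1 = died)
y_score = [0.2, 0.8, 0.6, 0.3, 0.9]   # parsed model scores

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.3f}")
```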
Evaluated LLMs
General Large Language Models
EHRStruct Leaderboard
Performance of LLMs on Structured EHR Tasks under the zero-shot setting (Synthea). ✖ indicates no valid output, ✔ indicates a perfect score of 100. 1st, 2nd, and 3rd denote the best, second-best, and third-best results, respectively.
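As a rough illustration of how the ✖ convention can arise in the answer-evaluation step, the helper below normalizes a model response before exact-match accuracy scoring and flags responses with no parseable answer. The numeric-only normalization rule is an assumption for illustration, not the benchmark's exact parser.

```python
# Illustrative answer-checking helper for Accuracy-scored tasks (not the benchmark's exact parser).
import re

def score_response(response: str, gold: str):
    """Return (is_correct, is_valid); is_valid=False corresponds to the ✖ case."""
    match = re.search(r"-?\d+(?:\.\d+)?", response)  # assume numeric answers for D-* tasks
    if match is None:
        return False, False                          # no valid output
    pred = float(match.group())
    return abs(pred - float(gold)) < 1e-6, True

print(score_response("The average length of stay is 4.25 days.", "4.25"))  # (True, True)
print(score_response("I cannot answer from this table.", "4.25"))          # (False, False)
```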
Key Findings
- General LLMs Outperform Medical LLMs: General LLMs consistently outperform medical models on structured EHR tasks. Closed-source commercial models—especially the Gemini series—achieve the strongest overall performance.
- LLMs Excel at Data-Driven Tasks: LLMs perform better on Data-Driven tasks than on Knowledge-Driven ones.
- Input Format Influences Performance: Natural language inputs benefit Data-Driven reasoning tasks, while graph-structured prompts help Data-Driven understanding. No input format consistently improves Knowledge-Driven tasks (see the serialization sketch after this list).
- Few-shot Improves Performance: Few-shot prompting generally enhances performance, with 1-shot and 3-shot settings typically outperforming 5-shot.
- Multi-task Fine-tuning Outperforms Single-task Fine-tuning: Both strategies boost LLM performance, but multi-task fine-tuning yields more significant improvements.
- Enhancement Methods Are Scenario-Specific: Non-medical enhancement methods underperform in Knowledge-Driven settings, while medical-specific methods struggle in Data-Driven scenarios.
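As referenced in the input-format finding above, the same record can be serialized in several ways before being passed to an LLM. The record and phrasing below are hypothetical; they only illustrate the kinds of table, natural-language, and graph-style conversions compared in the paper.

```python
# Hypothetical record; the three serializations below illustrate different input formats.
record = {"patient_id": 42, "age": 67, "diagnosis": "heart failure", "los_days": 5}

# 1) Markdown-style table
markdown = ("| " + " | ".join(record.keys()) + " |\n"
            "| " + " | ".join(str(v) for v in record.values()) + " |")

# 2) Natural-language sentence
natural = (f"Patient {record['patient_id']} is {record['age']} years old, "
           f"was diagnosed with {record['diagnosis']}, and stayed {record['los_days']} days.")

# 3) Graph-style (subject, predicate, object) triples
triples = [(f"patient_{record['patient_id']}", key, value)
           for key, value in record.items() if key != "patient_id"]
```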
BibTeX
@article{yang2025ehrstruct,
  title={EHRStruct: A Comprehensive Benchmark Framework for Evaluating Large Language Models on Structured Electronic Health Record Tasks},
  author={Yang, Xiao and Zhao, Xuejiao and Shen, Zhiqi},
  journal={arXiv preprint arXiv:2511.08206},
  year={2025}
}