
🌐🔍DeepScholar-Bench: A Live Benchmark for Generative Research Synthesis

📊 Dataset | 📄 Paper | 🏆 Live Leaderboard | 🤖 DeepResearch Preview


DeepScholar-Bench provides a live benchmark dataset and holistic evaluation of generative research synthesis, an emerging capability among AI systems designed for DeepResearch.

This repository provides:

  1. Dataset Scripts - for collecting new datasets of recent, high-quality arXiv papers with our automated data-collection pipeline. You can customize your dataset through your own configuration (e.g., the valid date range and the valid arXiv domains).
  2. An Evaluation Suite - for measuring the performance of long-form research synthesis answers. Our evaluation framework supports a holistic set of metrics, which demonstrate high agreement with human annotations. The eval suite is built on the LOTUS framework for LLM-based data processing, which you can also use directly to instantiate your own custom LLM judges (see the sketch after this list).
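
Because the eval suite builds on LOTUS, you can prototype a custom judge directly with its semantic operators over a pandas DataFrame. The snippet below is a minimal sketch of that idea; the model name, the toy data, and the rubric prompt are illustrative assumptions, not the benchmark's official judge prompts.

```python
import pandas as pd
import lotus
from lotus.models import LM

# Configure the LLM that acts as the judge (model name is illustrative).
lm = LM(model="gpt-4o")
lotus.settings.configure(lm=lm)

# Toy DataFrame of generated related-works drafts to judge.
df = pd.DataFrame({
    "answer": [
        "Prior work on retrieval-augmented generation has explored ...",
        "Several recent papers study long-form research synthesis ...",
    ]
})

# sem_map applies a natural-language instruction to each row; {answer}
# references the column. The rubric below is a placeholder, not the
# benchmark's organization prompt.
judged = df.sem_map(
    "Rate the organization and coherence of this related-works draft "
    "on a 1-5 scale and briefly justify the score: {answer}"
)
print(judged)
```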

If you run into any problems with the code in this repo, the leaderboard, or the dataset, please feel free to raise an issue and we will address it promptly. If you would like to add your AI system to the DeepScholar-Bench leaderboard, please fill out this form.

🚀 Quick Start

To get started, make sure you are using Python 3.10, then clone the repository and install the dependencies as follows:

```bash
# Clone the repository
git clone git@github.com:guestrin-lab/deepscholar-bench.git
cd deepscholar-bench

# Install dependencies
conda create -n dsbench python=3.10 -y
conda activate dsbench
pip install -r requirements.txt
```

Basic Usage

1. Collect Research Data

```bash
# Collect recent AI papers since May 1, 2025
python -m data_pipeline.main \
    --categories cs.AI \
    --start-date 2025-05-01
```

2. Evaluate Research Generation Systems

```bash
# Evaluate the system answers generated by deepscholar_base_gpt_4.1, using gpt-4o
# as the judge model, on the organization, nugget coverage, reference coverage,
# and citation precision metrics
python -m eval.main \
    --modes deepscholar_base \
    --evals organization nugget_coverage reference_coverage cite_p \
    --input_folder tests/baselines_results/deepscholar_base_gpt_4.1 \
    --output_folder results \
    --dataset_path dataset/related_works_combined.csv \
    --model_name gpt-4o
```
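
Once the run finishes, you can inspect the scores written to the output folder. The snippet below is a hypothetical post-processing example: the exact file layout and column names produced by eval.main are not specified here, so the glob pattern and the numeric-column assumption are illustrative only.

```python
import glob
import pandas as pd

# Load every CSV the evaluation run wrote to the output folder
# (the results/*.csv pattern is an assumption about the layout).
frames = [pd.read_csv(path) for path in glob.glob("results/*.csv")]
scores = pd.concat(frames, ignore_index=True)

# Report the mean of each numeric metric column across examples.
print(scores.mean(numeric_only=True))
```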

For more details and a full introduction, please continue to our Dataset Scripts Description and/or our Evaluation library Description.

🤝 Contributing

We welcome contributions to DeepScholar-Bench! Please feel free to submit a PR for code contributions. If you would like to add your AI system to the DeepScholar-Bench leaderboard, please fill out this form.

Citation

If you use DeepScholar-Bench in academic work, we would greatly appreciate it if you cite it as follows:

```bibtex
@article{patel2025deepscholarbench,
  title={DeepScholar-Bench: A Live Benchmark and Automated Evaluation for Generative Research Synthesis},
  author={Liana Patel and Negar Arabzadeh and Harshit Gupta and Ankita Sundar and Ion Stoica and Matei Zaharia and Carlos Guestrin},
  year={2025},
  url={https://arxiv.org/abs/2508.20033},
}
```
