PaperCoder is a multi-agent LLM system that transforms a scientific paper into a code repository. It follows a three-stage pipeline (planning, analysis, and code generation), with each stage handled by specialized agents.
Our method outperforms strong baselines on both the Paper2Code and PaperBench benchmarks and produces faithful, high-quality implementations.
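The three-stage pipeline can be sketched as follows. This is an illustrative outline, not the actual PaperCoder implementation: the function names and prompts are invented, and the LLM backend is stubbed as any `str -> str` callable.

```python
from typing import Callable

# Stub standing in for an LLM backend (OpenAI API or a vLLM-served model).
LLM = Callable[[str], str]

def plan(paper: str, llm: LLM) -> str:
    # Planning agent: draft a high-level implementation roadmap.
    return llm(f"Draft an implementation plan for this paper:\n{paper}")

def analyze(paper: str, plan_text: str, llm: LLM) -> str:
    # Analysis agent: turn the plan into file-level specifications.
    return llm(f"Given the plan:\n{plan_text}\nspecify each file to implement.")

def generate(spec: str, llm: LLM) -> dict[str, str]:
    # Coding agent: emit code for each specified file.
    return {"main.py": llm(f"Write the code for:\n{spec}")}

def paper_to_repo(paper: str, llm: LLM) -> dict[str, str]:
    # Chain the three stages: planning -> analysis -> code generation.
    return generate(analyze(paper, plan(paper, llm), llm), llm)
```

With a real backend, `llm` would wrap an API call; for testing, any function mapping a prompt string to a response string works.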
- Quick Start
- Detailed Setup Instructions
- Paper2Code Benchmark Datasets
- Model-based Evaluation of Repositories
- Note: The following command runs the example paper ("Attention Is All You Need").
- Estimated cost with o3-mini: $0.50–$0.70

```bash
pip install openai
export OPENAI_API_KEY="<OPENAI_API_KEY>"
cd scripts
bash run.sh
```
- If you encounter any issues installing vLLM, please refer to the official vLLM repository.
- The default model is `deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct`.

```bash
pip install vllm
cd scripts
bash run_llm.sh
```
```
outputs
├── Transformer
│   ├── analyzing_artifacts
│   ├── coding_artifacts
│   └── planning_artifacts
└── Transformer_repo   # Final output repository
```
- To use the `o3-mini` version, make sure you have the latest `openai` package installed.
- Install only what you need:
  - For the OpenAI API: `openai`
  - For open-source models: `vllm`
- If you encounter any issues installing vLLM, please refer to the official vLLM repository.

```bash
# For OpenAI API
pip install openai

# For open-source models
pip install vllm
```
- Or, if you prefer, install all dependencies with `pip`:

```bash
pip install -r requirements.txt
```
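Since only one of the two backends is required, a quick runtime check like the following (illustrative, not part of PaperCoder) can confirm that the package you chose is importable before launching a run:

```python
import importlib.util

def backend_available(use_openai: bool = True) -> bool:
    # Check whether the chosen inference backend is installed:
    # "openai" for the API client, "vllm" for open-source models.
    pkg = "openai" if use_openai else "vllm"
    return importlib.util.find_spec(pkg) is not None
```

`find_spec` only probes the import system, so this works even when neither package is installed.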
The following process describes how to convert a paper PDF into JSON format.
If you have access to the LaTeX source and plan to use it with PaperCoder, you may skip this step and proceed to Running PaperCoder.
Note: In our experiments, we converted all paper PDFs to JSON format.
- Clone the `s2orc-doc2json` repository to convert your PDF file into a structured JSON format. (For detailed configuration, please refer to the official repository.)

```bash
git clone https://github.com/allenai/s2orc-doc2json.git
```
- Run the PDF processing service.

```bash
cd ./s2orc-doc2json/grobid-0.7.3
./gradlew run
```
- Convert your PDF into JSON format.

```bash
mkdir -p ./s2orc-doc2json/output_dir/paper_coder
python ./s2orc-doc2json/doc2json/grobid2json/process_pdf.py \
  -i ${PDF_PATH} \
  -t ./s2orc-doc2json/temp_dir/ \
  -o ./s2orc-doc2json/output_dir/paper_coder
```
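Once conversion finishes, it is worth sanity-checking the resulting JSON before handing it to PaperCoder. A minimal sketch, assuming a doc2json-style layout with `title` and `body_text` fields (inspect your generated file for the exact schema, which may differ):

```python
import json

def summarize(doc: dict) -> tuple:
    # Return the paper title and the distinct section names found
    # in the parsed body text.
    sections = sorted({p["section"] for p in doc["body_text"]})
    return doc["title"], sections

# Illustrative stand-in for a converted paper; for a real run, load the
# file produced in ./s2orc-doc2json/output_dir/paper_coder with json.load.
doc = {
    "title": "Attention Is All You Need",
    "body_text": [
        {"section": "Introduction", "text": "..."},
        {"section": "Model Architecture", "text": "..."},
    ],
}
print(summarize(doc))
```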
- Note: The following command runs the example paper ("Attention Is All You Need").
If you want to run PaperCoder on your own paper, please modify the environment variables accordingly.
- Estimated cost with o3-mini: $0.50–$0.70
```bash
# Using the PDF-based JSON format of the paper
export OPENAI_API_KEY="<OPENAI_API_KEY>"
cd scripts
bash run.sh
```

```bash
# Using the LaTeX source of the paper
export OPENAI_API_KEY="<OPENAI_API_KEY>"
cd scripts
bash run_latex.sh
```
- The default model is `deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct`.

```bash
# Using the PDF-based JSON format of the paper
cd scripts
bash run_llm.sh
```

```bash
# Using the LaTeX source of the paper
cd scripts
bash run_latex_llm.sh
```
- Hugging Face dataset: paper2code
- You can find the description of the Paper2Code benchmark dataset in `data/paper2code`.
- For more details, refer to Section 4.1 ("Paper2Code Benchmark") in the paper.
- We evaluate repository quality using a model-based approach, supporting both reference-based and reference-free settings. The model critiques key implementation components, assigns severity levels, and generates a 1–5 correctness score, averaged over 8 samples using o3-mini-high.
- For more details, please refer to Section 4.3.1 ("Paper2Code Benchmark") of the paper.
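The averaging step of the scoring scheme can be sketched as follows. This is an illustration of the idea, not the actual `eval.py` logic; the handling of invalid generations is an assumption.

```python
def aggregate(scores, n_expected=8):
    # Average the 1-5 correctness scores over the generated samples,
    # skipping invalid generations (represented here as None).
    valid = [s for s in scores if s is not None and 1 <= s <= 5]
    return sum(valid) / len(valid), f"{len(valid)}/{n_expected}"

score, valid = aggregate([5, 4, 5, 4, 5, 4, 5, 4])
```

With all 8 samples valid, this yields a score of 4.5 and a validity count of "8/8", matching the format of the evaluation summary printed by the tool.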
- Note: The following examples evaluate the sample repository (`Transformer_repo`).
Please modify the relevant paths and arguments if you wish to evaluate a different repository.

```bash
pip install tiktoken
export OPENAI_API_KEY="<OPENAI_API_KEY>"
```
- `target_repo_dir` is the generated repository.

```bash
cd codes/
python eval.py \
  --paper_name Transformer \
  --pdf_json_path ../examples/Transformer_cleaned.json \
  --data_dir ../data \
  --output_dir ../outputs/Transformer \
  --target_repo_dir ../outputs/Transformer_repo \
  --eval_result_dir ../results \
  --eval_type ref_free \
  --generated_n 8 \
  --papercoder
```
- `target_repo_dir` is the generated repository.
- `gold_repo_dir` should point to the official repository (e.g., author-released code).

```bash
cd codes/
python eval.py \
  --paper_name Transformer \
  --pdf_json_path ../examples/Transformer_cleaned.json \
  --data_dir ../data \
  --output_dir ../outputs/Transformer \
  --target_repo_dir ../outputs/Transformer_repo \
  --gold_repo_dir ../examples/Transformer_gold_repo \
  --eval_result_dir ../results \
  --eval_type ref_based \
  --generated_n 8 \
  --papercoder
```
```
========================================
Evaluation Summary
Paper name: Transformer
Evaluation type: ref_based
Target repo directory: ../outputs/Transformer_repo
Evaluation result:
  Score: 4.5000
  Valid: 8/8
========================================
Usage Summary
[Evaluation] Transformer - ref_based
Model: o3-mini
Input tokens: 44318 (Cost: $0.04874980)
Cached input tokens: 0 (Cost: $0.00000000)
Output tokens: 26310 (Cost: $0.11576400)
Current total cost: $0.16451380
Accumulated total cost so far: $0.16451380
============================================
```
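The token costs in the summary above follow simple per-token pricing. A sketch of the arithmetic, assuming o3-mini rates of $1.10 per million input tokens and $4.40 per million output tokens (verify current pricing before relying on these numbers):

```python
def api_cost(input_tokens: int, output_tokens: int,
             in_rate: float = 1.10, out_rate: float = 4.40) -> float:
    # Rates are assumed USD per 1M tokens; cached-input pricing is omitted.
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Reproduces the totals in the summary above:
# 44318 input + 26310 output tokens -> $0.16451380
cost = api_cost(44318, 26310)
```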