Github: datasets/medline.py

ir_datasets: Medline

Index
  1. medline
  2. medline/2004
  3. medline/2004/trec-genomics-2004
  4. medline/2004/trec-genomics-2005
  5. medline/2017
  6. medline/2017/trec-pm-2017
  7. medline/2017/trec-pm-2018

"medline"

Medical articles from Medline. This collection was used by the TREC Genomics track in 2004-05 (the 2004 version of the dataset) and by the TREC Precision Medicine track in 2017-18 (the 2017 version).


"medline/2004"

3.7M Medline articles, including titles and abstracts, used for the TREC Genomics track in 2004-05.

docs
3.7M docs

Language: en

Document type:
MedlineDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. abstract: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2004")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>

You can find more details about the Python API here.
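
If you need a specific article rather than a full scan, ir_datasets also provides random access by doc_id through a document store. A minimal sketch (the PMID below is a placeholder, not guaranteed to be in the collection):

import ir_datasets
dataset = ir_datasets.load("medline/2004")
docstore = dataset.docs_store()  # builds a lookup structure on first use
doc = docstore.get("10605436")   # placeholder PMID; substitute a real doc_id
print(doc.title)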

CLI
ir_datasets export medline/2004 docs 
[doc_id]    [title]    [abstract]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004')
# Index medline/2004
indexer = pt.IterDictIndexer('./indices/medline_2004')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2004')
for doc in dataset.iter_documents():
    print(doc)  # an AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.


"medline/2004/trec-genomics-2004"

The TREC Genomics Track 2004 benchmark. Contains 50 queries with article-level relevance judgments.

queries
50 queries

Language: en

Query type:
TrecGenomicsQuery: (namedtuple)
  1. query_id: str
  2. title: str
  3. need: str
  4. context: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for query in dataset.queries_iter():
    query # namedtuple<query_id, title, need, context>

You can find more details about the Python API here.
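
The topics are multi-field (title, need, context), so you have to decide how to flatten them into a retrieval query. A hedged sketch that simply concatenates the title and need fields (one common choice, not a prescribed baseline):

import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for query in dataset.queries_iter():
    text = f"{query.title} {query.need}"  # title + information need; context could be appended too
    print(query.query_id, text)
    break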

CLI
ir_datasets export medline/2004/trec-genomics-2004 queries 
[query_id]    [title]    [need]    [context]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('title'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.medline.2004.trec-genomics-2004.queries') # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
3.7M docs

Inherits docs from medline/2004

Language: en

Document type:
MedlineDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. abstract: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>

You can find more details about the Python API here.

CLI
ir_datasets export medline/2004/trec-genomics-2004 docs 
[doc_id]    [title]    [abstract]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
# Index medline/2004
indexer = pt.IterDictIndexer('./indices/medline_2004')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2004.trec-genomics-2004')
for doc in dataset.iter_documents():
    print(doc)  # an AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
8.3K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition            Count   %
0     not relevant              0    0.0%
1     possibly relevant      4.6K   56.0%
2     definitely relevant    3.6K   44.0%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.
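
The qrels can be passed straight to an evaluation library. A minimal sketch using ir_measures (installed separately; the run dictionary is a placeholder for your system's output):

import ir_datasets
import ir_measures
from ir_measures import AP, nDCG
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
run = {"1": {"10605436": 1.0}}  # placeholder {query_id: {doc_id: score}} run
print(ir_measures.calc_aggregate([AP, nDCG@20], dataset.qrels_iter(), run))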

CLI
ir_datasets export medline/2004/trec-genomics-2004 qrels --format tsv 
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('title'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.medline.2004.trec-genomics-2004.qrels') # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Hersh2004TrecGenomics}

Bibtex:

@inproceedings{Hersh2004TrecGenomics,
  title={TREC 2004 Genomics Track Overview},
  author={William R. Hersh and Ravi Teja Bhuptiraju and Laura Ross and Phoebe Johnson and Aaron M. Cohen and Dale F. Kraemer},
  booktitle={TREC},
  year={2004}
}

"medline/2004/trec-genomics-2005"

The TREC Genomics Track 2005 benchmark. Contains 50 queries with article-level relevance judgments.

queries
50 queries

Language: en

Query type:
GenericQuery: (namedtuple)
  1. query_id: str
  2. text: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
for query in dataset.queries_iter():
    query # namedtuple<query_id, text>

You can find more details about the Python API here.

CLI
ir_datasets export medline/2004/trec-genomics-2005 queries 
[query_id]    [text]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2005')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.medline.2004.trec-genomics-2005.queries') # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
3.7M docs

Inherits docs from medline/2004

Language: en

Document type:
MedlineDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. abstract: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>

You can find more details about the Python API here.

CLI
ir_datasets export medline/2004/trec-genomics-2005 docs 
[doc_id]    [title]    [abstract]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2005')
# Index medline/2004
indexer = pt.IterDictIndexer('./indices/medline_2004')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2004.trec-genomics-2005')
for doc in dataset.iter_documents():
    print(doc)  # an AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
40K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition            Count   %
0     not relevant            35K   88.5%
1     possibly relevant      2.1K    5.2%
2     definitely relevant    2.5K    6.3%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export medline/2004/trec-genomics-2005 qrels --format tsv 
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2005')
index_ref = pt.IndexRef.of('./indices/medline_2004') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.medline.2004.trec-genomics-2005.qrels') # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Hersh2005TrecGenomics}

Bibtex:

@inproceedings{Hersh2005TrecGenomics,
  title={TREC 2005 Genomics Track Overview},
  author={William Hersh and Aaron Cohen and Jianji Yang and Ravi Teja Bhupatiraju and Phoebe Roberts and Marti Hearst},
  booktitle={TREC},
  year={2007}
}

"medline/2017"

26M Medline and AACR/ASCO Proceedings articles, including titles and abstracts. This collection is used for the TREC Precision Medicine track in 2017-18.

docs
27M docs

Language: en

Document type:
MedlineDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. abstract: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2017")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>

You can find more details about the Python API here.
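
Since a full pass over 27M documents takes a while, note that docs_iter() supports slicing, which is handy for sampling or quick inspection. A small sketch:

import ir_datasets
dataset = ir_datasets.load("medline/2017")
for doc in dataset.docs_iter()[:5]:  # only the first five documents
    print(doc.doc_id, doc.title)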

CLI
ir_datasets export medline/2017 docs 
[doc_id]    [title]    [abstract]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017')
# Index medline/2017
indexer = pt.IterDictIndexer('./indices/medline_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2017')
for doc in dataset.iter_documents():
    print(doc)  # an AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.


"medline/2017/trec-pm-2017"

The TREC Precision Medicine (PM) Track 2017 benchmark. Contains 30 queries with disease, gene, and target demographic information.

queries
30 queries

Language: en

Query type:
TrecPm2017Query: (namedtuple)
  1. query_id: str
  2. disease: str
  3. gene: str
  4. demographic: str
  5. other: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic, other>

You can find more details about the Python API here.
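
The PM topics are semi-structured, so they need to be turned into query text before retrieval. A hedged sketch that concatenates the disease and gene fields (one simple starting point; the demographic and other fields could instead be used as filters):

import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for query in dataset.queries_iter():
    text = f"{query.disease} {query.gene}"  # demographic/other omitted here
    print(query.query_id, text)
    break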

CLI
ir_datasets export medline/2017/trec-pm-2017 queries 
[query_id]    [disease]    [gene]    [demographic]    [other]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/medline_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.medline.2017.trec-pm-2017.queries') # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
27M docs

Inherits docs from medline/2017

Language: en

Document type:
MedlineDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. abstract: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>

You can find more details about the Python API here.

CLI
ir_datasets export medline/2017/trec-pm-2017 docs 
[doc_id]    [title]    [abstract]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2017')
# Index medline/2017
indexer = pt.IterDictIndexer('./indices/medline_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2017.trec-pm-2017')
for doc in dataset.iter_documents():
    print(doc)  # an AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
23K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition            Count   %
0     not relevant            19K   82.9%
1     possibly relevant      1.9K    8.2%
2     definitely relevant    2.0K    8.9%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export medline/2017/trec-pm-2017 qrels --format tsv 
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/medline_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.medline.2017.trec-pm-2017.qrels') # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Roberts2017TrecPm}

Bibtex:

@inproceedings{Roberts2017TrecPm,
  title={Overview of the TREC 2017 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant},
  booktitle={TREC},
  year={2017}
}

"medline/2017/trec-pm-2018"

The TREC Precision Medicine (PM) Track 2018 benchmark. Contains 50 queries with disease, gene, and target demographic information.

queries
50 queries

Language: en

Query type:
TrecPmQuery: (namedtuple)
  1. query_id: str
  2. disease: str
  3. gene: str
  4. demographic: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2018")
for query in dataset.queries_iter():
    query # namedtuple<query_id, disease, gene, demographic>

You can find more details about the Python API here.

CLI
ir_datasets export medline/2017/trec-pm-2018 queries 
[query_id]    [disease]    [gene]    [demographic]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/medline_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))

You can find more details about PyTerrier retrieval here.

XPM-IR
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.medline.2017.trec-pm-2018.queries') # AdhocTopics
for topic in topics.iter():
    print(topic)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.

docs
27M docs

Inherits docs from medline/2017

Language: en

Document type:
MedlineDoc: (namedtuple)
  1. doc_id: str
  2. title: str
  3. abstract: str

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2018")
for doc in dataset.docs_iter():
    doc # namedtuple<doc_id, title, abstract>

You can find more details about the Python API here.

CLI
ir_datasets export medline/2017/trec-pm-2018 docs 
[doc_id]    [title]    [abstract]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2018')
# Index medline/2017
indexer = pt.IterDictIndexer('./indices/medline_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])

You can find more details about PyTerrier indexing here.

XPM-IR
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2017.trec-pm-2018')
for doc in dataset.iter_documents():
    print(doc)  # an AdhocDocumentStore
    break

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.

qrels
22K qrels
Query relevance judgment type:
TrecQrel: (namedtuple)
  1. query_id: str
  2. doc_id: str
  3. relevance: int
  4. iteration: str

Relevance levels

Rel.  Definition            Count   %
0     not relevant            17K   75.1%
1     possibly relevant      2.1K    9.6%
2     definitely relevant    3.4K   15.3%

Examples:

Python API
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2018")
for qrel in dataset.qrels_iter():
    qrel # namedtuple<query_id, doc_id, relevance, iteration>

You can find more details about the Python API here.

CLI
ir_datasets export medline/2017/trec-pm-2018 qrels --format tsv 
[query_id]    [doc_id]    [relevance]    [iteration]
...

You can find more details about the CLI here.

PyTerrier
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/medline_2017') # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)

You can find more details about PyTerrier experiments here.

XPM-IR
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.medline.2017.trec-pm-2018.qrels') # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # An AdhocTopic

This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.

Citation

ir_datasets.bib:

\cite{Roberts2018TrecPm}

Bibtex:

@inproceedings{Roberts2018TrecPm,
  title={Overview of the TREC 2018 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar},
  booktitle={TREC},
  year={2018}
}