ir_datasets: Medline

Medical articles from Medline. This collection was used by TREC Genomics 2004-05 (the 2004 version of the dataset) and by TREC Precision Medicine 2017-18 (the 2017 version).
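Both corpus versions are registered in ir_datasets under separate dataset IDs and expose the same API. A minimal sketch (note that docs_count() may trigger a download of the corpus the first time it runs):

import ir_datasets

# The two corpus versions are registered under separate dataset IDs.
for dataset_id in ["medline/2004", "medline/2017"]:
    dataset = ir_datasets.load(dataset_id)
    print(dataset_id, dataset.docs_count())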
medline/2004

A corpus of roughly 3.7 million Medline articles (titles and abstracts), used for the TREC Genomics track 2004-05.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
ir_datasets export medline/2004 docs
[doc_id] [title] [abstract] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004')
# Index medline/2004
indexer = pt.IterDictIndexer('./indices/medline_2004')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2004')  # an AdhocDocumentStore
for doc in dataset.iter_documents():
    print(doc)
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{ "docs": { "count": 3672808, "fields": { "doc_id": { "max_len": 8, "common_prefix": "" } } } }
medline/2004/trec-genomics-2004

The TREC Genomics Track 2004 benchmark. Contains 50 queries with article-level relevance judgments.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, title, need, context>
You can find more details about the Python API here.
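The topics are structured: title is a short keyword form, while need and context carry the longer statement of the information need. A sketch of two possible query representations (which to use is an experiment-design choice, not prescribed by the track):

import ir_datasets

dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for query in dataset.queries_iter():
    short_form = query.title                     # keyword-style query
    long_form = f"{query.title} {query.need}"    # one possible expanded form
    print(query.query_id)
    print("short:", short_form)
    print("long:", long_form)
    break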
ir_datasets export medline/2004/trec-genomics-2004 queries
[query_id] [title] [need] [context] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
index_ref = pt.IndexRef.of('./indices/medline_2004')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('title'))
You can find more details about PyTerrier retrieval here.
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.medline.2004.trec-genomics-2004.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # an AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from medline/2004
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
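docs_iter also supports slicing, which is convenient for sampling a few records without scanning the full corpus:

import ir_datasets

dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
# Slicing fast-forwards to the requested range of documents.
for doc in dataset.docs_iter()[:3]:
    print(doc.doc_id, doc.title)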
ir_datasets export medline/2004/trec-genomics-2004 docs
[doc_id] [title] [abstract] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
# Index medline/2004
indexer = pt.IterDictIndexer('./indices/medline_2004')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2004.trec-genomics-2004')  # an AdhocDocumentStore
for doc in dataset.iter_documents():
    print(doc)
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | not relevant | 0 | 0.0% |
1 | possibly relevant | 4.6K | 56.0% |
2 | definitely relevant | 3.6K | 44.0% |
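Note that this qrels file contains no explicit level-0 judgments, so binary measures need a deliberate relevance cutoff. A sketch using the ir_measures package, binarizing at level 2 and assuming a TREC-format run file (the filename is hypothetical):

import ir_datasets
import ir_measures
from ir_measures import AP, nDCG

dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
run = ir_measures.read_trec_run("my_run.txt")  # hypothetical run file
# Treat only "definitely relevant" (level 2) documents as relevant for AP.
print(ir_measures.calc_aggregate([AP(rel=2), nDCG@20], dataset.qrels_iter(), run))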
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2004")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export medline/2004/trec-genomics-2004 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration] ...
You can find more details about the CLI here.
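The exported column order (query_id, doc_id, relevance, iteration) differs from the standard TREC qrels order (query_id, iteration, doc_id, relevance). If a downstream tool expects the latter, a small reordering step suffices (filenames are hypothetical):

import csv

with open("qrels.tsv") as f_in, open("qrels.trec", "w") as f_out:
    for query_id, doc_id, relevance, iteration in csv.reader(f_in, delimiter="\t"):
        # Standard TREC qrels column order: query_id, iteration, doc_id, relevance
        f_out.write(f"{query_id} {iteration} {doc_id} {relevance}\n")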
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2004')
index_ref = pt.IndexRef.of('./indices/medline_2004')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('title'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.medline.2004.trec-genomics-2004.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Bibtex:
@inproceedings{Hersh2004TrecGenomics,
  title={TREC 2004 Genomics Track Overview},
  author={William R. Hersh and Ravi Teja Bhupatiraju and Laura Ross and Phoebe Johnson and Aaron M. Cohen and Dale F. Kraemer},
  booktitle={TREC},
  year={2004}
}

Metadata:
{
  "docs": {
    "count": 3672808,
    "fields": {"doc_id": {"max_len": 8, "common_prefix": ""}}
  },
  "queries": {"count": 50},
  "qrels": {
    "count": 8268,
    "fields": {"relevance": {"counts_by_value": {"2": 3639, "1": 4629}}}
  }
}
medline/2004/trec-genomics-2005

The TREC Genomics Track 2005 benchmark. Contains 50 queries with article-level relevance judgments.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, text>
You can find more details about the Python API here.
ir_datasets export medline/2004/trec-genomics-2005 queries
[query_id] [text] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2005')
index_ref = pt.IndexRef.of('./indices/medline_2004')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics())
You can find more details about PyTerrier retrieval here.
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.medline.2004.trec-genomics-2005.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # an AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from medline/2004
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
ir_datasets export medline/2004/trec-genomics-2005 docs
[doc_id] [title] [abstract] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2005')
# Index medline/2004
indexer = pt.IterDictIndexer('./indices/medline_2004')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2004.trec-genomics-2005')  # an AdhocDocumentStore
for doc in dataset.iter_documents():
    print(doc)
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | not relevant | 35K | 88.5% |
1 | possibly relevant | 2.1K | 5.2% |
2 | definitely relevant | 2.5K | 6.3% |
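Unlike the 2004 qrels, this file includes explicit level-0 judgments, so positives can be filtered directly. A sketch that joins queries, qrels, and the docs_store to collect (query, relevant-title) pairs:

import ir_datasets

dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
queries = {q.query_id: q.text for q in dataset.queries_iter()}
docs_store = dataset.docs_store()
# Keep only "definitely relevant" (level 2) judgments as positives.
pairs = [(queries[qrel.query_id], docs_store.get(qrel.doc_id).title)
         for qrel in dataset.qrels_iter() if qrel.relevance == 2]
print(len(pairs))  # should match the 2.5K shown in the table above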
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2004/trec-genomics-2005")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export medline/2004/trec-genomics-2005 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration] ...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2004/trec-genomics-2005')
index_ref = pt.IndexRef.of('./indices/medline_2004')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics(),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.medline.2004.trec-genomics-2005.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Bibtex:
@inproceedings{Hersh2005TrecGenomics,
  title={TREC 2005 Genomics Track Overview},
  author={William Hersh and Aaron Cohen and Jianji Yang and Ravi Teja Bhupatiraju and Phoebe Roberts and Marti Hearst},
  booktitle={TREC},
  year={2005}
}

Metadata:
{
  "docs": {
    "count": 3672808,
    "fields": {"doc_id": {"max_len": 8, "common_prefix": ""}}
  },
  "queries": {"count": 50},
  "qrels": {
    "count": 39958,
    "fields": {"relevance": {"counts_by_value": {"0": 35374, "2": 2525, "1": 2059}}}
  }
}
medline/2017

A corpus of roughly 26.7 million Medline and AACR/ASCO Proceedings articles (titles and abstracts), used for the TREC Precision Medicine track 2017-18.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
ir_datasets export medline/2017 docs
[doc_id] [title] [abstract] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017')
# Index medline/2017
indexer = pt.IterDictIndexer('./indices/medline_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2017')  # an AdhocDocumentStore
for doc in dataset.iter_documents():
    print(doc)
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
{ "docs": { "count": 26740025, "fields": { "doc_id": { "max_len": 15, "common_prefix": "" } } } }
medline/2017/trec-pm-2017

The TREC Precision Medicine (PM) Track 2017 benchmark. Contains 30 queries, each specifying disease, gene, and target demographic information.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, disease, gene, demographic, other>
You can find more details about the Python API here.
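Because the topics are structured rather than free text, a retrieval run needs a flattening strategy. The PyTerrier example below uses only the disease field; one simple alternative (an assumption, not a track-prescribed method) is to concatenate all fields into a keyword query:

import ir_datasets

dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for query in dataset.queries_iter():
    # Concatenate the structured fields into a single keyword query.
    text = " ".join([query.disease, query.gene, query.demographic, query.other])
    print(query.query_id, text)
    break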
ir_datasets export medline/2017/trec-pm-2017 queries
[query_id] [disease] [gene] [demographic] [other] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/medline_2017')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))
You can find more details about PyTerrier retrieval here.
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.medline.2017.trec-pm-2017.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # an AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from medline/2017
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2017 docs
[doc_id] [title] [abstract] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2017')
# Index medline/2017
indexer = pt.IterDictIndexer('./indices/medline_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2017.trec-pm-2017')  # an AdhocDocumentStore
for doc in dataset.iter_documents():
    print(doc)
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | not relevant | 19K | 82.9% |
1 | possibly relevant | 1.9K | 8.2% |
2 | definitely relevant | 2.0K | 8.9% |
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2017")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2017 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration] ...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2017')
index_ref = pt.IndexRef.of('./indices/medline_2017')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.medline.2017.trec-pm-2017.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Bibtex:
@inproceedings{Roberts2017TrecPm,
  title={Overview of the TREC 2017 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant},
  booktitle={TREC},
  year={2017}
}

Metadata:
{
  "docs": {
    "count": 26740025,
    "fields": {"doc_id": {"max_len": 15, "common_prefix": ""}}
  },
  "queries": {"count": 30},
  "qrels": {
    "count": 22642,
    "fields": {"relevance": {"counts_by_value": {"0": 18767, "1": 1853, "2": 2022}}}
  }
}
medline/2017/trec-pm-2018

The TREC Precision Medicine (PM) Track 2018 benchmark. Contains 50 queries, each specifying disease, gene, and target demographic information.
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2018")
for query in dataset.queries_iter():
    query  # namedtuple<query_id, disease, gene, demographic>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2018 queries
[query_id] [disease] [gene] [demographic] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/medline_2017')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pipeline(dataset.get_topics('disease'))
You can find more details about PyTerrier retrieval here.
from datamaestro import prepare_dataset
topics = prepare_dataset('irds.medline.2017.trec-pm-2018.queries')  # AdhocTopics
for topic in topics.iter():
    print(topic)  # an AdhocTopic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocTopics.
Inherits docs from medline/2017
Language: en
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2018")
for doc in dataset.docs_iter():
    doc  # namedtuple<doc_id, title, abstract>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2018 docs
[doc_id] [title] [abstract] ...
You can find more details about the CLI here.
import pyterrier as pt
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2018')
# Index medline/2017
indexer = pt.IterDictIndexer('./indices/medline_2017')
index_ref = indexer.index(dataset.get_corpus_iter(), fields=['title', 'abstract'])
You can find more details about PyTerrier indexing here.
from datamaestro import prepare_dataset
dataset = prepare_dataset('irds.medline.2017.trec-pm-2018')  # an AdhocDocumentStore
for doc in dataset.iter_documents():
    print(doc)
    break
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocDocumentStore.
Relevance levels
Rel. | Definition | Count | % |
---|---|---|---|
0 | not relevant | 17K | 75.1% |
1 | possibly relevant | 2.1K | 9.6% |
2 | definitely relevant | 3.4K | 15.3% |
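Judgment depth varies by topic; a quick sketch to inspect how many documents were assessed per query:

import ir_datasets
from collections import Counter

dataset = ir_datasets.load("medline/2017/trec-pm-2018")
# Number of judged documents per query_id.
depth = Counter(qrel.query_id for qrel in dataset.qrels_iter())
print(min(depth.values()), max(depth.values()), sum(depth.values()))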
Examples:
import ir_datasets
dataset = ir_datasets.load("medline/2017/trec-pm-2018")
for qrel in dataset.qrels_iter():
    qrel  # namedtuple<query_id, doc_id, relevance, iteration>
You can find more details about the Python API here.
ir_datasets export medline/2017/trec-pm-2018 qrels --format tsv
[query_id] [doc_id] [relevance] [iteration] ...
You can find more details about the CLI here.
import pyterrier as pt
from pyterrier.measures import *
pt.init()
dataset = pt.get_dataset('irds:medline/2017/trec-pm-2018')
index_ref = pt.IndexRef.of('./indices/medline_2017')  # assumes you have already built an index
pipeline = pt.BatchRetrieve(index_ref, wmodel='BM25')
# (optionally other pipeline components)
pt.Experiment(
    [pipeline],
    dataset.get_topics('disease'),
    dataset.get_qrels(),
    [MAP, nDCG@20]
)
You can find more details about PyTerrier experiments here.
from datamaestro import prepare_dataset
qrels = prepare_dataset('irds.medline.2017.trec-pm-2018.qrels')  # AdhocAssessments
for topic_qrels in qrels.iter():
    print(topic_qrels)  # the assessments for one topic
This example requires that experimaestro-ir be installed. For more information about the returned object, see the documentation about AdhocAssessments.
Bibtex:
@inproceedings{Roberts2018TrecPm,
  title={Overview of the TREC 2018 Precision Medicine Track},
  author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar},
  booktitle={TREC},
  year={2018}
}

Metadata:
{
  "docs": {
    "count": 26740025,
    "fields": {"doc_id": {"max_len": 15, "common_prefix": ""}}
  },
  "queries": {"count": 50},
  "qrels": {
    "count": 22429,
    "fields": {"relevance": {"counts_by_value": {"0": 16841, "2": 3442, "1": 2146}}}
  }
}