Upload folder using huggingface_hub

Files changed:
- README.md +101 -0
- corpus.parquet +3 -0
- qrels_test.parquet +3 -0
- qrels_train.parquet +3 -0
- qrels_validation.parquet +3 -0
- queries.parquet +3 -0

README.md ADDED
@@ -0,0 +1,101 @@
---
language:
- eng
license: mit
task_categories:
- text-retrieval
task_ids:
- document-retrieval
tags:
- decontaminated
- beir
- information-retrieval
configs:
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.parquet
- config_name: queries
  data_files:
  - split: queries
    path: queries.parquet
- config_name: qrels-test
  data_files:
  - split: test
    path: qrels_test.parquet
- config_name: qrels-train
  data_files:
  - split: train
    path: qrels_train.parquet
- config_name: qrels-validation
  data_files:
  - split: validation
    path: qrels_validation.parquet
---

# hotpotqa (Decontaminated)

A decontaminated version of the [hotpotqa](https://huggingface.co/datasets/BeIR/hotpotqa) dataset from the BEIR benchmark, with samples found in the [mgte-en](https://huggingface.co/datasets/Alibaba-NLP/mgte-en) pre-training dataset removed.

## Decontamination methodology

Contamination was detected using a two-pass approach against the full mgte-en dataset (484 GB, 1,235 parquet files):

### Pass 1: Exact hash matching

All texts (queries and corpus documents) were normalized (lowercased, Unicode NFKD, whitespace collapsed) and hashed with xxHash-64. The same normalization and hashing were applied to every `query` and `document` field in mgte-en. Any sample whose hash appeared in mgte-en was flagged as contaminated.
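
The normalization and lookup can be sketched as follows. This is a minimal illustration, not the pipeline code: function names are ours, and SHA-256 stands in for the xxHash-64 named above so the sketch stays dependency-free (the actual run would use the `xxhash` package).

```python
import hashlib
import re
import unicodedata


def normalize(text: str) -> str:
    """Normalization described in the card: Unicode NFKD, lowercase, collapse whitespace."""
    text = unicodedata.normalize("NFKD", text).lower()
    return re.sub(r"\s+", " ", text).strip()


def text_hash(text: str) -> str:
    # The pipeline used xxHash-64; SHA-256 stands in here to keep this stdlib-only.
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()


def is_contaminated(sample_text: str, mgte_hashes: set) -> bool:
    # A sample is flagged when its normalized hash appears in the set built
    # from every mgte-en `query`/`document` field.
    return text_hash(sample_text) in mgte_hashes
```

Because hashing happens after normalization, casing and whitespace differences do not prevent a match.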

### Pass 2: 13-gram containment (GPT-3 style)

Following the methodology introduced in the GPT-3 paper (Brown et al., 2020), word-level 13-grams were extracted from all remaining samples. For each sample, containment was computed as:

```
containment = |ngrams_in_sample ∩ ngrams_in_mgte| / |ngrams_in_sample|
```

Samples with containment >= 0.5 were flagged as near-duplicates.
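
A sketch of the containment computation, directly implementing the formula above (function names are illustrative; the n-gram size defaults to the 13 used here, shown below with a smaller n for brevity):

```python
def word_ngrams(text: str, n: int = 13) -> set:
    """Word-level n-grams over whitespace-tokenized text (n=13 as in GPT-3)."""
    words = text.split()
    return {tuple(words[i : i + n]) for i in range(len(words) - n + 1)}


def containment(sample_text: str, reference_ngrams: set, n: int = 13) -> float:
    """|ngrams_in_sample ∩ ngrams_in_reference| / |ngrams_in_sample|."""
    sample = word_ngrams(sample_text, n)
    if not sample:
        return 0.0  # too short to form a single n-gram: nothing to match
    return len(sample & reference_ngrams) / len(sample)
```

A sample is flagged when `containment(...) >= 0.5`; note the denominator is the sample's own n-gram count, so a short sample fully contained in mgte-en scores 1.0 regardless of mgte-en's size.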

### Qrels filtering

Relevance judgments (qrels) referencing any removed query or corpus document were also removed.
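
Equivalently, only judgments whose query and document both survived are kept. A hedged sketch, assuming qrels are (query_id, doc_id, score) triples (the real data lives in the qrels_*.parquet files):

```python
def filter_qrels(qrels, kept_query_ids, kept_doc_ids):
    """Drop any judgment that references a removed query or a removed document."""
    return [
        (qid, did, score)
        for qid, did, score in qrels
        if qid in kept_query_ids and did in kept_doc_ids
    ]
```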

## Decontamination results

| Component | Original | Clean | Removed |
|---|---|---|---|
| Corpus | 5,233,329 | 2,314,813 | 2,918,516 |
| Queries | 97,852 | 12,932 | 84,920 |

### Qrels per split

| Split | Original | Clean | Removed |
|---|---|---|---|
| test | 14,810 | 2,296 | 12,514 |
| train | 170,000 | 61 | 169,939 |
| validation | 10,894 | 1,788 | 9,106 |

## Usage

```python
from datasets import load_dataset

corpus = load_dataset("lightonai/hotpotqa-decontaminated", "corpus", split="corpus")
queries = load_dataset("lightonai/hotpotqa-decontaminated", "queries", split="queries")
```

## Citation

Please cite the original BEIR benchmark:

```bibtex
@inproceedings{thakur2021beir,
  title={BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
  author={Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna},
  booktitle={NeurIPS Datasets and Benchmarks},
  year={2021}
}
```

## License

MIT (same as original BEIR)
|
corpus.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c8150fae4283f9bbc7d884368061330571b22808c8faa4441ef8b15d83aa95d
size 329922088

qrels_test.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d8fa3cef2e42d323a526216f9afb26fd9367780b5442527ec13a1a33d927fe4
size 53516

qrels_train.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b9e028c831056c13da6822aab69b69b98046f2ec6ade61fb6c8f4125a617dff3
size 4304

qrels_validation.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f97298e30442a2f845339819e09e881797e7c7c48c5924568e62d3614e5f3969
size 42005

queries.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70b6e881789842730e72496c1c35583e04397ef948a3dfce894bef76ebf75e68
size 1105402