---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: text
    dtype: string
  - name: title
    dtype: string
  splits:
  - name: train
    num_bytes: 13638576512
    num_examples: 20970784
  download_size: 7557029888
  dataset_size: 13638576512
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- question-answering
language:
- en
size_categories:
- 10M<n<100M
---
# Wikipedia Dump without Duplicates
## Dataset Summary
This is a cleaned and de-duplicated version of the English Wikipedia dump dated December 20, 2018. Originally sourced from the [DPR repository](https://github.com/facebookresearch/DPR), it was processed to remove duplicate passages, leaving a final count of **20,970,784** passages of 100 words each.

The original corpus (`psgs_w100.tsv.gz`) is available for download from [this link](https://dl.fbaipublicfiles.com/dpr/wikipedia_split/psgs_w100.tsv.gz).

The corpus is used in the research paper [A Tale of Trust and Accuracy: Base vs. Instruct LLMs in RAG Systems](https://arxiv.org/abs/2406.14972), supporting experiments that compare base and instruct large language models within retrieval-augmented generation (RAG) systems.
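For readers who want to work with the original TSV release rather than the Parquet files, here is a minimal sketch of a streaming parser. It assumes the tab-separated file has a header row with the three columns listed in this card's feature list (`id`, `text`, `title`); the sample string below is illustrative, not real corpus content.

```python
import csv
import io

def iter_passages(fileobj):
    """Yield one passage dict per row of a DPR-style TSV file.

    Assumes a header row naming the columns id, text, title,
    matching the features declared in this dataset card.
    """
    reader = csv.DictReader(fileobj, delimiter="\t")
    for row in reader:
        yield {"id": int(row["id"]), "text": row["text"], "title": row["title"]}

# Tiny in-memory stand-in for the real ~7.5 GB file.
sample = io.StringIO(
    "id\ttext\ttitle\n"
    "1\tAnarchism is a political philosophy.\tAnarchism\n"
)
passages = list(iter_passages(sample))
print(passages[0]["title"])  # prints: Anarchism

# For the real download, wrap the gzipped file instead of a StringIO, e.g.:
#   import gzip
#   with gzip.open("psgs_w100.tsv.gz", "rt", encoding="utf-8") as f:
#       for passage in iter_passages(f):
#           ...
```

Because the function is a generator, the full 20.9M-passage file can be processed one row at a time without loading it into memory.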