meettilavat committed
Commit 4432713 · verified · 1 Parent(s): b886cdb

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitattributes +1 -0
  2. .ipynb_checkpoints/README-checkpoint.md +173 -0
  3. README.md +173 -0
  4. checkpoint_processed_ids.txt +3 -0
  5. data/shard_00000.parquet +3 -0
  6. data/shard_00001.parquet +3 -0
  7. data/shard_00002.parquet +3 -0
  8. data/shard_00003.parquet +3 -0
  9. data/shard_00004.parquet +3 -0
  10. data/shard_00005.parquet +3 -0
  11. data/shard_00006.parquet +3 -0
  12. data/shard_00007.parquet +3 -0
  13. data/shard_00008.parquet +3 -0
  14. data/shard_00009.parquet +3 -0
  15. data/shard_00010.parquet +3 -0
  16. data/shard_00011.parquet +3 -0
  17. data/shard_00012.parquet +3 -0
  18. data/shard_00013.parquet +3 -0
  19. data/shard_00014.parquet +3 -0
  20. data/shard_00015.parquet +3 -0
  21. data/shard_00016.parquet +3 -0
  22. data/shard_00017.parquet +3 -0
  23. data/shard_00018.parquet +3 -0
  24. data/shard_00019.parquet +3 -0
  25. data/shard_00020.parquet +3 -0
  26. data/shard_00021.parquet +3 -0
  27. data/shard_00022.parquet +3 -0
  28. data/shard_00023.parquet +3 -0
  29. data/shard_00024.parquet +3 -0
  30. data/shard_00025.parquet +3 -0
  31. data/shard_00026.parquet +3 -0
  32. data/shard_00027.parquet +3 -0
  33. data/shard_00028.parquet +3 -0
  34. data/shard_00029.parquet +3 -0
  35. data/shard_00030.parquet +3 -0
  36. data/shard_00031.parquet +3 -0
  37. data/shard_00032.parquet +3 -0
  38. data/shard_00033.parquet +3 -0
  39. data/shard_00034.parquet +3 -0
  40. data/shard_00035.parquet +3 -0
  41. data/shard_00036.parquet +3 -0
  42. data/shard_00037.parquet +3 -0
  43. data/shard_00038.parquet +3 -0
  44. data/shard_00039.parquet +3 -0
  45. data/shard_00040.parquet +3 -0
  46. data/shard_00041.parquet +3 -0
  47. data/shard_00042.parquet +3 -0
  48. data/shard_00043.parquet +3 -0
  49. data/shard_00045.parquet +3 -0
  50. data/shard_00046.parquet +3 -0
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ checkpoint_processed_ids.txt filter=lfs diff=lfs merge=lfs -text
.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,173 @@
(contents identical to README.md, shown below)
README.md ADDED
@@ -0,0 +1,173 @@
+ # Internet Archive Historical Texts (0001-1899)
+
+ ## TL;DR
+ - 711,680 cleaned public-domain-style documents harvested from the Internet Archive using `download_texts_improved.py`.
+ - Coverage targets items with textual content dated between 0001 and 1899, ranked by download counts; ~715k IDs were attempted and ~4.1k were filtered during preprocessing.
+ - Stored in 620 Zstandard-compressed Parquet shards (`shard_00000.parquet` ... `shard_00619.parquet`) occupying ~240 GB on disk (~622 billion characters uncompressed).
+ - Texts underwent aggressive OCR cleanup (disclaimer removal, page-number stripping, ASCII-ratio checks, minimum length of 100 characters) to match the fineweb/nanochat training format.
+ - Sample-based language detection shows the collection is overwhelmingly English (~97%), with trace amounts of French, Dutch, Slovene, and Czech.
+
+ ## Repository Layout
+ - `data/shard_#####.parquet` – text-only Parquet shards with a single string column `text`; row groups are sized at 1,024 documents, and many shards contain two groups (2,048 docs).
+
+ ## Dataset Card
+
+ ### Dataset Description
+ - **Dataset summary**: Internet Archive texts retrieved via the query “text documents between 0001 and 1899 sorted by download counts.” The run used a 1M-ID candidate list and stored ~700k+ successful texts, aimed at training large language models on historical, public-domain-style prose.
+ - **Supported tasks**: Language modeling, long-context pretraining, OCR research, historical text analysis. No structured metadata accompanies the texts.
+ - **Languages**: Predominantly English. Spot checks reveal minority representation in several other European languages (see Language Profile below).
+
+ ### Data Summary
+
+ | Metric | Value |
+ | --- | --- |
+ | Total documents kept | 711,680 |
+ | Processed Archive IDs (kept + filtered) | 715,776 |
+ | Filtered during preprocessing | 4,096 (~0.6%) |
+ | Parquet shards | 620 |
+ | Rows per shard | 1,024–2,048 (avg 1,148) |
+ | On-disk size (`shard_*.parquet`) | 240 GB |
+ | Total characters (uncompressed) | 622,091,938,957 |
+ | Mean characters per doc | 874,117 |
+ | Std deviation | 1,625,026 |
+ | Min / Max characters | 100 / 67,609,272 |
+ | P25 / P50 / P75 | 152,401 / 483,738 / 1,076,868 |
+ | P90 / P95 / P99 | 1,891,420 / 2,737,610 / 6,333,235 |
+
+ ### Language Profile (sample of 200 docs across evenly spaced shards)
+
+ | ISO code | Language | Count | Share |
+ | --- | --- | --- | --- |
+ | `en` | English | 195 | 97.5% |
+ | `fr` | French | 2 | 1.0% |
+ | `nl` | Dutch | 1 | 0.5% |
+ | `sl` | Slovene | 1 | 0.5% |
+ | `cs` | Czech | 1 | 0.5% |
+ | `unknown` | Detection failure | 0 | 0% |
+
+ Detection used `langdetect` on the first 2,000 characters per sampled document. Results are indicative, not exhaustive; rarer languages may be underrepresented due to the small sample.
+
+ ### Data Collection and Preprocessing
+ - **Acquisition pipeline**: `download_texts_improved.py` queues Archive.org identifiers, downloads OCR’d text files with high concurrency (default 256 threads), and writes batched Parquet shards while checkpointing processed IDs.
+ - **Filters applied**:
+   - Removal of common Internet Archive, Google Books, and JSTOR disclaimers.
+   - Page-number and bracketed page-annotation stripping.
+   - OCR artifact smoothing (single-letter noise, em/en dash normalization, whitespace compaction).
+   - Printable-character filtering and an ASCII-ratio threshold (≥70% ASCII).
+   - Length filter: documents shorter than 100 characters are dropped.
+ - **Shard writing**: Zstandard compression level 1, Arrow row-group size 1,024. Shards target ~500M characters each but vary with the document-length distribution.
+
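The filter steps above can be sketched as a single cleanup function. This is an illustrative approximation only, not the actual implementation in `download_texts_improved.py`; the regular expressions and thresholds here are assumptions modeled on the description.

```python
import re

# Assumed constants mirroring the documented filters.
MIN_LEN = 100          # drop docs shorter than 100 characters
ASCII_RATIO = 0.70     # require >= 70% ASCII characters

def clean_text(raw: str):
    """Return a cleaned document, or None if it fails the filters."""
    text = raw
    # Strip bracketed page annotations like "[Page 12]".
    text = re.sub(r"\[\s*[Pp]age\s+\d+\s*\]", " ", text)
    # Strip bare page numbers sitting on their own lines.
    text = re.sub(r"(?m)^[ \t]*\d{1,4}[ \t]*$", "", text)
    # Normalize em/en dashes, then compact whitespace.
    text = text.replace("\u2014", "-").replace("\u2013", "-")
    text = re.sub(r"[ \t]+", " ", text)
    text = re.sub(r"\n{3,}", "\n\n", text).strip()
    # Length and ASCII-ratio filters.
    if len(text) < MIN_LEN:
        return None
    ascii_chars = sum(1 for ch in text if ord(ch) < 128)
    if ascii_chars / len(text) < ASCII_RATIO:
        return None
    return text
```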
+ ### Known Issues and Limitations
+ - Residual OCR errors remain, especially in very long volumes where heuristic cleaning is limited.
+ - The dataset stores plain text only; metadata such as author, title, year, or download counts is not preserved in the shards.
+ - Some public-domain disclaimers survive where pattern variants were not recognized.
+ - Documents can be extremely long (max >67M characters), which causes significant memory pressure when loaded naively.
+ - Language balance is skewed toward English due to query bias (download-count ranking).
+
+ ### Ethical Considerations
+ - All texts were sourced from the Internet Archive. Users must ensure their downstream use complies with the Archive’s Terms of Use and the legal status of individual works in their jurisdiction.
+ - The dataset targets historical materials; nevertheless, manual review is advised before deploying outputs in production settings.
+
+ ### Suggested Citation
+ > “Internet Archive Historical Texts (0001-1899) dataset, assembled via `download_texts_improved.py` from Archive.org items sorted by download counts.”
+
+ Please also cite the Internet Archive and the original works when appropriate.
+
+ ## Working With the Dataset
+
+ ### High-throughput Reading Tips
+ The environment used to build this dataset offered 52 CPU cores, ~700 GB RAM, and NVMe storage rated around 10 GB/s. To exploit similar hardware when loading the data:
+
+ ```python
+ import glob
+
+ import pyarrow.dataset as ds
+ import pyarrow.compute as pc
+
+ # ds.dataset() does not expand glob patterns, so list the shards explicitly.
+ files = sorted(glob.glob("data/shard_*.parquet"))
+ dataset = ds.dataset(files, format="parquet")
+
+ scanner = dataset.scanner(
+     columns=["text"],
+     use_threads=True,  # leverage multi-core CPU
+     batch_size=4096,   # larger batches amortize I/O
+ )
+
+ for batch in scanner.to_batches():
+     # Operate on Arrow arrays without converting to Python when possible.
+     lengths = pc.utf8_length(batch["text"])
+     # ... downstream processing ...
+ ```
+
+ Additional practical tips:
+ - Keep data in Arrow space as long as possible; materializing large string columns as Python objects is usually the dominant cost.
+ - For PyTorch/NumPy pipelines, stream batches instead of materializing the entire dataset (`scanner.to_reader()`).
+ - When sampling large texts, slice in Arrow before conversion (`pc.utf8_slice_codeunits`) to avoid pulling multi-megabyte strings into Python.
+ - Use `pyarrow.parquet.ParquetFile` to inspect shard metadata (row counts, row groups) before launching heavy jobs.
+
+ ### Quick Statistics Script
+ Recompute the headline metrics directly from the shards:
+
+ ```bash
+ python - <<'PY'
+ import glob
+ from math import sqrt
+
+ import pyarrow.dataset as ds
+ import pyarrow.compute as pc
+
+ # ds.dataset() does not expand globs, so pass an explicit file list.
+ dataset = ds.dataset(sorted(glob.glob("data/shard_*.parquet")), format="parquet")
+ scanner = dataset.scanner(columns=["text"], use_threads=True)
+
+ count = 0
+ char_sum = 0
+ char_sq_sum = 0.0
+ min_len = None
+ max_len = 0
+
+ for batch in scanner.to_batches():
+     lengths = pc.utf8_length(batch["text"])
+     count += batch.num_rows
+     char_sum += pc.sum(lengths).as_py()
+     lengths_f64 = pc.cast(lengths, "float64")
+     char_sq_sum += pc.sum(pc.multiply(lengths_f64, lengths_f64)).as_py()
+     batch_min = pc.min(lengths).as_py()
+     batch_max = pc.max(lengths).as_py()
+     min_len = batch_min if min_len is None else min(min_len, batch_min)
+     max_len = max(max_len, batch_max)
+
+ mean = char_sum / count
+ variance = max(0.0, (char_sq_sum / count) - mean**2)
+ print(f"docs={count:,} chars={char_sum:,} mean={mean:,.0f} std={sqrt(variance):,.0f} min={min_len:,} max={max_len:,}")
+ PY
+ ```
+
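If you also want the percentile rows from the Data Summary table, one sketch is to collect per-document lengths and hand them to NumPy. The lengths below are synthetic placeholders; with real shards you would gather them via `pc.utf8_length` while scanning, assuming the resulting array (one integer per document) fits in memory.

```python
import numpy as np

# Synthetic per-document character counts standing in for the real ones.
lengths = np.array([100, 500, 1_000, 5_000, 20_000, 100_000, 500_000, 2_000_000])
pcts = {p: float(np.percentile(lengths, p)) for p in (25, 50, 75, 90, 95, 99)}
for p, v in pcts.items():
    print(f"P{p} = {v:,.0f}")
```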
+ ### Sampling Languages
+ If `langdetect` is available, you can reproduce the language profile on a lightweight subset:
+
+ ```bash
+ python - <<'PY'
+ import glob
+ import random
+ from collections import Counter
+
+ import pyarrow.parquet as pq
+ from langdetect import DetectorFactory, detect
+
+ DetectorFactory.seed = 0
+ files = sorted(glob.glob("data/shard_*.parquet"))
+ # Pick 10 evenly spaced shards and 20 random docs from each (200 total).
+ indices = [round(k * (len(files) - 1) / 9) for k in range(10)]
+
+ lang_counts = Counter()
+ for idx in indices:
+     table = pq.read_table(files[idx], columns=["text"], use_threads=True)
+     for row in random.sample(range(table.num_rows), min(20, table.num_rows)):
+         snippet = table.column(0)[row].as_py()[:2000]
+         try:
+             lang = detect(snippet)
+         except Exception:
+             lang = "unknown"
+         lang_counts[lang] += 1
+
+ print(lang_counts)
+ PY
+ ```
+
+ ## Acknowledgements
+ - Thanks to the Internet Archive for maintaining open access to historical texts.
+ - The acquisition pipeline is based on the `download_texts_improved.py` script (London v0) tuned for high-concurrency environments.
checkpoint_processed_ids.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d29a53cbd1ced0bc43bccaf86d81e15080ed7dd4a938ca1afb0ebd950961f1d2
+ size 15783092
data/shard_00000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:350cd6c3ef1ba31b4ec0df2029ff3da0ef30d5b10e4b92e6908cb4a6173104a0
+ size 623350879
data/shard_00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c7919291a47eeda66157a09241643bd502bb5f9e9d7d88ed494cb55447d6bbc
+ size 624237879
data/shard_00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d1dd4b37b537f07ff27cd28109126ec33ed7f0ea58badcbf4feb22a6f9c83d9
+ size 707044677
data/shard_00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:860e697e1af302ec950f25e4bb975b93f41fd474d49a150e02d607e124ce22b1
+ size 723988027
data/shard_00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18cee0c00f9b8d0575577b4cb3d3f2082bb560f3bb8874ce596b5b63ee9e8a69
+ size 704553539
data/shard_00005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf6ad77a932e903f7f0e80c5e60213e8bd1bf2e998318b8a1bf1fd1c5ada07fb
+ size 794814232
data/shard_00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bfd370a42ea596d9748806bea321cbfc49fa0747c114223569872389eefb90ef
+ size 712177507
data/shard_00007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3bc27f2668b9081f71b3e55b970e282bbfddda66e42553aed09b27d87d62dc49
+ size 779912869
data/shard_00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76df537f509800a5af6d711a5924a87f846d0628f5896cfbe280403d078f488f
+ size 826240700
data/shard_00009.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b4f131991045db5490f736bfd7e9aca26682ce5bc065d2858c4a4d80d340e3b
+ size 850289766
data/shard_00010.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dfee9cdba0715cf45a378bb0c0a385ae78dcb8ab210e928430f106428c819194
+ size 754496119
data/shard_00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:917d19485f2f48fa68175e31ac43298b62a38f24a9109e25914b742443e8b678
+ size 720685373
data/shard_00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49be61595050a34aef60f6e0c1a8952e34ea64d373cc6ccdaa89c67a1ba69a12
+ size 754268228
data/shard_00013.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0633942f75912a2c33945e41aa4dd71f9c586e2545df9f2c0cf740bddda509d3
+ size 736990035
data/shard_00014.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ed15804d98d9aebd4ba56f01efdbef6963c66ac42714d95b4353a29b5a59c30
+ size 779265927
data/shard_00015.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40151b00fb17e7733dd4ffc5bbc776642976ad99f6690156aebd17b89c690839
+ size 680606128
data/shard_00016.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f92426feed4fee16be1e3e3f9b5d10da779f40c110463c3c8687905afc39b56
+ size 779554505
data/shard_00017.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ff564e29adf5bd934d7d6f3c1ea5db1bc3737f3098915d5fce8f4e6fda00b4f
+ size 733115029
data/shard_00018.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d780032665a0e3e92d145cb805428d229c4917b995110a06fc01203abaa6b991
+ size 745527440
data/shard_00019.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a7001f6a9479e4e6137f721d78b7682f2393a73297ec4f77da29680699e30b5
+ size 805148430
data/shard_00020.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd9c55bf679e2a151afb9001528b6ea3f5c242071c35f769be4663c696435757
+ size 711440704
data/shard_00021.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3385c9b6c71ce7c67b302838c0c62c15787fc2c573b16fb984e0bc0885740b84
+ size 743287603
data/shard_00022.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c98e117a009eccfd5a2e9a95d95b7bfda6b9dc5ab1e6d2804cdeb033681ce914
+ size 718556645
data/shard_00023.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea82b6fbecd515713ff78d5ff16fe36bbf6cb3457be03b01fe4da541e8b19231
+ size 716140530
data/shard_00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20ccfe77a189ba368d624d45ebe6c4a5d27ff3341eb8d6a71cda118e72659438
+ size 661422980
data/shard_00025.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7218d8d49d4f752ca477f317472ba57cb02eda6844321eb74921c6847cfd11b
+ size 631341780
data/shard_00026.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45ad1d5a741bdbb1ea56e90b596dbfc2a9db9a6a4b4f1187657292c8a7666684
+ size 693257317
data/shard_00027.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8c10965608ca836525f79fbaf34f5a6c54ef0fa162273bbbe0dd507b96793cb
+ size 722101112
data/shard_00028.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bd10f07df9b9251656788438f6e7b9ea63e73bf4bdfa67e585ce72a8894551e
+ size 714293962
data/shard_00029.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d013f344dda1c829c719728997e3015901227fca70c3f5d8084d630b23c34fe
+ size 707653113
data/shard_00030.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13b07a9be63ddce8037da61583198400c2e9d4fc13baaeb6bc543f90275d4a74
+ size 670080098
data/shard_00031.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc4645114989ec995115fc7c4f8ca6851e8551a2df6d730156ba7b8c838d0a50
+ size 626929786
data/shard_00032.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d75894726f22640b021cb8c8ea9ebc9a4bd4896d9d69929db44c5610357a7e0d
+ size 637763912
data/shard_00033.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d27add8697d13db070fbd7386d24ae6b4e6d150c87fa9a7c09ef111677132b4
+ size 678514694
data/shard_00034.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55fb50419f52fe0c6659b2f0ed8d1fca8a5e7445c64225cda4c75b2ee3a88caf
+ size 535151829
data/shard_00035.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db835534a585f0f3efb7bee6b5af5429a8c2d31ab23becc63e30cc30e16730ef
+ size 750926736
data/shard_00036.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4f0c36fbb23af6b94da7243e61be00d3fca2a433bd7e2a44e3244d0199d0507
+ size 724854296
data/shard_00037.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aed2c74bf3f971744d8fea83e5306ffb03fbec67700e1ae7a4623f52d998d202
+ size 653001873
data/shard_00038.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0232a80338042c880849a74a9b7b687a39b72872426d3233965d185f93f913a3
+ size 647878520
data/shard_00039.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58f49074bce8fd8e1c8536ee40783c4d37838d0443573925f8817390d2b343f0
+ size 645833145
data/shard_00040.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84ba1dfb8fee32a1484c2c3fcecf5a2ac33c8f5e66da091fb2148ce1b875e086
+ size 712912275
data/shard_00041.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c65a1c3edc50575ec8fc7164cbd125a6a103208988e15ddf318319383169bc70
+ size 621168723
data/shard_00042.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2af4143782e04c66af0443e3520919ef6d7b987e1735946e3f1b1873d142c46b
+ size 626736239
data/shard_00043.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd0f5760eea8901f32a72ae9dfcbfc6ec7930cb5337023eb84ae2d36a5252c80
+ size 632133885
data/shard_00045.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3f3293c9864556d2f94784391975c85f0af815a1cdab212475d0d4882717fe2
+ size 664291326
data/shard_00046.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:595c612996ab57961969d8caefc45e59f3dff8c7018a80e343ee88178d1ad62a
+ size 665925399