- split: train
  path: kaggle/train-*
---

# 📝 GitHub Issues & Notebooks

## Description

📝 GitHub Issues & Notebooks is a collection of code datasets intended for training language models. The data is sourced from GitHub issues, Kaggle notebooks, and Jupyter notebooks. These datasets are part of the [StarCoder2](https://arxiv.org/abs/2402.19173) training corpus and form a modified subset of the [bigcode/StarCoder2-Extras](https://huggingface.co/datasets/bigcode/starcoder2data-extras) dataset. We reformatted the samples to remove StarCoder2's special tokens, using natural text to delimit comments in issues and rendering Kaggle notebooks as markdown and code blocks.

The dataset includes three subsets:

- 🐛 GitHub Issues – 11B tokens of technical discussions and issue tracking from GitHub repositories.
- 📊 Kaggle Notebooks – 2B tokens of data analysis notebooks curated from Kaggle.
- 💻 Jupyter Notebooks – 16B tokens of Jupyter notebooks converted to Python scripts for easier processing.

These subsets were filtered to remove low-quality content and duplicates; more details can be found in the StarCoder2 [paper](https://arxiv.org/abs/2402.19173).

## How to load the dataset

You can load a specific subset using the following code:

```python
from datasets import load_dataset

data = load_dataset("HuggingFaceTB/github-issues-notebooks", "issues", split="train")   # GitHub Issues
data = load_dataset("HuggingFaceTB/github-issues-notebooks", "kaggle", split="train")   # Kaggle Notebooks
data = load_dataset("HuggingFaceTB/github-issues-notebooks", "jupyter", split="train")  # Jupyter Notebooks
```

## Dataset curation

These curation details are from the StarCoder2 pipeline. The original datasets can be found at https://huggingface.co/datasets/bigcode/starcoder2data-extras.

### 🐛 GitHub Issues

The GitHub Issues dataset consists of discussions from GitHub repositories, sourced from GHArchive. It contains issue reports, bug tracking, and technical Q&A discussions.

To ensure high-quality data, the StarCoder2 processing pipeline included:

- Removing bot-generated comments and auto-replies from email responses.
- Filtering out short issues (<200 characters) and extremely long comments.
- Keeping only discussions with multiple users (or highly detailed single-user reports).
- Anonymizing usernames while preserving the conversation structure.

This cleaning process removed 38% of issues, yielding a dataset with technical depth. More details can be found in the StarCoder2 [paper](https://arxiv.org/abs/2402.19173).
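
The heuristics above can be sketched as a simple filter. This is an illustrative approximation with assumed thresholds and bot markers, not the actual StarCoder2 pipeline code:

```python
# Illustrative bot markers; the real pipeline's list is not reproduced here
BOT_MARKERS = ("[bot]", "dependabot", "codecov")

def keep_issue(comments: list[dict]) -> bool:
    """Decide whether to keep an issue, given [{"user": ..., "text": ...}, ...]."""
    # Drop bot-generated comments before applying length/user heuristics
    human = [c for c in comments
             if not any(m in c["user"].lower() for m in BOT_MARKERS)]
    total_len = sum(len(c["text"]) for c in human)
    if total_len < 200:  # discard short issues
        return False
    users = {c["user"] for c in human}
    # Keep multi-user discussions, or long, detailed single-user reports
    return len(users) > 1 or total_len > 1000

issue = [
    {"user": "alice", "text": "Crash when loading the config file. " * 6},
    {"user": "bob", "text": "Reproduced on v2.1, stack trace attached. " * 3},
    {"user": "dependabot[bot]", "text": "Bumps requests from 2.31 to 2.32."},
]
print(keep_issue(issue))  # True: two human users and enough text
```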

### 💻 Jupyter Notebooks

The Jupyter Notebooks dataset consists of 4M deduplicated, structured notebooks, converted to Python scripts using Jupytext.
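
The conversion can be sketched as follows. This is a simplified, stdlib-only illustration of the kind of transformation Jupytext performs, not Jupytext's actual implementation:

```python
import json

def notebook_to_script(nb_json: str) -> str:
    """Convert .ipynb JSON into a percent-format Python script (simplified)."""
    nb = json.loads(nb_json)
    parts = []
    for cell in nb.get("cells", []):
        source = "".join(cell.get("source", []))
        if cell["cell_type"] == "markdown":
            # Markdown cells become commented blocks
            body = "\n".join("# " + line for line in source.splitlines())
            parts.append("# %% [markdown]\n" + body)
        elif cell["cell_type"] == "code":
            parts.append("# %%\n" + source)
    return "\n\n".join(parts) + "\n"

demo = json.dumps({"cells": [
    {"cell_type": "markdown", "source": ["# Load the data"]},
    {"cell_type": "code", "source": ["import math\n", "print(math.pi)"]},
]})
print(notebook_to_script(demo))
```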

### 📊 Kaggle Notebooks

The Kaggle Notebooks are sourced from the [Meta Kaggle Code](https://www.kaggle.com/datasets/kaggle/meta-kaggle-code) dataset. They were cleaned using a multi-step filtering process, which included:

- Removing notebooks with syntax errors or fewer than 100 characters.
- Extracting metadata for notebooks that reference Kaggle datasets; when possible, we retrieve the datasets and prepend information about the data to the notebook (description, `ds.info()` output, and 4 examples).
- Filtering out duplicates, which reduced the dataset volume by 78%.
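
The metadata-prepending step might look roughly like this. It is a hypothetical sketch using pandas; `build_data_header` is not a function from the actual pipeline:

```python
import io
import pandas as pd

def build_data_header(description: str, df: pd.DataFrame, n_examples: int = 4) -> str:
    """Render a dataset summary to prepend to a notebook (illustrative)."""
    buf = io.StringIO()
    df.info(buf=buf)  # capture the df.info() text instead of printing it
    return (
        f"# Dataset description: {description}\n\n"
        f"# df.info() output:\n{buf.getvalue()}\n"
        f"# First {n_examples} examples:\n{df.head(n_examples).to_string()}\n"
    )

df = pd.DataFrame({"price": [10, 20, 30, 40, 50], "rooms": [1, 2, 3, 2, 4]})
print(build_data_header("Toy housing data", df))
```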

## Citation

```
@article{lozhkov2024starcoder,
  title={StarCoder 2 and The Stack v2: The Next Generation},
  author={Lozhkov, Anton and Li, Raymond and Allal, Loubna Ben and Cassano, Federico and Lamy-Poirier, Joel and Tazi, Nouamane and Tang, Ao and Pykhtar, Dmytro and Liu, Jiawei and Wei, Yuxiang and others},
  journal={arXiv preprint arXiv:2402.19173},
  year={2024}
}
```