---
dataset_info:
- config_name: issues
  features:
  - name: repo_name
    dtype: string
  - name: issue_id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 30986711842
    num_examples: 15549682
  download_size: 16370074732
  dataset_size: 30986711842
- config_name: kaggle
  features:
  - name: file_id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 5209133899
    num_examples: 580195
  download_size: 2222724371
  dataset_size: 5209133899
configs:
- config_name: issues
  data_files:
  - split: train
    path: issues/train-*
- config_name: kaggle
  data_files:
  - split: train
    path: kaggle/train-*
---

# GitHub Issues & Kaggle Notebooks

## Description
GitHub Issues & Kaggle Notebooks is a collection of two code datasets intended for language model training, sourced from GitHub issues and from notebooks on the Kaggle platform. The datasets are a modified subset of the [StarCoder2](https://arxiv.org/abs/2402.19173) training corpus, specifically the [bigcode/StarCoder2-Extras](https://huggingface.co/datasets/bigcode/starcoder2data-extras) dataset. We reformatted the samples to remove StarCoder2's special tokens, using natural text to delimit the comments in issues and rendering Kaggle notebooks as Markdown text and code blocks.

The dataset includes:

- 🐛 GitHub Issues – 11B tokens of discussions from GitHub issues, sourced from [GH Archive](https://www.gharchive.org/).
- 📊 Kaggle Notebooks – 1.7B tokens of data analysis notebooks in Markdown format, curated from Kaggle's [Meta Kaggle Code](https://www.kaggle.com/datasets/kaggle/meta-kaggle-code) dataset.

Both datasets were filtered to remove low-quality content, duplicates, and PII. More details can be found in the StarCoder2 [paper](https://arxiv.org/abs/2402.19173).

## How to load the dataset

You can load a specific subset using the following code:

```python
from datasets import load_dataset

issues = load_dataset("HuggingFaceTB/github-issues-notebooks", "issues", split="train")  # GitHub Issues
kaggle_notebooks = load_dataset("HuggingFaceTB/github-issues-notebooks", "kaggle", split="train")  # Kaggle Notebooks
```

## Dataset curation
These curation details come from the StarCoder2 pipeline. The original datasets can be found at https://huggingface.co/datasets/bigcode/starcoder2data-extras, and more details are available in the StarCoder2 paper.

### 🐛 GitHub Issues
The GitHub Issues dataset consists of discussions from GitHub repositories, sourced from GHArchive. It contains issue reports, bug tracking, and technical Q&A discussions.

To ensure high-quality data, the StarCoder2 processing pipeline included:

- Removing bot-generated comments and auto-replies from email responses.
- Filtering out short issues (<200 characters) and extremely long comments.
- Keeping only discussions with multiple users (or highly detailed single-user reports).
- Anonymizing usernames while preserving the conversation structure, and redacting names, emails, keys, passwords, and IP addresses using [StarPII](https://huggingface.co/bigcode/starpii).
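
For illustration, the length and participation heuristics above could be sketched as follows. This is our own toy version, not the StarCoder2 code: the `comments`/`user`/`text` fields and all thresholds other than the 200-character minimum are assumptions.

```python
# Toy sketch of the issue filters described above; field names and the
# "extremely long" / "highly detailed" thresholds are assumptions.
MIN_ISSUE_CHARS = 200
MAX_COMMENT_CHARS = 10_000          # assumed cutoff for "extremely long" comments
DETAILED_SINGLE_USER_CHARS = 1_000  # assumed cutoff for detailed solo reports

def keep_issue(issue):
    """Return True if an issue passes the length and participation filters."""
    text = issue["text"]
    if len(text) < MIN_ISSUE_CHARS:
        return False  # too short
    if any(len(c["text"]) > MAX_COMMENT_CHARS for c in issue["comments"]):
        return False  # drop issues containing extremely long comments
    users = {c["user"] for c in issue["comments"]}
    # Keep multi-user discussions, or highly detailed single-user reports.
    return len(users) > 1 or len(text) >= DETAILED_SINGLE_USER_CHARS

issue = {
    "text": "Segfault when loading a large file. " + "details " * 30,
    "comments": [
        {"user": "username_0", "text": "Stack trace attached."},
        {"user": "username_1", "text": "Thanks, reproduced on my end."},
    ],
}
print(keep_issue(issue))  # True
```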

We format the conversations using this template:

```
Title: [Issue title]

Question:
username_0: [Issue content]

Answers:
username_1: [Answer from user 1]
username_0: [Author reply]
username_2: [Answer from user 2]
...
Status: Issue closed (optional)
```
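
A minimal, hypothetical formatter that produces this layout (the function and its arguments are our own illustration; the released dataset ships only the already-formatted `text` field):

```python
def format_issue(title, question, answers, closed=False):
    """Render an issue discussion in the template shown above.

    `answers` is a list of (username, text) pairs; usernames are assumed
    to be already anonymized as username_0, username_1, ...
    """
    lines = [
        f"Title: {title}",
        "",
        "Question:",
        f"username_0: {question}",
        "",
        "Answers:",
    ]
    lines += [f"{user}: {text}" for user, text in answers]
    if closed:
        lines.append("Status: Issue closed")
    return "\n".join(lines)

sample = format_issue(
    "Crash on startup",
    "The app exits immediately after launch.",
    [("username_1", "Can you share logs?"), ("username_0", "Attached.")],
    closed=True,
)
print(sample)  # prints the issue in the template above
```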

### 📊 Kaggle Notebooks
The Kaggle Notebooks are sourced from the [Meta Kaggle Code](https://www.kaggle.com/datasets/kaggle/meta-kaggle-code) dataset, licensed under Apache 2.0. They were cleaned using a multi-step filtering process, which included:

- Removing notebooks with syntax errors or fewer than 100 characters.
- Extracting metadata for notebooks that reference Kaggle datasets. When possible, we retrieve the referenced datasets and add information about them (description, `ds.info()` output, and 4 examples) to the beginning of the notebook.
- Filtering out duplicates, which reduced the dataset volume by 78%, and redacting PII.

Each notebook is formatted in Markdown: we start with the notebook title and the dataset description (when available), then put the notebook, converted to a Python script, in a code block.
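
As a rough sketch of that layout (the function and argument names are ours, not the original pipeline's), a sample could be assembled like this:

```python
def format_notebook(title, dataset_description, code):
    """Assemble a notebook sample: title, optional dataset info, then a code block.

    Hypothetical illustration of the layout described above, not the
    StarCoder2 processing code.
    """
    parts = [f"# {title}"]
    if dataset_description:
        parts.append(dataset_description)
    parts.append(f"Code:\n```python\n{code}\n```")
    return "\n\n".join(parts)

sample = format_notebook(
    "Iris Flower Dataset",
    "### Context\nThe Iris flower data set is a multivariate data set ...",
    "import pandas as pd\ndf = pd.read_csv('IRIS.csv')",
)
print(sample)
```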

Below is an example of a Kaggle notebook:

````
# Iris Flower Dataset

### Context
The Iris flower data set is a multivariate data set introduced ... (truncated)

```python
import pandas as pd

df = pd.read_csv('iris-flower-dataset/IRIS.csv')
df.info()
```
```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
 #   Column        Non-Null Count  Dtype
---  ------        --------------  -----
 0   sepal_length  150 non-null    float64
 1   sepal_width   150 non-null    float64
 2   petal_length  150 non-null    float64
 3   petal_width   150 non-null    float64
 4   species       150 non-null    object
dtypes: float64(4), object(1)
memory usage: 6.0+ KB
```

Examples from the dataset:
```
{
  "sepal_length": 5.1,
  "sepal_width": 3.5,
  "petal_length": 1.4,
  "petal_width": 0.2,
  "species": "Iris-setosa"
}
... (truncated)
```

Code:
```python
import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)

# Input data files are available in the read-only "../input/" directory
import os

for dirname, _, filenames in os.walk("/kaggle/input"):
    for filename in filenames:
        print(os.path.join(dirname, filename))
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import matplotlib.pyplot as plt

data = pd.read_csv("/kaggle/input/iris-flower-dataset/IRIS.csv")
data.head()
X = data.drop("species", axis=1)
... (truncated)
```
````

## Citation
```
@article{lozhkov2024starcoder,
  title={StarCoder 2 and The Stack v2: The Next Generation},
  author={Lozhkov, Anton and Li, Raymond and Allal, Loubna Ben and Cassano, Federico and Lamy-Poirier, Joel and Tazi, Nouamane and Tang, Ao and Pykhtar, Dmytro and Liu, Jiawei and Wei, Yuxiang and others},
  journal={arXiv preprint arXiv:2402.19173},
  year={2024}
}
```
|