---
license: mit
task_categories:
- text-generation
language:
- en
- code
tags:
- devops
- docker
- ci-cd
- github-actions
- build-systems
- configuration
size_categories:
- 10K<n<100K
---
# Build/CI Configuration Corpus
A curated dataset of build, CI/CD, and project configuration files from top GitHub repositories.
Repositories are sourced from [ronantakizawa/github-top-projects](https://huggingface.co/datasets/ronantakizawa/github-top-projects), which tracks GitHub's top repositories from 2013–2025.
## Use Cases
- Fine-tuning LLMs for DevOps/infrastructure code generation
- Training code completion models for configuration files
- Benchmarking LLM performance on build/CI tasks

### Schema
| Field | Type | Description |
|-------|------|-------------|
| `content` | string | Full file content |
| `file_path` | string | Path within repository |
| `file_name` | string | Filename only |
| `category` | string | High-level category (see above) |
| `config_type` | string | Specific config type (e.g., "docker-compose", "tsconfig") |
| `repo_name` | string | Repository (owner/name) |
| `repo_stars` | int64 | Star count |
| `repo_language` | string | Primary language of repository |
| `license` | string | SPDX license identifier |
| `quality_score` | float32 | Quality score (0.0–1.0), see below |
| `is_generated` | bool | Whether file appears auto-generated (lower signal for training) |
### Quality Filtering
The dataset undergoes three quality filtering stages:
1. **Minimum size**: Files with fewer than 5 lines or 50 characters are removed (trivial configs like 2-line `.nvmrc` files add no training signal).
2. **Near-deduplication**: MinHash LSH (128 permutations, Jaccard threshold 0.85) removes near-duplicate files. Within each duplicate cluster, the version from the highest-starred repository is kept. This eliminates hundreds of copies of common starter templates (e.g., default `tsconfig.json`, boilerplate `Dockerfile`).
3. **Makefile scoping**: Makefiles are restricted to the repository root and one directory deep, preventing large C/C++ repos from flooding the dataset with subdirectory Makefiles.
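The near-deduplication stage can be sketched as follows. This is a simplified, brute-force version using only the standard library: it computes MinHash signatures with 128 salted hash functions and compares every pair directly, whereas the actual pipeline uses MinHash LSH banding to avoid pairwise comparison. The field names match the schema; everything else (shingle size, the greedy keep-highest-stars loop) is illustrative.

```python
import hashlib

NUM_PERM = 128  # matches the 128 permutations described above

def shingles(text: str, k: int = 5) -> set:
    """Character k-shingles of a config file's content."""
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def minhash(text: str, num_perm: int = NUM_PERM) -> list:
    """MinHash signature: for each salted hash function, keep the
    minimum hash value over all shingles."""
    sh = shingles(text)
    sig = []
    for p in range(num_perm):
        salt = str(p).encode()
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(salt + s.encode(), digest_size=8).digest(), "big")
            for s in sh
        ))
    return sig

def est_jaccard(sig_a: list, sig_b: list) -> float:
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def dedup(files: list, threshold: float = 0.85) -> list:
    """Greedy near-dedup: visit files from highest- to lowest-starred repo,
    keeping a file only if it is not near-duplicate of anything kept so far.
    `files` is a list of dicts with 'content' and 'repo_stars' keys."""
    ranked = sorted(files, key=lambda f: f["repo_stars"], reverse=True)
    kept, sigs = [], []
    for f in ranked:
        sig = minhash(f["content"])
        if all(est_jaccard(sig, s) < threshold for s in sigs):
            kept.append(f)
            sigs.append(sig)
    return kept
```

Because duplicates are visited in descending star order, the survivor of each cluster is automatically the version from the highest-starred repository.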
### Quality Score
Each file receives a quality score (0.0–1.0) based on four equally-weighted factors:
- **Comment density** (0–0.25): Files with comments/annotations teach intent, not just syntax
- **Content length** (0–0.25): Longer files are more substantive (log-scaled, capped at 500 lines)
- **Repository quality** (0–0.25): Higher-starred repos signal better engineering practices (log-scaled)
- **Non-trivial ratio** (0–0.25): Ratio of meaningful lines vs blank/bracket-only lines
Use `quality_score` to filter for higher-quality examples during training:
```python
high_quality = ds["train"].filter(lambda x: x["quality_score"] >= 0.5)
```
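The four factors above can be sketched as a scoring function. This is an illustrative reconstruction, not the dataset's actual code: the comment markers checked, the log-normalization constants, and the 100k-star cap are all assumptions; only the four equally-weighted 0–0.25 factors come from the description.

```python
import math

def quality_score(content: str, repo_stars: int) -> float:
    """Illustrative 0.0-1.0 quality score from four 0-0.25 factors."""
    lines = content.splitlines()
    nonblank = [l for l in lines if l.strip()]

    # Comment density: share of lines starting with a comment marker.
    comments = sum(1 for l in nonblank if l.lstrip().startswith(("#", "//", ";")))
    density = 0.25 * min(1.0, 4 * comments / max(1, len(nonblank)))

    # Content length: log-scaled, capped at 500 lines.
    length = 0.25 * min(1.0, math.log1p(len(lines)) / math.log1p(500))

    # Repository quality: log-scaled star count (cap constant is assumed).
    stars = 0.25 * min(1.0, math.log1p(repo_stars) / math.log1p(100_000))

    # Non-trivial ratio: meaningful lines vs blank/bracket-only lines.
    trivial = sum(1 for l in lines if l.strip() in ("", "{", "}", "[", "]"))
    nontrivial = 0.25 * (len(lines) - trivial) / max(1, len(lines))

    return density + length + stars + nontrivial
```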
### Splits
- **train** (90%): For training
- **test** (10%): For evaluation
Splits are deterministic by repository (all files from a repo go to the same split).
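A repo-level deterministic split can be implemented by hashing the repository name, so every file from a repo lands in the same split. The exact hashing scheme used to build this dataset is not specified; this is one straightforward sketch.

```python
import hashlib

def split_for(repo_name: str, test_frac: float = 0.10) -> str:
    """Deterministic split assignment keyed on repo name (owner/name),
    so all files from one repository share a split."""
    h = int.from_bytes(hashlib.sha256(repo_name.encode()).digest()[:8], "big")
    return "test" if (h % 10_000) / 10_000 < test_frac else "train"
```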
## Usage
```python
from datasets import load_dataset
ds = load_dataset("ronantakizawa/codeconfig")
# Filter by category
dockerfiles = ds["train"].filter(lambda x: x["category"] == "dockerfile")
github_actions = ds["train"].filter(lambda x: x["category"] == "github_actions")
# Filter by specific config type
tsconfigs = ds["train"].filter(lambda x: x["config_type"] == "tsconfig")
```