electrocampbell committed on
Commit 36bca41 · verified · 1 Parent(s): 98fdbbc

Initial model/dataset card

Files changed (1): README.md (+55 −0)
---
license: apache-2.0
language:
- en
- code
task_categories:
- translation
- text-generation
tags:
- code
- code-translation
- nebula
size_categories:
- 100K<n<1M
---

# nebula-8lang-203k

Training pairs for fine-tuning code translation models on [Nebula](https://github.com/colinc86/nebula), a universal code intermediate language. Each example is a (Nebula → target language) pair across 8 languages: Python, JavaScript, TypeScript, Go, Swift, Kotlin, Rust, C.

## Pipeline

Source code is harvested from [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) (and [The Stack](https://huggingface.co/datasets/bigcode/the-stack) for Swift), parsed into individual functions, then converted to Nebula via the Nebula compiler's `from-{lang}` ingesters. Pairs that fail validation checks (trivial functions, error markers, length filters) are dropped.
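The actual validation rules live in the pipeline code and are not published here; the sketch below only illustrates the *kind* of filtering described above. The function name, thresholds, and error-marker strings are all assumptions, not the dataset's real filters:

```python
import re

# Illustrative thresholds -- the dataset's real cutoffs are not published.
MIN_CHARS = 20
MAX_CHARS = 8_000
ERROR_MARKERS = ("<error>", "TODO: unsupported")  # hypothetical compiler markers

def keep_pair(nebula_src: str, target_src: str) -> bool:
    """Return True if a (Nebula -> target) pair passes basic validation.

    Drops pairs that are trivial (too short), overlong, or that contain
    error markers left behind by a failed ingestion step.
    """
    for src in (nebula_src, target_src):
        if not MIN_CHARS <= len(src) <= MAX_CHARS:
            return False  # length filter
        if any(marker in src for marker in ERROR_MARKERS):
            return False  # error-marker filter
    # Trivial-body check: reject targets whose body is a bare placeholder.
    if re.search(r"\b(pass|unimplemented)\b\s*$", target_src.strip()):
        return False
    return True
```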

## Format

Each line in `train.jsonl` / `val.jsonl` is a chat-formatted SFT example:

```json
{"messages": [
{"role": "system", "content": "You are a code translator. Given code in Nebula (a universal intermediate language), produce the equivalent idiomatic <Language> code. Output only the <Language> code, no explanations."},
{"role": "user", "content": "<nebula source>"},
{"role": "assistant", "content": "<target language source>"}
]}
```
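A record can be checked against this three-message structure with the standard library alone; this is a minimal sketch assuming only the schema shown above:

```python
import json

def parse_example(line: str) -> dict:
    """Parse one JSONL line and verify the system/user/assistant structure."""
    example = json.loads(line)
    roles = [m["role"] for m in example["messages"]]
    assert roles == ["system", "user", "assistant"], f"unexpected roles: {roles}"
    return example

# One line as it would appear in train.jsonl (contents abbreviated):
line = json.dumps({"messages": [
    {"role": "system", "content": "You are a code translator. ..."},
    {"role": "user", "content": "<nebula source>"},
    {"role": "assistant", "content": "<target language source>"},
]})
example = parse_example(line)
print(example["messages"][2]["content"])  # -> <target language source>
```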

## Sizes

| Split | Examples |
|---|---|
| Train | 203,336 |
| Val | ~22,600 |
| Per language | ~30,000 (8 langs) |

~30% of examples are multi-function programs (vs the single-function pairs in `nebula-8lang-68k`).

Split: 90% train / 10% val.
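The card does not say how the 90/10 split was produced; one common, reproducible approach is deterministic hash-based assignment, sketched here as an assumption rather than the pipeline's actual method:

```python
import hashlib

def split_of(example_id: str, val_fraction: float = 0.10) -> str:
    """Deterministically assign an example to 'train' or 'val'.

    Hashing the example id (rather than random sampling) keeps the
    split stable across pipeline re-runs.
    """
    digest = hashlib.sha256(example_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "val" if bucket < val_fraction else "train"

counts = {"train": 0, "val": 0}
for i in range(10_000):
    counts[split_of(f"example-{i}")] += 1
print(counts)  # roughly 9,000 train / 1,000 val
```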

## Models trained on this dataset

- [`electrocampbell/nebula-8lang-14b`](https://huggingface.co/electrocampbell/nebula-8lang-14b)

## License

Apache 2.0. Source data is from StarCoderData / The Stack, used under their respective licenses (permissively-licensed code only).