Dataset schema (one row per chunk of the source paper):

- doi: string (length 10)
- chunk-id: int64 (range 0–916)
- chunk: string (384–2.02k chars)
- id: string (12–14 chars)
- title: string (8–139 chars)
- summary: string (236–1.92k chars)
- source: string (length 31)
- authors: string (998 distinct values)
- categories: string (269 distinct values)
- comment: string (577 distinct values)
- journal_ref: string (36 distinct values)
- primary_category: string (31 distinct values)
- published: string (748 distinct values)
- updated: string (752 distinct values)
- references: list (length 0–269)
- metadata: dict
- embeddings: list (length 1.54k)
2407.21783 | 0 | The Llama 3 Herd of Models
Llama Team, AI @ Meta1
1A detailed contributor list can be found in the appendix of this paper.
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a
new set of foundation models, called Llama 3. It is a herd of language models that natively suppo... | 2407.21783#0 | The Llama 3 Herd of Models | Modern artificial intelligence (AI) systems are powered by foundation models.
This paper presents a new set of foundation models, called Llama 3. It is a
herd of language models that natively support multilinguality, coding,
reasoning, and tool usage. Our largest model is a dense Transformer with 405B
parameters and a ... | http://arxiv.org/pdf/2407.21783 | Abhimanyu Dubey,Abhinav Jauhri,Abhinav Pandey,Abhishek Kadian,Ahmad Al-Dahle,Aiesha Letman,Akhil Mathur,Alan Schelten,Amy Yang,Angela Fan,Anirudh Goyal,Anthony Hartshorn,Aobo Yang,Archi Mitra,Archie Sravankumar,Artem Korenev,Arthur Hinsvark,Arun Rao,Aston Zhang,Aurelien Rodriguez,Austen Gregerson,Ava Spataru,Baptiste R... | cs.AI,cs.CL,cs.CV | null | null | cs.AI | 20240731 | 20240815 | [
{
"id": "2201.11903"
}
] | {
"authors": "Abhimanyu Dubey,Abhinav Jauhri,Abhinav Pandey,Abhishek Kadian,Ahmad Al-Dahle,Aiesha Letman,Akhil Mathur,Alan Schelten,Amy Yang,Angela Fan,Anirudh Goyal,Anthony Hartshorn,Aobo Yang,Archi Mitra,Archie Sravankumar,Artem Korenev,Arthur Hinsvark,Arun Rao,Aston Zhang,Aurelien Rodriguez,Austen Gregerson,Ava Sp... | [
0.019791821, -0.03444159, ... (remaining values of the 1.54k-dim embedding vector truncated) |
2407.21783 | 1 | resulting models are not yet being broadly released as they are still under development.
Date:July 23, 2024
Website: https://llama.meta.com/
1 Introduction
Foundation models are general models of language, vision, speech, and/or other modalities that are designed
to support a large variety of AI tasks. They form the ba... | 2407.21783#1 |
2407.21783 | 2 | which we will refer to as Llama 3 throughout for brevity.
We believe there are three key levers in the development of high-quality foundation models: data, scale, and
managing complexity. We seek to optimize for these three levers in our development process:
•Data.Compared to prior versions of Llama (Touvron et al., 20... | 2407.21783#2 |
arXiv:2407.21783v2 [cs.AI] 15 Aug 2024
| Model | Finetuned | Multilingual | Long context | Tool use | Release |
|---|---|---|---|---|---|
| Llama 3 8B | ✗ | ✗¹ | ✗ | ✗ | April 2024 |
| Llama 3 8B Instruct | ✓ | ✗ | ✗ | ✗ | April 2024 |
| Llama 3 70B | ✗ | ✗¹ | ✗ | ✗ | April 2024 |
| Llama 3 70B Instruct | ✓ | ✗ | ✗ | ✗ | April 2024 |
| Llama 3.1 8B | ✗ | ✓ | ✓ | ✗ | July 2024 |
| Llama 3.1 8B Instruct | ✓ | ✓ | ✓ | ✓ | July 2024 |
Llama 3.1 70B ... | 2407.21783#3 |
2407.21783 | 4 | size for our training budget, we also train our smaller models for much longer than is compute-optimal.
The resulting models perform better than compute-optimal models at the same inference budget. We
use the flagship model to further improve the quality of those smaller models during post-training.
•Managing complexit... | 2407.21783#4 |
2407.21783 | 5 | parameters. We evaluate the performance of Llama 3 on a plethora of benchmark datasets that span a wide
range of language understanding tasks. In addition, we perform extensive human evaluations that compare
Llama 3 with competing models. An overview of the performance of the flagship Llama 3 model on key
benchmarks is... | 2407.21783#5 |
2407.21783 | 6 | language model and a new version of our Llama Guard model (Inan et al., 2023) for input and output safety.
We hope that the open release of a flagship model will spur a wave of innovation in the research community,
and accelerate a responsible path towards the development of artificial general intelligence (AGI).
As pa... | 2407.21783#6 |
| Category | Benchmark | Llama 3 8B | Gemma 2 9B | Mistral 7B | Llama 3 70B | Mixtral 8×22B | GPT-3.5 Turbo | Llama 3 405B | Nemotron 4 340B | GPT-4 | GPT-4o | Claude 3.5 Sonnet |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 69.4 | 72.3 | 61.1 | 83.6 | 76.9 | 70.7 | 87.3 | 82.6 | 85.1 | 89.1 | 89.9 |
| General | MMLU (0-shot, CoT) | 73.0 | 72.3△ | 60.5 | 86.0 | 79.9 | 69.8 | 88.6 | 78.7◁ | 85.4 | 88.7 | 88.3 |
| General | MMLU-Pro (5-shot, CoT) | 48.3 | – | 36.9 | 66.4 | 56.3 | 49.2 | 73.3 | 62.7 | 64.8 | 74.0 | 77.0 |
| General | IFEval | 80.4 | 73.6 | 57.6 | 87.5 | 72.7 | 69.9 | 88.6 | 85.1 | 84.3 | 85.6 | 88.0 |
| Code | HumanEval (0-shot) | 72.6 | 54.3 | 40.2 | 80.5 | 75.6 | 68.0 | 89.... | 2407.21783#7 |
| Category | Benchmark | Llama 3 8B | Gemma 2 9B | Mistral 7B | Llama 3 70B | Mixtral 8×22B | GPT-3.5 Turbo | Llama 3 405B | Nemotron 4 340B | GPT-4 | GPT-4o | Claude 3.5 Sonnet |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Math | MATH (0-shot, CoT) | 51.9 | 44.3 | 13.0 | 68.0 | 54.1 | 43.1 | 73.8 | 41.1 | 64.5 | 76.6 | 71.1 |
| Reasoning | ARC Challenge (0-shot) | 83.4 | 87.6 | 74.2 | 94.8 | 88.7 | 83.7 | 96.9 | 94.6 | 96.4 | 96.7 | 96.7 |
| Reasoning | GPQA (0-shot, CoT) | 32.8 | – | 28.8 | 46.7 | 33.3 | 30.8 | 51.1 | – | 41.4 | 53.6 | 59.4 |
| Tool use | BFCL | 76.1 | – | 60.4 | 84.8 | – | 85.9 | 88.5 | 86.5 | 88.3 | 80.5 | 90.2 |
| Tool use | Nexus | 38.5 | 30.0 | 24.7 | 56.7 | 48.5 | 37.2 | 58.7 | – | 50.3 | 56.1 | 45... | 2407.21783#8 |
| Category | Benchmark | Llama 3 8B | Gemma 2 9B | Mistral 7B | Llama 3 70B | Mixtral 8×22B | GPT-3.5 Turbo | Llama 3 405B | Nemotron 4 340B | GPT-4 | GPT-4o | Claude 3.5 Sonnet |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Long context | NIH/Multi-needle | 98.8 | – | – | 97.5 | – | – | 98.1 | – | 100.0 | 100.0 | 90.8 |
| Multilingual | MGSM (0-shot, CoT) | 68.9 | 53.2 | 29.9 | 86.9 | 71.1 | 51.4 | 91.6 | – | 85.9 | 90.5 | 91.6 |
Table 2 Performance of finetuned Llama 3 models on key benchmark evaluations. The table compares the performance of
the 8B, 70B, and 405B versions of Llama 3 with that of competing models... | 2407.21783#9 |
2407.21783 | 10 | amounts of knowledge about the world from the text it is “reading”. To do this effectively, pre-training
is performed at massive scale: we pre-train a model with 405B parameters on 15.6T tokens using a
context window of 8K tokens. This standard pre-training stage is followed by a continued pre-training
stage that incre... | 2407.21783#10 |
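The pre-training scale quoted above (a 405B-parameter dense model trained on 15.6T tokens) can be sanity-checked with the common ~6·N·D FLOPs approximation for dense Transformers. The approximation itself is a standard rule of thumb, not a figure taken from this excerpt.

```python
# Rough training-compute estimate for a dense Transformer, using the
# standard ~6 * parameters * tokens approximation (forward + backward pass).
def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Figures from the text: 405B parameters, 15.6T tokens.
flops = training_flops(405e9, 15.6e12)
print(f"{flops:.2e}")  # 3.79e+25
```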
2407.21783 | 11 | described in Section 5.4.
The resulting models have a rich set of capabilities. They can answer questions in at least eight languages,
write high-quality code, solve complex reasoning problems, and use tools out-of-the-box or in a zero-shot way.
We also perform experiments in which we add image, video, and speech capab... | 2407.21783#11 |
2407.21783 | 12 | out parts via a discrete-token representation. As a result, the model learns the structure of speech
signals. See Section 7 for details on the image encoder and Section 8 for details on the speech encoder.
•Vision adapter training. We train an adapter that integrates the pre-trained image encoder into the
pre-trained l... | 2407.21783#12 |
2407.21783 | 13 | to enable high-quality speech understanding. We do not change the language model during speech
adapter training. We also integrate a text-to-speech system. See Section 8 for details.
Our multimodal experiments lead to models that can recognize the content of images and videos, and support
interaction via a speech inter... | 2407.21783#13 |
2407.21783 | 14 | 3.1.1 Web Data Curation
Much of the data we utilize is obtained from the web and we describe our cleaning process below.
PII and safety filtering. Among other mitigations, we implement filters designed to remove data from websites that are likely to contain unsafe content or high volumes of PII, domains that have been ranke... | 2407.21783#14 |
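The PII and safety filtering described above can be sketched as a simple domain blocklist. The blocklist entries and document fields below are illustrative assumptions, not the actual filters used for Llama 3.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains ranked as unsafe or PII-heavy.
BLOCKED_DOMAINS = {"pii-broker.example", "unsafe-content.example"}

def keep_document(doc: dict) -> bool:
    """Drop documents whose source domain is on the blocklist."""
    domain = urlparse(doc["url"]).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    return domain not in BLOCKED_DOMAINS

docs = [
    {"url": "https://arxiv.org/abs/2407.21783"},
    {"url": "https://www.pii-broker.example/people/123"},
]
print([keep_document(d) for d in docs])  # [True, False]
```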
2407.21783 | 15 | evaluate different cleaning configurations. We find markdown is harmful to the performance of a model that
is primarily trained on web data compared to plain text, so we remove all markdown markers.
De-duplication. We apply several rounds of de-duplication at the URL, document, and line level:
•URL-level de-duplication... | 2407.21783#15 |
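The URL-level pass of the de-duplication pipeline above (keep one version of each page) can be sketched as follows; the choice of keeping the most recently crawled version and the field names are illustrative assumptions.

```python
# URL-level de-duplication: keep a single version per URL. Iterating in
# crawl order and overwriting means the last (most recent) copy survives.
def dedupe_by_url(docs):
    latest = {}
    for doc in docs:  # docs assumed sorted by crawl time, oldest first
        latest[doc["url"]] = doc
    return list(latest.values())

docs = [
    {"url": "https://a.example", "text": "old"},
    {"url": "https://b.example", "text": "other"},
    {"url": "https://a.example", "text": "new"},
]
print([d["text"] for d in dedupe_by_url(docs)])  # ['new', 'other']
```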
2407.21783 | 16 | Heuristic filtering. We develop heuristics to remove additional low-quality documents, outliers, and documents
with excessive repetitions. Some examples of heuristics include:
•We use duplicated n-gram coverage ratio (Rae et al., 2021) to remove lines that consist of repeated
content such as logging or error messages. ... | 2407.21783#16 |
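The duplicated n-gram coverage heuristic mentioned above (after Rae et al., 2021) measures what fraction of a document is covered by n-grams that repeat within it; the exact definition and thresholds used for Llama 3 are not given in this excerpt, so this is an illustrative variant.

```python
from collections import Counter

def dup_ngram_coverage(tokens: list, n: int) -> float:
    """Fraction of tokens covered by n-grams occurring more than once."""
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    covered = [False] * len(tokens)
    for i, gram in enumerate(ngrams):
        if counts[gram] > 1:
            for j in range(i, i + n):
                covered[j] = True
    return sum(covered) / len(tokens)

# Repetitive log-like content scores high and would be filtered out:
print(dup_ngram_coverage("ERROR timeout ERROR timeout ERROR timeout".split(), 2))  # 1.0
```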
2407.21783 | 17 | quality classifier based on Llama 2, we create a training set of cleaned web documents, describe the quality
requirements, and instruct Llama 2’s chat model to determine if the documents meet these requirements. We
use DistilRoberta (Sanh et al., 2019) to generate quality scores for each document for efficiency reason... | 2407.21783#17 |
2407.21783 | 18 | pipeline has several unique features:
•We use a fasttext-based language identification model to categorize documents into 176 languages.
•We perform document-level and line-level de-duplication within data for each language.
•We apply language-specific heuristics and model-based filters to remove low-quality document...
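The de-duplication bullet above can be illustrated with exact hashing. This is a minimal sketch of document- and line-level dedup within one language bucket; the production pipeline is more elaborate (e.g. fuzzy and global dedup):

```python
import hashlib

def dedup_lines(docs):
    """Exact document-level then line-level de-duplication over a stream of
    documents, using SHA-256 digests as dedup keys."""
    seen_docs, seen_lines, out = set(), set(), []
    for doc in docs:
        doc_key = hashlib.sha256(doc.encode()).hexdigest()
        if doc_key in seen_docs:
            continue                          # drop exact duplicate documents
        seen_docs.add(doc_key)
        kept = []
        for line in doc.splitlines():
            line_key = hashlib.sha256(line.encode()).hexdigest()
            if line_key not in seen_lines:    # drop lines already seen anywhere
                seen_lines.add(line_key)
                kept.append(line)
        out.append("\n".join(kept))
    return out

docs = ["a\nb", "a\nb", "a\nc"]
result = dedup_lines(docs)   # second doc dropped; duplicate line "a" stripped from the third
```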
2407.21783 | 19 | over-represented on the web, for example, arts and entertainment.
Scaling laws for data mix. To determine the best data mix, we perform scaling law experiments in which we
train several small models on a data mix and use that to predict the performance of a large model on that mix
(see Section 3.2.1). We repeat this pr...
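The small-model-to-large-model prediction step can be sketched as a power-law extrapolation. Everything below is illustrative: the compute budgets and losses are hypothetical numbers generated from an exact power law, not measurements from the paper:

```python
import math

def fit_power_law(computes, losses):
    """Fit loss ~ a * C**(-b) by least squares in log-log space."""
    xs = [math.log(c) for c in computes]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - slope * mx)
    return a, -slope   # b = -slope

# Hypothetical small-model runs on one candidate data mix.
computes = [1e18, 1e19, 1e20]
losses = [3.0 * (c / 1e18) ** -0.05 for c in computes]

a, b = fit_power_law(computes, losses)
predicted_loss = a * 1e22 ** -b   # extrapolate this mix to a much larger budget
```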
2407.21783 | 20 | few-shot learning capabilities and out-of-domain generalization of Llama 3.
Following OpenAI (2023a), we evaluate the efficacy of including the GSM8k (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021b) training sets during annealing. We find that annealing improved the performance
of a pre-trained Llama 3 8B model o...
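Annealing here refers to the final phase of pre-training in which the learning rate is decayed while the data mix is upsampled toward high-quality sources. A minimal sketch of the schedule side, assuming a simple linear decay to zero (the actual schedule and data re-weighting are more involved):

```python
def annealed_lr(step, total_steps, peak_lr):
    """Linear learning-rate anneal from peak_lr down to zero over the
    annealing phase (a minimal sketch; details are assumed)."""
    frac = min(step / total_steps, 1.0)
    return peak_lr * (1.0 - frac)

# Starting from the 405B model's peak learning rate of 8e-5 (Table 3):
lrs = [annealed_lr(s, 100, 8e-5) for s in (0, 50, 100)]
```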
2407.21783 | 21 | evaluate new data sources is more efficient than performing scaling law experiments for every small dataset.
3.2 Model Architecture
Llama 3 uses a standard, dense Transformer architecture (Vaswani et al., 2017). It does not deviate significantly
from Llama and Llama 2 (Touvron et al., 2023a,b) in terms of model archite...
2407.21783 | 22 |
                        8B          70B          405B
Key/Value Heads         8           8            8
Peak Learning Rate      3 × 10^-4   1.5 × 10^-4  8 × 10^-5
Activation Function     SwiGLU
Vocabulary Size         128,000
Positional Embeddings   RoPE (θ = 500,000)
Table 3 Overview of the key hyperparameters of Llama 3. We display settings for 8B, 70B, and 405B language models.
•We use a vocabulary with 128K tokens. Our token v...
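The RoPE base θ = 500,000 from Table 3 sets the per-pair rotation frequencies of each attention head. A minimal sketch of the standard RoPE frequency schedule (head_dim here is the per-head dimension, assumed to be 128 for illustration):

```python
def rope_frequencies(head_dim, theta=500_000.0):
    """Rotation frequency for each 2-D subspace of a query/key head under
    RoPE with base theta (Table 3 sets theta = 500,000)."""
    return [theta ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

# With 128-dim heads there are 64 rotation pairs, from frequency 1.0 down to
# nearly 1/theta; a larger theta slows the rotations, which helps long context.
freqs = rope_frequencies(head_dim=128)
```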
2407.21783 | 23 | Llama 3 405B uses an architecture with 126 layers, a token representation dimension of 16,384, and 128
attention heads; see Table 3 for details. This leads to a model size that is approximately compute-optimal
according to scaling laws on our data for our training budget of 3.8 × 10^25 FLOPs.
3.2.1 Scaling Laws
We develop ...
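The stated configuration can be sanity-checked with two standard rules of thumb, which are approximations and not the paper's exact accounting: a dense Transformer has roughly 12 · n_layers · d_model² parameters, and training costs roughly 6 · params · tokens FLOPs:

```python
# Back-of-envelope check on the 405B configuration described above.
n_layers, d_model = 126, 16_384
params = 12 * n_layers * d_model ** 2     # ~4.06e11, i.e. ~405B parameters
budget_flops = 3.8e25                     # stated training budget
tokens = budget_flops / (6 * params)      # ~1.56e13, i.e. ~15.6T training tokens
```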
2407.21783 | 24 | 2. Next, we correlate the negative log-likelihood on downstream tasks with task accuracy, utilizing both the
scaling law models and older models trained with higher compute FLOPs. In this step, we specifically
leverage the Llama 2 family of models.
This approach enables us to predict downstream task performance given a ...
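The second step of this two-step approach maps negative log-likelihood to task accuracy. A sketch of a sigmoidal mapping of the kind that could be fit here; every coefficient below is illustrative, and the floor models random-guess accuracy on a 4-way multiple-choice task:

```python
import math

def predicted_accuracy(nll, scale=8.0, midpoint=1.0, floor=0.25):
    """Sigmoidal map from normalized negative log-likelihood on the correct
    answer to task accuracy. Coefficients are illustrative; in practice they
    would be fit on scaling-law models and the Llama 2 family."""
    return floor + (1.0 - floor) / (1.0 + math.exp(scale * (nll - midpoint)))

acc_strong = predicted_accuracy(0.4)   # low NLL on correct answers
acc_weak = predicted_accuracy(1.6)     # high NLL on correct answers
```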
2407.21783 | 25 | We use a fixed batch size for each compute scale, ranging between 250K and 4M.
3 https://github.com/openai/tiktoken/tree/main
[Figure 2: validation loss (0.70–0.95) vs. training tokens (10^10–10^12), one IsoFLOPs curve per compute budget from 6e18 to 1e22 FLOPs.]
Figure 2 Scaling law IsoFLOPs curves between 6 × 10^18 and 10^22 FLOPs. T...
2407.21783 | 26 | the parabola minimums in Figure 2.
These experiments give rise to the IsoFLOPs curves in Figure 2. The loss in these curves is measured on
a separate validation set. We fit the measured loss values using a second-degree polynomial and identify
the minimum of each parabola. We refer to the minimum of a parabola as the comp...
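The parabola-fitting step can be sketched exactly for three IsoFLOPs runs at one compute budget: fit the quadratic through (log tokens, loss) and read off the vertex. The run values below are hypothetical, not measurements from Figure 2:

```python
def parabola_vertex(p0, p1, p2):
    """Exact second-degree polynomial through three (x, y) points;
    returns the x-coordinate of its extremum (-b / 2a)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2 ** 2 * (y0 - y1) + x1 ** 2 * (y2 - y0) + x0 ** 2 * (y1 - y2)) / denom
    return -b / (2 * a)

# Hypothetical runs at one budget: (log10 training tokens, validation loss).
runs = [(10.0, 0.90), (11.0, 0.80), (12.0, 0.85)]
log_opt = parabola_vertex(*runs)
optimal_tokens = 10 ** log_opt   # compute-optimal token count for this budget
```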
2407.21783 | 27 | An important observation is that IsoFLOPs curves become flatter around the minimum as the compute
budget increases. This implies that the performance of the flagship model is relatively robust to small changes in
the trade-off between model size and training tokens. Based on this observation, we ultimately decided to
train ...
2407.21783 | 28 | 3.3 Infrastructure, Scaling, and Efficiency
We describe our hardware and infrastructure that powered Llama 3 405B pre-training at scale and discuss
several optimizations that lead to improvements in training efficiency.
3.3.1 Training Infrastructure
The Llama 1 and 2 models were trained on Meta’s AI Research SuperClus...
2407.21783 | 29 | function of the normalized negative log-likelihood of the correct answer. This analysis enables us to predict model
performance on the ARC Challenge benchmark before pre-training commences. See text for details.
setup optimizes for production-grade reliability, which is essential as we scale up training.
Compute. Llama...
2407.21783 | 30 | major challenge is supporting the highly bursty checkpoint writes that saturate the storage fabric for short
durations. Checkpointing saves each GPU’s model state, ranging from 1 MB to 4 GB per GPU, for recovery
and debugging. We aim to minimize GPU pause time during checkpointing and increase checkpoint frequency
to r...
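The burstiness is easy to see with an order-of-magnitude calculation. Using the stated worst case of 4 GB of state per GPU and an assumed 16,384-GPU job (the pre-training scale mentioned elsewhere in this section), one checkpoint pushes tens of TiB at the storage fabric nearly simultaneously:

```python
# Rough upper bound on one checkpoint burst (job size is an assumption).
gpus = 16_384
per_gpu_bytes = 4 * 1024 ** 3                   # 4 GB worst case from the text
burst_tib = gpus * per_gpu_bytes / 1024 ** 4    # aggregate TiB written at once
```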
2407.21783 | 31 | connected by a single Minipack2 top-of-the-rack (ToR) switch. In the middle layer, 192 such racks are
connected by Cluster Switches to form a pod of 3,072 GPUs with full bisection bandwidth, ensuring no
oversubscription. At the top layer, eight such pods within the same datacenter building are connected via
Aggregation...
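The numbers in the topology above are internally consistent, as a quick arithmetic check shows: 192 racks forming a 3,072-GPU pod implies 16 GPUs behind each ToR switch, and eight such pods give the ~24K-GPU cluster:

```python
# Consistency check on the three-layer network topology described above.
racks_per_pod = 192
gpus_per_pod = 3_072
pods = 8
gpus_per_rack = gpus_per_pod // racks_per_pod   # GPUs behind each ToR switch
cluster_gpus = pods * gpus_per_pod              # GPUs across the building
```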
2407.21783 | 32 | 5Note that we use only up to 16K of these 24K GPUs for Llama 3 pre-training.
GPUs     TP  CP  PP   DP  Seq. Len.  Batch size/DP  Tokens/Batch  TFLOPs/GPU  BF16 MFU
8,192     8   1  16   64      8,192             32           16M         430       43%
16,384    8   1  16  128      8,192             16           16M         400       41%
16,384    8  16  16    4    131,072             16           16M         380       38%
Table 4 Scaling configurations and MFU for each sta...
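The MFU column can be approximately reproduced from the TFLOPs/GPU column. The 989 TFLOPs dense BF16 peak below is an assumed H100 figure, not a value stated in the table, so the computed percentages land within about a point of Table 4's 43/41/38%:

```python
def bf16_mfu(achieved_tflops, peak_tflops=989.0):
    """Model FLOPs Utilization: achieved model TFLOPs/GPU divided by the
    accelerator's peak (peak value is an assumption, not from the table)."""
    return achieved_tflops / peak_tflops

mfus = [round(100 * bf16_mfu(t)) for t in (430, 400, 380)]
```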
2407.21783 | 33 | common in training. Finally, better load balancing through E-ECMP significantly reduces the chance
of congestion. With these optimizations, we successfully run a 24K GPU cluster without traditional
congestion control methods such as Data Center Quantized Congestion Notification (DCQCN).
3.3.2 Parallelism for Model Scal...
2407.21783 | 34 | data parallelism (DP; Rajbhandari et al. (2020); Ren et al. (2021); Zhao et al. (2023b)).
Tensor parallelism splits individual weight tensors into multiple chunks on different devices. Pipeline parallelism
partitions the model vertically into stages by layers, so that different devices can process in parallel different...
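Tensor parallelism as described above can be shown in miniature: split a weight matrix column-wise across devices, run each shard's matmul locally, then concatenate the partial outputs (the role an all-gather plays on real hardware). Plain Python lists stand in for device-resident tensors:

```python
def column_parallel_matmul(x, w, n_devices):
    """Column-parallel linear layer: each 'device' holds a contiguous slice of
    the weight columns and computes its slice of the output independently."""
    cols = len(w[0])
    shard = cols // n_devices
    outputs = []
    for d in range(n_devices):
        w_shard = [row[d * shard:(d + 1) * shard] for row in w]
        # Local matmul on device d over its column shard.
        local = [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*w_shard)]
        outputs.extend(local)   # concatenation stands in for the all-gather
    return outputs

x = [1.0, 2.0]
w = [[1.0, 0.0, 2.0, 0.0],
     [0.0, 1.0, 0.0, 2.0]]
result = column_parallel_matmul(x, w, n_devices=2)   # matches the unsharded matmul
```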
2407.21783 | 35 | an overall BF16 Model FLOPs Utilization (MFU; Chowdhery et al. (2023)) of 38-43% for the configurations
shown in Table 4. The slight drop in MFU to 41% on 16K GPUs with DP=128 compared to 43% on 8K
GPUs with DP=64 is due to the lower batch size per DP group needed to keep the global tokens per batch
constant during tra...
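The trade-off described here is just arithmetic on Table 4's columns: tokens per batch equals DP degree × per-DP batch size × sequence length, so holding it at ~16M forces the per-DP batch to halve when DP doubles:

```python
def tokens_per_batch(dp, batch_per_dp, seq_len):
    """Global tokens per optimizer batch for a data-parallel training job."""
    return dp * batch_per_dp * seq_len

stage_8k = tokens_per_batch(64, 32, 8_192)     # 8K-GPU configuration
stage_16k = tokens_per_batch(128, 16, 8_192)   # 16K-GPU configuration
# Both give 16,777,216 tokens (~16M): the global batch is unchanged.
```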
This paper presents a new set of foundation models, called Llama 3. It is a
herd of language models that natively support multilinguality, coding,
reasoning, and tool usage. Our largest model is a dense Transformer with 405B
parameters and a ... | http://arxiv.org/pdf/2407.21783 | Abhimanyu Dubey,Abhinav Jauhri,Abhinav Pandey,Abhishek Kadian,Ahmad Al-Dahle,Aiesha Letman,Akhil Mathur,Alan Schelten,Amy Yang,Angela Fan,Anirudh Goyal,Anthony Hartshorn,Aobo Yang,Archi Mitra,Archie Sravankumar,Artem Korenev,Arthur Hinsvark,Arun Rao,Aston Zhang,Aurelien Rodriguez,Austen Gregerson,Ava Spataru,Baptiste R... | cs.AI,cs.CL,cs.CV | null | null | cs.AI | 20240731 | 20240815 | [
{
"id": "2201.11903"
}
]
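The MFU figures quoted above can be reproduced with a back-of-the-envelope calculation. A minimal sketch, assuming the standard ~6 · params · tokens FLOPs approximation and an illustrative BF16 peak of 989 TFLOP/s per GPU (both assumptions, not values from this chunk):

```python
def mfu(params: float, tokens_per_sec: float, num_gpus: int,
        peak_flops_per_gpu: float = 989e12) -> float:
    """Model FLOPs Utilization: achieved model FLOPs / cluster peak FLOPs."""
    achieved = 6.0 * params * tokens_per_sec   # ~6*N*D for forward + backward
    return achieved / (num_gpus * peak_flops_per_gpu)

# Hypothetical throughput for a 405B-parameter model on 16,384 GPUs:
print(f"MFU = {mfu(405e9, 2.7e6, 16384):.1%}")
```

With these illustrative numbers the result lands around 40%, inside the 38-43% band reported above.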
2407.21783 | 36 | • Memory imbalance. Existing pipeline parallelism implementations lead to imbalanced resource consumption. The first stage consumes more memory due to the embedding and the warm-up micro-batches.
• Computation imbalance. After the last layer of the model, we need to calculate output and loss, making
this stage the execut...
2407.21783 | 37 | GPU0 and GPU8 are in the same DP group.
To address these issues, we modify our pipeline schedule as shown in Figure 6, which allows setting N
flexibly (in this case N = 5), so that an arbitrary number of micro-batches can run in each batch. This allows
us to run: (1) fewer micro-batches than the number of stages when we have...
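The warm-up memory imbalance and the benefit of a tunable N can be illustrated with a toy 1F1B-style schedule. A hypothetical sketch (not Meta's implementation): each stage s buffers min(N, num_stages - s) forward activations before steady state, so earlier stages hold more, and N smaller than the stage count is now legal:

```python
def warmup_microbatches(num_stages: int, n_microbatches: int) -> list[int]:
    """Activations each pipeline stage buffers before 1F1B steady state."""
    return [min(n_microbatches, num_stages - s) for s in range(num_stages)]

# 8 stages, N = 5 micro-batches per batch (fewer than the number of stages):
print(warmup_microbatches(8, 5))  # first stages buffer the most activations
```

The falling tail of the list is exactly the first-stage memory pressure described above; decoupling N from the stage count lets the schedule shrink it.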
2407.21783 | 38 | training, especially in cases when the document mask introduces extra computation imbalance. We enable
TORCH_NCCL_AVOID_RECORD_STREAMS to reduce memory usage from asynchronous point-to-point
communication. Finally, to reduce memory cost, based on detailed memory allocation profiling, we proactively
deallocate tensors t...
2407.21783 | 39 | structure (Liu et al., 2023a), our CP implementation adopts an all-gather based method where we first
all-gather the key (K) and value (V) tensors, and then compute attention output for the local query (Q)
tensor chunk. Although the all-gather communication latency is exposed in the critical path, we still adopt
this a...
2407.21783 | 40 | insight is to make N tunable.
is small as the communicated K and V tensors are much smaller than the Q tensor due to the use of GQA (Ainslie
et al., 2023). Hence, the time complexity of attention computation is an order of magnitude larger than
all-gather (O(S^2) versus O(S), where S represents the sequence length in the full ...
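The all-gather-based context-parallel attention described in the preceding chunks can be simulated in a single process. A sketch with toy dimensions, assuming single-head unmasked attention and plain lists standing in for tensors and ranks:

```python
import math

def attention(q_rows, k_rows, v_rows):
    """Toy single-head scaled dot-product attention over row lists."""
    out = []
    for q in q_rows:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(len(q))
                  for k in k_rows]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append([sum(wi * v[d] for wi, v in zip(w, v_rows)) / z
                    for d in range(len(v_rows[0]))])
    return out

def cp_attention(q_chunks, k_chunks, v_chunks):
    # All-gather: every rank concatenates the K and V chunks (O(S) comms) ...
    k_full = [r for c in k_chunks for r in c]
    v_full = [r for c in v_chunks for r in c]
    # ... then attends with only its local Q chunk (O(S^2) compute).
    return [attention(q, k_full, v_full) for q in q_chunks]

x = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 1.0], [0.5, 0.5]]]  # 2 ranks, S=4, d=2
cp_out = cp_attention(x, x, x)
full = attention([r for c in x for r in c], [r for c in x for r in c],
                 [r for c in x for r in c])
assert [r for c in cp_out for r in c] == full  # matches full attention
```

Because each rank sees the full K and V after the gather, the chunked result is identical to unsharded attention; only Q stays local.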
2407.21783 | 41 | with minimal communication overhead while avoiding GPU memory overflow is challenging. We develop a
memory consumption estimator and a performance-projection tool which helped us explore various parallelism
configurations and project overall training performance and identify memory gaps effectively.
Numerical stability...
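A memory-consumption estimator in the spirit described here can be very small. A sketch under simplified assumptions (BF16 weights; FP32 gradients and Adam states fully sharded by FSDP across the data-parallel group; activations ignored) — the byte counts are illustrative, not Meta's tool:

```python
def per_gpu_state_gb(params: float, dp: int, tp: int, pp: int) -> float:
    """Rough parameter + optimizer-state bytes per GPU for a parallelism config."""
    shard = params / (tp * pp)        # parameters this model shard holds
    weights = 2 * shard               # BF16 weights: 2 bytes/param
    optim = (4 + 8) * shard / dp      # FP32 grads + Adam m, v sharded by FSDP
    return (weights + optim) / 1e9

# Compare two hypothetical 405B configurations:
for dp, tp, pp in [(64, 8, 16), (128, 8, 16)]:
    print(dp, tp, pp, round(per_gpu_state_gb(405e9, dp, tp, pp), 2), "GB")
```

Even this crude model makes the parallelism trade-off visible: growing DP shrinks only the sharded optimizer slice, not the replicated weights.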
2407.21783 | 42 | of microseconds. The original NCCL collectives—all-gather and reduce-scatter in FSDP, and point-to-point
in PP—require data chunking and staged data copy. This approach incurs several inefficiencies, including
(1) requiring a large number of small control messages to be exchanged over the network to facilitate data
tra...
2407.21783 | 43 | Component | Category | Interruption Count | % of Interruptions
Host Maintenance | Unplanned Maintenance | 32 | 7.6%
GPU SRAM Memory | GPU | 19 | 4.5%
GPU System Processor | GPU | 17 | 4.1%
NIC | Host | 7 | 1.7%
NCCL Watchdog Timeouts | Unknown | 7 | 1.7%
Silent Data Corruption | GPU | 6 | 1.4%
GPU Thermal Interface + Sensor | GPU | 6 | 1.4%
SSD | Host | 3 | 0.7%
Power Supply | Host | 3 | 0.7%
Server Chassis | Host | 2 | 0.5%
IO Expansion Boa...
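The "% of Interruptions" values can be recomputed from the raw counts. A sketch assuming the percentages are taken over 419 unexpected interruptions in total (an inference from the rounded values, not stated in this chunk):

```python
counts = {
    "Host Maintenance": 32, "GPU SRAM Memory": 19, "GPU System Processor": 17,
    "NIC": 7, "NCCL Watchdog Timeouts": 7, "Silent Data Corruption": 6,
    "GPU Thermal Interface + Sensor": 6, "SSD": 3, "Power Supply": 3,
    "Server Chassis": 2,
}
TOTAL = 419  # assumed denominator for the percentage column

def share(n: int) -> float:
    """Percentage of all interruptions, rounded to one decimal."""
    return round(100 * n / TOTAL, 1)

for name, n in counts.items():
    print(f"{name:30s} {n:3d}  {share(n):4.1f}%")
```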
2407.21783 | 44 | than 90% effective training time while supporting automated cluster maintenance, such as firmware and Linux
kernel upgrades (Vigraham and Leonhardi, 2024), which resulted in at least one training interruption daily.
The effective training time measures the time spent on useful training over the elapsed time.
During a 5...
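Effective training time, as defined above, is just useful time divided by elapsed time. A small sketch with hypothetical downtime numbers:

```python
def effective_training_time(elapsed_h: float, downtime_h: float) -> float:
    """Fraction of wall-clock time spent on useful training."""
    return (elapsed_h - downtime_h) / elapsed_h

# Hypothetical: a 30-day window with 64 hours lost to interruptions/restarts.
print(f"{effective_training_time(30 * 24, 64):.1%}")
```

Keeping this ratio above 90% despite daily interruptions is the point of the automated detection and restart machinery described in this section.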
2407.21783 | 45 | et al., 2024), a feature that captures collective metadata and stack traces into a ring buffer, allowing
us to diagnose hangs and performance issues quickly at scale, particularly with regard to NCCLX. Using
this, we efficiently record every communication event and the duration of each collective operation, a...
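A flight-recorder-style ring buffer of collective events is easy to sketch. The event fields and capacity here are hypothetical, in the spirit of the feature described above:

```python
from collections import deque

class FlightRecorder:
    """Keep only the most recent communication events in a ring buffer."""
    def __init__(self, capacity: int):
        self._events = deque(maxlen=capacity)  # old events drop automatically

    def record(self, op: str, duration_us: float) -> None:
        self._events.append({"op": op, "duration_us": duration_us})

    def dump(self) -> list:
        return list(self._events)  # what we would inspect after a hang

rec = FlightRecorder(capacity=2)
for i, op in enumerate(["all-gather", "reduce-scatter", "all-reduce"]):
    rec.record(op, duration_us=10.0 * (i + 1))
print([e["op"] for e in rec.dump()])  # oldest event was overwritten
```

A bounded buffer keeps the overhead constant, which is what makes always-on recording viable at scale.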
2407.21783 | 46 | internal state and track relevant information. While stalls due to NVLink failures cannot be completely
prevented, our system monitors the state of the communication library and automatically times out when
such a stall is detected. Additionally, NCCLX traces the kernel and network activities of each NCCLX
communicatio...
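The automatic timeout on detected stalls can be sketched as a polling watchdog. The timeout and poll interval are illustrative, not NCCLX internals:

```python
import time

def wait_or_flag_stall(is_done, timeout_s: float, poll_s: float = 0.005) -> bool:
    """Return True if the operation completes, False if we declare a stall."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_done():
            return True
        time.sleep(poll_s)
    return False  # stalled: the monitor would now dump traces and recover

assert wait_or_flag_stall(lambda: True, timeout_s=0.1) is True
assert wait_or_flag_stall(lambda: False, timeout_s=0.05) is False
```

Timing out rather than blocking forever is what converts an unrecoverable NVLink stall into a diagnosable, restartable event.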
2407.21783 | 47 | for example, due to all GPUs waiting for checkpointing or collective communications to finish, or the startup
or shutdown of the entire training job. When this happens, it can result in instant fluctuations of power
consumption across the data center on the order of tens of megawatts, stretching the limits of the power...
2407.21783 | 48 | we use an initial batch size of 4M tokens and sequences of length 4,096, and double these values to a batch
size of 8M sequences of 8,192 tokens after pre-training 252M tokens. We double the batch size again to 16M
after pre-training on 2.87T tokens. We found this training recipe to be very stable: we observed few loss...
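The staged ramp described above can be encoded as a lookup on tokens seen so far. The thresholds follow the text; treating the schedule as a step function is an assumption:

```python
def batch_config(tokens_seen: float) -> tuple[int, int]:
    """(batch size in tokens, sequence length) for the current stage."""
    if tokens_seen < 252e6:        # initial stage
        return 4_000_000, 4096
    if tokens_seen < 2.87e12:      # after pre-training 252M tokens
        return 8_000_000, 8192
    return 16_000_000, 8192        # after pre-training on 2.87T tokens

print(batch_config(0), batch_config(1e9), batch_config(3e12))
```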
2407.21783 | 49 | We do not train on long sequences earlier because the compute in self-attention layers grows quadratically in
the sequence length. We increase the supported context length in increments, pre-training until the model has
successfully adapted to the increased context length. We assess successful adaptation by measuring w...
2407.21783 | 50 | of very high quality; see Section 3.1.3. Finally, we compute the average of model checkpoints (Polyak (1991)
averaging) during annealing to produce the final pre-trained model.
4 Post-Training
We produce the aligned Llama 3 models by applying several rounds of post-training, or aligning the model
with human feedback (O...
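Polyak-style checkpoint averaging amounts to an element-wise mean over saved states. A sketch on plain dicts of parameter lists rather than real model checkpoints:

```python
def average_checkpoints(ckpts: list[dict]) -> dict:
    """Element-wise mean of parameter values across checkpoints."""
    n = len(ckpts)
    return {name: [sum(c[name][i] for c in ckpts) / n
                   for i in range(len(ckpts[0][name]))]
            for name in ckpts[0]}

avg = average_checkpoints([{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]},
                           {"w": [5.0, 6.0]}])
print(avg)  # {'w': [3.0, 4.0]}
```

Averaging the annealing checkpoints smooths out step-to-step noise in the final weights at essentially zero training cost.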
2407.21783 | 51 | then finetune pre-trained checkpoints with supervised finetuning (SFT; see Section 4.1.3), and further align
the checkpoints with Direct Preference Optimization (DPO; see Section 4.1.4). This process is illustrated
in Figure 7. Unless otherwise noted, our modeling procedure applies to Llama 3 405B, and we refer to
Llam...
2407.21783 | 52 | tokens indicate when it is time for the human and the AI to alternate speaking.
4.1.2 Reward Modeling
We train a reward model (RM) covering different capabilities on top of the pre-trained checkpoint. The
training objective is the same as Llama 2 except that we remove the margin term in the loss, as we observe
diminish...
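The reward-model objective is a Bradley-Terry pairwise loss; dropping the Llama 2 margin term simply removes m(r) from inside the sigmoid. A scalar sketch, with scores standing in for model outputs:

```python
import math

def rm_pairwise_loss(chosen: float, rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected); no margin term, per the text."""
    return math.log1p(math.exp(-(chosen - rejected)))

print(f"{rm_pairwise_loss(2.0, 0.5):.4f}")  # confident, correct ranking
print(f"{rm_pairwise_loss(0.0, 0.0):.4f}")  # indifferent pair -> log 2
```

The loss only cares that the chosen response outscores the rejected one; how much it outscores it no longer enters the objective.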
2407.21783 | 53 | 4.1.3 Supervised Finetuning
The reward model is then used to perform rejection sampling on our human annotation prompts, the details
of which are described in Section 4.2. Together with this rejection-sampled data and other data sources
(including synthetic data), we finetune the pre-trained language model using a stan...
the best performing models from the previous alignment rounds. As a result, our training data conforms better
to the distribution of the policy model that is being optimized in each round. We also explored on-policy
algorithms such as PPO (Schulman et al., 2017), but found that DPO required less compute for large-scale...
increase and reduce the likelihood of these tokens simultaneously.
•Regularization with NLL loss: We add an additional negative log-likelihood (NLL) loss term with a scaling
coefficient of 0.2 on the chosen sequences, similar to Pang et al. (2024). This helps further stabilize DPO
training by maintaining desired formatting...
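The combined objective can be sketched as the standard DPO loss plus the 0.2-scaled NLL term on the chosen sequence. A minimal sketch with made-up sequence log-probabilities; the 0.2 coefficient follows the text above, while the DPO β value used here is an assumption.

```python
import math

def dpo_with_nll(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
                 beta=0.1, nll_coeff=0.2):
    """DPO loss plus an NLL regularizer on the chosen sequence.
    Inputs are total sequence log-probabilities under the policy (pi_*)
    and the frozen reference model (ref_*)."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    dpo = -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log sigmoid
    nll = -pi_chosen  # negative log-likelihood of the chosen sequence
    return dpo + nll_coeff * nll

# Toy log-probabilities: the policy already prefers the chosen response.
loss = dpo_with_nll(pi_chosen=-5.0, pi_rejected=-7.0,
                    ref_chosen=-6.0, ref_rejected=-6.0)
```

The NLL term keeps the log-probability of chosen responses from collapsing even as the DPO margin widens.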
Dataset              % of comparisons   Avg. # turns per dialog   Avg. # tokens per example   Avg. # tokens in prompt   Avg. # tokens in response
Reasoning and tools  5.89%              1.6                       707.7                       46.6                      129.9
Total                100%               3.8                       1,041.6                     44.5                      284.0
Table 6 Statistics of human preference data. We list statistics of the internally collected human preference data used for
Llama 3 alignment. We ask annotators to perform multi-turn dialogues with the models and make comparisons among
responses at each turn.
4.2.1 Preference Data
Our preference data annotation process is similar to Llama 2. We deploy multiple models for annotation after
each round and sample two responses from two different models for each user prompt. These models can
be trained with different data mixes and alignment recipes, allowing for different capability strength (e.g., code expertise) and increased data diversity.
average length of prompt and response, suggesting that we train Llama 3 on more complex tasks. In addition,
we implement a quality analysis and human evaluation process to rigorously assess the data collected, allowing
us to refine our prompts and provide systematic, actionable feedback to annotators. For example, as Llama 3...
Dataset              % of examples   Avg. # turns   Avg. # tokens   Avg. # tokens in context   Avg. # tokens in final response
General English      52.66%          6.3            974.0           656.7                      317.1
Code                 14.89%          2.7            753.3           378.8                      374.5
Multilingual         3.01%           2.7            520.5           230.8                      289.7
Exam-like            8.14%           2.3            297.8           124.4                      173.4
Reasoning and tools  21.19%          3.1            661.6           359.8                      301.9
Long context         0.11%           6.7            38,135.6        37,395.2                   740.5
Total                100%            4.7            846.1           535.7                      310.4
Table 7 Statistics of SFT data. We list internally collected SFT data used for Llama 3 alignment.
rejection-sampling procedure and overall composition of our final SFT datamix.
Rejection sampling. During rejection sampling (RS), for each prompt collected during human annotation
(Section 4.2.1) we sample K (typically between 10 and 30) outputs from the latest chat model policy (usually
the best performing checkpoint...
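Best-of-K rejection sampling can be sketched as: draw K candidates from the current policy and keep the one the reward model scores highest. The sampler and reward model below are deterministic stand-in stubs, not the paper's components.

```python
from itertools import cycle

def reward_model(prompt, response):
    # Stand-in for the learned reward model: here, longer responses score higher.
    return len(response)

def best_of_k(prompt, sample_fn, k, rm=reward_model):
    """Sample k outputs from the current policy and keep the highest-reward one."""
    candidates = [sample_fn(prompt) for _ in range(k)]
    return max(candidates, key=lambda r: rm(prompt, r))

# A deterministic stand-in sampler cycling through three candidate responses.
sampler = cycle(["ok", "a much fuller answer", "a mid answer"])
best = best_of_k("Explain KV caches.", lambda p: next(sampler), k=3)
```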
also enables us to share the key-value cache pages for a prompt across all corresponding outputs. Together,
this leads to a throughput improvement of over 2× during rejection sampling.
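The gain from sharing the prompt's key-value cache pages across all sampled outputs can be sanity-checked with a back-of-the-envelope token count. The function name and numbers below are illustrative assumptions, not measurements from the paper.

```python
def prefill_tokens(prompt_len, response_len, k, share_prompt_cache):
    """Tokens whose keys/values must be computed when sampling k outputs
    for one prompt (a rough proxy for rejection-sampling cost)."""
    if share_prompt_cache:
        return prompt_len + k * response_len   # prompt encoded once, pages shared
    return k * (prompt_len + response_len)     # prompt re-encoded for every sample

naive = prefill_tokens(prompt_len=2000, response_len=500, k=20, share_prompt_cache=False)
shared = prefill_tokens(prompt_len=2000, response_len=500, k=20, share_prompt_cache=True)
speedup = naive / shared  # exceeds 2x for long prompts and large k
```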
Overall data composition. Table 7 shows data statistics for each broad category of our “helpfulness” mix. While
SFT and preference data...
tonal issues, we identify overused phrases (such as “I’m sorry” or “I apologize”) and carefully balance the
proportion of such samples in our dataset.
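Balancing overused phrases amounts to a simple downsampling pass: flag samples containing the phrases, then keep only enough of them to stay under a target proportion. The phrase list and cap below are illustrative assumptions.

```python
OVERUSED = ("i'm sorry", "i apologize")

def is_flagged(sample):
    return any(phrase in sample.lower() for phrase in OVERUSED)

def balance_tone(samples, max_fraction=0.1):
    """Cap the proportion of samples containing overused apology phrases."""
    clean = [s for s in samples if not is_flagged(s)]
    flagged = [s for s in samples if is_flagged(s)]
    budget = int(max_fraction * len(samples))
    return clean + flagged[:budget]

samples = ["I'm sorry, I cannot help with that."] * 5 + ["Here is the answer."] * 5
balanced = balance_tone(samples, max_fraction=0.2)  # keeps at most 2 flagged samples
```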
Data pruning. We also apply a collection of model-based techniques to remove low-quality training samples
and improve overall model performance:
•Topic classification: We...
score as high quality. The RM and Llama-based scores have high disagreement rates, and we find that
combining these signals yields the best recall on our internal test set. Ultimately, we select examples
that are marked as high quality by the RM or the Llama-based filter.
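The OR-combination of the two disagreeing quality signals can be sketched directly; the field names below are assumptions for illustration.

```python
def keep_example(rm_high_quality, llama_high_quality):
    """Keep an example marked high quality by the reward model OR the
    Llama-based filter; OR-ing the disagreeing signals maximizes recall."""
    return rm_high_quality or llama_high_quality

examples = [
    {"id": 1, "rm": True,  "llama": False},  # kept: RM likes it
    {"id": 2, "rm": False, "llama": True},   # kept: Llama-based filter likes it
    {"id": 3, "rm": False, "llama": False},  # dropped by both
]
kept = [e for e in examples if keep_example(e["rm"], e["llama"])]
```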
•Difficulty scoring: Because we are also interested...
examples, and only keeping the ones that have maximum cosine similarity less than a threshold to the
examples seen so far in the cluster.
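The deduplication step amounts to a greedy pass over a (quality-sorted) cluster, keeping an example only if its maximum cosine similarity to everything kept so far stays below the threshold. A minimal sketch with toy 2-D embeddings; the threshold value is an assumption.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def greedy_dedup(embeddings, threshold=0.95):
    """Keep an example only if its max cosine similarity to everything kept
    so far is below the threshold (input assumed pre-sorted within a cluster)."""
    kept = []
    for emb in embeddings:
        if all(cosine(emb, k) < threshold for k in kept):
            kept.append(emb)
    return kept

# Toy 2-D embeddings: the second vector nearly duplicates the first.
kept = greedy_dedup([(1.0, 0.0), (0.999, 0.01), (0.0, 1.0)])
```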
4.3 Capabilities
We highlight special efforts to improve performance for specific capabilities such as code (Section 4.3.1),
multilinguality (Section 4.3.2), math and reasoning (Section 4.3.3), long context (Section 4.3.4), tool use (Section 4.3.5), factuality (Section 4.3.6), and steerability (Section 4.3.7).
these coding capabilities via training a code expert, generating synthetic data for SFT, improving formatting
with system prompt steering, and creating quality filters to remove bad samples from our training data.
Expert training. We train a code expert which we use to collect high quality human annotations for code
throughout subsequent rounds of post-training.
Synthetic data generation. During development, we identified key issues in code generation, including difficulty
in following instructions, code syntax errors, incorrect code generation, and difficulty in fixing bugs. While
intensive human annotation could theoretically resolve these issues, synthetic data generation offers...
•Problem description generation: First, we generate a large collection of programming problem
descriptions that span a diverse range of topics, including those in the long tail distribution. To
achieve this diversity, we sample random code snippets from various sources and prompt the model
to generate programming problems inspired by these examples.
correctness, catching errors such as syntax errors, use of uninitialized variables or non-imported
functions, code style issues, typing errors, and others.
–Unit test generation and execution: For each problem and solution, we prompt the model
to generate unit tests, executed in a containerized environment together with the solution...
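Executing generated unit tests against a candidate solution can be sketched as follows, with a subprocess and timeout standing in for the containerized sandbox (an assumption; the paper's actual execution environment is not specified here).

```python
import os
import subprocess
import sys
import tempfile

def passes_unit_tests(solution_code, test_code, timeout=10):
    """Run a candidate solution together with its generated unit tests in a
    separate process (standing in for the containerized sandbox); return True
    only if every assertion passes within the time limit."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code + "\n\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)

good = passes_unit_tests("def add(a, b):\n    return a + b", "assert add(2, 3) == 5")
bad = passes_unit_tests("def add(a, b):\n    return a - b", "assert add(2, 3) == 5")
```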
with each round building on the previous one. After each round, the model is improved, generating
higher-quality synthetic data for the next round. This iterative process allows for progressive
refinement and enhancement of the model’s performance.
2. Synthetic data generation: programming language translation. We observe...
explanations) where execution feedback is less informative for determining quality, we employ an
alternative multi-step approach. Using this procedure, we generated approximately 1.2M synthetic...
Figure 8 Code translation example. We display an example of using Llama 3 to translate Python code (left) to PHP
code (right)...
the output (e.g., we ask the model how faithful the backtranslated code is to the original). We
then use the generated examples that have the highest self-verification scores in SFT.
System prompt steering during rejection sampling. During the rejection sampling process, we used code-specific
system prompts to improve...
This paper presents a new set of foundation models, called Llama 3. It is a
herd of language models that natively support multilinguality, coding,
reasoning, and tool usage. Our largest model is a dense Transformer with 405B
parameters and a ... | http://arxiv.org/pdf/2407.21783 | Abhimanyu Dubey,Abhinav Jauhri,Abhinav Pandey,Abhishek Kadian,Ahmad Al-Dahle,Aiesha Letman,Akhil Mathur,Alan Schelten,Amy Yang,Angela Fan,Anirudh Goyal,Anthony Hartshorn,Aobo Yang,Archi Mitra,Archie Sravankumar,Artem Korenev,Arthur Hinsvark,Arun Rao,Aston Zhang,Aurelien Rodriguez,Austen Gregerson,Ava Spataru,Baptiste R... | cs.AI,cs.CL,cs.CV | null | null | cs.AI | 20240731 | 20240815 | [
{
"id": "2201.11903"
}
]
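The backtranslation selection step above can be sketched as a simple score-based filter: keep only the generated examples whose self-verification (faithfulness) score clears a bar before they enter SFT. A minimal sketch; the record fields, the 0–10 score scale, and the threshold are assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: retain only backtranslated examples whose
# model-assigned faithfulness ("self-verification") score is highest.
# Field names and the score scale are assumptions.

def select_for_sft(examples, threshold=8):
    """Filter (code, backtranslation, score) records by self-verification score."""
    return [ex for ex in examples if ex["score"] >= threshold]

examples = [
    {"code": "def add(a, b): return a + b", "doc": "Adds two numbers.", "score": 9},
    {"code": "def mul(a, b): return a * b", "doc": "Returns a minus b.", "score": 3},
]
kept = select_for_sft(examples)
# Only the faithful backtranslation survives the filter.
```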
2407.21783 | 72 | only a very small snippet of an executable program.) To address this, we utilize the “model-as-judge” approach,
where earlier versions of Llama 3 assess and assign a binary (0/1) score based on two criteria: code correctness
and code style. We retain only those samples that achieve a perfect score of 2. Initially, this... | 2407.21783#72 | The Llama 3 Herd of Models
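The "model-as-judge" filter described above can be sketched as two binary judgments summed to a total, keeping only samples that score a perfect 2. The `judge_*` functions here are toy stand-ins for the actual LLM judgments.

```python
# Sketch of the model-as-judge filter: an earlier model assigns binary (0/1)
# scores for code correctness and code style; only samples scoring 2 survive.
# judge_correctness / judge_style are stand-ins for LLM calls.

def judge_correctness(sample):  # stand-in for an LLM judgment
    return 1 if sample["runs"] else 0

def judge_style(sample):        # stand-in for an LLM judgment
    return 1 if sample["has_docstring"] else 0

def retain(samples):
    return [s for s in samples
            if judge_correctness(s) + judge_style(s) == 2]

samples = [
    {"id": 1, "runs": True,  "has_docstring": True},
    {"id": 2, "runs": True,  "has_docstring": False},
    {"id": 3, "runs": False, "has_docstring": True},
]
perfect = retain(samples)   # only sample 1 scores 2/2
```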
2407.21783 | 73 | tokens. To collect higher quality human annotations in non-English languages, we train a multilingual expert by
branching off the pre-training run and continuing to pre-train on a data mix that consists of 90% multilingual
tokens. We then perform post-training on this expert following Section 4.1. This expert model is ... | 2407.21783#73 | The Llama 3 Herd of Models
2407.21783 | 74 | and Conic10k (Wu et al., 2023). To improve language alignment, we also use parallel texts from
GlobalVoices (Prokopidis et al., 2016) and Wikimedia (Tiedemann, 2012). We use LID-based filtering
and Blaser2.0 (Seamless Communication et al., 2023) to remove low quality data. For parallel text data,
instead of using the b... | 2407.21783#74 | The Llama 3 Herd of Models
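The two-stage cleaning described above can be sketched as an LID pass followed by a quality pass over parallel pairs. Everything here is a toy stand-in: `detect_lang` approximates a real LID classifier by script detection, and `quality_score` stands in for Blaser 2.0 with a crude length-ratio proxy.

```python
# Hedged sketch of parallel-text cleaning: drop pairs whose detected language
# does not match the expected one (LID-based filtering), then drop pairs
# scoring low on quality (stand-in for Blaser 2.0).

def detect_lang(text):
    # Toy LID: real systems use trained classifiers, not script detection.
    return "hi" if any("\u0900" <= c <= "\u097f" for c in text) else "en"

def quality_score(src, tgt):
    # Toy proxy for a quality model: penalize extreme length ratios.
    ratio = len(tgt) / max(len(src), 1)
    return 1.0 if 0.5 <= ratio <= 2.0 else 0.0

def clean_parallel(pairs, src_lang, tgt_lang, min_q=0.5):
    kept = []
    for src, tgt in pairs:
        if detect_lang(src) != src_lang or detect_lang(tgt) != tgt_lang:
            continue  # LID-based filtering
        if quality_score(src, tgt) < min_q:
            continue  # quality-based filtering
        kept.append((src, tgt))
    return kept

pairs = [("hello", "नमस्ते"), ("hello", "hello again"), ("hi", "नमस्ते " * 40)]
clean = clean_parallel(pairs, "en", "hi")
```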
2407.21783 | 75 | format, structure and general readability.
–Selection: Prior to reward-model-based selection, we implement multilingual-specific checks to
ensure a high language-match rate between the prompt and response (e.g., a romanized Hindi prompt
should not expect a response in Hindi Devanagari script).
•Translated data: We try ... | 2407.21783#75 | The Llama 3 Herd of Models
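The language-match check above can be sketched as a script comparison between prompt and response: a romanized (Latin-script) Hindi prompt should not be answered in Devanagari. This is purely an illustrative heuristic, not the production check.

```python
# Illustrative language-match check: the response script must match the
# prompt script. Toy heuristic based on the Devanagari Unicode block.

def script_of(text):
    if any("\u0900" <= c <= "\u097f" for c in text):
        return "devanagari"
    return "latin"

def language_match(prompt, response):
    return script_of(prompt) == script_of(response)

assert language_match("aap kaise hain?", "main theek hoon")   # romanized in/out
assert not language_match("aap kaise hain?", "मैं ठीक हूँ")     # script mismatch
```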
2407.21783 | 76 | issues. We observed strong gains on MGSM (Shi et al., 2022) from adding this translated data.
4.3.3 Math and Reasoning
We define reasoning as the ability to perform multi-step computations and arrive at the correct final answer.
Several challenges guide our approach to training models that excel in mathematical reasoni... | 2407.21783#76 | The Llama 3 Herd of Models
2407.21783 | 77 | reach the final answer (Zelikman et al., 2022).
•Incorrect intermediate steps: When using model-generated chains of thought, the intermediate steps
may not always be correct (Cobbe et al., 2021; Uesato et al., 2022; Lightman et al., 2023; Wang et al.,
2023a). This inaccuracy can lead to incorrect final answers and nee... | 2407.21783#77 | The Llama 3 Herd of Models
2407.21783 | 78 | converted it into a question-answer format which can then be used for supervised finetuning. Additionally,
we identify mathematical skills where the model underperforms and actively source prompts from
humans to teach models such skills. To facilitate this process, we create a taxonomy of mathematical
skills (Didolka... | 2407.21783#78 | The Llama 3 Herd of Models
2407.21783 | 79 | reward models are used to eliminate data with invalid step-by-step reasoning, ensuring high-quality
data for finetuning. For more challenging prompts, we use Monte Carlo Tree Search (MCTS) with
learned step-wise reward models to generate valid reasoning traces, further enhancing the collection of
high-quality reasoning... | 2407.21783#79 | The Llama 3 Herd of Models
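The step-wise filtering described above can be sketched as gating each intermediate step on a learned reward model: a trace is kept only if every step clears a threshold. `step_reward` is a stand-in for the learned step-wise reward model; the threshold and scoring are assumptions.

```python
# Sketch of step-wise reward filtering of reasoning traces: keep a trace
# only if every intermediate step scores above a threshold.
# step_reward is a stand-in for a learned step-wise reward model.

def step_reward(step):
    # Stand-in: a real step-wise reward model returns a learned score.
    return 0.9 if "=" in step else 0.2

def valid_trace(steps, threshold=0.5):
    return all(step_reward(s) >= threshold for s in steps)

traces = [
    ["2 + 3 = 5", "5 * 4 = 20"],           # every step scored high -> kept
    ["2 + 3 = 5", "then something vague"],  # low-scoring step -> dropped
]
kept = [t for t in traces if valid_trace(t)]
```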
2407.21783 | 80 | 4.3.4 Long Context
During the final pre-training stage, we extend the context length of Llama 3 from 8K tokens to 128K tokens
(see Section 3.4 for more details). Similar to pre-training, we find that during finetuning we must carefully
tune the recipe to balance short and long-context capabilities.
SFT and synthetic da... | 2407.21783#80 | The Llama 3 Herd of Models
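The long-context QA recipe above can be sketched as: split the document into fixed-size chunks, generate a QA pair about a randomly chosen chunk, and emit a training example whose context is the whole document. `generate_qa` is a hypothetical stand-in for prompting an earlier Llama 3; chunking here is by whitespace tokens for simplicity (the paper uses 8K tokenizer tokens).

```python
# Sketch of long-context QA data synthesis: QA pairs are conditioned on a
# random chunk, but training uses the entire document as context.

import random

def chunk(tokens, size):
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def generate_qa(chunk_text):
    # Stand-in for an LLM call that writes a QA pair about the chunk.
    return {"q": f"What does this passage say? ({chunk_text[:20]}...)",
            "a": chunk_text[:40]}

def make_example(document, chunk_size=8, rng=random.Random(0)):
    tokens = document.split()
    target = " ".join(rng.choice(chunk(tokens, chunk_size)))
    qa = generate_qa(target)
    # Training context is the full document, not just the sampled chunk.
    return {"context": document, "question": qa["q"], "answer": qa["a"]}

ex = make_example("long document " * 20)
```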
2407.21783 | 81 | these documents into chunks of 8K tokens, and prompted an earlier version of the Llama 3 model to
generate QA pairs conditional on randomly selected chunks. During training, the whole document is
used as context.
•Summarization: We applied hierarchical summarization of long-context documents by first summarizing
the ch... | 2407.21783#81 | The Llama 3 Herd of Models
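Hierarchical summarization as described above — summarize each chunk, then summarize the chunk summaries — can be sketched as follows. `summarize` is a trivial extractive stand-in for an LLM summarizer; chunk sizes are toy values.

```python
# Sketch of hierarchical summarization: first-level summaries per chunk,
# then a second-level summary over the concatenated summaries.

def summarize(text, max_words=5):
    # Stand-in: a real pipeline would prompt an LLM here.
    return " ".join(text.split()[:max_words])

def hierarchical_summary(document, chunk_words=50):
    words = document.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    first_level = [summarize(c) for c in chunks]          # summarize each chunk
    return summarize(" ".join(first_level), max_words=20)  # summarize summaries

doc = " ".join(f"word{i}" for i in range(200))
summary = hierarchical_summary(doc)
```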
2407.21783 | 82 | Through careful ablations, we observe that mixing 0.1% of synthetically generated long-context data with the
original short-context data optimizes the performance across both short-context and long-context benchmarks.
DPO. We observe that using only short-context training data in DPO did not negatively impact long-conte... | 2407.21783#82 | The Llama 3 Herd of Models
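The 0.1% mixing ratio reported above can be illustrated as assembling a finetuning mix where synthetic long-context examples make up roughly 0.1% of the data. The helper and dataset names are hypothetical; this only illustrates the ratio, not the actual data pipeline.

```python
# Toy illustration of the 0.1% long-context mixing ratio.

def mix(short_data, long_data, long_fraction=0.001):
    n_long = max(1, round(long_fraction * len(short_data)))
    return short_data + long_data[:n_long]

short_data = [f"short-{i}" for i in range(10_000)]
long_data = [f"long-{i}" for i in range(100)]
mixed = mix(short_data, long_data)   # 10 long examples for 10,000 short ones
```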
2407.21783 | 83 | beyond its knowledge cutoff or that require retrieving a particular piece of information from the web.
•Python interpreter. Llama 3 can generate and execute code to perform complex computations, read files
uploaded by the user and solve tasks based on them such as question answering, summarization, data
analysis or vis... | 2407.21783#83 | The Llama 3 Herd of Models
2407.21783 | 84 | and the model only needs the function’s signature and docstring as context to generate the appropriate call.
We also convert function definitions and calls to JSON format, e.g., for web API calls. All tool calls are
executed by the Python interpreter, which must be enabled in the Llama 3 system prompt. Core tools can be... | 2407.21783#84 | The Llama 3 Herd of Models
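Turning a function's signature and docstring into a JSON tool definition, in the spirit of the JSON-format function calls described above, can be sketched with `inspect`. The exact JSON schema here is an assumption, not Llama 3's format, and `get_weather` is a hypothetical tool.

```python
# Sketch: derive a JSON tool definition from a Python function's signature
# and docstring, the only context the model needs to generate a call.

import inspect
import json

def get_weather(city: str, unit: str = "celsius"):
    """Return the current weather for a city."""

def to_tool_json(fn):
    sig = inspect.signature(fn)
    return json.dumps({
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            name: {"default": None if p.default is p.empty else p.default}
            for name, p in sig.parameters.items()
        },
    })

tool_def = to_tool_json(get_weather)
```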
2407.21783 | 85 | reasoning about the tool outputs. Annotators cannot rank or edit the tool outputs.
•We do not perform rejection sampling, as we did not observe gains in our tool benchmarks.
To accelerate the annotation process, we start by bootstrapping basic tool use capabilities by finetuning on
synthetically generated data from pre... | 2407.21783#85 | The Llama 3 Herd of Models
2407.21783 | 86 | final answer to the user’s query based on the tool output. We end up with trajectories of the following
form: system prompt, user prompt, tool call, tool output, final answer. We also filter out around 30% of this
dataset to remove tool calls that cannot be executed or other formatting issues.
•Multi-step tool use: We follow a... | 2407.21783#86 | The Llama 3 Herd of Models
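The trajectory filtering step above — dropping tool calls that cannot be executed or have formatting issues — can be sketched as a validity check on each trajectory's tool call. Here "executability" is approximated by JSON-parsing the call, which is an assumption; the real pipeline runs the Python interpreter.

```python
# Sketch: trajectories of the form (system prompt, user prompt, tool call,
# tool output, final answer) are kept only if the tool call is well-formed.

import json

def is_executable(tool_call):
    try:
        call = json.loads(tool_call)
        return "name" in call and isinstance(call.get("arguments"), dict)
    except json.JSONDecodeError:
        return False

trajectories = [
    {"system": "...", "user": "2+2?",
     "tool_call": '{"name": "calculator", "arguments": {"expr": "2+2"}}',
     "tool_output": "4", "final": "The answer is 4."},
    {"system": "...", "user": "2+2?",
     "tool_call": '{"name": "calculator", "arguments":',   # malformed call
     "tool_output": "", "final": ""},
]
clean = [t for t in trajectories if is_executable(t["tool_call"])]
```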
2407.21783 | 87 | contents of the file, find and fix bugs, optimize a piece of code, perform data analysis or visualization.
See Figure 11 for an example of Llama 3 performing a task involving a file upload.
After finetuning on this synthetic data, we gather human annotations in diverse and challenging scenarios
including multi-turn int... | 2407.21783#87 | The Llama 3 Herd of Models
2407.21783 | 88 | by finetuning on a large and diverse set of partly synthetic (functions definitions, user query, corresponding
call) tuples. We evaluate our model on a set of unseen tools.
•Single, nested, and parallel function calling: Calls can be simple, nested, i.e., we pass a function call as an
argument of another function, or par... | 2407.21783#88 | The Llama 3 Herd of Models
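The three call shapes named above — simple, nested (a call passed as an argument of another call), and parallel (a list of independent calls) — can be illustrated with a hypothetical JSON representation; the tool names and schema are made up for illustration.

```python
# Toy illustration of simple, nested, and parallel function-call shapes.

simple   = {"name": "get_time", "arguments": {"city": "Paris"}}
nested   = {"name": "convert_tz",
            "arguments": {"time": {"name": "get_time",        # nested call
                                   "arguments": {"city": "Paris"}},
                          "to": "UTC"}}
parallel = [{"name": "get_time", "arguments": {"city": "Paris"}},
            {"name": "get_time", "arguments": {"city": "Tokyo"}}]

def is_nested(call):
    # A call is nested if any argument value is itself a call.
    return any(isinstance(v, dict) and "name" in v
               for v in call["arguments"].values())
```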
2407.21783 | 89 | data covers a set of diverse domains and realistic APIs. All agents are variants of Llama 3 prompted in
different ways depending on their roles and collaborate in a step-by-step manner.
4.3.6 Factuality
Hallucinations remain a major challenge for large language models. Models tend to be overconfident, even in
domains w... | 2407.21783#89 | The Llama 3 Herd of Models
1. Extract a data snippet from the pre-training data.
2. Generate a factual question about these snippets (context) by prompting Llama 3.
3. Sample responses from Llama 3 to the question.
4. Score the correctness of the generations using the original context as a reference and Llama 3 as a judge.
5. Score the informativeness...
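As a concrete sketch, the five steps above can be wired together roughly as follows. This is an illustration only: `llama3(prompt)` is a hypothetical stand-in for a real inference call, and the prompts are placeholders, not the ones used in the paper.

```python
# Sketch of the five-step factuality data-generation loop described above.

def llama3(prompt: str) -> str:
    # Hypothetical stand-in for an actual Llama 3 inference call.
    return "placeholder completion"

def build_factuality_example(snippet: str, n_samples: int = 4) -> dict:
    # Step 2: generate a factual question grounded in the snippet (context).
    question = llama3(f"Write a factual question answerable from:\n{snippet}")
    # Step 3: sample several responses to the question.
    responses = [llama3(question) for _ in range(n_samples)]
    graded = []
    for response in responses:
        # Step 4: score correctness against the original context,
        # with the model itself acting as judge.
        correctness = llama3(
            f"Context: {snippet}\nQ: {question}\nA: {response}\nIs the answer correct?"
        )
        # Step 5: score the informativeness of the response.
        informativeness = llama3(
            f"Q: {question}\nA: {response}\nIs the answer informative?"
        )
        graded.append(
            {"response": response, "correct": correctness, "informative": informativeness}
        )
    return {"context": snippet, "question": question, "graded_responses": graded}
```

Examples where the sampled responses are consistently incorrect but confidently stated are the useful ones: they identify prompts on which the model should instead be taught to abstain.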
instructions, especially around response length, format, tone and character/persona.
Data collection. We collect steerability preference samples within the general English category by asking
annotators to design different system prompts for Llama 3. Annotators then engage in conversations with the
models to evaluate th...
and then suggest meals based on that location’s culture. Weekend meals can be more complex.
Weekday meals should be quick and easy. For breakfast and lunch, easy food like cereal, English
muffins with pre-cooked bacon, and other quick easy foods are preferred. The family is busy. Be
sure to ask if they have essentials ...
reproduced ourselves. The specifics of these evaluations, including configurations such as the number of shots,
metrics, and other pertinent hyperparameters and settings, can be accessed on our Github repository here.
Additionally, we are releasing the data generated as part of evaluations with publicly available bench...
Reading Comprehension: SQuAD V2 (Rajpurkar et al., 2018), QuAC (Choi et al., 2018), RACE (Lai et al., 2017)
Code: HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021)
Commonsense reasoning/understanding: CommonSenseQA (Talmor et al., 2019), PiQA (Bisk et al., 2020), SiQA (Sap et al., 2019), OpenBookQA (Mihaylov et...
Long context: QuALITY (Pang et al., 2022), many-shot GSM8K (An et al., 2023a)
Aggregate: MMLU (Hendrycks et al., 2021a), MMLU-Pro (Wang et al., 2024b), AGIEval (Zhong et al., 2023), BIG-Bench Hard (Suzgun et al., 2023)
Table 8 Pre-training benchmarks by category. Overview of all benchmarks we use to evaluate pre-trained L...
log-probabilities. In particular, this is true for all models comparable to Llama 3 405B. Thus, we do not
report category averages for Llama 3 405B, which requires that all numbers are available for all benchmarks.
Significance estimates. Benchmark scores are estimates of a model’s true performance. These estimates
have ...
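Concretely, a 95% interval half-width for an accuracy S estimated from N independent benchmark examples can be computed with the standard normal approximation, 1.96·sqrt(S(1−S)/N). The sketch below is our own illustration of that formula, not code from the paper; plugging in 37.2% on HumanEval's 164 problems recovers the ±7.4-point interval reported for Llama 3 8B in the code table.

```python
import math

def ci95_half_width(acc: float, n: int) -> float:
    """95% normal-approximation confidence half-width for an accuracy
    estimated from n independent benchmark examples."""
    return 1.96 * math.sqrt(acc * (1.0 - acc) / n)

# 37.2% accuracy on HumanEval's 164 problems -> roughly +/-7.4 points.
half_width_points = 100 * ci95_half_width(0.372, 164)
```

The half-width shrinks with sqrt(N), which is why small benchmarks such as HumanEval carry much wider intervals than larger ones such as MBPP in the tables that follow.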
commonsense reasoning, knowledge, reading comprehension, math and reasoning, and code benchmarks. The
results show that Llama 3 8B outperforms competing models in virtually every category, both in terms of
per-category win rate and in terms of average per-category performance. We also find that Llama 3 70B
outperforms ...
[Figure 12 plot removed during extraction; recoverable content: model quality (30–90) across the capability categories Commonsense, Knowledge, Math and Reasoning, Reading Comprehension, and Code, comparing Llama 2 70B, Llama 3 70B, and Mixtral 8x22B.]
Figure 12 Performance of pre-trained Llama 3 8B and 70B models on pre-training benchmarks. Results are aggregated by capability category by averaging accuracies across all benchmarks co...
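The per-category aggregation described in the Figure 12 caption (an unweighted mean of benchmark accuracies within each category) can be sketched as follows. The scores below are made-up placeholders; the category/benchmark groupings follow Table 8.

```python
from statistics import mean

# Illustrative (made-up) per-benchmark accuracies for one model;
# the categories and benchmarks mirror Table 8.
scores = {
    "Reading Comprehension": {"SQuAD V2": 80.0, "QuAC": 50.0, "RACE": 86.0},
    "Code": {"HumanEval": 60.0, "MBPP": 66.0},
}

# Category-level "model quality" = unweighted mean across that
# category's benchmarks.
category_quality = {cat: mean(bench.values()) for cat, bench in scores.items()}
```

Because the mean is unweighted, every benchmark in a category counts equally regardless of its size, which keeps large benchmarks from dominating a category score.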
GPT-4            –  –  –
Nemotron 4 340B  –  –  –
Gemini Ultra     –  –  –
Table 9 Pre-trained model performance on reading comprehension tasks. Results include 95% confidence intervals.

Code
                HumanEval    MBPP
Llama 3 8B      37.2±7.4     47.6±4.4
Mistral 7B      30.5±7.0     47.5±4.4
Gemma 7B        32.3±7.2     44.4±4.4
Llama 3 70B     58.5±7.5     66.2±4.1
Mixtral 8x22B   45.1±7.6     ...