# Meta-Llama-3.1-8B-Instruct
The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.
**Model Conversion Contributor:** APLUX

**Model Developer:** Meta

**Model Stats:**
- Input sequence length for Prompt Processor: 128
- Maximum context length: 4096
- Quantization Type: w4 weights (w8 for a few layers) with fp16 activations; w4a16 (w8a16 for a few layers) is also supported
- Supported languages: English.
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
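The stats above imply a simple back-of-the-envelope latency model: the prompt is consumed in 128-token chunks by the prompt processor (which is why TTFT is a range from one chunk up to the full 4096-token context), and each subsequent token arrives at the response rate. The sketch below is an illustration of that arithmetic, not part of any APLUX or Meta tooling; the TTFT bounds and response rate passed in are placeholder values, and the linear interpolation of TTFT between the two published bounds is an assumption.

```python
import math

def estimate_latency_s(prompt_tokens: int,
                       output_tokens: int,
                       ttft_short_s: float,
                       ttft_full_s: float,
                       response_rate_tps: float,
                       seq_len: int = 128,
                       max_context: int = 4096) -> float:
    """Rough end-to-end latency estimate for a chunked prompt processor.

    ttft_short_s / ttft_full_s are the published TTFT bounds for a
    one-chunk prompt and a full-context prompt; response_rate_tps is
    the decode rate after the first token.
    """
    assert 0 < prompt_tokens and prompt_tokens + output_tokens <= max_context
    # The prompt processor consumes the prompt in fixed 128-token chunks.
    iterations = math.ceil(prompt_tokens / seq_len)
    total_iterations = max_context // seq_len
    # Assumption: TTFT grows roughly linearly with the number of chunks,
    # so interpolate between the short-prompt and full-context bounds.
    frac = (iterations - 1) / (total_iterations - 1)
    ttft = ttft_short_s + frac * (ttft_full_s - ttft_short_s)
    # Remaining tokens stream at the steady-state response rate.
    decode = (output_tokens - 1) / response_rate_tps
    return ttft + decode
```

For a one-chunk prompt (up to 128 tokens) this reduces to the lower TTFT bound plus decode time, matching the short-prompt case described above.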
## Model Download
| Model | Chipset | Target Runtime | Precision | Primary Compute Unit | Target Model | Performance |
|---|---|---|---|---|---|---|
| Meta-Llama-3.1-8B-Instruct | QCS9075 | QNN 2.29 | W4A16 | NPU | Meta-Llama-3.1-8B-Instruct | Check in APLUX Model Farm |
## Model Inference & Conversion
Please search for this model by name in the APLUX Model Farm.
## License