Hugging Face
aws-neuron / optimum-neuron-cache
Likes: 32 · Followers (AWS Inferentia and Trainium): 175
License: apache-2.0
Community discussions: 679
Revision: 5f387c7
optimum-neuron-cache / neuronxcc-2.21.18209.0+043b1bf7 / 0_REGISTRY / 0.4.0.dev0 / llama / meta-llama
5 contributors · History: 18 commits
Latest commit: ce1c46e (verified) by dacorvo (HF Staff), "Synchronizing local compiler cache.", 7 months ago
Llama-2-13b-hf · Synchronizing local compiler cache. · 7 months ago
Llama-2-7b-hf · Synchronizing local compiler cache. · 7 months ago
Llama-3.1-70B-Instruct · Synchronizing local compiler cache. · 7 months ago
Llama-3.2-1B · Synchronizing local compiler cache. · 7 months ago
Llama-3.2-3B · Synchronizing local compiler cache. · 7 months ago
Llama-3.3-70B-Instruct · Synchronizing local compiler cache. · 7 months ago
Meta-Llama-3-8B · Synchronizing local compiler cache. · 7 months ago
Meta-Llama-3.1-8B · Synchronizing local compiler cache. · 7 months ago