aws-neuron / optimum-neuron-cache
Likes: 32 · Followers (AWS Inferentia and Trainium): 175
License: apache-2.0
Tabs: Model card · Files and versions · Community (679)
Commit: 43f3f59
Path: optimum-neuron-cache / neuronxcc-2.13.66.0+6dfecc895 / 0_REGISTRY / 0.0.23 / inference / llama (43.7 kB)
5 contributors · History: 51 commits
Latest commit: dacorvo (HF Staff) · "Synchronizing local compiler cache." · 5d220aa (verified) · almost 2 years ago
01-ai · Synchronizing local compiler cache. · almost 2 years ago
LargeWorldModel · Synchronizing local compiler cache. · almost 2 years ago
NousResearch · Synchronizing local compiler cache. · almost 2 years ago
abacusai · Synchronizing local compiler cache. · almost 2 years ago
defog · Synchronizing local compiler cache. · almost 2 years ago
gorilla-llm · Synchronizing local compiler cache. · almost 2 years ago
ibm · Synchronizing local compiler cache. · almost 2 years ago
m-a-p · Synchronizing local compiler cache. · almost 2 years ago
meta-llama · Delete neuronxcc-2.13.66.0+6dfecc895/0_REGISTRY/0.0.23/inference/llama/meta-llama/Meta-Llama-3-70B/3825d3e7288b5c5f14e2.json · almost 2 years ago
princeton-nlp · Synchronizing local compiler cache. · almost 2 years ago
sophosympatheia · Synchronizing local compiler cache. · almost 2 years ago