
nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16

Tags: Any-to-Any · Transformers · Safetensors · PyTorch · NemotronH_Nano_Omni_Reasoning_V3 · feature-extraction · nvidia · multimodal · custom_code

Instructions for using nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Transformers

    How to use nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 with Transformers:

    # Load the model directly. trust_remote_code=True is required because the
    # architecture (NemotronH_Nano_Omni_Reasoning_V3) ships as custom code in the repo.
    from transformers import AutoModel

    model = AutoModel.from_pretrained(
        "nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16",
        trust_remote_code=True,
        dtype="auto",  # use the dtype stored in the checkpoint (BF16 here)
    )
  • Notebooks
  • Google Colab
  • Kaggle
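A note on the `dtype="auto"` argument in the snippet above: it tells Transformers to load weights in the dtype recorded in the checkpoint's `config.json` (bfloat16 for this BF16 repo) rather than forcing a conversion. The sketch below illustrates that resolution logic in plain Python; `resolve_dtype` is a hypothetical helper for illustration, not the library's internal code.

```python
# Hypothetical sketch of dtype="auto" resolution: with "auto", the dtype
# stored in the checkpoint config wins; an explicit dtype overrides it.
def resolve_dtype(requested, config):
    """Return the dtype name the weights should be loaded in."""
    if requested == "auto":
        # Fall back to float32 when the config records no dtype,
        # mirroring the library's default behavior.
        return config.get("torch_dtype", "float32")
    return requested

config = {"torch_dtype": "bfloat16"}  # as shipped in a BF16 checkpoint
print(resolve_dtype("auto", config))     # bfloat16
print(resolve_dtype("float16", config))  # float16 (explicit override wins)
```

Passing an explicit dtype such as `"float16"` can halve memory on hardware without BF16 support, at some numerical cost.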

Community discussions:
  • Switch vision encoder reference from C-RADIOv2-H to C-RADIOv4-H; remove now-redundant _patch_cpe_align_corners (2) · #10 opened 3 days ago by cuichenx
  • Update README.md · #9 opened 4 days ago by Ramon4areazn
  • Detecting Bugs · #8 opened 6 days ago by GENKAI007
  • Bake vLLM-parity patches into modeling_nemotron_h.py · #7 opened 7 days ago by cuichenx
  • Fix pipeline_tag · #5 opened 9 days ago by merve