Instructions to use KAERI-MLP/AtomicGPT-gemma2-9B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use KAERI-MLP/AtomicGPT-gemma2-9B with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("KAERI-MLP/AtomicGPT-gemma2-9B", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
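Since the base model is a Gemma 2 causal language model, a more complete sketch loads it with `AutoModelForCausalLM` and a tokenizer and runs generation. This is a minimal, hedged example: the prompt is an illustrative placeholder, and `device_map="auto"` assumes `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KAERI-MLP/AtomicGPT-gemma2-9B"

# Load tokenizer and model; dtype="auto" picks the checkpoint's native precision,
# device_map="auto" places weights on available devices (requires accelerate).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="auto", device_map="auto")

# Placeholder prompt for illustration only.
inputs = tokenizer("Explain neutron capture in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Generation parameters such as `max_new_tokens` are ordinary `generate()` arguments and can be tuned as needed.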