---
license: gemma
pipeline_tag: text-generation
tags:
- ONNX
- DML
- DirectML
- ONNXRuntime
- gemma
- google
- conversational
- custom_code
inference: false
language:
- en
---
# Gemma-2B-Instruct-ONNX

## Model Summary

This repository contains optimized versions of the [gemma-2b-it](https://huggingface.co/google/gemma-2b-it) model, designed to accelerate inference with ONNX Runtime. These optimizations target CPU and DirectML. DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning that provides GPU acceleration across a wide range of supported hardware and drivers, including those from AMD, Intel, NVIDIA, and Qualcomm.
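
Because DirectML ships as an ONNX Runtime execution provider, you can confirm that your environment exposes it before downloading the model. A minimal check, assuming the `onnxruntime-directml` package is installed (see the setup steps below):

```python
# Lists the execution providers compiled into the installed ONNX Runtime.
# With onnxruntime-directml installed, "DmlExecutionProvider" should appear.
import onnxruntime as ort

print(ort.get_available_providers())
```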

## ONNX Models

Here are the optimized configurations we have added:
- **ONNX model for int4 DirectML:** ONNX model for AMD, Intel, and NVIDIA GPUs on Windows, quantized to int4 via AWQ.
- **ONNX model for int4 CPU and Mobile:** ONNX model for CPU and mobile targets, using int4 quantization via RTN. Two versions are provided to balance latency against accuracy: acc-level-1 targets improved accuracy, while acc-level-4 targets improved performance. For mobile devices, we recommend the acc-level-4 model. Either variant can be fetched selectively, as shown in the sketch after this list.
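
For programmatic downloads, `huggingface_hub`'s `snapshot_download` mirrors the CLI step in the setup below. A minimal sketch; the `onnx/directml/*` pattern matches this repository, but the CPU/mobile folder name is an assumption, so verify it against the repository's file listing:

```python
from huggingface_hub import snapshot_download

# Fetch only the variant you need instead of the whole repository.
snapshot_download(
    repo_id="EmbeddedLLM/gemma-2b-it-onnx",
    allow_patterns=["onnx/directml/*"],  # hypothetical alternative: "onnx/cpu_and_mobile/*acc-level-4*"
    local_dir="gemma-2b-it-onnx",
)
```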

## Usage

### Installation and Setup

To use the Gemma-2B-Instruct-ONNX model on Windows with DirectML, follow these steps:

1. **Create and activate a Conda environment:**
```sh
conda create -n onnx python=3.10
conda activate onnx
```

2. **Install Git LFS:**
```sh
winget install -e --id GitHub.GitLFS
```

3. **Install the Hugging Face CLI:**
```sh
pip install "huggingface-hub[cli]"
```

4. **Download the model:**
```sh
huggingface-cli download EmbeddedLLM/gemma-2b-it-onnx --include="onnx/directml/*" --local-dir .\gemma-2b-it-onnx
```

5. **Install the necessary Python packages:**
```sh
pip install numpy==1.26.4
pip install onnxruntime-directml
pip install --pre onnxruntime-genai-directml
```
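
(Optional) Before continuing, you can confirm the packages installed correctly. A minimal check using only the standard library plus the packages above:

```python
# Verify the DirectML builds are present and importable.
from importlib.metadata import version
import onnxruntime as ort

print(version("onnxruntime-directml"))
print(version("onnxruntime-genai-directml"))
print("DmlExecutionProvider" in ort.get_available_providers())  # expect: True
```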

6. **Install the Visual Studio 2015 runtime:**
```sh
conda install conda-forge::vs2015_runtime
```

7. **Download the example script** (this is a PowerShell command):
```powershell
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py" -OutFile "phi3-qa.py"
```
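
If you are not running PowerShell, the same file can be fetched with a standard-library Python one-liner instead:

```python
# Cross-platform equivalent of the Invoke-WebRequest step above.
import urllib.request

url = "https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py"
urllib.request.urlretrieve(url, "phi3-qa.py")
```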

8. **Run the example script** (a sketch of what it does internally follows these steps):
```sh
python phi3-qa.py -m .\gemma-2b-it-onnx
```
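
Under the hood, `phi3-qa.py` drives an `onnxruntime-genai` token-by-token generation loop. A minimal sketch of that loop, written against the pre-release Python API installed above (the API surface has changed between releases, and the Gemma chat-template markers are an assumption carried over from the upstream gemma-2b-it tokenizer):

```python
import onnxruntime_genai as og

# Load the downloaded DirectML model folder.
model = og.Model(r".\gemma-2b-it-onnx")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# Gemma instruction-tuned checkpoints expect turn markers around the prompt.
prompt = "<start_of_turn>user\nWhy is the sky blue?<end_of_turn>\n<start_of_turn>model\n"

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

# Generate and stream-decode one token at a time.
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```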

### Hardware Requirements

**Minimum Configuration:**
- **Windows:** DirectX 12-capable GPU (AMD/NVIDIA)
- **CPU:** x86_64 / ARM64

**Tested Configurations:**
- **GPU:** AMD Ryzen 8000 Series iGPU (DirectML)
- **CPU:** AMD Ryzen CPU

## Resources and Technical Documentation

- [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
- [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
- [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-it-gg-hf)

## Terms of Use

- [Terms](https://www.kaggle.com/models/google/gemma/license/consent)