Tags: Text Generation · Transformers · Safetensors · granite · finetune · unsloth · granite-4.1 · reasoning · thinking · conversational
Instructions for using DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X")
model = AutoModelForCausalLM.from_pretrained("DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
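For reference, the `apply_chat_template` call in the Transformers snippets above renders each turn with Granite's role tokens (the full Jinja chat template appears later on this page). A minimal illustrative sketch of that rendering, with the special-token strings hard-coded here for clarity — real code should always call the tokenizer rather than build the prompt by hand:

```python
# Illustrative only: the special-token strings are copied from the model's
# chat template; use tokenizer.apply_chat_template in real code.
def render_turn(role: str, content: str) -> str:
    return f"<|start_of_role|>{role}<|end_of_role|>{content}<|end_of_text|>\n"

prompt = render_turn("user", "Who are you?")
# add_generation_prompt=True appends an open assistant header with no closer:
prompt += "<|start_of_role|>assistant<|end_of_role|>"
```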
- Local Apps
- vLLM
How to use DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker
```shell
docker model run hf.co/DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X
```
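The curl request above can equally be issued from Python using only the standard library. A minimal sketch that builds (but does not send) the request, assuming a vLLM or Docker Model Runner server listening on localhost:8000:

```python
import json
import urllib.request

# Assumes an OpenAI-compatible server on localhost:8000 (e.g. `vllm serve`).
body = {
    "model": "DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment once the server is running
```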
- SGLang
How to use DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Unsloth Studio
How to use DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X with Docker Model Runner:
```shell
docker model run hf.co/DavidAU/Granite-4.1-30B-Claude-4.6-Opus-Thinking-X
```
Chat template (Jinja):

```jinja
{%- set tools_system_message_prefix = 'You are a helpful assistant with access to the following tools. You may call one or more tools to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>' %}
{%- set tools_system_message_suffix = '\n</tools>\n\nFor each tool call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.' %}
{%- set documents_system_message_prefix = 'You are a helpful assistant with access to the following documents. You may use one or more documents to assist with the user query.\n\nYou are given a list of documents within <documents></documents> XML tags:\n<documents>' %}
{%- set documents_system_message_suffix = '\n</documents>\n\nWrite the response to the user\'s input by strictly aligning with the facts in the provided documents. If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data.' %}
{%- if available_tools is defined and available_tools %}
    {%- set tools = available_tools %}
{%- endif %}
{%- set ns = namespace(tools_system_message=tools_system_message_prefix,
                       documents_system_message=documents_system_message_prefix,
                       system_message='') %}
{%- if tools %}
    {%- for tool in tools %}
        {%- set ns.tools_system_message = ns.tools_system_message + '\n' + (tool | tojson) %}
    {%- endfor %}
    {%- set ns.tools_system_message = ns.tools_system_message + tools_system_message_suffix %}
{%- else %}
    {%- set ns.tools_system_message = '' %}
{%- endif %}
{%- if documents %}
    {%- for document in documents %}
        {%- set ns.documents_system_message = ns.documents_system_message + '\n' + (document | tojson) %}
    {%- endfor %}
    {%- set ns.documents_system_message = ns.documents_system_message + documents_system_message_suffix %}
{%- else %}
    {%- set ns.documents_system_message = '' %}
{%- endif %}
{%- if messages[0].role == 'system' %}
    {%- if messages[0].content is string %}
        {%- set ns.system_message = messages[0].content %}
    {%- elif messages[0].content is iterable %}
        {%- for entry in messages[0].content %}
            {%- if entry.type == 'text' %}
                {%- if ns.system_message != '' %}
                    {%- set ns.system_message = ns.system_message + '\n' %}
                {%- endif %}
                {%- set ns.system_message = ns.system_message + entry.text %}
            {%- endif %}
        {%- endfor %}
    {%- endif %}
    {%- if tools and documents %}
        {%- set ns.system_message = ns.system_message + '\n\n' + ns.tools_system_message + '\n\n' + ns.documents_system_message %}
    {%- elif tools %}
        {%- set ns.system_message = ns.system_message + '\n\n' + ns.tools_system_message %}
    {%- elif documents %}
        {%- set ns.system_message = ns.system_message + '\n\n' + ns.documents_system_message %}
    {%- endif %}
{%- else %}
    {%- if tools and documents %}
        {%- set ns.system_message = ns.tools_system_message + '\n\n' + ns.documents_system_message %}
    {%- elif tools %}
        {%- set ns.system_message = ns.tools_system_message %}
    {%- elif documents %}
        {%- set ns.system_message = ns.documents_system_message %}
    {%- endif %}
{%- endif %}
{%- if ns.system_message %}
    {{- '<|start_of_role|>system<|end_of_role|>' + ns.system_message + '<|end_of_text|>\n' }}
{%- endif %}
{%- for message in messages %}
    {%- set content = namespace(val='') %}
    {%- if message.content is string %}
        {%- set content.val = message.content %}
    {%- else %}
        {%- if message.content is iterable %}
            {%- for entry in message.content %}
                {%- if entry.type == 'text' %}
                    {%- if content.val != '' %}
                        {%- set content.val = content.val + '\n' %}
                    {%- endif %}
                    {%- set content.val = content.val + entry.text %}
                {%- endif %}
            {%- endfor %}
        {%- endif %}
    {%- endif %}
    {%- if (message.role == 'user') or (message.role == 'system' and not loop.first) %}
        {{- '<|start_of_role|>' + message.role + '<|end_of_role|>' + content.val + '<|end_of_text|>\n' }}
    {%- elif message.role == 'assistant' %}
        {{- '<|start_of_role|>' + message.role + '<|end_of_role|>' + content.val }}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content.val) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|end_of_text|>\n' }}
    {%- elif message.role == 'tool' %}
        {%- if loop.first or (messages[loop.index0 - 1].role != 'tool') %}
            {{- '<|start_of_role|>user<|end_of_role|>' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- content.val }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != 'tool') %}
            {{- '<|end_of_text|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|start_of_role|>assistant<|end_of_role|>' }}
{%- endif %}
```