Problem running GGUF
Trying to run the model with Ollama (version 0.13.5) results in:
ollama run hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0
pulling manifest
...
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'
llama_model_load_from_file_impl: failed to load model
Hey! We're aware of this issue. It's related to a recent fix in llama.cpp and will be resolved in the next sync/version update for Ollama. For now we recommend using v0.13.4.
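If you need to pin the older version, the official Linux install script accepts an OLLAMA_VERSION override (on macOS/Windows you would instead download the v0.13.4 build from the GitHub releases page); a minimal sketch:

curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.13.4 sh
ollama --version
ollama run hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0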
Will you get bartowski and/or unsloth quants done as well?
Found it, sorry, I did something wrong with the search.
Still getting this error:
ollama --version
ollama version is 0.15.2
Any workarounds for this? I just installed the newest copy of Ollama but ended up with the same error when trying to pull LFM2 models.
ping.
ollama --version
ollama version is 0.16.3
ollama run hf.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF:F16
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'
ollama run hf.co/LiquidAI/LFM2-24B-A2B-GGUF:Q8_0
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm.weight'
Same here:
ollama --version
ollama version is 0.17.0
(base) nise@localhost Documents % ollama run hf.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF:F16
pulling manifest
pulling 60c8b3c36e52: 100% ▕████████████████████▏ 2.3 GB
...
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'
I designed a chat template that supports tool calls. The Modelfile:
FROM <path_to_gguf_file>
TEMPLATE """<|startoftext|>{{- if or .System .Tools }}<|im_start|>system
You are a helpful and precise assistant.
{{- if .System }}
The following system configuration is your fundamental guideline.
- Instruction Compliance: Follow the user's instructions.
- Response Style: Be clear and direct. Use simple language. Always respond in Simplified Chinese. Answer what you know. Say "I don't know" if you're uncertain.
- Expression: Keep it concise and practical.
- Math: Use simple Latex math notation when needed.
- Tool Use: Call a tool ONLY when necessary. For web search tasks, query in English by default and respond in Chinese.
{{ .System }}{{ end }}
{{- if .Tools }}
# Tools
You may call one or more functions to assist with the user query.
You are provided with function signatures within the following list:
[TOOL_DEFINITIONS]
{{- range .Tools }}
{"type": "function", "function": {
"name": "{{ .Function.Name }}",
"description": "{{ .Function.Description }}",
"parameters": {{ .Function.Parameters }}
}}
{{- end }}
[/TOOL_DEFINITIONS]
For each function call, return a JSON object with "name" and "arguments" within <tool_call></tool_call> tags.
Example:
<tool_call>
{"name": "get_weather", "arguments": {"location": "Beijing"}}
</tool_call>
{{- end }}<|im_end|>
{{ end }}{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{- else if eq .Role "assistant" }}<|im_start|>assistant
{{- if .Content }}{{ .Content }}{{- end }}
{{- if .ToolCalls }}<tool_call>
{{- range .ToolCalls }}
{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{- end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{- end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{ .Response }}"""
PARAMETER top_k 50
PARAMETER repeat_penalty 1.05
PARAMETER temperature 0.1
PARAMETER num_ctx 128000
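To try it, save this as a Modelfile with FROM pointing at your local GGUF, then register and run it; the model name lfm2-tools below is just a placeholder:

ollama create lfm2-tools -f Modelfile
ollama run lfm2-tools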