Tool-calling schema compliance check (repro steps + observed output)
Hello, I'm trying to verify whether this model supports schema-based tool calling (strict tool-name and argument matching against a predefined schema), rather than generating API-usage explanations.
Repro steps (minimal):
- Provide the model a single tool schema (OpenAI-style function calling):
{
  "name": "get_current_weather",
  "description": "Get the current weather for a given city",
  "parameters": {
    "type": "object",
    "properties": {
      "location": { "type": "string", "description": "City name with country code (e.g., Seoul,KR)" },
      "unit": { "type": "string", "enum": ["metric", "imperial"] }
    },
    "required": ["location"]
  }
}
- Ask the model in natural Korean (no mention of tools/APIs/functions):
지금 서울의 현재 기온이 몇 도인가요? ("What is the current temperature in Seoul right now?")
Expected output (example of a correct schema-compliant tool call):
{
  "name": "get_current_weather",
  "arguments": { "location": "Seoul,KR", "unit": "metric" }
}
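For reference, "schema-compliant" here can be checked mechanically. Below is a minimal sketch (my own helper, not part of the repro) that validates a candidate tool-call string against this particular schema's name, required arguments, and enum values:

```python
import json

# Hand-rolled check against the get_current_weather schema shown above.
# A full JSON Schema validator would be more general; this covers just this case.
SCHEMA = {
    "name": "get_current_weather",
    "required": ["location"],
    "enums": {"unit": ["metric", "imperial"]},
}

def is_schema_compliant(raw: str) -> bool:
    """Return True if `raw` is a JSON tool call matching the schema above."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if call.get("name") != SCHEMA["name"]:
        return False
    args = call.get("arguments")
    if not isinstance(args, dict):
        return False
    if any(key not in args for key in SCHEMA["required"]):
        return False
    # Any enum-typed argument that is present must hold an allowed value.
    return all(args[k] in allowed for k, allowed in SCHEMA["enums"].items() if k in args)

print(is_schema_compliant('{"name": "get_current_weather", "arguments": {"location": "Seoul,KR", "unit": "metric"}}'))  # True
print(is_schema_compliant('{"method": "GET", "endpoint": "https://api.openweathermap.org/data/2.5/weather"}'))  # False
```

The observed output below fails this check, which is what prompted the report.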
Observed output (translated from the original Korean; structure unchanged):
Assuming it has access to weather information, here is an example tool call in JSON format for looking up Seoul's current temperature. This follows a typical weather API request format (e.g., based on the OpenWeatherMap API).
{
  "method": "GET",
  "endpoint": "https://api.openweathermap.org/data/2.5/weather",
  "query_params": {
    "q": "Seoul,KR",
    "appid": "YOUR_API_KEY",
    "units": "metric",
    "lang": "kr"
  }
}
Explanation:
method: the HTTP request method (GET)
endpoint: the API endpoint that provides the weather data
query_params:
q: city and country code (Seoul,KR)
appid: an issued OpenWeatherMap API key (replace with your own key for actual use)
units: metric, to use Celsius temperatures
lang: lang=kr to set the response language to Korean (optional, depending on Korean support)
⚠️ Note: a valid appid is required for actual execution, and the JSON above is a tool-call example that illustrates the request structure. Since I cannot call the API directly, it must be executed by an external system or backend based on this format.
Example response (partial, on success):
{
  "main": {
    "temp": 18.5,
    "feels_like": 17.8,
    "humidity": 65
  },
  "weather": [
    {
      "description": "흩어진 구름",
      "icon": "03d"
    }
  ],
  "name": "Seoul"
}
Based on this structure, Seoul's current temperature (e.g., 18.5°C) can be retrieved from a real weather API.
If needed, I can also provide a Python code example.
From this result, the model appears to generate an API request example rather than emitting a schema-compliant tool call using the provided tool name and argument schema.
Could you clarify:
- Was schema-level tool calling included in training for this model?
- If yes, is there a required serving configuration or prompt format to enable strict tool-call emission?
Thank you.
Hello @lesj0610, thank you for the report.
We checked your case but could not reproduce the same failure.
Here is the snippet we tested:
tools = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a given city",
    "parameters": {
        "type": "object",
        "properties": {
            "location": { "type": "string", "description": "City name with country code (e.g., Seoul,KR)" },
            "unit": { "type": "string", "enum": ["metric", "imperial"] }
        },
        "required": ["location"]
    }
}]
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "지금 서울의 현재 기온이 몇 도인가요?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,       # return a dict so **inputs unpacks correctly into generate()
    enable_thinking=False,  # extra kwargs are forwarded to the chat template
)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,
    top_p=0.95,
    max_new_tokens=256,
)
Please note that when using a tool-augmented chat template, the tools schema should be passed to the HF tokenizer via apply_chat_template, not directly to model.generate().
(For inference servers, please check the documentation on OpenAI-compatible tool calling.)
With this setup, we get the following tool call:
<tool_call>{"name": "get_current_weather", "arguments": {"location": "Seoul,KR", "unit": "metric"}}</tool_call>
It seems the tool schema might not be passed to the model in the proper way.
If you can share the generation part of your code, we can take a closer look.
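If it helps, the emitted call can be pulled out of the decoded text with a few lines. A sketch, assuming the model wraps calls in <tool_call>...</tool_call> tags as shown above:

```python
import json
import re

def extract_tool_calls(text: str) -> list[dict]:
    """Parse JSON tool calls out of <tool_call>...</tool_call> tags in decoded output."""
    return [
        json.loads(match)
        for match in re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL)
    ]

decoded = '<tool_call>{"name": "get_current_weather", "arguments": {"location": "Seoul,KR", "unit": "metric"}}</tool_call>'
calls = extract_tool_calls(decoded)
print(calls[0]["name"])                   # get_current_weather
print(calls[0]["arguments"]["location"])  # Seoul,KR
```

The parsed arguments can then be fed to the real tool and the result appended as a tool-role message.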
Thanks for the clarification and the working snippet.
One key detail: my testing was done only via the FriendliAI web UI (I cannot run the 236B model locally). In that UI, I do not have any way to verify whether the tools schema is actually injected into the model inputs (i.e., whether the server uses a tool-augmented chat template like tokenizer.apply_chat_template(..., tools=tools, ...)).
So the failure I observed (API-style JSON explanation instead of a tool call) may be due to the FriendliAI web UI pipeline not passing tools/schema, rather than the base model lacking tool-call training.
Could you confirm whether the FriendliAI web UI currently supports tool calling with schema injection?
If yes, could you share the minimal steps/settings in the UI to enable it, or provide an OpenAI-compatible endpoint example request that triggers tool calls?
If the web UI does not inject tools/schema, then my observation is expected from the deployment, and I will update my report accordingly.
Thank you.
Thanks for the clarification. I was testing via the FriendliAI web UI, where I couldn't verify whether the tools schema was actually injected. After switching to the FriendliAI OpenAI-compatible API and explicitly passing tools (and tool_choice), the model correctly returned finish_reason: "tool_calls" with a valid get_current_weather tool call. I'll update my report accordingly.
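For anyone who hits the same issue: the request body that produced the tool call looks roughly like this (the model ID is a placeholder, and the exact endpoint/auth details are omitted):

```python
import json

# Placeholder values; substitute your own FriendliAI model ID, endpoint, and API key.
payload = {
    "model": "YOUR_MODEL_ID",
    "messages": [
        {"role": "user", "content": "지금 서울의 현재 기온이 몇 도인가요?"}
    ],
    # OpenAI-compatible tool format: the function schema is wrapped in
    # {"type": "function", "function": {...}}.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name with country code (e.g., Seoul,KR)"},
                    "unit": {"type": "string", "enum": ["metric", "imperial"]},
                },
                "required": ["location"],
            },
        },
    }],
    "tool_choice": "auto",
}

# POST this JSON body to the chat completions endpoint with your API key;
# a schema-compliant reply comes back with finish_reason == "tool_calls".
print(json.dumps(payload, ensure_ascii=False, indent=2))
```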