# Intel TTS Benchmark
This provider runs the shared TTS workload on Intel hardware using Kokoro with OpenVINO.
Model source:

- Hugging Face source repo: `hexgrad/Kokoro-82M`
- Hugging Face artifact repo: `mweinbach1/ai-pc-benchmarks-tts-intel-openvino`
- Provider-local staged payloads: `model/`
- Provider-local vendored runtime: `runtime/kokoro/`
- Shared prompt: `../assets/test_prompt.txt`
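The staging order implied by this layout (prefer the pre-exported artifact repo, fall back to the source repo per file) can be sketched as follows. This is a minimal illustration, not the actual `benchmark.py` logic; `stage_model` and the injected `fetch` callable are hypothetical stand-ins for a real downloader such as `huggingface_hub.hf_hub_download`.

```python
from typing import Callable, Iterable

ARTIFACT_REPO = "mweinbach1/ai-pc-benchmarks-tts-intel-openvino"
SOURCE_REPO = "hexgrad/Kokoro-82M"


def stage_model(
    fetch: Callable[[str, str], bool],
    required_files: Iterable[str],
) -> dict[str, str]:
    """Resolve each required file, preferring the artifact repo.

    `fetch(repo_id, filename)` returns True on success; it is an
    injected stand-in for a real Hugging Face downloader so the
    fallback logic itself stays testable offline.
    """
    resolved: dict[str, str] = {}
    for name in required_files:
        if fetch(ARTIFACT_REPO, name):
            resolved[name] = ARTIFACT_REPO   # pre-exported copy wins
        elif fetch(SOURCE_REPO, name):
            resolved[name] = SOURCE_REPO     # fall back to upstream source
        else:
            raise FileNotFoundError(f"{name} missing from both repos")
    return resolved
```

With a `fetch` that only finds `config.json` in the artifact repo, the remaining files resolve to `hexgrad/Kokoro-82M`, mirroring the fallback behavior described in the notes below.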
Canonical run:
```shell
python -m pip install -r workloads\TTS\Intel\requirements.txt
python workloads\TTS\Intel\benchmark.py --device NPU
```
Warmup example:
```shell
python workloads\TTS\Intel\benchmark.py --device NPU --warmup-seconds 10
```
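A time-based warmup like this typically repeats synthesis for the given number of seconds before any timed run, so one-time compile and cache costs do not pollute the measurement. The sketch below illustrates that pattern under that assumption; `run_with_warmup` and its `synthesize` callable are hypothetical, not names from `benchmark.py`.

```python
import time
from typing import Callable


def run_with_warmup(
    synthesize: Callable[[], object],
    warmup_seconds: float,
    measured_runs: int = 3,
) -> float:
    """Warm up for a wall-clock budget, then return the mean
    seconds per synthesis over `measured_runs` timed iterations.
    """
    deadline = time.perf_counter() + warmup_seconds
    while time.perf_counter() < deadline:
        synthesize()  # warmup iterations are discarded
    start = time.perf_counter()
    for _ in range(measured_runs):
        synthesize()
    return (time.perf_counter() - start) / measured_runs
```

The warmup loop is bounded by elapsed time rather than iteration count, which matches a `--warmup-seconds` style flag: slower devices simply complete fewer warmup passes within the same budget.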
CPU and GPU examples:
```shell
python workloads\TTS\Intel\benchmark.py --device CPU
python workloads\TTS\Intel\benchmark.py --device GPU
```
Suite run:
```shell
python benchmark_all.py TTS Intel --warmup-seconds 10
```
Expected outputs:
```
results/TTS/Intel/test_prompt.npu.result.json
results/TTS/Intel/test_prompt.npu.transcript.txt
results/TTS/Intel/test_prompt.npu.wav
results/TTS/Intel/test_prompt.npu.run.log
```
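The naming scheme above (`<prompt>.<device>.<suffix>` under a per-provider results directory) can be captured in a small helper. This is a hypothetical convenience function for scripting around the suite, not part of `benchmark.py`.

```python
from pathlib import Path


def expected_outputs(
    prompt_stem: str,
    device: str,
    results_root: Path = Path("results/TTS/Intel"),
) -> list[Path]:
    """Return the four sidecar paths one prompt/device run should
    produce, matching the naming shown above (device lowercased).
    """
    dev = device.lower()
    return [
        results_root / f"{prompt_stem}.{dev}.{suffix}"
        for suffix in ("result.json", "transcript.txt", "wav", "run.log")
    ]
```

A post-run script could iterate these paths and flag any that are missing before archiving results.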
Notes:
- The default run path stages `config.json`, the Kokoro weights, requested voice files, and pre-exported OpenVINO IRs from `mweinbach1/ai-pc-benchmarks-tts-intel-openvino`.
- If the artifact repo is incomplete or unavailable, the benchmark falls back to `hexgrad/Kokoro-82M` for source assets and regenerates any missing OpenVINO exports locally.
- The benchmark vendors the Kokoro Python runtime under `runtime/kokoro/`, so benchmark execution does not depend on the installed `kokoro` wheel or an editable upstream clone.
- The benchmark keeps the staged source payload under `model/`, while the default downloaded or regenerated OpenVINO IRs and NPU static variants live under the machine-local cache root in `%LOCALAPPDATA%\AI-PC-Benchmarks\model-artifacts\TTS\Intel\kokoro-openvino\`.
- NPU runs use split staged OpenVINO models under `npu_static/staged/` plus a CPU torch decoder. Those staged IRs are regenerated automatically when the stage export versions change.
- If you want the provider-local tree to hold the generated IRs instead of the machine-local cache, pass `--compiled-model-dir workloads\TTS\Intel\model`.
- `workloads/TTS/Intel/.cache/npuw/` is machine-local compiler cache. It can be regenerated on each machine and does not need to be copied.
- This benchmark is intended to validate the NPU path and does not treat CPU fallback as success.
- This workload was verified with `openvino==2026.1.0rc2` from the official OpenVINO nightly wheel index. The benchmark relies on that `2026.1` line for the current NPU-correct path.
- `requirements.txt` installs the supporting runtime dependencies plus the `en_core_web_sm` spaCy model explicitly, so Misaki does not need to fetch it during the benchmark run.
- The default voice is `af_heart` and the default language code is `a` (American English).
- The transcript output stores the synthesized prompt text so the suite can keep a consistent text sidecar alongside the JSON and WAV outputs.
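Because CPU fallback is not treated as success, a run should fail fast when the requested device is absent rather than silently continue on CPU. A minimal sketch of that check, assuming the device list would come from `openvino.Core().available_devices` in a real run (here it is injected so the logic stays testable); `require_device` is a hypothetical name:

```python
def require_device(requested: str, available: list[str]) -> str:
    """Return a matching device name or raise instead of falling
    back to CPU. OpenVINO may enumerate devices with instance
    suffixes (e.g. "GPU.0"), so match on the base name.
    """
    matches = [d for d in available if d.split(".")[0] == requested]
    if not matches:
        raise RuntimeError(
            f"{requested} not available (found: {available}); "
            "refusing silent CPU fallback"
        )
    return matches[0]
```

Raising here keeps a missing NPU visible in the run log instead of producing a CPU result that looks like a passing NPU benchmark.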