This only works with token IDs passed in directly; the tokenizer in the GGUF is completely busted.
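A minimal sketch of the workaround, assuming llama-cpp-python and the upstream CosyVoice2 tokenizer loaded via transformers; the paths and sampling parameters below are hypothetical:

```python
# Sketch: bypass the broken GGUF tokenizer by tokenizing with the upstream
# tokenizer and feeding raw token IDs to llama.cpp directly.
from llama_cpp import Llama
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/CosyVoice2-0.5B")  # hypothetical path
prompt_ids = tok.encode("some input text")

llm = Llama(model_path="CosyVoice2-0.5B-f16.gguf", n_ctx=2048)  # hypothetical filename

# Llama.generate accepts raw token IDs, so the GGUF tokenizer never runs.
out_ids = []
for tid in llm.generate(prompt_ids, top_k=25, top_p=0.8, temp=1.0):
    out_ids.append(tid)
    if len(out_ids) >= 256:  # crude cap; real use would stop on an EOS token
        break
```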

Update: the f16 and q8 quants now have the bias head. The q6 and q4 don't, but I'm not sure how usable they are anyway...
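You can check whether a given GGUF file carries the bias head with the `gguf` Python package; the exact tensor name ("output.bias") is an assumption based on common GGUF naming, and the filename is hypothetical:

```python
# List tensor names in a GGUF and look for the output bias head.
from gguf import GGUFReader

reader = GGUFReader("CosyVoice2-0.5B-f16.gguf")  # hypothetical filename
names = [t.name for t in reader.tensors]
print("has bias head:", "output.bias" in names)  # assumed tensor name
```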

CosyVoice also does rich pre- and post-processing on top of the LLM step, so you can't do TTS out of the box with llama.cpp. Nevertheless, the LLM step is the slowest part, and switching it from PyTorch to llama.cpp yields a roughly 10x performance gain.
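For orientation, here is a rough sketch of where llama.cpp slots into the pipeline; all three helpers are hypothetical stand-ins, not the actual CosyVoice API:

```python
# Illustrative pipeline shape: only the middle (LLM) step moves to llama.cpp.

def frontend_tokenize(text: str) -> list[int]:
    """Text normalization + tokenization (stays in Python/PyTorch)."""
    raise NotImplementedError  # hypothetical: CosyVoice's text frontend

def llamacpp_generate(prompt_ids: list[int]) -> list[int]:
    """Autoregressive speech-token generation, the slow step.
    Swapping this one step from PyTorch to llama.cpp is where the
    ~10x speedup comes from (see the snippet above)."""
    raise NotImplementedError  # hypothetical: wraps Llama.generate

def flow_and_vocoder(speech_ids: list[int]):
    """Flow-matching decoder + vocoder produce the waveform (stays in PyTorch)."""
    raise NotImplementedError  # hypothetical: CosyVoice's post-processing

def tts(text: str):
    prompt_ids = frontend_tokenize(text)
    speech_ids = llamacpp_generate(prompt_ids)
    return flow_and_vocoder(speech_ids)
```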

Format: GGUF
Model size: 0.6B params
Architecture: qwen2