Qwen3-Coder-30B-A3B-Instruct-nvfp4

Note: This model (NVFP4 quantization) was tested on an NVIDIA RTX PRO 6000 (Blackwell, sm_120, CUDA 12.9, Driver 575.64.03) with vLLM 0.11.0 and with the NVIDIA NGC vLLM container 25.09-py3 (vLLM 0.10.1). In both cases it fails to run due to NVFP4 MoE kernel initialization problems: "no kernel image is available" in ops.shuffle_rows (vLLM 0.11.0) and "Failed to initialize GEMM" in cutlass_fp4_moe_mm (vLLM 0.10.1). See vLLM GitHub issues #20522, #23826, and #18153 for details. A source build with TORCH_CUDA_ARCH_LIST="12.0" or a future vLLM release (e.g., v0.12.0) may resolve this.
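For reference, the load attempt that triggers the errors above is just the standard vLLM offline entry point; nothing NVFP4-specific is needed on the caller side, since the quantization format is detected from the checkpoint config. This is a minimal sketch, and the length/memory settings are illustrative rather than the exact values used in testing.

```python
# Minimal reproduction sketch for the kernel-init failure described above.
# max_model_len and gpu_memory_utilization are illustrative values.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Firworks/Qwen3-Coder-30B-A3B-Instruct-nvfp4",
    max_model_len=8192,
    gpu_memory_utilization=0.90,
)
# On sm_120 with vLLM 0.10.1 / 0.11.0 this fails during NVFP4 MoE kernel
# initialization (ops.shuffle_rows / cutlass_fp4_moe_mm), as noted above.
outputs = llm.generate(
    ["Write a Python function that reverses a string."],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```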

Format: NVFP4 — weights and activations quantized to FP4 (E2M1) with two-level scaling (per-block FP8 scales plus a per-tensor scale).
Base model: Qwen/Qwen3-Coder-30B-A3B-Instruct
How it was made: One-shot quantization with LLM Compressor (NVFP4 recipe), calibrated on long sequences from nvidia/OpenCodeInstruct.

Notes: Keep lm_head in high precision; calibrate on long, domain-relevant sequences.
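If you want to reproduce or adapt the quantization, the description above maps onto LLM Compressor's one-shot flow roughly as sketched below. The calibration sample count, sequence length, and dataset preprocessing are assumptions (the exact script used for this checkpoint isn't published here), and the scheme/argument names reflect the commonly documented llm-compressor API rather than the precise recipe used.

```python
# Sketch of an NVFP4 one-shot quantization with llm-compressor.
# Calibration size, sequence length, and dataset handling are assumed values,
# not the exact settings used to produce this checkpoint.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
SAVE_DIR = "Qwen3-Coder-30B-A3B-Instruct-nvfp4"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Long, domain-relevant calibration data (code instructions); chat templating /
# tokenization preprocessing is omitted here for brevity.
dataset = load_dataset("nvidia/OpenCodeInstruct", split="train").shuffle(seed=42).select(range(512))

# NVFP4 on all Linear layers, keeping lm_head in high precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=dataset,
    recipe=recipe,
    max_seq_length=8192,          # long-sequence calibration (assumed value)
    num_calibration_samples=512,  # assumed value
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```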

See the original model card for more information about the base model.

If there are other models you'd like to see quantized to NVFP4 for use on the DGX Spark or other Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available to help spread adoption.

Safetensors: 17B params · tensor types F32, BF16, F8_E4M3, U8