gpt-oss-120b-1024-Calibration-FP8

Premium FP8 quantization with 1,024-sample calibration across 4 diverse datasets

This is a premium FP8 quantized version of openai/gpt-oss-120b featuring rigorous multi-dataset calibration for production-grade reliability. Quantized by TevunahAi on enterprise-grade hardware.

🎯 Recommended Usage: vLLM

For optimal performance with full FP8 benefits and efficient MoE routing, use vLLM or TensorRT-LLM:

Quick Start with vLLM

pip install vllm

Python API:

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

# vLLM auto-detects FP8 from the model config
llm = LLM(model="TevunahAi/gpt-oss-120b-1024-Calibration-FP8", dtype="auto")

# Build a chat-formatted prompt
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/gpt-oss-120b-1024-Calibration-FP8")
messages = [{"role": "user", "content": "Explain quantum computing"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate
sampling_params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate([prompt], sampling_params)

for output in outputs:
    print(output.outputs[0].text)

OpenAI-Compatible API Server:

vllm serve TevunahAi/gpt-oss-120b-1024-Calibration-FP8 \
    --dtype auto \
    --max-model-len 8192

Then use with OpenAI client:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",  # dummy key
)

response = client.chat.completions.create(
    model="TevunahAi/gpt-oss-120b-1024-Calibration-FP8",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ],
    temperature=0.7,
    max_tokens=512,
)

print(response.choices[0].message.content)

vLLM Benefits

  • Weights, activations, and KV cache in FP8 (see the sketch below)
  • ~60GB VRAM for a 120B MoE model
  • Native FP8 tensor core acceleration on Ada/Hopper GPUs
  • Efficient MoE routing - only 5B active per token
  • 120B model capability at 5B model speed
  • Premium 1024-sample calibration for production reliability
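
If you prefer to request the FP8 KV cache explicitly rather than relying on auto-detection, vLLM exposes a kv_cache_dtype option. A minimal sketch, assuming a recent vLLM release and an Ada/Hopper GPU (option support can vary between vLLM versions):

from vllm import LLM, SamplingParams

# Weights are already FP8 in the checkpoint; this additionally requests an FP8 KV cache.
llm = LLM(
    model="TevunahAi/gpt-oss-120b-1024-Calibration-FP8",
    dtype="auto",
    kv_cache_dtype="fp8",  # FP8 KV cache on supported GPUs
    max_model_len=8192,
)

outputs = llm.generate(
    ["Explain quantum computing"],
    SamplingParams(temperature=0.7, max_tokens=256),
)
print(outputs[0].outputs[0].text)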

⚠️ Transformers: Not Practical

This model can be loaded with transformers, but the FP8 weights are decompressed to BF16 during inference, which requires significantly more VRAM. For large MoE models, vLLM is strongly recommended.

Transformers Example (Not Recommended)
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Loads FP8 weights but decompresses to BF16 during compute
model = AutoModelForCausalLM.from_pretrained(
    "TevunahAi/gpt-oss-120b-1024-Calibration-FP8",
    device_map="auto",
    torch_dtype="auto",
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained("TevunahAi/gpt-oss-120b-1024-Calibration-FP8")

# Generate
messages = [{"role": "user", "content": "Explain quantum computing"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Requirements:

pip install "torch>=2.1.0" "transformers>=4.40.0" accelerate compressed-tensors

System Requirements:

  • ~120GB+ VRAM (decompressed to BF16)
  • Multi-GPU setup or H100 NVL
  • Not practical for most deployments
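
If you still need to experiment with transformers across multiple GPUs, a minimal sketch using accelerate-style device mapping with per-device memory caps (the memory figures below are placeholders for an H100-class node, not a tested configuration):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TevunahAi/gpt-oss-120b-1024-Calibration-FP8"

# Spread layers across two GPUs and spill any remainder to CPU RAM.
# The max_memory caps are illustrative; set them to your actual hardware.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
    low_cpu_mem_usage=True,
    max_memory={0: "80GiB", 1: "80GiB", "cpu": "200GiB"},
)
tokenizer = AutoTokenizer.from_pretrained(model_id)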

⚠️ Critical: vLLM is the recommended deployment method for large MoE models.

📊 Model Details

| Property | Value |
|---|---|
| Base Model | openai/gpt-oss-120b |
| Architecture | Mixture of Experts (MoE) |
| Total Parameters | 120B |
| Active per Token | 5B |
| Quantization Method | FP8 E4M3 weight-only |
| Framework | llm-compressor + compressed_tensors |
| Calibration Samples | 1,024 (4x industry standard) |
| Calibration Datasets | 4 diverse sources |
| Storage Size | ~60GB (sharded safetensors) |
| VRAM (vLLM) | ~60GB |
| VRAM (Transformers) | ~120GB+ (decompressed to BF16) |
| Target Hardware | NVIDIA H100, A100 80GB, 2x RTX 4090 |
| Quantization Time | 78.7 minutes |

🏆 Premium Calibration

This model was quantized using TevunahAi's premium multi-dataset calibration process:

Calibration Details

  • Total Samples: 1,024 (4x industry standard)
  • Datasets Used: 4 complementary sources
  • Coverage: STEM reasoning, multi-turn conversation, instruction following, and general tasks

| Dataset | Samples | Purpose |
|---|---|---|
| Open-Platypus | 256 | STEM reasoning and logic |
| UltraChat-200k | 256 | Natural conversations |
| OpenHermes-2.5 | 256 | Instruction following |
| SlimOrca | 256 | Diverse general tasks |
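
The exact calibration script is not published here, but a rough sketch of how a balanced 1,024-sample mixture like the one above can be assembled with Hugging Face datasets (repo IDs, splits, and column names below are assumptions about the public versions of these datasets, not the actual TevunahAi pipeline):

from datasets import load_dataset, concatenate_datasets

SAMPLES_PER_DATASET = 256

def take(repo_id, split, to_text):
    # Grab a fixed number of shuffled samples and map them to a common "text" column.
    ds = load_dataset(repo_id, split=split).shuffle(seed=42).select(range(SAMPLES_PER_DATASET))
    return ds.map(lambda ex: {"text": to_text(ex)}, remove_columns=ds.column_names)

calibration = concatenate_datasets([
    take("garage-bAInd/Open-Platypus", "train", lambda ex: ex["instruction"]),
    take("HuggingFaceH4/ultrachat_200k", "train_sft", lambda ex: ex["prompt"]),
    take("teknium/OpenHermes-2.5", "train", lambda ex: ex["conversations"][0]["value"]),
    take("Open-Orca/SlimOrca", "train", lambda ex: ex["conversations"][0]["value"]),
]).shuffle(seed=42)

print(len(calibration))  # 1,024 samples drawn evenly from the four sources

A set like this is then fed to llm-compressor's calibration pass; the quantization recipe itself (FP8 scheme, sequence length, and so on) is not reproduced here.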

Why Premium Calibration?

Most FP8 quantizations use 128-512 samples from a single dataset. TevunahAi uses 1,024 samples across 4 diverse datasets, ensuring:

  • Superior robustness across task types
  • Better statistical coverage for quantization scales
  • Minimal quality loss compared to FP16
  • Production-grade reliability
  • Consistent performance on edge cases

When quality matters, choose TevunahAi premium calibration quantizations.

🚀 MoE Architecture

GPT-OSS-120B uses an advanced Mixture of Experts (MoE) architecture:

How it works:

  1. 120B total parameters split across expert networks
  2. Router network selects which experts to activate
  3. 5B active parameters per token (sparse activation)
  4. Result: 120B model knowledge with 5B model speed
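
A highly simplified illustration of steps 2 and 3, showing generic top-k expert routing in PyTorch (a toy router, not the actual gpt-oss implementation; the dimensions are illustrative):

import torch
import torch.nn.functional as F

hidden_size, num_experts, top_k = 2880, 128, 4  # illustrative values

# One token's hidden state, a router that scores every expert, and the expert networks.
x = torch.randn(hidden_size)
router = torch.nn.Linear(hidden_size, num_experts)
experts = [torch.nn.Linear(hidden_size, hidden_size) for _ in range(num_experts)]

# Step 2: the router picks the top-k experts for this token.
weights, chosen = torch.topk(F.softmax(router(x), dim=-1), k=top_k)
weights = weights / weights.sum()  # renormalize over the chosen experts

# Step 3: only the chosen experts run; their outputs are blended by the router weights.
out = sum(w * experts[i](x) for w, i in zip(weights, chosen.tolist()))
print(out.shape)  # torch.Size([2880]): one token processed by 4 of 128 experts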

Benefits:

  • ✅ Massive parameter count without massive compute
  • ✅ Specialist experts for different types of knowledge
  • ✅ Better quality-per-parameter ratio than dense models
  • ✅ More accessible than equivalent dense models

With FP8 + MoE:

  • ~60GB VRAM (vs ~240GB for FP16 dense equivalent)
  • Inference speed comparable to 5B dense models
  • Performance approaching 120B dense models

🔧 Why FP8 for Large MoE Models?

With vLLM/TensorRT-LLM:

  • 50% memory reduction vs BF16 (~120GB → ~60GB)
  • Dual RTX 4090 deployment or single A100 80GB / H100 80GB
  • Faster inference via native FP8 tensor cores
  • Efficient MoE routing - optimal for sparse activation
  • 120B capability at 5B speed - best of both worlds
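
For the dual RTX 4090 route mentioned above, tensor parallelism splits the FP8 weights across both cards. A minimal sketch (tensor_parallel_size=2 assumes exactly two visible GPUs; the equivalent server flag is --tensor-parallel-size 2):

from vllm import LLM, SamplingParams

# Shard the model across two GPUs; with FP8 weights each card holds roughly half.
llm = LLM(
    model="TevunahAi/gpt-oss-120b-1024-Calibration-FP8",
    dtype="auto",
    tensor_parallel_size=2,
    max_model_len=8192,
)

print(llm.generate(["Hello"], SamplingParams(max_tokens=32))[0].outputs[0].text)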

The MoE Advantage:

  • Total Parameters: 120B (full model capability)
  • Active Parameters: 5B per token (fast inference)
  • Memory: ~60GB with FP8 (accessible on high-end prosumer hardware)
  • Speed: Similar to dense 5B models
  • Quality: Comparable to dense 120B models

FP8 + Premium Calibration + MoE = flagship model performance on workstation hardware.

💾 Model Files

This model is sharded into multiple safetensors files (all required for inference). The compressed format enables efficient storage and faster downloads.
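
To pre-fetch every shard (plus config and tokenizer files) before starting a server, a minimal sketch using huggingface_hub:

from huggingface_hub import snapshot_download

# Downloads all sharded safetensors files plus config/tokenizer into local_dir
# (omit local_dir to use the default Hugging Face cache).
path = snapshot_download(
    repo_id="TevunahAi/gpt-oss-120b-1024-Calibration-FP8",
    local_dir="./gpt-oss-120b-fp8",
)
print(path)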

🔬 Quantization Infrastructure

Professional hardware pushing the limits:

  • CPUs: Dual Intel Xeon Max 9480 (224 threads, 128GB HBM2e @ 2000 GB/s)
  • Memory: 256GB DDR5-4800 (16 DIMMs, 8-channel per socket, ~614 GB/s)
  • Total Memory Bandwidth: ~2,614 GB/s aggregate
  • Peak Memory Usage: ~310GB during quantization
  • GPU: NVIDIA RTX 5000 Ada Generation (32GB VRAM, native FP8 support)
  • Software: Ubuntu 25.10 | Python 3.12 | PyTorch 2.8 | CUDA 13.0 | llm-compressor

Why This Matters:

  • This 120B MoE quantization required ~310GB of RAM during calibration
  • The 1,024-sample multi-dataset calibration process is impossible on consumer hardware
  • Professional infrastructure enables production-grade quantization quality

📚 About GPT-OSS

GPT-OSS-120B is OpenAI's flagship open-source model release, featuring:

  • State-of-the-art performance across benchmarks
  • Efficient MoE architecture (120B total, 5B active)
  • Strong reasoning and instruction following
  • Apache 2.0 license

🔧 Hardware Requirements

Minimum (vLLM):

  • GPU: 2x RTX 4090 (48GB total) or A100 80GB
  • VRAM: 60GB minimum
  • CUDA: 11.8 or newer

Recommended (vLLM):

  • GPU: H100 80GB / H100 NVL / 2x RTX 4090
  • VRAM: 60GB+
  • CUDA: 12.0+

Transformers:

  • GPU: Multi-GPU setup or H100 NVL
  • VRAM: 120GB+ total
  • Not recommended - use vLLM instead
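
Before launching, a quick sanity check that a node actually meets these numbers (a small sketch; it only reports what PyTorch can see):

import torch

# Report the CUDA version PyTorch was built against and per-GPU memory.
print("CUDA (PyTorch build):", torch.version.cuda)

total_gb = 0.0
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    gb = props.total_memory / 1024**3
    total_gb += gb
    print(f"GPU {i}: {props.name}, {gb:.0f} GiB")

print(f"Total VRAM: {total_gb:.0f} GiB (need roughly 60 GiB for this model under vLLM)")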

📖 Additional Resources

📄 License

This model inherits the Apache 2.0 License from the original GPT-OSS model.

🙏 Acknowledgments

  • Original Model: OpenAI
  • Quantization Framework: Neural Magic's llm-compressor
  • Quantized by: TevunahAi

📝 Citation

If you use GPT-OSS, please cite the original work:

@misc{gptoss2025,
  title={GPT-OSS: OpenAI's Open-Source Model Release},
  author={OpenAI},
  year={2025},
  url={https://huggingface.co/openai/gpt-oss-120b}
}

🌟 Why TevunahAi Premium Calibration FP8?

The Difference is in the Details

| Aspect | Standard FP8 | TevunahAi Premium FP8 |
|---|---|---|
| Calibration Samples | 128-256 | 1,024 |
| Datasets | Single | 4 diverse |
| Edge Case Handling | Adequate | Superior |
| Output Consistency | Good | Excellent |
| Production Ready | Maybe | Absolutely |
| Infrastructure | Consumer/Prosumer | Enterprise-grade |

Professional Infrastructure

  • 2.6 TB/s aggregate memory bandwidth
  • 310GB peak usage during 120B quantization
  • 1,024 samples across 4 complementary datasets
  • Quality-first approach over speed
  • Enterprise-ready results

Pushing the Limits

This 120B MoE model required ~310GB of RAM during quantization — pushing our professional hardware to its limits. This level of rigorous calibration would be impossible on consumer hardware.


Professional AI Model Quantization by TevunahAi

Premium multi-dataset calibration on enterprise-grade infrastructure

View all models | Contact for custom quantization
