zai-org/GLM-4.5-Air
Quantize to 4 bits with BitsAndBytesConfig (#7), opened by Day1Kim on Aug 4
Discussion
Day1Kim · Aug 4
Would it be okay to quantize this model to 4 bits with BitsAndBytesConfig?
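For context, this is roughly the loading path I have in mind: a minimal sketch using transformers' BitsAndBytesConfig with NF4 and bfloat16 compute. The parameter choices here are just my assumptions, not an official recommendation for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed 4-bit settings: NF4 weights, double quantization, bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("zai-org/GLM-4.5-Air")

# Load the checkpoint with on-the-fly 4-bit quantization, sharding across
# available GPUs via device_map="auto".
model = AutoModelForCausalLM.from_pretrained(
    "zai-org/GLM-4.5-Air",
    quantization_config=bnb_config,
    device_map="auto",
)
```

Even at 4 bits, a MoE checkpoint of this size still needs substantial GPU memory, and bitsandbytes would quantize the expert weights as well, so I am mainly unsure whether quality holds up on the glm4_moe architecture.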