Update README.md
README.md CHANGED

@@ -7,6 +7,10 @@ tags:
 ---
 This is [allenai/Olmo-3.1-32B-Think](https://huggingface.co/allenai/Olmo-3.1-32B-Think) quantized with [LLM Compressor](https://github.com/vllm-project/llm-compressor) using the recipe in the "recipe.yaml" file. **Not Tested**
 
+How the models perform (token efficiency, accuracy per domain, ...) and how to use them:
+[Quantizing Olmo 3: Most Efficient and Accurate Formats](https://kaitchup.substack.com/p/quantizing-olmo-3-most-efficient)
+
+
 
 - **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
 - **License:** Apache 2.0
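For quick orientation, here is a minimal sketch of how a checkpoint quantized with LLM Compressor is typically loaded for inference with vLLM. The repository id, prompt, and sampling settings below are placeholders for illustration, not values taken from this model card; see the linked article for tested usage and performance details.

```python
# Minimal sketch: serving an LLM Compressor-quantized checkpoint with vLLM.
# "your-namespace/Olmo-3.1-32B-Think-quantized" is a placeholder repository id.
from vllm import LLM, SamplingParams

llm = LLM(model="your-namespace/Olmo-3.1-32B-Think-quantized")

sampling = SamplingParams(temperature=0.6, max_tokens=256)
outputs = llm.generate(["Briefly explain weight quantization."], sampling)

print(outputs[0].outputs[0].text)
```

vLLM typically picks up the quantization configuration stored in the checkpoint itself, so no additional quantization flags are usually needed for models produced by LLM Compressor.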