100 Coder/Programming - MOE, Reasoning, Reg, Imatrix, Fused.
Models (0.8B to 87B) in regular, "reasoning", "Brainstorm", and MOE (1x to 8x / 128 experts) configurations, expanded to create better and stronger code, faster.
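The context sizes quoted in the notes below ("40k", "256K", "512K", "1 million") are token counts. A minimal sketch of converting those labels to the integer a llama.cpp-style runtime expects for its context setting, assuming the conventional 1024-token multiplier (the helper name `ctx_tokens` is my own, not part of any model card):

```python
# Hypothetical helper: turn the context labels used in this list into
# integer token counts (e.g. for an n_ctx-style setting).
# Assumes the conventional 1024 multiplier for "k"/"K" suffixes.
def ctx_tokens(label: str) -> int:
    label = label.lower().strip()
    if label in ("1 million", "1m"):
        return 1024 * 1024
    if label.endswith("k"):
        return int(label[:-1]) * 1024
    return int(label)

print(ctx_tokens("256K"))       # 262144
print(ctx_tokens("40k"))        # 40960
print(ctx_tokens("1 million"))  # 1048576
```

Whether a given model card intends 40k as exactly 40960 tokens is an assumption; check the individual card before relying on it.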
Text Generation • 53B • Updated • 47 • 9 • Note: 128 experts (MOE, mixture of experts); all experts are coders. 256K context; uses Brainstorm 40x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M
Text Generation • 42B • Updated • 39 • 3 • Note: 128 experts (MOE, mixture of experts); all experts are coders. 256K context; uses Brainstorm 20x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-512k-ctx
Text Generation • 42B • Updated • 37 • 2 • Note: 128 experts (MOE, mixture of experts); all experts are coders. 512K context; uses Brainstorm 20x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page. Special note: even if you do not need the extra context, try this model, as the context change alters generation.
DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-1million-ctx
Text Generation • 42B • Updated • 10 • 6 • Note: 128 experts (MOE, mixture of experts); all experts are coders. 1 million context; uses Brainstorm 20x to enhance performance. Non-thinking model (thinking can be activated via a system prompt). Links to GGUFs on this page. Special note: even if you do not need the extra context, try this model, as the context change alters generation.
DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • 53B • Updated • 13 • 8 • Note: 128 experts (MOE, mixture of experts). 256K context; uses Brainstorm 40x to enhance performance. Thinking model. Links to GGUFs on this page.
DavidAU/Qwen3-53B-A3B-2507-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • 53B • Updated • 14 • 8 • Note: 128 experts (MOE, mixture of experts). 256K context; uses Brainstorm 40x to enhance performance. Links to GGUFs on this page. Non-thinking model => STRAIGHT to coding.
DavidAU/Qwen3-42B-A3B-2507-Thinking-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 214 • 4 • Note: 128 experts (MOE, mixture of experts). 256K context; uses Brainstorm 20x to enhance performance. Links to GGUFs on this page. Enhanced thinking model => smarter thinking, fewer tokens, better code.
DavidAU/Qwen3-42B-A3B-2507-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 6 • 3 • Note: 128 experts (MOE, mixture of experts). 256K context; uses Brainstorm 20x to enhance performance. Links to GGUFs on this page. Non-thinking model => STRAIGHT to coding.
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4
Text Generation • 53B • Updated • 5 • 3 • Note: 128-expert MOE model. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-2X7B-Coder-Soar-qwen-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Updated • 2 • 2 • Note: Specialized 2-model MOE with an additional shared expert. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Mistral-Magistral-Devstral-Instruct-FUSED-CODER-Reasoning-36B
Text Generation • 36B • Updated • 15 • 4 • Note: Newest Devstral version (1.1), with even better coding abilities. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. This is a fused model with 62 layers and 561 tensors. Short thinking blocks -> then straight to coding.
DavidAU/Mistral-Devstral-2507-CODER-Brainstorm40x-44B
Text Generation • 44B • Updated • 4 • 2 • Note: Newest Devstral version, with even better coding abilities. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2505-CODER-Brainstorm40x-44B
Text Generation • 44B • Updated • 113 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x
Text Generation • 21B • Updated • 7 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-12B
Text Generation • 12B • Updated • 3 • 4 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x
Text Generation • 12B • Updated • 32 • 2 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Jan-Nano-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 22 • 4 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Blitzar-Coder-F1-6B-Brainstorm20x
Text Generation • 6B • Updated • 3 • 3 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others.
DavidAU/Qwen3-Polaris-Preview-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 12 • 2 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Instruct-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 6 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model. 128k context.
DavidAU/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 48 • 1 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. This is a general-use AND coder/programming model.
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-128k-ctx
Text Generation • 6B • Updated • 32 • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants on this page -> GGUF, GGUF Imatrix, and others. Float32-enhanced, with 128k context.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 11.9k • 15 • Note: Uses the NEO Imatrix dataset (by DavidAU) to augment model performance. 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning too. Fused together from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 7.03k • 13 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning too. Stronger than V1. Fused together from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B
Text Generation • 0.8B • Updated • 4 • 2 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning too. Fused together from 2 coder models.
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B
Text Generation • 0.8B • Updated • 96 • Note: Links to quants on this page -> GGUF, GGUF Imatrix, and others. 40k context. Good for draft, simple, or complex code blocks. Model has full thinking/reasoning too. Stronger than V1. Fused together from 2 coder models.
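Several of the non-thinking models above note that thinking can be activated via a system prompt. The exact prompt text is documented on each model's card; the sketch below only shows the chat-message shape such a toggle takes (the `SYSTEM_THINKING` string and `build_messages` helper are placeholders of mine, not the real prompt or any model's API):

```python
# Sketch of toggling "thinking" on a non-thinking coder model via the
# system role. The system-prompt text here is a placeholder; substitute
# the exact string from the card of the model you are running.
SYSTEM_THINKING = "Enable reasoning: think step by step before answering."

def build_messages(user_prompt: str, thinking: bool = False) -> list:
    """Build an OpenAI-style chat message list, optionally prepending
    the thinking-activation system prompt."""
    messages = []
    if thinking:
        messages.append({"role": "system", "content": SYSTEM_THINKING})
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("Write a binary search in Python.", thinking=True)
# msgs can be passed to any OpenAI-compatible chat endpoint
# (e.g. a local llama.cpp or LM Studio server hosting the GGUF).
```

Without `thinking=True`, the list contains only the user turn, and these models go straight to coding.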
DavidAU/Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF
Text Generation • 21B • Updated • 1.44k • 12
DavidAU/Openai_gpt-oss-20b-NEO-GGUF
Text Generation • 21B • Updated • 2.47k • 18
DavidAU/Openai_gpt-oss-120b-NEO-Imatrix-GGUF
Text Generation • 117B • Updated • 1.33k • 12
DavidAU/OpenAi-GPT-oss-20b-MODERATE-uncensored-NEO-Imatrix-gguf
Text Generation • 21B • Updated • 4.55k • 9
DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x
Text Generation • 6B • Updated • 30 • 4
DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER
Text Generation • 42B • Updated • 952 • 26
DavidAU/Mistral-2x24B-MOE-Magistral-2506-Devstral-2507-1.1-Coder-Reasoning-Ultimate-44B
Text Generation • 44B • Updated • 4 • 2
DavidAU/Qwen3-MOE-4x4B-16B-Jan-Polaris-Instruct-Power-House-V1.1
Text Generation • 12B • Updated • 4 • 2
DavidAU/Qwen3-42B-A3B-2507-YOYO2-TOTAL-RECALL-Instruct
Text Generation • 42B • Updated • 2 • 1
DavidAU/Qwen3-54B-A3B-2507-YOYO2-TOTAL-RECALL-Instruct
Text Generation • 53B • Updated • 2
DavidAU/Qwen3-17B-QiMing-V1.0-Total-Recall-Medium
Text Generation • 17B • Updated • 2
DavidAU/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL
Text Generation • 42B • Updated • 2 • 1
DavidAU/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV
Text Generation • 42B • Updated • 1 • 1
DavidAU/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-PKDick-V
Text Generation • 42B • Updated • 4 • 1
DavidAU/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-TNG-IV-PKDick-V
Text Generation • 42B • Updated • 2
DavidAU/Qwen3-VL-12B-Instruct-Brainstorm20x-NEO-MAX-GGUF
Image-Text-to-Text • 12B • Updated • 1.43k
DavidAU/Qwen3-VL-12B-Thinking-Brainstorm20x
Image-Text-to-Text • 12B • Updated • 40
DavidAU/Qwen3-VL-12B-Thinking-Brainstorm20x-NEO-MAX-GGUF
Image-Text-to-Text • 12B • Updated • 2.04k • 2
DavidAU/Qwen3-VL-42B-A3B-Thinking-Brainstorm20x-GGUF
Image-Text-to-Text • 42B • Updated • 1.09k • 5
DavidAU/Qwen3-VLTO-TNG-12B-256k-NEO-imatrix-GGUF
Text Generation • 12B • Updated • 622 • 1