# Qwen3.5-35B-A3B — Claude 4.6 Opus Reasoning Distilled (GGUF)
A Mixture-of-Experts language model (35B total / 3B active parameters) fine-tuned with Claude 4.6 Opus reasoning distillation data using completion-only loss. Combines the inference speed of MoE with structured Chain-of-Thought reasoning.
## Available Quantizations
| File | Quant | Size | Best For |
|---|---|---|---|
| `Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-Q8_0.gguf` | Q8_0 | 35 GB | Maximum quality, high-VRAM GPUs |
| `Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-Q4_K_M.gguf` | Q4_K_M | 20 GB | **Recommended** — best quality/speed/size balance |
| `Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-Q3_K_M.gguf` | Q3_K_M | 16 GB | Constrained hardware, still excellent quality |
## Why This Model?
This model was trained with completion-only loss — the model only learns from the reasoning and answer tokens, not from predicting the user's prompt. Compared to Jackrong's standard SFT version, this produces:
- More structured `<think>` blocks with numbered step-by-step reasoning
- Better-formatted outputs (markdown tables, LaTeX math, execution traces)
- More thorough and rigorous proofs and analysis
- More concise answers when appropriate
See the head-to-head quality comparison below.
## Model Details
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen3.5-35B-A3B |
| Architecture | Mixture-of-Experts (256 experts, ~3B active per token) |
| Total Parameters | 35B |
| Active Parameters | ~3B per token |
| Fine-tuning | LoRA (bf16, rank 16, alpha 32) via Unsloth |
| Training Loss | completion-only (loss computed only on reasoning + answer) |
| Training Data | ~3,200 Claude Opus reasoning examples |
| Context Length | Up to 1M tokens (tested) |
| Thinking Mode | Yes (`<think>` blocks) |
## Benchmarks
All benchmarks use identical prompts across 8 tests: coding (binary search, bug detection), math (probability, number theory), logic (knights & knaves), analysis (code review, system design), and instruction following (JSON generation).
### Speed: Discrete GPU (NVIDIA RTX PRO 6000 Blackwell, 98GB VRAM)
High-bandwidth discrete GPU with 1.8 TB/s memory bandwidth. Model runs entirely in VRAM.
| Quant | Speed | vs Q8_0 | VRAM Used |
|---|---|---|---|
| Q8_0 | 160 tok/s | baseline | ~35 GB |
| Q4_K_M | 181 tok/s | +13% | ~20 GB |
| Q3_K_M | 182 tok/s | +14% | ~16 GB |
With 1.8 TB/s of bandwidth, even Q8_0 is fast. Smaller quants give a modest speedup since the GPU isn't bandwidth-bottlenecked.
### Speed: Unified Memory (NVIDIA DGX Spark, 128GB, 273 GB/s)
Bandwidth-limited unified memory architecture. This is where quantization matters most.
| Quant | Speed | vs Q8_0 | Memory Used (262K ctx) |
|---|---|---|---|
| Q8_0 | 53.8 tok/s | baseline | ~44 GB |
| Q4_K_M | 71.5 tok/s | +33% | ~30 GB |
| Q3_K_M | 75.3 tok/s | +40% | ~26 GB |
On bandwidth-limited hardware, smaller quants translate directly to faster inference. Q4_K_M is 33% faster than Q8_0 with no visible quality loss.
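The bandwidth scaling can be sanity-checked with a back-of-the-envelope model: each decoded token must stream roughly the active parameters' bytes from memory, so decode speed is bounded above by bandwidth divided by bytes per token. A rough sketch — the bits-per-weight value is an assumption, and the model ignores KV-cache reads and kernel overheads:

```python
def decode_tok_s_upper_bound(bandwidth_gb_s: float,
                             active_params_b: float,
                             bits_per_weight: float) -> float:
    """Bandwidth-bound ceiling on decode speed for an MoE model."""
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# DGX Spark (273 GB/s), ~3B active params, Q4_K_M at ~4.85 bits/weight (assumed)
ceiling = decode_tok_s_upper_bound(273, 3, 4.85)
```

The measured 71.5 tok/s is roughly half this ~150 tok/s ceiling, which is plausible once KV-cache traffic and kernel overheads are accounted for.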
### Speed Summary
| Quant | Size | Discrete GPU (1.8 TB/s) | Unified Memory (273 GB/s) |
|---|---|---|---|
| Q8_0 | 35 GB | 160 tok/s | 54 tok/s |
| Q4_K_M | 20 GB | 181 tok/s | 72 tok/s |
| Q3_K_M | 16 GB | 182 tok/s | 75 tok/s |
**Recommendation:** Use Q4_K_M for the best balance of quality, speed, and size. Use Q8_0 only if you have abundant VRAM and want maximum fidelity. Use Q3_K_M if you need to fit within tight memory constraints.
## Quality Comparison Across Quantizations
All three quantizations produce correct answers on every test. No visible quality degradation between Q8_0 and Q4_K_M/Q3_K_M:
| Test | Q8_0 | Q4_K_M | Q3_K_M |
|---|---|---|---|
| Binary Search | Correct | Correct | Correct |
| Bug Detection (missing list tail) | Correct | Correct | Correct |
| Probability (combinatorics) | Correct (1/4 = 25%) | Correct (1/4 = 25%) | Correct (1/4 = 25%) |
| n³-n divisible by 6 | Correct proof | Correct proof | Correct proof |
| Knights & Knaves | A=knave, B=knight, C=knave | A=knave, B=knight, C=knave | A=knave, B=knight, C=knave |
| SQL Injection Detection | Identified + examples | Identified + examples | Identified + examples |
| System Design | Complete API + data model | Complete API + data model | Complete API + data model |
| JSON Generation | Valid JSON | Valid JSON | Valid JSON |
## Quality Comparison vs Jackrong's Version
Tested on the same hardware (DGX Spark) with identical prompts. Both models use the same base (Qwen3.5-35B-A3B) and the same training datasets.
| Test | Jackrong's (standard SFT) | Ours (completion-only loss) | Winner |
|---|---|---|---|
| Bug Detection | Identifies missing elements | Same + execution trace table | Ours |
| Probability | Correct with combinations | Full LaTeX step-by-step | Ours |
| Number Theory Proof | Correct factoring | Formal proof by division algorithm | Ours |
| Knights & Knaves | Correct answer | Clearer case-by-case analysis | Ours |
| Code Review | Identifies SQL injection | Same + specific attack examples table | Ours |
| JSON Generation | Explains then outputs | Outputs directly — concise | Ours |
Our model consistently produces more structured reasoning, better formatted output, more rigorous proofs, and more actionable analysis.
## Training Details

### Datasets
| Dataset | Examples | Source |
|---|---|---|
| nohurry/Opus-4.6-Reasoning-3000x-filtered | 2,326 | Claude 4.6 Opus reasoning traces |
| TeichAI/claude-4.5-opus-high-reasoning-250x | 250 | High-intensity structured reasoning |
| Jackrong/Qwen3.5-reasoning-700x | 633 | Curated step-by-step problem solving |
| Total | 3,204 | (after quality filtering) |
### Hyperparameters

- Method: LoRA (bf16 — QLoRA 4-bit is NOT recommended for MoE models)
- LoRA rank: 16
- LoRA alpha: 32 (2× scaling factor)
- Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- Trainable params: 931M / 36B (2.58%)
- Loss: `completion_only_loss` (computed only on reasoning + answer tokens)
- Learning rate: 1e-4 (cosine schedule)
- Warmup: 5%
- Weight decay: 0.01
- Epochs: 3
- Batch size: 1 × 8 gradient accumulation = 8 effective
- Max sequence length: 8192
- Optimizer: `adamw_8bit`
- Training time: ~10 hours on NVIDIA RTX PRO 6000 Blackwell (98GB)
- Framework: Unsloth 2026.3.4
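As a quick check on the rank/alpha numbers above: LoRA's effective scaling is alpha / rank, and each adapted weight matrix gains rank × (d_in + d_out) trainable parameters. A small sketch — the 4096 projection dimension below is a hypothetical placeholder, not the base model's actual hidden size:

```python
def lora_scaling(alpha: int, rank: int) -> float:
    # LoRA multiplies the low-rank update BA by alpha / rank
    return alpha / rank

def lora_param_count(rank: int, d_in: int, d_out: int) -> int:
    # A is (d_in x rank), B is (rank x d_out)
    return rank * (d_in + d_out)

scale = lora_scaling(32, 16)               # the "2x scaling factor" above
params = lora_param_count(16, 4096, 4096)  # hypothetical 4096-dim projection
```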
### Key Training Decision: `completion_only_loss`
Standard SFT computes loss on the entire sequence (prompt + response), which inflates the loss metric with easy-to-predict prompt tokens. Our `completion_only_loss=True` setting computes loss only on the assistant's reasoning and answer:
- The model focuses entirely on learning to reason well
- The resulting model produces more structured, higher-quality responses
- Loss numbers aren't directly comparable to standard SFT (ours looks higher because it measures a harder prediction task)
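In Hugging Face-style trainers, completion-only loss is typically implemented by masking prompt positions in the labels with -100, which PyTorch's cross-entropy ignores. A minimal sketch — the function name is illustrative, not the trainer's actual API:

```python
IGNORE_INDEX = -100  # PyTorch CrossEntropyLoss default ignore_index

def mask_prompt_labels(input_ids, prompt_len):
    """Copy input_ids into labels, masking the prompt so loss is
    computed only on the completion (reasoning + answer) tokens."""
    labels = list(input_ids)
    for i in range(min(prompt_len, len(labels))):
        labels[i] = IGNORE_INDEX
    return labels
```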
## Usage

### llama.cpp (recommended)

```bash
# Q4_K_M at the native 262K context
llama-server \
  --model Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-Q4_K_M.gguf \
  --n-gpu-layers -1 \
  --ctx-size 262144 \
  --parallel 1 \
  --host 0.0.0.0 --port 8000
```
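Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch — the endpoint path follows llama-server's OpenAI-compatible API, and the helper also strips the model's `<think>` block when you only want the final answer:

```python
import json
import re
import urllib.request

def strip_think(text: str) -> str:
    """Remove the model's <think>...</think> reasoning block."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

def ask(prompt: str,
        url: str = "http://localhost:8000/v1/chat/completions") -> str:
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    return strip_think(reply)
```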
### Extended Context (up to 1M tokens)

The base model was trained with 262K context. To extend to 1M tokens, use YaRN RoPE scaling:

```bash
# Q4_K_M at 1M context (4x YaRN extrapolation)
llama-server \
  --model Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-Q4_K_M.gguf \
  --n-gpu-layers -1 \
  --ctx-size 1048576 \
  --rope-scaling yarn \
  --rope-scale 4 \
  --override-kv qwen35moe.context_length=int:1048576 \
  --parallel 1 \
  --host 0.0.0.0 --port 8000
```
Note: The `--override-kv` flag is required because the GGUF metadata advertises 262K. Without it, llama.cpp caps the slot context at 262K even if you request more. Quality is best within the native 262K window and may degrade gradually beyond it.
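The `--rope-scale 4` value above is simply the ratio of the target context window to the native one:

```python
native_ctx = 262144    # native training context (262,144 tokens)
target_ctx = 1048576   # desired 1M-token window
rope_scale = target_ctx / native_ctx  # YaRN factor passed to llama-server
```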
### NVIDIA DGX Spark / Jetson
This model was specifically designed for bandwidth-limited unified memory hardware:
| Setup | Quant | Context | Memory Used | Speed |
|---|---|---|---|---|
| DGX Spark (128GB) | Q4_K_M | 262K | ~30 GB | 72 tok/s |
| DGX Spark (128GB) | Q4_K_M | 1M | ~49 GB | 72 tok/s |
| DGX Spark (128GB) | Q3_K_M | 262K | ~26 GB | 75 tok/s |
| DGX Spark (128GB) | Q8_0 | 262K | ~44 GB | 54 tok/s |
For 1M context on the Spark, add the YaRN flags shown above. Only ~3B parameters are active per token, so the MoE architecture is ideal for bandwidth-limited hardware.
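The memory growth from 262K to 1M context in the table above is dominated by the KV cache, which scales linearly with context length. A generic estimator — the layer/head/dimension arguments are placeholders you would fill from the model's config, not Qwen3.5's actual values:

```python
def kv_cache_bytes(ctx_len: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    """Total K+V cache size: two tensors (K and V) per layer,
    fp16 (2 bytes per element) by default."""
    return 2 * ctx_len * n_layers * n_kv_heads * head_dim * bytes_per_elem
```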
### Ollama

```bash
echo 'FROM ./Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-Q4_K_M.gguf' > Modelfile
ollama create qwen35-opus-moe -f Modelfile
ollama run qwen35-opus-moe
```
## Hardware Requirements
| Quant | Minimum VRAM/RAM | Recommended |
|---|---|---|
| Q3_K_M (16 GB) | 20 GB | 24+ GB GPU or 32 GB unified |
| Q4_K_M (20 GB) | 24 GB | 32+ GB GPU or 48 GB unified |
| Q8_0 (35 GB) | 40 GB | 48+ GB GPU or 64 GB unified |
## Files

| File | Description | Size |
|---|---|---|
| `Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-Q8_0.gguf` | Highest-precision quantization | 35 GB |
| `Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-Q4_K_M.gguf` | Recommended — best balance | 20 GB |
| `Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-Q3_K_M.gguf` | Smallest, for constrained hardware | 16 GB |
| `benchmark_results.json` | Dense vs MoE benchmark data | — |
| `v1_vs_v2_results.json` | Training method comparison data | — |
| `jackrong_benchmark.json` | Jackrong's model benchmark data | — |
| `quant_comparison.json` | GPU quantization benchmark data | — |
| `spark_quant_comparison.json` | DGX Spark quantization benchmark data | — |
## Acknowledgments
- Jackrong for the original dense distilled model, the MoE variant, and the reasoning dataset
- nohurry for the Opus 4.6 reasoning dataset
- TeichAI for the Claude 4.5 high reasoning dataset
- Qwen Team for the base Qwen3.5-35B-A3B model
- Unsloth for the training framework
## License

Apache 2.0 (following the base model's license)