# Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-4bit-MLX

Quantized by [BeastCode](https://huggingface.co/BeastCode)

A high-performance **4-bit MLX quantization** of [Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled](https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled), optimized specifically for Apple Silicon (M-series chips) to run deep, agentic-level reasoning locally.
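
To try it locally, a minimal inference sketch with the `mlx-lm` Python package looks like the following. The repo id passed to `load` is an assumption here; point it at wherever this quantization is actually hosted.

```python
# Minimal local-inference sketch (assumes `pip install mlx-lm`).
# The repo id below is a placeholder for this quantized model's Hugging Face path.
from mlx_lm import load, generate

model, tokenizer = load("BeastCode/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-4bit-MLX")

messages = [{"role": "user", "content": "Briefly explain how 4-bit weight quantization reduces memory use."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Streams tokens to stdout when verbose=True and returns the full completion.
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```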
The original BF16 weights are **55.6 GB**. This conversion reduces the footprint to **14 GB**, making the model runnable on any Mac with 24 GB or more of unified memory, with room to spare for large context windows.
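
For reference, a 4-bit conversion of this kind is typically produced with `mlx-lm`'s convert helper. The sketch below assumes its Python API and default quantization settings rather than the exact command used for this repo.

```python
# Sketch of how a 4-bit MLX conversion like this is typically produced with mlx-lm;
# the exact settings used for this repo are an assumption.
from mlx_lm import convert

convert(
    hf_path="Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled",   # BF16 source (~55.6 GB)
    mlx_path="Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-4bit-MLX",  # quantized output (~14 GB)
    quantize=True,
    q_bits=4,         # 4-bit weights
    q_group_size=64,  # mlx-lm's default quantization group size
)
```

As a rough sanity check, 27B parameters at 4 bits is about 13.5 GB, and the per-group quantization scales bring the total close to the 14 GB figure quoted above.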