BeastCode committed · verified
Commit 5f690e9 · Parent(s): c082e70

Update README.md

Files changed (1): README.md +2 -0
README.md CHANGED
@@ -19,6 +19,8 @@ tags:
 
 # Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-4bit-MLX
 
+Quantized by [BeastCode](https://huggingface.co/BeastCode)
+
 A high-performance **4-bit MLX quantization** of [Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled](https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled). Specifically optimized for Apple Silicon (M-series chips) to provide deep, agentic-level reasoning locally.
 
 The original BF16 weights are **55.6 GB**. This conversion reduces the footprint to **14 GB**, making it runnable on any Mac with 24 GB+ of unified memory with room to spare for large context windows.
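The size reduction follows from simple arithmetic: a 4-bit encoding stores a quarter of the bits of BF16. A minimal back-of-the-envelope sketch (the ~27B parameter count is inferred from the model name; real MLX quantized files also carry group scales and some non-quantized layers, so on-disk sizes land slightly off the naive estimates):

```python
# Naive model-size estimate: parameter count x bits per weight.
# Assumes ~27e9 parameters (from the model name) and ignores quantization
# scales/metadata, so actual files differ slightly from these figures.
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

print(approx_size_gb(27e9, 16))  # BF16: 54.0 GB, close to the 55.6 GB on disk
print(approx_size_gb(27e9, 4))   # 4-bit: 13.5 GB, near the reported 14 GB
```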