# AXE-BLADE-4B
Precision code specialist. Runs on your hardware. No cloud required.
AXE-BLADE is a distilled reasoning model purpose-built for fast, accurate code generation, refactoring, and tool-calling. It delivers production-quality code output in a 2.3GB package that runs at full speed on consumer hardware.
Part of the AXE Fleet – sovereign AI designed to run entirely on local hardware with zero cloud dependency.
## Model Details
| Property | Value |
|---|---|
| Base Architecture | Qwen3-4B |
| Training Method | Multi-stage distillation from frontier reasoning models |
| Parameters | 4 billion |
| Format | GGUF (Q4_K_M) |
| Download Size | 2.3 GB |
| Context Window | 32,768 tokens |
| Specialization | Code generation, refactoring, tool-calling |
| Target Hardware | Apple Silicon (M1/M2/M3/M4), CUDA GPUs, CPU |
## What Makes BLADE Different
Most small models sacrifice quality for size. BLADE doesn't.
- Thinks before it codes. Step-by-step reasoning produces correct solutions, not plausible-looking ones.
- Native tool-calling. First-class `<tool_call>` support for agentic workflows, IDE integrations, and autonomous coding pipelines.
- Clean output by default. No filler, no preamble. Just the solution.
- Type-safe and idiomatic. Type annotations, proper naming conventions, and production patterns out of the box.
- Multi-language. Python, TypeScript, Rust, Go, C++, Bash, SQL, and more.
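The `<tool_call>` tags mentioned above can be consumed programmatically. As a minimal sketch (the exact JSON payload shape inside the tags is an assumption for illustration, not a documented spec), a downstream agent might extract tool calls like this:

```python
import json
import re

def parse_tool_calls(text: str) -> list[dict]:
    """Return every JSON object wrapped in <tool_call>...</tool_call> tags."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(m) for m in pattern.findall(text)]

# Hypothetical model output; the payload fields are illustrative.
sample = (
    "Let me check the weather.\n"
    '<tool_call>{"name": "get_weather", "arguments": {"city": "Oslo"}}</tool_call>'
)
calls = parse_tool_calls(sample)
print(calls[0]["name"])  # get_weather
```

A real agent loop would dispatch each parsed call to the matching function and feed the result back to the model.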
## Benchmarks
Evaluated across our internal fleet of 21 models with 8 sampling profiles:
| Metric | Result |
|---|---|
| Overall Grade | A |
| Fleet Score | 97.5 / 100 |
| Ranking | Top 3 out of 21 models |
| Code Accuracy | Consistently correct across function-level and module-level tasks |
| Tool-Call Compliance | Structured output follows schema reliably |
BLADE outperforms several 7B and 8B models on code tasks while using less than half the memory.
## Quick Start

### Ollama (Recommended)

```bash
ollama run axe-blade-4b
```
### llama.cpp

```bash
./llama-cli -m axe-blade-4b.gguf \
  -p "Write a Python async web scraper with proper error handling" \
  -n 1024 --temp 0.7
```
### Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama(model_path="axe-blade-4b.gguf", n_ctx=8192, n_gpu_layers=-1)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a precise code assistant."},
        {"role": "user", "content": "Write a Redis-backed rate limiter in Python"},
    ],
    temperature=0.7,
    max_tokens=1024,
)
print(response["choices"][0]["message"]["content"])
```
## Sample Output
Prompt: "Write a function to validate email addresses with proper regex"
BLADE Response:
```python
import re
from typing import Optional

def validate_email(email: str) -> Optional[str]:
    """Validate an email address. Returns normalized email or None."""
    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
    email = email.strip().lower()
    if re.match(pattern, email):
        return email
    return None
```
No boilerplate. No explanation you didn't ask for. Just working code.
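To see the sample's behavior concretely, here is an editor-written usage sketch (the function is repeated so the snippet is self-contained; the example inputs are illustrative):

```python
import re
from typing import Optional

# Repeated from the sample response above so this snippet runs on its own.
def validate_email(email: str) -> Optional[str]:
    """Validate an email address. Returns normalized email or None."""
    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
    email = email.strip().lower()
    if re.match(pattern, email):
        return email
    return None

print(validate_email("  User@Example.COM "))  # user@example.com
print(validate_email("not-an-email"))         # None
```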
## Hardware Requirements
| Setup | RAM Required | Speed |
|---|---|---|
| Apple Silicon (M1+) | 3 GB | ~40 tok/s |
| NVIDIA GPU (8GB+) | 3 GB VRAM | ~50 tok/s |
| CPU-only | 4 GB RAM | ~8 tok/s |
BLADE fits comfortably alongside your other applications. Run AI-assisted coding without sending your code to any cloud.
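As a back-of-envelope sanity check on these figures (an editor approximation, not from the model card: Q4_K_M quantization is commonly cited at roughly 4.8 bits per weight), 4 billion parameters land close to the stated 2.3 GB download:

```python
# Rough memory estimate for a Q4_K_M quantized 4B-parameter model.
# ~4.8 bits/weight is an approximate community figure, not an official spec.
params = 4e9
bits_per_weight = 4.8
size_gb = params * bits_per_weight / 8 / 1e9
print(f"{size_gb:.1f} GB")  # 2.4 GB, in line with the 2.3 GB download
```

The extra headroom in the RAM column covers the KV cache and runtime overhead, which grow with context length.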
## Use Cases
- Local coding assistant โ IDE integration without API keys or subscriptions
- Agentic pipelines โ Tool-calling support for autonomous code review, refactoring, and generation
- Air-gapped environments โ Full capability with zero network access
- Edge deployment โ Small enough for embedded systems and field devices
- CI/CD integration โ Automated code review and generation in your pipeline
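For the CI/CD case, a minimal sketch of one way to wire this up, assuming the model is served locally by Ollama (its standard `/api/generate` REST endpoint on port 11434); the helper names and review prompt are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_review_request(diff: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    payload = {
        "model": "axe-blade-4b",
        "prompt": f"Review this diff and list any bugs:\n\n{diff}",
        "stream": False,
    }
    return json.dumps(payload).encode()

def review_diff(diff: str) -> str:
    """Send a diff to the locally served model and return its review."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_review_request(diff),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A pipeline step would call `review_diff` on the merge-request diff and post the result as a comment; because everything runs locally, no source code leaves the build machine.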
## The AXE Fleet
AXE Technology builds sovereign AI systems. Local models that run on your hardware, no cloud required.
The fleet includes specialized models for code, research, strategy, security, and general intelligence. Each model is distilled and optimized for its domain, then benchmarked against the full fleet to ensure quality.
- Website: axe.onl
- Mission: Free intelligence. No gatekeepers. No subscriptions.
## License

Apache 2.0 – use it however you want, commercially or otherwise.
## Citation

```bibtex
@misc{axe-blade-4b,
  title={AXE-BLADE-4B: Distilled Code Specialist},
  author={AXE Technology},
  year={2026},
  url={https://huggingface.co/axyn/axe-blade-4b}
}
```