# OmniCoder-9B GPTQ Int4

GPTQ INT4 quantization of Tesslate/OmniCoder-9B, a vision-language model (VLM) for agentic coding with image understanding.
## Architecture

- Type: `Qwen3_5ForConditionalGeneration` (VLM backbone)
- Base: Qwen3.5-9B hybrid (Gated DeltaNet + full attention, 32 layers)
- Vision encoder: preserved in BF16 (not quantized), full image understanding capability
- Fine-tuned on: 425K agentic coding trajectories (LoRA r=64, alpha=32)
- Features: agentic coding, tool calling, reasoning, long context (262K+), image input
## Quantization

- Method: GPTQ via GPTQModel
- Bits: 4, group size: 128, symmetric: true
- Calibration: 256 samples from allenai/c4
- Quantized layers: MLP/FFN only (`gate_proj`, `up_proj`, `down_proj`)
- Kept in BF16: `lm_head`, `embed_tokens`, all attention (DeltaNet + full), MTP, vision encoder
- Size: ~10.9 GB (INT4 text model + BF16 vision encoder)
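The settings above can be sketched with GPTQModel's Python API. This is a minimal illustration, not the exact script used to produce this checkpoint: the calibration list here is a placeholder for the 256 allenai/c4 samples, and restricting quantization to the MLP projections (while keeping attention, embeddings, and the vision encoder in BF16) is done through GPTQModel's per-module controls, which are not shown.

```python
from gptqmodel import GPTQModel, QuantizeConfig

# Matches the config on this card: 4-bit, group size 128, symmetric.
quant_config = QuantizeConfig(bits=4, group_size=128, sym=True)

# Placeholder calibration texts; the real run used 256 samples from allenai/c4.
calibration = [
    "def binary_search(arr, target): ...",
    "The quick brown fox jumps over the lazy dog.",
]

model = GPTQModel.load("Tesslate/OmniCoder-9B", quant_config)
model.quantize(calibration)
model.save("OmniCoder-9B-GPTQ-Int4")
```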
## Serving (vLLM >= 0.18.0)

```bash
vllm serve raydelossantos/OmniCoder-9B-GPTQ-Int4 \
  --dtype float16 \
  --trust-remote-code \
  --enable-prefix-caching \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --enable-auto-tool-choice
```
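Once the server is up, it exposes an OpenAI-compatible endpoint. The sketch below builds a chat-completions request with only the standard library; the host/port assume vLLM's defaults, and the actual send is commented out so the snippet stands alone.

```python
import json
import urllib.request

# Default vLLM endpoint; adjust host/port if you changed them at serve time.
url = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "raydelossantos/OmniCoder-9B-GPTQ-Int4",
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a function that reverses a linked list."},
    ],
    "max_tokens": 256,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # requires the server above to be running
```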
### Important flags

| Flag | Why |
|---|---|
| `--enable-prefix-caching` | Recommended; enables KV cache reuse for repeated system prompts |
| `--dtype float16` | Better throughput on Ampere GPUs (BF16 weights cast to FP16) |
| `--trust-remote-code` | Required for the Qwen3.5 model type |
**Note:** `--enforce-eager` is not required on vLLM >= 0.18.0. The DeltaNet dtype mismatch was fixed in PR #35256; CUDA graphs with piecewise mode work correctly and provide a ~3-4x speedup over eager mode.
## Multi-GPU (Tensor Parallel)

```bash
# 4x RTX 3060 (48GB total): fits 80K context, ~39 t/s warm
vllm serve raydelossantos/OmniCoder-9B-GPTQ-Int4 \
  --tensor-parallel-size 4 \
  --max-model-len 81920 \
  --gpu-memory-utilization 0.93 \
  --dtype float16 \
  --trust-remote-code \
  --enable-prefix-caching \
  --tool-call-parser qwen3_coder \
  --reasoning-parser qwen3 \
  --enable-auto-tool-choice
```
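A back-of-the-envelope check shows why this fits on four 12 GB cards. This is an illustration only, not vLLM's actual allocator: it ignores activation and CUDA-graph overhead and assumes weights shard evenly (in practice the vision encoder may be replicated rather than sharded).

```python
# Rough per-GPU budget for TP=4 on 12 GB cards.
total_weights_gb = 10.9   # INT4 text model + BF16 vision encoder (from this card)
gpu_mem_gb = 12.0         # one RTX 3060
num_gpus = 4
util = 0.93               # --gpu-memory-utilization

per_gpu_weights = total_weights_gb / num_gpus
usable = gpu_mem_gb * util
kv_budget = usable - per_gpu_weights
print(f"per-GPU weights ~{per_gpu_weights:.2f} GB, KV-cache budget ~{kv_budget:.2f} GB")
```

The remaining ~8 GB per GPU for KV cache is what makes the 80K context window viable at `--gpu-memory-utilization 0.93`.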
## Benchmark (4x RTX 3060, TP=4, vLLM 0.18.0)
| Test | Tokens/sec |
|---|---|
| Short (64 tok) | 36 t/s |
| Code gen (256 tok) | 39 t/s |
| Long output (512 tok) | 40 t/s |
| Reasoning (256 tok) | 39 t/s |
## Weight Structure

Weights use the `Qwen3_5ForConditionalGeneration` layout:

- `model.language_model.*`: quantized text model (GPTQ INT4)
- `model.visual.*`: vision encoder (BF16, from the base model)
- `lm_head.*`: language model head (BF16)
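A quick way to see this layout is to group parameter names by their top-level prefix. The names below are toy examples matching the structure described above, not a dump of the real safetensors index.

```python
# Example parameter names following the layout on this card (illustrative only).
names = [
    "model.language_model.layers.0.mlp.gate_proj.qweight",
    "model.language_model.layers.0.mlp.gate_proj.scales",
    "model.visual.blocks.0.attn.qkv.weight",
    "lm_head.weight",
]

def prefix(name: str) -> str:
    """Return the component prefix: 'model.<sub>' or the bare top-level name."""
    parts = name.split(".")
    return ".".join(parts[:2]) if parts[0] == "model" else parts[0]

groups: dict[str, list[str]] = {}
for n in names:
    groups.setdefault(prefix(n), []).append(n)
```

Note that GPTQ-quantized modules store `qweight`/`scales` tensors in place of a plain `weight`, which is how the INT4 text model is distinguishable from the BF16 vision encoder in the checkpoint.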