CipherModel-1.5B
Your IDE's new best friend. The model behind CipherCode, the AI coding assistant that learns your style, remembers your projects, and writes code in your voice.
By Lila AI LLC · Closed beta v0.1
What CipherCode Delivers
CipherCode isn't another generic completion plugin. It's a complete coding companion that lives natively inside VS Code and adapts to you.
Cipher Persona: Your Style, Learned
The first time you open a workspace, CipherCode silently scans your code and detects:
- Naming conventions (camelCase / snake_case / PascalCase)
- Function style (arrow vs named declarations)
- Async style (async/await vs .then)
- Comment placement and verbosity
- Indent size, semicolon preference, type-annotation density
- Your most-used libraries and imports
From that moment forward, every suggestion is generated to feel like you wrote it. Nothing leaves your machine: Persona lives entirely in VS Code's globalState.
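The style-detection step can be sketched as a frequency count over identifiers. This is an illustrative sketch, not the extension's actual implementation; the function names are hypothetical:

```python
import re
from collections import Counter

def classify_identifier(name: str) -> str:
    """Classify a single identifier's naming convention (illustrative)."""
    if "_" in name and name == name.lower():
        return "snake_case"
    if re.fullmatch(r"[A-Z][a-zA-Z0-9]*", name) and any(c.islower() for c in name):
        return "PascalCase"
    if re.fullmatch(r"[a-z][a-zA-Z0-9]*", name) and any(c.isupper() for c in name):
        return "camelCase"
    return "other"

def detect_naming_convention(source: str) -> str:
    """Majority-vote the dominant convention across a snippet's identifiers."""
    idents = re.findall(r"\b[A-Za-z_][A-Za-z0-9_]*\b", source)
    counts = Counter(classify_identifier(i) for i in idents)
    counts.pop("other", None)  # keywords and ambiguous names don't vote
    return counts.most_common(1)[0][0] if counts else "unknown"
```

The same voting pattern extends to the other signals in the list above (indent size, semicolon frequency, import counts), with each scan result written once per workspace.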
Project Memory: Continuity That Actually Helps
CipherCode remembers your project across sessions:
| What's tracked | Where |
|---|---|
| Project summary (auto-detected from package.json / README) | .vscode/cipher-memory.json |
| Project type (node / python / other) | local |
| Top 10 most-edited files | local |
| Architectural decisions you've made | local |
| Last 20 chat messages | local |
| Recurring patterns in your code | local |
This context is injected into every prompt, so when you come back tomorrow, the model already knows what you're building.
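Context injection might look like the sketch below. The field names mirror the table above, but the exact schema of cipher-memory.json is an assumption, not the documented format:

```python
# Hypothetical shape of .vscode/cipher-memory.json, mirroring the table above.
memory = {
    "summary": "Express REST API for invoice processing",
    "projectType": "node",
    "topFiles": ["src/routes/invoices.js", "src/db.js"],
    "decisions": ["Use Postgres over SQLite", "JWT auth"],
    "recentMessages": [{"role": "user", "content": "add pagination"}],
}

def build_context_block(mem: dict, max_messages: int = 20) -> str:
    """Render stored project memory into a system-prompt preamble."""
    lines = [
        f"Project: {mem['summary']} ({mem['projectType']})",
        "Key files: " + ", ".join(mem["topFiles"]),
        "Decisions: " + "; ".join(mem["decisions"]),
    ]
    # Keep only the most recent messages, matching the "last 20" cap above.
    for msg in mem["recentMessages"][-max_messages:]:
        lines.append(f"[{msg['role']}] {msg['content']}")
    return "\n".join(lines)
```

A preamble like this is cheap to prepend to every request, which is how a 1.5 B model can appear to "remember" a project without any server-side state.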
Smart Commands
Right-click anywhere in your editor:
- Explain Code: clear summary of what's happening, even without a selection
- Refactor Code: clean up while preserving your style
- Fix Bug: find and patch issues, style-matched
- Add Comments: comment in your voice
- Document This File: language-aware doc comments (TSDoc / JSDoc / Google Python / Javadoc / XMLDoc / Doxygen / godoc / rustdoc / PHPDoc / YARD)
- Generate README from Project: full README from your code structure
Plus an inline chat sidebar with persistent history, code-block copy buttons, "Insert at cursor" actions, and a stop button that actually stops.
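Picking the right doc-comment dialect per file can be sketched as a simple extension lookup. The mapping below is inferred from the styles listed above; the extension's actual dispatch may differ:

```python
import os

# File extension -> documentation style, inferred from the supported
# styles listed above (the real mapping is an assumption).
DOC_STYLES = {
    ".ts": "TSDoc", ".tsx": "TSDoc",
    ".js": "JSDoc", ".jsx": "JSDoc",
    ".py": "Google Python",
    ".java": "Javadoc",
    ".cs": "XMLDoc",
    ".c": "Doxygen", ".cpp": "Doxygen", ".h": "Doxygen",
    ".go": "godoc",
    ".rs": "rustdoc",
    ".php": "PHPDoc",
    ".rb": "YARD",
}

def doc_style_for(path: str) -> str:
    """Pick the doc-comment dialect for a file, defaulting to plain comments."""
    return DOC_STYLES.get(os.path.splitext(path)[1].lower(), "plain")
```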
Privacy by Architecture
- Code stays on your machine: only the snippet you act on hits inference
- Persona never leaves your laptop
- Project memory lives in your workspace, not a Lila AI server
- Self-hostable on your own GCP if you want full ownership
- No telemetry, no accounts, no subscription
Powered By
Built on Qwen2.5-Coder-1.5B-Instruct, Alibaba's state-of-the-art open code model, quantized to Q4_K_M for efficient CPU inference and packaged for deployment via llama.cpp.
The intelligence in CipherCode comes from layering Persona detection, Project Memory, and carefully designed prompt templates on top of a strong base. The CipherCode VS Code extension orchestrates all of it; this repo hosts the weights it serves.
A LoRA fine-tune is on the roadmap for v0.2, trained on real-world IDE workflow patterns collected during the closed beta.
Specifications
| Spec | Value |
|---|---|
| Architecture | Qwen2.5-Coder transformer |
| Parameters | 1.5 B |
| Context window | 32 K (production runs at 4 K for efficiency) |
| Quantization | Q4_K_M |
| File size | 1.07 GB |
| License | Apache 2.0, free for commercial use |
| Strong languages | Python, JavaScript, TypeScript, Java, Go, Rust, C/C++ |
Quick Start
Easy path: install the VS Code extension
If Lila AI sent you the closed-beta .vsix:
```shell
code --install-extension ciphercode-0.1.0.vsix
```
Open VS Code; the welcome walkthrough opens automatically. Start typing. No setup, no token, no GCP.
Hands-on path: run the model locally
```shell
# Pull the GGUF
hf download guhantech/CipherModel-1.5B \
  CipherModel-1.5B-Q4_K_M.gguf --local-dir .

# Serve with llama-server
llama-server \
  -m CipherModel-1.5B-Q4_K_M.gguf \
  --host 0.0.0.0 --port 8080 \
  --ctx-size 4096 -np 5

# Make a request
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "cipher-model",
    "messages": [{"role": "user", "content": "write a python fizzbuzz"}],
    "max_tokens": 256
  }'
```
Python (llama-cpp-python)
```python
from llama_cpp import Llama

llm = Llama(model_path="CipherModel-1.5B-Q4_K_M.gguf", n_ctx=4096)
out = llm("def fizzbuzz(n):", max_tokens=256)
print(out["choices"][0]["text"])
```
Roadmap
| Version | Status | What's in it |
|---|---|---|
| v0.1 | Live | Closed beta. Cipher Persona + Project Memory + 11 commands + chat sidebar. |
| v0.2 | Planned | LoRA fine-tune on collected IDE workflows. Better instruction-following. |
| v0.3 | Planned | Multi-file context awareness. Whole-project doc generation. |
| v1.0 | Planned | Public Marketplace launch. Optional hosted Pro tier for zero-setup. |
Citation
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
Trademark
CipherCode and Cipher Persona are trademarks of Lila AI LLC. All rights reserved.
The model weights are released under Apache 2.0: free to use, modify, and redistribute. Trademarks restrict only how you may name and brand derivative works; the underlying weights remain unrestricted.
© 2026 Lila AI LLC · Built for developers who don't want their AI to sound like Stack Overflow.