Local AI · Tool-first · Reproducible

Agents that reason, locally.

Logician is a tool-first local agent framework built for code review, research synthesis, and reproducible workflows — with structured execution, persistent context, repo grounding, and audit-ready traces. Runs entirely on your machine.

🛠️ Tool-driven agents · 📚 Repo indexing · 📝 Structured traces · 🐍 llama.cpp backend · 🔌 Extensible skills

Why Logician?

Most agent frameworks hide what they're doing. Logician makes every tool call, trace, and reasoning step visible and inspectable.

🎯 Structured tool flow
Typed tools and explicit execution paths make agent behavior easy to inspect, log, and debug — no black boxes.
🧠 Persistent context
Semantic memory keeps answers tied to your actual work, session after session.
🔍 Repo grounding
Index repositories, build searchable code graphs, and let the agent reason directly over your real codebase.
🖥️ Local-first design
Runs entirely on your machine with llama.cpp. Swap in alternative backends without changing any workflow code.
📊 Research-ready
Built-in workflows for code review, literature synthesis, and reproducible investigation with audit trails.
🔌 Extensible skills
Plug new tools and domain-specific workflows into the stable agent core without rewriting anything.
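The tool-first idea can be sketched in a few lines. This is a hypothetical illustration, not Logician's actual API — the `ToolRegistry`, `ToolCall`, and `word_count` names are invented here to show the pattern: tools are plain typed callables, and every invocation is appended to a trace that can be logged, replayed, or diffed.

```python
# Hypothetical sketch of typed, inspectable tools (names are illustrative,
# not Logician's real API). Each call is recorded for auditing.
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class ToolCall:
    """One audit-ready record: which tool ran, with what args, returning what."""
    tool: str
    args: dict
    result: Any


@dataclass
class ToolRegistry:
    tools: dict[str, Callable] = field(default_factory=dict)
    trace: list[ToolCall] = field(default_factory=list)

    def register(self, name: str, fn: Callable) -> None:
        self.tools[name] = fn

    def call(self, name: str, **kwargs) -> Any:
        result = self.tools[name](**kwargs)
        # No hidden chain: every step lands in the trace.
        self.trace.append(ToolCall(name, kwargs, result))
        return result


registry = ToolRegistry()
registry.register("word_count", lambda text: len(text.split()))
n = registry.call("word_count", text="local agents that reason")
# registry.trace now holds the full, replayable call history.
```

Because the trace is just data, "replay, diff, and verify" reduces to iterating over a list of records rather than instrumenting a framework after the fact.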

Up in four commands

Clone, ingest a repo, and you're ready. Each runner is a focused entry point — no global state, no hidden config.

logician — terminal
python apps/runners/demo.py
Launch demo · inspect output
python apps/runners/repl_demo.py
Interactive REPL for tool agents
python apps/runners/repo_ingest.py /path/to/repo
Index local repository
python apps/runners/repo_ingest.py https://github.com/org/repo.git
Clone & ingest remote repo

Built for developers

Every design decision prioritises inspectability, local operation, and composability over convenience magic.

01 Tool-first architecture
Every agent capability is a typed, inspectable tool call — not a hidden chain.
02 Audit-ready logs
Every step is recorded. Replay, diff, and verify agent reasoning at any point.
03 Local or cloud
Start with llama.cpp, swap to any API backend with a single config change.
04 Composable skills
Extend the agent core without breaking existing workflows or tools.
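The "single config change" claim follows a familiar pattern, sketched below under assumed names (`Backend`, `LlamaCppBackend`, `ApiBackend`, and the `BACKENDS` table are illustrative, not Logician's real config system): workflow code depends only on a small backend protocol, so the swap point is one config key.

```python
# Illustrative sketch of backend swapping via a protocol (invented names,
# not Logician's actual configuration API).
from typing import Protocol


class Backend(Protocol):
    def complete(self, prompt: str) -> str: ...


class LlamaCppBackend:
    def complete(self, prompt: str) -> str:
        # A real implementation would call local llama.cpp bindings here.
        return f"[local] {prompt}"


class ApiBackend:
    def complete(self, prompt: str) -> str:
        # A real implementation would call a hosted API here.
        return f"[api] {prompt}"


BACKENDS = {"llama.cpp": LlamaCppBackend, "api": ApiBackend}


def make_backend(config: dict) -> Backend:
    # The single swap point: workflow code never names a concrete backend.
    return BACKENDS[config["backend"]]()


reply = make_backend({"backend": "llama.cpp"}).complete("hello")
```

Changing `"llama.cpp"` to `"api"` in the config dict switches backends without touching any workflow code, which is the design choice the list above describes.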