OpenJCK v0.3 is live — open source visual debugger for AI agents → pip install openjck

Debug AI agents.
Fix effortlessly.

Your agent ran 15 steps. Step 9 failed silently. OpenJCK shows you exactly what happened — every LLM call, every tool, every token.

Update

v0.2.2 Released

The dashboard has been completely redesigned, with a new dark-mode interface.

Read the Changelog →
Works with every major AI framework
LangChain
CrewAI
Ollama
OpenAI
LlamaIndex
AutoGen

It's time for a Better Way.

Add 3 decorators. That's it.

No config files. No setup. No SDK to learn. Wrap your entry point, your LLM calls, and your tools. OpenJCK does the rest.

from openjck import trace, trace_llm, trace_tool

@trace(name="my_agent")
def run(): ...

@trace_llm
def call_llm(messages): ...

@trace_tool
def search(query): ...

Run your agent. Same as always.

No wrapper scripts. No extra commands. python my_agent.py — and OpenJCK captures everything silently in the background.

$ python my_agent.py
[OpenJCK] Run complete → COMPLETED
[OpenJCK] 8 steps  |  2840 tokens  |  4.2s
[OpenJCK] → http://localhost:7823/trace/a3f9c1b2
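Putting the three decorators together, a minimal end-to-end run might look like the sketch below. Only the decorator names come from the snippet above; the toy agent logic, the stand-in LLM and tool, and the no-op fallback (used when `openjck` is not installed) are illustrative assumptions, not part of the library.

```python
# A toy agent instrumented with OpenJCK's three decorators.
# If openjck is not installed, fall back to no-op decorators so the
# sketch still runs standalone; the fallback is an assumption, not
# something the library ships.
try:
    from openjck import trace, trace_llm, trace_tool
except ImportError:
    def trace(name=None):
        def deco(fn):
            return fn
        return deco
    def trace_llm(fn):
        return fn
    def trace_tool(fn):
        return fn

@trace_llm
def call_llm(messages):
    # Stand-in for a real model call.
    return "search: python file locking"

@trace_tool
def search(query):
    # Stand-in for a real tool.
    return f"results for {query!r}"

@trace(name="my_agent")
def run():
    plan = call_llm([{"role": "user", "content": "find docs"}])
    return search(plan.split(": ", 1)[1])

if __name__ == "__main__":
    print(run())
```

Running this with OpenJCK installed is what produces the trace summary shown above; without it, the script simply runs untraced.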

See every step visually.

Open the trace viewer and see your entire agent run laid out step by step. Every LLM call, every tool call, every decision.

$ npx openjck
→ Opening OpenJCK at localhost:7823

● ── ● ── ● ── ● ── ● ── ● ── ● ── ✕
1    2    3    4    5    6    7    8

Find the exact step that broke.

The timeline highlights failures in red. Click any step. See the full input, output, error traceback, and token count.

STEP 8 — write_file     [FAILED]    12ms
─────────────────────────────────────
ERROR
PermissionError: cannot write to output.md
File is open in another process
Line 47 in tools.py
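To picture what that step panel holds, a single step can be thought of as a plain record. The field names below are illustrative assumptions for this sketch, not OpenJCK's documented schema.

```python
# Illustrative only: an assumed shape for one step in a trace.
# These field names are NOT OpenJCK's actual schema.
step = {
    "index": 8,
    "name": "write_file",
    "status": "FAILED",
    "duration_ms": 12,
    "input": {"path": "output.md"},
    "output": None,
    "error": "PermissionError: cannot write to output.md",
    "tokens": 0,
}

# A failure is as easy to spot programmatically as it is visually:
if step["status"] == "FAILED":
    print(f"step {step['index']} ({step['name']}) failed: {step['error']}")
```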

Fix with full context.

You know exactly what failed, why it failed, and what the agent was trying to do. Fix the bug with all the context you need.

# Before: output.md was locked by another process
# OpenJCK showed: output.md opened at step 3 and never closed

@trace_tool
def write_file(path: str, content: str):
    with open(path, 'w') as f:  # ← context manager closes the handle after writing
        f.write(content)

Ship faster. Debug less.

Re-run your agent. Watch the timeline go green. Every trace is saved locally — compare runs and ship with confidence.

$ python my_agent.py
[OpenJCK] Run complete → COMPLETED
[OpenJCK] 9 steps  |  2840 tokens  |  3.8s
✓ All steps passed
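Because every trace is stored locally, before/after comparisons can be done with plain Python. The summary dictionaries below simply mirror the CLI output above; their layout is an assumption for illustration, not OpenJCK's on-disk format.

```python
# Two run summaries, mirroring the CLI output shown above.
# The dict layout is an assumption, not OpenJCK's trace file format.
before = {"steps": 8, "tokens": 2840, "seconds": 4.2, "failed": 1}
after  = {"steps": 9, "tokens": 2840, "seconds": 3.8, "failed": 0}

def compare(a: dict, b: dict) -> dict:
    # Per-metric delta between two runs.
    return {key: b[key] - a[key] for key in a}

delta = compare(before, after)
assert delta["failed"] == -1  # the failing step is gone
```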

Make every token count.

from openjck import trace
from my_app import run_agent

@trace(name="production_agent")
def main():
    run_agent()

2 lines of code
To instrument any agent loop instantly.
< 1ms overhead
Invisible during normal execution.
100% local data
Nothing leaves your machine, ever.
6+ frameworks
Works with LangChain, CrewAI, AutoGen out of the box.
70X faster debugging
40% lower token usage
100% clarity
"
I used to spend 30 minutes reading logs. OpenJCK finds the exact failing step in under 10 seconds. It's the Sentry I never had for my agent code.
Arjun MehtaML Engineer
"
The timeline view is genuinely brilliant. I cut my agent's token usage entirely by optimizing prompts I finally had visibility into.
Sofia RenardAI Product Engineer
"
Two decorators and my entire agent run was visualised. No config. No cloud account. This is exactly how developer tools should work.
Marcus ChenIndie Hacker

Stop guessing. Start debugging today.

Get Started Free