
Quick Start

  • Python 3.10+
  • Node.js 18+ (for the viewer)
  • An agent to debug (or use the example below)
Install the SDK:

pip install openjck

Instrument your agent

Add the OpenJCK decorators to your agent's entry point, LLM call, and tool functions. A full working Ollama example:
from openjck import trace, trace_llm, trace_tool
import ollama

@trace(name="my_first_agent")
def run_agent(task: str):
    messages = [{"role": "user", "content": task}]
    response = call_llm(messages)
    result = process_result(response.message.content)
    return result

@trace_llm
def call_llm(messages):
    return ollama.chat(model="qwen2.5:7b", messages=messages)

@trace_tool
def process_result(text: str) -> str:
    return text.strip().upper()

if __name__ == "__main__":
    run_agent("What is the capital of France?")

What you see in the terminal after running (the three decorated functions appear as the three steps):

[OpenJCK] Run complete → COMPLETED
[OpenJCK] 3 steps | 180 tokens | 1.2s
[OpenJCK] View trace → http://localhost:7823/trace/a3f9c1b2
Launch the viewer:

npx openjck

The OpenJCK UI displays:

  • Timeline: Clickable visualization of each step your agent took
  • Step Inspector: Click any step to inspect its inputs, outputs, and timing
  • Token Counts: Exact token usage per LLM call with cost calculations
  • Error Highlighting: Failed steps are highlighted in red with clickable tracebacks
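
The per-step data the inspector surfaces (inputs, outputs, timing) is the kind of record a tracing decorator captures at call time. As a conceptual sketch only, not OpenJCK's actual implementation, a minimal decorator in the spirit of trace_tool might look like this (the traced helper and STEPS list are illustrative names, not part of the OpenJCK API):

```python
import functools
import time

# Illustrative in-memory step log; a real tracer would ship records
# to a viewer backend instead of appending to a list.
STEPS = []

def traced(fn):
    """Record each call's name, inputs, output, and duration."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        STEPS.append({
            "name": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def process_result(text: str) -> str:
    return text.strip().upper()

process_result("  paris  ")
# STEPS now holds one record with the step's name, inputs, output, and timing.
```

Capturing records in the wrapper rather than inside each function is what lets a tool like this instrument an agent with decorators alone, leaving the agent's own code unchanged.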