# Quick Start

## Prerequisites

- Python 3.10+
- Node.js 18+ (for the viewer)
- An agent to debug (or use the example below)
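If you want to confirm the Python prerequisite before installing, a minimal check looks like this (the helper name is ours, not part of OpenJCK):

```python
import sys


def meets_python_prereq(version=None) -> bool:
    """Return True when the interpreter satisfies the Python 3.10+ prerequisite."""
    if version is None:
        version = sys.version_info[:3]
    return tuple(version) >= (3, 10)


# Check the current interpreter
print("Python OK" if meets_python_prereq() else "Python too old")
```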
## Install

```shell
pip install openjck
```

## Instrument your agent

A full working example using Ollama:
```python
import ollama

from openjck import trace, trace_llm, trace_tool


@trace(name="my_first_agent")
def run_agent(task: str):
    messages = [{"role": "user", "content": task}]
    response = call_llm(messages)
    result = process_result(response.message.content)
    return result


@trace_llm
def call_llm(messages):
    return ollama.chat(model="qwen2.5:7b", messages=messages)


@trace_tool
def process_result(text: str) -> str:
    return text.strip().upper()


if __name__ == "__main__":
    run_agent("What is the capital of France?")
```
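The tool function in the example is plain Python underneath its decorator, so (assuming `trace_tool` wraps functions transparently, as tracing decorators typically do) you can sanity-check its logic in isolation; the undecorated version behaves the same:

```python
def process_result(text: str) -> str:
    # Same logic as the traced tool above: trim whitespace, then uppercase.
    return text.strip().upper()


assert process_result("  Paris  ") == "PARIS"
```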
## What you see in the terminal after running

```
[OpenJCK] Run complete → COMPLETED
[OpenJCK] 3 steps | 180 tokens | 1.2s
[OpenJCK] View trace → http://localhost:7823/trace/a3f9c1b2
```

## Open the viewer

```shell
npx openjck
```

## What the UI shows

The OpenJCK UI displays:
- Timeline: Clickable visualization of each step your agent took
- Step Inspector: Click any step to see detailed information including inputs, outputs, and timing
- Token Counts: Exact token usage per LLM call with cost calculations
- Error Highlighting: Failed steps are highlighted in red with clickable tracebacks
## Next steps

- How It Works - Deep dive into OpenJCK architecture
- LangChain Integration - Using OpenJCK with LangChain
- CLI Reference - All OpenJCK CLI commands