# LangChain

## LangChain Integration

### Auto-patch (Recommended)

The easiest way to trace LangChain components is with the auto-patch function:
```python
from openjck import patch_langchain

# Patch LangChain before importing/using it
patch_langchain()

# Now all LangChain components are automatically traced
from langchain.agents import initialize_agent, Tool
from langchain.llms import Ollama

llm = Ollama(model="qwen2.5:7b")
tools = [
    Tool(
        name="Search",
        func=lambda q: "Paris is the capital of France.",
        description="Useful for answering factual questions"
    )
]

agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
result = agent.run("What is the capital of France?")
```

### Manual Decorator Fallback
If you prefer more control or can't use the patcher, you can manually decorate LangChain components:
```python
from openjck import trace, trace_llm, trace_tool
from langchain.chains import LLMChain
from langchain.llms import Ollama
from langchain.prompts import PromptTemplate

llm = Ollama(model="qwen2.5:7b")

# Trace the LLM calls
@trace_llm
def call_llm(prompt):
    return llm(prompt)

# Trace tool usage
@trace_tool(name="Calculator")
def calculate(expression):
    return eval(expression)  # In practice, use a safe evaluator

# Trace the overall chain
@trace(name="math_chain")
def math_chain(question):
    template = "You are a math assistant. Question: {question}"
    prompt = PromptTemplate(template=template, input_variables=["question"])
    # LLMChain expects an LLM object, not a plain function
    chain = LLMChain(llm=llm, prompt=prompt)
    return chain.run(question)
```

### Full Working Example
```python
from openjck import trace, trace_tool, patch_langchain
from langchain.agents import initialize_agent, AgentType
from langchain.llms import Ollama
from langchain.tools import Tool

# Auto-patch LangChain components
patch_langchain()

# Define tools with tracing
@trace_tool(name="Search")
def search_tool(query):
    # Simulate a search tool
    if "capital" in query.lower():
        return "Paris is the capital of France."
    return "I couldn't find information about that."

@trace_tool(name="Calculator")
def calculator_tool(expression):
    try:
        return str(eval(expression))  # Simplified for example
    except Exception:
        return "Error in calculation"

# Initialize LLM
llm = Ollama(model="qwen2.5:7b")

# Create agent
tools = [
    Tool(
        name="Search",
        func=search_tool,
        description="Useful for answering factual questions"
    ),
    Tool(
        name="Calculator",
        func=calculator_tool,
        description="Useful for mathematical calculations"
    )
]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Trace the overall agent execution
@trace(name="langchain_agent")
def run_agent(question):
    return agent.run(question)

# Run the agent
if __name__ == "__main__":
    result = run_agent("What is the capital of France?")
    print(result)
```

### Supported LangChain Versions
OpenJCK is tested against and compatible with LangChain 0.1.0 and above. The auto-patch function works with:
- LLMs (Ollama, OpenAI, etc.)
- Chains (LLMChain, SequentialChain, etc.)
- Agents (Zero-shot, Structured-chat, etc.)
- Tools (custom and built-in)
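Since auto-patching an unsupported LangChain release can fail silently, one defensive pattern (a suggestion, not an OpenJCK requirement) is to gate the patch on the installed version. The helper below compares dotted version strings numerically; it deliberately ignores pre-release suffixes:

```python
def meets_minimum(installed: str, minimum: str = "0.1.0") -> bool:
    """True if a dotted version string is >= the minimum (no pre-release handling)."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

# Usage sketch (assuming langchain and openjck are installed):
# import langchain
# from openjck import patch_langchain
# if meets_minimum(langchain.__version__):
#     patch_langchain()
```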
### How It Works

When you call `patch_langchain()`:
- OpenJCK wraps LangChain's core classes with tracing decorators
- LLM calls are automatically wrapped with `@trace_llm`
- Tool executions are wrapped with `@trace_tool`
- Agent runs are wrapped with `@trace`
- All trace data flows to the standard OpenJCK system and appears in the UI
### Manual Instrumentation Tips

For fine-grained tracing within LangChain components:
- Use `@trace_llm` on custom LLM wrappers
- Use `@trace_tool` on custom tools
- Use `@trace` on agent initialization and execution functions
- Combine with `EventCapture` for tracing specific internal steps
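`EventCapture`'s exact API is not shown in this section, so the snippet below uses a hypothetical stand-in context manager purely to illustrate the pattern of capturing a named internal step; check the OpenJCK API reference for the real signature:

```python
from contextlib import contextmanager

events = []  # stand-in sink for captured events

@contextmanager
def event_capture(name):
    """Hypothetical stand-in for OpenJCK's EventCapture: records start/end of a step."""
    events.append(("start", name))
    try:
        yield
    finally:
        events.append(("end", name))

def parse_step(text):
    # Capture one specific internal step inside a larger traced function
    with event_capture("parse_agent_output"):
        return text.strip().lower()

result = parse_step("  Final Answer: Paris  ")
```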