AI Agents

Let Claude Code, Cursor, and other AI agents query your logs. Ask about production errors in plain English.

The Agent API lets AI coding assistants (Claude Code, Cursor, Codex) query your LogNorth instance. Your data never leaves your server — the AI makes API calls, LogNorth returns results.

When debugging, you usually:

  1. Open LogNorth in a browser
  2. Search for the error
  3. Copy trace IDs back and forth
  4. Context-switch between code and logs

With the Agent API:

  • Ask in your editor: “What’s breaking in production?”
  • AI queries LogNorth: Fetches recent errors and traces
  • Get direct answers: See errors alongside your code
  • Follow the trace: AI correlates related events automatically

This works because LogNorth has a simple REST API. The AI just needs to know how to call it.
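A single authenticated request is all it takes. The sketch below is illustrative — it assumes the key is sent as a Bearer token and that `LOGNORTH_URL` and `LOGNORTH_API_KEY` are set as described in the setup section; check the API Reference for the exact auth scheme:

```shell
# Hypothetical sketch: list the 10 most recent events.
# Assumes Bearer token auth and the env vars from the setup section.
curl -s "$LOGNORTH_URL/api/v1/agent/events?limit=10" \
  -H "Authorization: Bearer $LOGNORTH_API_KEY"
```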

Once configured, ask your AI assistant:

Error investigation:

  • “What errors are happening in production?”
  • “Show me the last 10 errors”
  • “Find errors containing ‘connection refused’”

Trace analysis:

  • “Find all events for trace abc-123”
  • “What happened during this request?”
  • “Show me the full context for event 456”

Issue tracking:

  • “What are the most common errors?”
  • “When did this error first appear?”
  • “How many times has this error occurred?”

Time-based queries:

  • “What broke in the last hour?”
  • “Show me errors from yesterday”
  • “Compare error rates today vs last week”

In LogNorth, go to Settings and create an Agent API key. This is a read-only key (prefix lgn-agent-) separate from app keys used for ingestion.

Add to your shell profile (~/.zshrc or ~/.bashrc):

```shell
export LOGNORTH_URL="https://logs.yoursite.com"
export LOGNORTH_API_KEY="lgn-agent-..."
```

Install the LogNorth skill using the skills CLI:

```shell
npx skills add karloscodes/lognorth-releases --skill lognorth
```

This works with Claude Code, Cursor, Windsurf, Codex, and 30+ other AI agents.

Open your AI assistant and ask about your production errors:

You: What's breaking in production?
Claude: I'll check LogNorth for recent errors.
Found 3 errors in the last hour:
| Time | Message | Path |
|-------|----------------------------------|----------------|
| 14:32 | Database connection timeout | POST /checkout |
| 14:28 | Redis connection refused | POST /cart |
| 14:15 | Database connection timeout | POST /checkout |
The database timeout is recurring (2 occurrences).
Both errors share trace_id: req-abc-123 — same user session.
Want me to show the full trace?
Behind the scenes:

  1. You ask in English: “What’s breaking?”
  2. AI calls the API: GET /api/v1/agent/events?is_error=true
  3. LogNorth responds: JSON with recent errors
  4. AI formats the answer: Tables, summaries, next steps
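Step 2 above, written out as a plain HTTP call (a sketch — the Bearer auth scheme is an assumption; verify against the API Reference):

```shell
# What the AI runs for “What’s breaking?” — errors only.
# Assumes Bearer token auth and the env vars from the setup section.
curl -s "$LOGNORTH_URL/api/v1/agent/events?is_error=true" \
  -H "Authorization: Bearer $LOGNORTH_API_KEY"
```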

The AI never sees your raw data beyond what you query. It calls your API, gets results, presents them.

  • LogNorth stays on your server: We never see your logs
  • AI queries go to your provider: When you ask Claude, Cursor, etc., your query and results go through their servers
  • Read-only access: The API only returns data, never modifies
  • Standard auth: Uses your existing API keys

For full privacy, use a local model (Ollama, llama.cpp). Your API keys, your provider choice, your tradeoffs.

Any AI coding assistant that can make HTTP requests:

  • Claude Code (Anthropic’s CLI)
  • Cursor (AI code editor)
  • Windsurf (Codeium’s editor)
  • Codex (OpenAI’s CLI)
  • Continue (open source)
  • Aider (terminal-based)
  • Custom agents via the REST API

The skill uses these endpoints:

| Endpoint | Description |
|----------|-------------|
| GET /api/v1/agent/events | List events with filters |
| GET /api/v1/agent/events/:id | Get event detail + related trace |
| GET /api/v1/agent/issues | List grouped errors with counts |

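For example, following up on a specific error from a list might look like this (the event ID 456 is illustrative, and Bearer auth is an assumption):

```shell
# Fetch one event plus its related trace context.
# Assumes Bearer token auth; 456 is a hypothetical event ID.
curl -s "$LOGNORTH_URL/api/v1/agent/events/456" \
  -H "Authorization: Bearer $LOGNORTH_API_KEY"
```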
Filters for /api/v1/agent/events:

| Parameter | Example | Description |
|-----------|---------|-------------|
| is_error | true | Only errors |
| search | timeout | Text search |
| limit | 20 | Max results |
| start_time | 2024-01-01T00:00:00Z | Time range start |
| end_time | 2024-01-02T00:00:00Z | Time range end |
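Filters combine as ordinary query parameters. A sketch that searches for timeout errors within a one-day window (values are illustrative; Bearer auth is assumed):

```shell
# Errors mentioning “timeout” on 2024-01-01, capped at 20 results.
# -G with --data-urlencode appends each pair to the query string.
curl -s -G "$LOGNORTH_URL/api/v1/agent/events" \
  -H "Authorization: Bearer $LOGNORTH_API_KEY" \
  --data-urlencode "is_error=true" \
  --data-urlencode "search=timeout" \
  --data-urlencode "limit=20" \
  --data-urlencode "start_time=2024-01-01T00:00:00Z" \
  --data-urlencode "end_time=2024-01-02T00:00:00Z"
```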

Full API documentation: API Reference