Deploy Self-Hosted LLM Observability with Torrix

Single-container LLM logging and monitoring without Postgres or Redis

Updated: 5/14/2026
Difficulty: Easy
Time: 5 minutes
Use Case: Monitor LLM calls, tokens, costs, and latency in production agent pipelines with minimal infrastructure overhead

About this automation

Torrix is a self-hosted LLM observability platform that eliminates infrastructure friction by running as a single Docker container backed by SQLite. It captures tokens, cost, latency, full prompt/response traces, and reasoning tokens from OpenAI, Anthropic, Gemini, Groq, Mistral, Azure OpenAI, and compatible endpoints. It also includes cost forecasting, budget caps, PII masking, model routing, evals with golden runs, a prompt library with version control, MCP server integration, and OTLP/HTTP ingestion.
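The per-call cost tracking described above reduces to multiplying token counts by per-token prices. A minimal sketch of that arithmetic follows; the model name and the prices are placeholder assumptions for illustration, not Torrix's actual pricing table or any vendor's real rates.

```python
# Minimal sketch of per-call LLM cost estimation, as an observability
# platform like Torrix might compute it. Prices are ASSUMED placeholder
# values (USD per 1M tokens), not real vendor pricing.
PRICES_PER_1M = {
    "example-model": {"input": 3.00, "output": 15.00},  # hypothetical rates
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single LLM call."""
    p = PRICES_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A call with 1,000 input and 500 output tokens at the assumed rates:
print(round(estimate_cost("example-model", 1000, 500), 6))  # → 0.0105
```

Budget caps and cost forecasting build on the same primitive: sum these estimates per project or per time window and compare against a threshold.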

How to implement

1. Download docker-compose.yml from the Torrix GitHub repository.
2. Run 'docker compose up' to start the single-container deployment.
3. Configure the HTTP proxy or integrate the Python/Node SDK for LLM call logging.
4. Access the observability dashboard to monitor tokens, costs, latency, and traces.
5. Set up cost forecasting, budget caps, and PII masking rules as needed.
6. Optional: enable the MCP server for AI assistant log queries and OTLP ingestion.
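Steps 1 and 2 amount to a compose file along these lines. This is a sketch only: the image name, port, and volume path are assumptions, and the authoritative file is the docker-compose.yml shipped in the Torrix GitHub repository.

```yaml
# Hypothetical single-container Torrix deployment.
# Image name, port, and paths are ASSUMED -- use the docker-compose.yml
# from the Torrix GitHub repository for the real values.
services:
  torrix:
    image: torrix/torrix:latest   # assumed image name
    ports:
      - "8080:8080"               # assumed dashboard/proxy port
    volumes:
      - ./data:/data              # SQLite database persisted on the host
```

Because the backing store is SQLite rather than Postgres or Redis, the single bind-mounted volume is the only state to back up.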
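For step 6's OTLP/HTTP ingestion, a stdlib-only sketch of building a logs payload is shown below. The resourceLogs → scopeLogs → logRecords envelope and the /v1/logs path come from the OpenTelemetry OTLP specification; the localhost host/port and the `llm.tokens.total` attribute key are assumptions, not documented Torrix conventions.

```python
import json
import time

def build_otlp_logs_payload(message: str, total_tokens: int) -> dict:
    """Build a minimal OTLP/HTTP JSON logs payload.

    The envelope follows the OpenTelemetry OTLP JSON encoding; the
    llm.tokens.total attribute key is an ASSUMED example, not a
    documented Torrix convention.
    """
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": "my-agent"}}
            ]},
            "scopeLogs": [{
                "scope": {"name": "llm-logger"},
                "logRecords": [{
                    "timeUnixNano": str(time.time_ns()),
                    "body": {"stringValue": message},
                    "attributes": [
                        {"key": "llm.tokens.total",  # assumed attribute key
                         "value": {"intValue": str(total_tokens)}}
                    ],
                }],
            }],
        }]
    }

payload = build_otlp_logs_payload("chat.completion finished", 1500)
print("resourceLogs" in payload)

# POSTing it to a running instance might look like this (host/port
# assumed; requires the container from step 2 to be up):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/v1/logs",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```

Any existing OpenTelemetry exporter configured for OTLP/HTTP JSON should be able to emit this shape without custom code.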