TMA1
Local observability for your AI agents. Every LLM call — tokens, cost, latency — silently recorded on your machine.
AGENT ONBOARDING
Read https://tma1.ai/SKILL.md and follow the instructions to install and configure TMA1 for your AI agent
Click any LLM call to see tokens, cost, latency, and — where supported — the full prompt and response.
Token usage and cost by model, over time. Spot the expensive calls.
All data in ~/.tma1/. No cloud, no accounts, no external services.
Features
Cost, latency, security, and errors — in one local dashboard.
Token counts and estimated cost per model, tracked over time. See which models and conversations cost the most, with burn-rate projections and cache efficiency analysis.
p50 and p95 latency percentiles for every model. Tool performance tables show call counts, success rates, and average response times.
Detects shell commands, external fetches, and potential prompt injections in GenAI traces. Tracks webhook errors, stuck sessions, and channel health for OpenClaw.
For agents that emit conversation content (like Claude Code), click a trace to read the actual dialogue — handy for debugging or auditing what your agent said.
Flags calls with unusual token counts, high error rates, or slow responses. Catch runaway loops before the cost piles up.
Search across all recorded events and traces. Find a specific model, trace an error back to a call, or filter by tool name.
How it works
Paste the onboarding instruction into your agent and it handles the rest. Or do it yourself:
One command. Downloads everything into ~/.tma1/. No Docker, no system packages.
Point the OTel endpoint to http://localhost:14318/v1/otlp. Works with Claude Code, Codex, OpenClaw, or any OTel SDK.
Browse to localhost:14318. Traces show up seconds after your agent’s next LLM call.
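For SDK-based agents, the standard OpenTelemetry environment variables are usually the simplest way to do the endpoint step. A minimal sketch: the endpoint URL is from the step above; the protocol setting and the service name are assumptions (check what your agent's OTel integration expects):

```shell
# Point an OTel-instrumented agent at the local TMA1 endpoint.
# These are the standard OpenTelemetry SDK environment variables;
# the endpoint path comes from the configuration step above.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:14318/v1/otlp"
# Assumed: OTLP over HTTP/protobuf (verify against your SDK's defaults).
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
# Optional: label this agent's traces in the dashboard.
export OTEL_SERVICE_NAME="my-agent"
```

Agents with their own config files (Claude Code, Codex, OpenClaw) may read these settings from their config instead of the environment; the endpoint value is the same either way.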
Security
Agent conversations often contain sensitive context. TMA1 keeps everything on your machine.
TMA1 stores traces and conversation logs on your local disk in ~/.tma1/data/. Nothing is uploaded to remote services, and you can inspect or delete the data at any time.
After first launch (which downloads GreptimeDB once), TMA1 makes no further network calls. No analytics, no crash reports, no update checks.
TMA1 is Apache-2.0. Read the code, audit the build, and run it air-gapped.
tma1-server runs as one local process and manages its embedded storage engine. No Docker, no system packages, no runtime dependencies.
Delete ~/.tma1/ and everything is gone. No orphaned cloud state, no remote accounts to close.
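Because all state lives under a single directory, the cleanup is one command:

```shell
# Remove TMA1 entirely: binaries, the embedded database, and all
# recorded traces live under this one directory, so this is the
# whole uninstall.
rm -rf ~/.tma1
```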
FAQ
Any agent that emits OpenTelemetry data. Claude Code sends metrics and logs. Codex sends logs and traces, and can also emit native metrics when otel.metrics_exporter is configured. OpenClaw sends traces and metrics. Any OTel SDK app with GenAI semantic conventions works out of the box. The dashboard auto-detects the data source and shows the right view.
Yes. Run mysql -h 127.0.0.1 -P 14002 to connect to the local SQL endpoint, or open localhost:14000/dashboard/ for the built-in query UI. Raw traces are in opentelemetry_traces, logs in opentelemetry_logs, and native metric tables are auto-created from incoming OTel metrics.
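As a sketch of what a direct query looks like: the table name is from the answer above, but the invocation needs a running TMA1 server, so it is left commented here. Use SHOW TABLES and DESC to explore the rest of the schema.

```shell
# Count recorded spans directly in the embedded store.
QUERY="SELECT COUNT(*) FROM opentelemetry_traces"

# Requires a running TMA1 server; uncomment to execute:
# mysql -h 127.0.0.1 -P 14002 -e "$QUERY"
echo "$QUERY"
```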
It depends on traffic and conversation length. A typical setup uses a few hundred MB per month.
Quick start
Paste this into your agent. It reads the skill file and handles the rest.
AGENT ONBOARDING
Read https://tma1.ai/SKILL.md and follow the instructions to install and configure TMA1 for your AI agent