TMA1

Know what your agent is doing and what it costs

Local observability for your AI agents. Every LLM call — tokens, cost, latency — silently recorded on your machine.

AGENT ONBOARDING

Read https://tma1.ai/SKILL.md and follow the instructions to install and configure TMA1 for your AI agent

Manual install
terminal
$ curl -fsSL https://tma1.ai/install.sh | bash

Inspect every call

Click any LLM call to see tokens, cost, latency, and — where supported — the full prompt and response.

Track the cost

Token usage and cost by model, over time. Spot the expensive calls.

Runs on your machine

All data in ~/.tma1/. No cloud, no accounts, no external services.

Features

Observability without the overhead

Cost, latency, security, and errors — in one local dashboard.

01

Cost breakdown

Token counts and estimated cost per model, tracked over time. See which models and conversations cost the most, with burn-rate projections and cache efficiency analysis.

02

Latency tracking

p50 and p95 latency percentiles for every model. Tool performance tables show call counts, success rates, and average response times.

03

Security monitoring

Detects shell commands, external fetches, and potential prompt injections in GenAI traces. Tracks webhook errors, stuck sessions, and channel health for OpenClaw.

04

Conversation replay

For agents that emit conversation content (like Claude Code), click a trace to read the actual dialogue — handy for debugging or auditing what your agent said.

05

Anomaly detection

Flags calls with unusual token counts, high error rates, or slow responses. Catch runaway loops before they pile up cost.

06

Full-text search

Search across all recorded events and traces. Find a specific model, trace an error back to a call, or filter by tool name.

How it works

Setup

Paste the onboarding instruction into your agent and it handles the rest. Or do it yourself:

[1]

Install

One command. Downloads everything into ~/.tma1/. No Docker, no system packages.

[2]

Configure your agent

Point the OTel endpoint to http://localhost:14318/v1/otlp. Works with Claude Code, Codex, OpenClaw, or any OTel SDK.

[3]

Open the dashboard

Browse to localhost:14318. Traces show up seconds after your agent’s next LLM call.

Security

Security & Privacy

Agent conversations often contain sensitive context. TMA1 keeps everything on your machine.

How data is stored

TMA1 stores traces and conversation logs on your local disk in ~/.tma1/data/. Nothing is uploaded to remote services, and you can inspect or delete the data at any time.

No network calls

After first launch (which downloads GreptimeDB once), TMA1 makes no further network calls. No analytics, no crash reports, no update checks.

Fully open source

TMA1 is Apache-2.0. Read the code, audit the build, and run it air-gapped.

Single binary

tma1-server runs as one local process and manages its embedded storage engine. No Docker, no system packages, no runtime dependencies.

Your data, your disk

Delete ~/.tma1/ and everything is gone. No orphaned cloud state, no remote accounts to close.

FAQ

Common questions

Which agents are supported?

Any agent that emits OpenTelemetry data. Claude Code sends metrics and logs. Codex sends logs and traces, and can also emit native metrics when otel.metrics_exporter is configured. OpenClaw sends traces and metrics. Any OTel SDK app with GenAI semantic conventions works out of the box. The dashboard auto-detects the data source and shows the right view.

Can I query the data with SQL?

Yes. Run mysql -h 127.0.0.1 -P 14002 to connect to the local SQL endpoint, or open localhost:14000/dashboard/ for the built-in query UI. Raw traces are in opentelemetry_traces, logs in opentelemetry_logs, and native metric tables are auto-created from incoming OTel metrics.

How much disk space does it use?

It depends on traffic and conversation length. A typical setup uses a few hundred MB per month.

Quick start

Try it now

Paste this into your agent. It reads the skill file and handles the rest.

AGENT ONBOARDING

Read https://tma1.ai/SKILL.md and follow the instructions to install and configure TMA1 for your AI agent

Or install manually
terminal
# Install TMA1
$ curl -fsSL https://tma1.ai/install.sh | bash
 
# Start TMA1
$ tma1-server
 
# Configure your agent (example: OpenClaw)
$ openclaw config set diagnostics.otel.endpoint http://localhost:14318/v1/otlp
 
# Or any OTel SDK
$ export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:14318/v1/otlp
 
# Open dashboard
$ open http://localhost:14318