Overview

Intertrace is a control plane for observable AI operations. It evaluates telemetry you send—events, tool calls, and outcomes—not your proprietary model weights or hidden chain-of-thought.

What Intertrace does

Intertrace helps security and platform teams verify that agents behave within policy, detect risky patterns, and investigate incidents using structured evidence from your runtime. We are not a replacement for your agents, gateways, or model providers.

You keep your stack. Intertrace ingests observable signals (traces, spans, gateway events) and applies deterministic policy checks, scoring, and baseline comparison where enough data exists.
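A deterministic policy check over an observable event can be sketched as follows. This is a minimal illustration of the idea, not the Intertrace API: the event shape, policy fields, and the `check_event` function are assumptions made for the example.

```python
# Illustrative sketch of a deterministic policy check applied to one
# ingested gateway event. All field names here are hypothetical.

def check_event(event: dict, policy: dict) -> list[str]:
    """Return a list of human-readable policy violations for one event."""
    violations = []
    tool = event.get("tool")
    # Deterministic allow-list check: no model judgment involved.
    if tool is not None and tool not in policy["allowed_tools"]:
        violations.append(f"tool '{tool}' is not in the allowed list")
    # Deterministic budget check against the recorded span duration.
    if event.get("duration_ms", 0) > policy["max_duration_ms"]:
        violations.append("call exceeded the configured duration budget")
    return violations

policy = {"allowed_tools": {"search", "summarize"}, "max_duration_ms": 5000}
event = {"tool": "shell_exec", "duration_ms": 120}
print(check_event(event, policy))
```

Because the checks are pure functions of logged data, the same trace always yields the same verdict, which is what makes the evidence reproducible during an investigation.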

The product UI adds an Assurance workflow (Red Team scenarios, scheduled probe assessments, run correlation, exports) on top of gateway classification. See Dashboard & assurance.

The Intertrace lockup appears in the docs header and at the top of the docs sidebar (linked to the documentation home), and it switches automatically between the light- and dark-background variants to match your theme. Marketing pages and the support shell use the same asset in the top bar on light headers. Footers group Product, Developers, and Company links, with Privacy, Terms, and Security on the bottom row.

Honest limits

Behavioral baselines need volume and time (typically 7-, 30-, or 90-day windows) before deviation alerts are trustworthy. Until an integration sends traces, dashboards show empty states with links to onboarding and connector docs.
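The "needs volume" constraint can be made concrete with a small sketch: a deviation alert is suppressed entirely until the baseline window holds enough samples, and only then is the new value compared against the window's statistics. The sample minimum, z-score threshold, and function name are illustrative assumptions, not product defaults.

```python
# Sketch: suppress deviation alerts until the baseline window has volume.
# MIN_SAMPLES and the z-score threshold are assumed values for illustration.
from statistics import mean, stdev

MIN_SAMPLES = 30

def deviation_alert(window: list[float], value: float, z_threshold: float = 3.0):
    if len(window) < MIN_SAMPLES:
        return None  # not enough history yet: show an empty state, not an alert
    mu, sigma = mean(window), stdev(window)
    if sigma == 0:
        # Perfectly flat baseline: any change at all is a deviation.
        return value != mu
    return abs(value - mu) / sigma > z_threshold
```

The three return shapes map to the product behavior described above: `None` (insufficient data), `False` (within baseline), `True` (deviation worth surfacing).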

We use observable reasoning verification—structured justifications tied to logged steps—not access to hidden internal reasoning.
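One way to picture this: a justification is only as good as the logged step it cites, so verification reduces to checking that every cited step id actually appears in the trace. The record shapes and the `unsupported_justifications` helper are hypothetical, chosen only to illustrate the principle.

```python
# Sketch of observable reasoning verification: a justification must cite a
# step that was actually logged. Field names are illustrative assumptions.

def unsupported_justifications(justifications: list[dict], trace: list[dict]) -> list[str]:
    """Return ids of justifications that cite a step absent from the trace."""
    logged = {step["step_id"] for step in trace}
    return [j["id"] for j in justifications if j["cites_step"] not in logged]

trace = [{"step_id": "s1"}, {"step_id": "s2"}]
justifications = [
    {"id": "j1", "cites_step": "s1"},  # grounded in a logged step
    {"id": "j2", "cites_step": "s9"},  # cites a step that never ran
]
print(unsupported_justifications(justifications, trace))
```

Nothing here inspects hidden internal reasoning: the check operates only on the structured records the runtime emitted.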