AWS Marketplace · Enterprise deployment — listing in progress. Contact sales · View pricing
The AI Security Platform
Intertrace deploys purpose-built AI models between your applications and LLM providers. Real-time classification, runtime verification, and behavioral intelligence — running in your cloud, at machine speed. The runtime layer of enterprise AI security.
30-MINUTE RISK ASSESSMENT · DEPLOYED IN YOUR CLOUD · SOC 2 TYPE II IN PROGRESS
# One line. Full protection.

client = OpenAI(base_url="https://gateway.intertrace.ai/v1")
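Why changing one URL is enough: OpenAI-compatible clients build every request as base URL + route, so swapping the base reroutes all traffic through the gateway while paths and payloads stay identical. A minimal sketch (the `endpoint` helper is illustrative, not part of any SDK; the gateway URL is the one from the snippet above):

```python
# Sketch: "change one URL" in practice. Only the base differs between
# a direct provider call and a gateway call; the route is unchanged.
from urllib.parse import urljoin

DIRECT  = "https://api.openai.com/v1/"
GATEWAY = "https://gateway.intertrace.ai/v1/"  # from the snippet above

def endpoint(base: str, route: str = "chat/completions") -> str:
    """Full URL a client would hit for a given base. Illustrative helper."""
    return urljoin(base, route)

print(endpoint(DIRECT))   # https://api.openai.com/v1/chat/completions
print(endpoint(GATEWAY))  # https://gateway.intertrace.ai/v1/chat/completions
```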
═══════════════════════════════════════════════════════════════════
Type a prompt or pick a scenario and watch Intertrace classify it in real time. Responses use the Intertrace Unified Detection Engine (IUDE), the same Simulation Lab stack as the full page.
The fight has moved from packets and file hashes to language and behavior. The answer to "Why AI?" is simple: the threat and the system are both non-deterministic. You need a control plane that can reason at that layer, and it has to sit where the traffic already flows.
The Platform
REAL-TIME CLASSIFICATION
Every API call flows through Intertrace's AI classifiers for injection, jailbreaks, PII, and anomalies. Optional tool-call policy, versioned policy packs, and rich gateway events (tooling summaries, rail timings) feed the same control plane your security team uses in the UI.
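As an intuition for the redact-before-forward step, here is a deliberately simplified, rule-based stand-in. The production path uses AI classifiers rather than regexes; `redact_pii` and its two patterns exist only to show the data flow:

```python
# Toy stand-in for redact-before-forward. The real gateway uses AI
# classifiers; regexes here only illustrate the flow, and the pattern
# set is far from complete.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace detected entities with typed placeholders before forwarding."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_pii("Reach me at jane.doe@example.com, SSN 123-45-6789."))
# Reach me at [EMAIL], SSN [SSN].
```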
AGENT BEHAVIOR MONITORING
Runtime watches behavior end-to-end: intent alignment, guardrails, and tool usage. Correlate traffic with agent run IDs, inspect spans and gateway hops, and fail closed when verification demands it.
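"Fail closed when verification demands it" can be pictured as a guard around every tool call. A minimal sketch, assuming a hypothetical `verify()` verdict (the real verifier scores intent alignment at runtime; nothing here is the product API):

```python
# Sketch of fail-closed enforcement: deny the action unless runtime
# verification affirms it. verify() is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class Verdict:
    aligned: bool
    reason: str

def verify(agent_id: str, tool: str, args: dict) -> Verdict:
    # Stand-in check: is the tool inside the agent's declared toolset?
    allowed = {"support-bot": {"search_kb", "create_ticket"}}
    ok = tool in allowed.get(agent_id, set())
    return Verdict(ok, "in declared toolset" if ok else "tool outside declared intent")

def guarded_call(agent_id, tool, args, execute):
    v = verify(agent_id, tool, args)
    if not v.aligned:
        # Fail closed: refuse to forward an unverified action.
        raise PermissionError(f"blocked: {v.reason}")
    return execute(**args)

print(guarded_call("support-bot", "search_kb", {"q": "refund"}, lambda q: f"results:{q}"))
# results:refund
```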
ANOMALY BASELINES PER AGENT
Per-agent profiles from real telemetry—tools, data touchpoints, output patterns. Deviation feeds findings and incidents alongside classifier output, Red Team regressions, and assessment artifacts.
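The baseline idea in miniature: learn the normal range of a behavioral signal from telemetry, then flag runs that deviate sharply. The z-score test and threshold below are illustrative only; the product builds richer per-agent profiles across tools, data touchpoints, and output patterns:

```python
# Toy per-agent baseline over one signal: tool calls per run.
# Mean/stdev plus a z-score cutoff stand in for the real profiling.
from statistics import mean, stdev

def build_baseline(samples: list) -> tuple:
    return mean(samples), stdev(samples)

def is_anomalous(value: float, baseline: tuple, z: float = 3.0) -> bool:
    mu, sigma = baseline
    return abs(value - mu) > z * sigma

history = [4, 5, 4, 6, 5, 4, 5, 5, 4, 6]   # tool calls per run, from telemetry
profile = build_baseline(history)

print(is_anomalous(5, profile))    # False - within the learned band
print(is_anomalous(40, profile))   # True  - feeds a finding/incident
```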
Intertrace's AI classifiers understand the meaning of prompts — detecting novel attacks, semantic jailbreaks, and adversarial patterns no rule engine can catch.
┌─ SCAN RESULTS ────────────────┐
│                               │
│ ▶ Semantic Injection    [██]  │
│ ▶ PII Detection         [░░]  │
│ ▶ Jailbreak Analysis    [██]  │
│ ▶ Policy Compliance     [██]  │
│ ▶ Credential Harvest    [██]  │
│ ▶ Identity Spoofing     [░░]  │
│                               │
│ CONFIDENCE: ████████░░ 82%    │
│ LATENCY: 38ms                 │
└───────────────────────────────┘
┌──────────────────────────────────────────┐
│ $ intertrace gateway --status            │
│                                          │
│ ╔══════════════════════════════════╗     │
│ ║ GATEWAY  ████████░░  82%         ║     │
│ ║ RUNTIME  ██████████ 100%         ║     │
│ ║ INTEL    ███████░░░  74%         ║     │
│ ╚══════════════════════════════════╝     │
│                                          │
│ > 12,847 requests classified             │
│ > 23 threats blocked                     │
│ > 4 agents monitored                     │
│ > avg latency: 38ms                      │
│                                          │
│ STATUS: ALL SYSTEMS OPERATIONAL ●        │
└──────────────────────────────────────────┘
Runtime verification is where you prove what the agent is doing—not just what it once scored on a test: how it reasons, whether it stays inside intent, and what happens the instant it doesn't. That's chain-of-thought, alignment, and enforcement in continuous traffic—not a one-off red-team slide.
Change one URL. Intertrace's AI starts protecting every call immediately.
$ intertrace init --provider openai
+ Gateway endpoint configured
+ Neural classifiers: ACTIVE
+ PII redaction: ENABLED
+ Tool policy / packs: LOADED
Ready. All traffic protected.
$ intertrace posture --summary
┌─ POSTURE & ASSURANCE ────────┐
│ Overall: B+ (82/100)         │
│ Red Team: ████████░░ 81%     │
│ Last assessment: COMPLETE    │
│ Open findings: 23            │
└──────────────────────────────┘
4 critical · 7 high · runs correlated
$ intertrace report --format pdf
┌─ COMPLIANCE REPORT ──────────┐
│ Period: Last 30 days         │
│ Requests analyzed: 12,847    │
│ Assessments: 2 completed     │
│ Red Team exports: attached   │
└──────────────────────────────┘
→ report-2026-Q2.pdf generated
Classify traffic, enforce tool policy, verify runtime behavior, run Red Team and assessments, and ship evidence your team can stand behind.
Prompt injection defense
Novel attacks surfaced on first contact.
PII detection & redaction
Broad entity coverage; redaction before forwarding.
Tool execution policy
Org and per-asset rules; merged policy packs.
Intent alignment
Surface drift when behavior leaves declared intent.
Reasoning verification
Inspect chain-of-thought, not only final text.
Behavioral baselines
Per-agent profiles; flag meaningful change.
Risk scoring
Posture signals mapped to common frameworks.
Red Team scenarios
Curated probes, presets, trend vs prior run.
Probe assessments
Queued profiles against your gateway; summaries and artifacts.
Managed agents
Intel, posture, monitoring, incident workflows.
Reports & exports
PDF and structured exports for audit.
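To make "merged policy packs" concrete, here is one plausible merge rule as a sketch. The semantics shown (per-asset rules override the org pack; unknown tools are denied) are assumed for illustration; the actual pack format and precedence are product-defined:

```python
# Sketch of tool-policy merging: org-wide pack plus per-asset overrides.
# Precedence here is assumed, not the product's documented behavior.
ORG_PACK    = {"web_search": "allow", "run_sql": "deny", "send_email": "allow"}
ASSET_RULES = {"send_email": "deny", "create_ticket": "allow"}

def merged_decision(tool: str) -> str:
    """Per-asset rule wins if present; unknown tools are denied (fail closed)."""
    if tool in ASSET_RULES:
        return ASSET_RULES[tool]
    return ORG_PACK.get(tool, "deny")

print(merged_decision("web_search"))    # allow (org pack)
print(merged_decision("send_email"))    # deny  (asset override)
print(merged_decision("shell_exec"))    # deny  (unknown -> fail closed)
```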
Change one URL. Intertrace deploys in your environment and starts classifying traffic immediately.
$ intertrace connect --mcp
+ Connected to gateway.intertrace.ai
+ MCP server: api.intertrace.ai/mcp
+ Compatible: Claude, Cursor, Codex
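For clients that read an MCP server list from a JSON config (for example, a `mcpServers` map), a hypothetical entry might look like the following. The exact keys your client expects may differ; the endpoint is the one from the CLI output above:

```json
{
  "mcpServers": {
    "intertrace": {
      "url": "https://api.intertrace.ai/mcp"
    }
  }
}
```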
What it is. The agent attack graph is a live map of your AI system: nodes (agents, tools, models, and data) and edges (how control and data moved). The attack path is the highlighted route that shows how abuse propagates from first touch to impact—one coherent chain instead of a pile of disconnected log lines.
What it helps with. Faster triage, clearer blast radius, and a defensible story for security, legal, and leadership—what connected to what, in order, with the framework tags your auditors and SOC expect. The Simulation Lab uses the same graph-on-top, story-below layout with multiple mock scenarios and full readouts.
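Stripped to essentials, the attack graph is nodes plus directed edges of control and data flow, and the highlighted path is a search from first touch to impact. A toy sketch (node names mirror the sample scenario on this page; the real graph is built from fused telemetry, not hand-written edges):

```python
# Toy attack-path extraction: BFS from the entry node to the impacted
# asset yields the shortest hop chain, i.e. the highlighted red path.
from collections import deque

EDGES = {
    "poisoned_pdf":  ["rag_index"],
    "rag_index":     ["support_agent", "billing_agent"],
    "support_agent": ["customers_db"],
    "billing_agent": ["customers_db"],
}

def attack_path(entry: str, impact: str) -> list:
    """Shortest hop chain from first touch to impact, or [] if none."""
    queue = deque([[entry]])
    seen = {entry}
    while queue:
        path = queue.popleft()
        if path[-1] == impact:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(" -> ".join(attack_path("poisoned_pdf", "customers_db")))
# poisoned_pdf -> rag_index -> support_agent -> customers_db
```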
Sample attack path
Poisoned PDF → RAG → support & billing agents → 1.2M PII records (Customers DB)
Bad content entered the RAG context, then steered support and billing agents toward the customer database. The red path is that chain in one view—hops a wall of log lines usually won’t connect.
What it helps with: brief IR, legal, and leadership on what actually ran, in order, without rebuilding the story from tickets and raw telemetry.
Open the Simulation Lab for four mock scenarios, full risk readouts, and outcome detail.
Entities in the run: agents, tools, models, data stores.
Flow between them—the attack path is the highlighted red chain.
OWASP / MITRE-style tags on the risky surfaces.
Intertrace fuses gateway telemetry, agent behavior, and data touch so you get one defensible view—not tickets plus raw logs. The canvas above is the real product graph component, not a screenshot.
Intertrace was founded by an engineer who has secured systems at the highest levels of government and enterprise.
Founder, CEO & Principal Engineer
“A semantic jailbreak walked straight through our regex. The gateway classifier caught it on first contact: same intent, totally different words. We wanted AI sitting in front of the model for exactly that reason, not one more static ruleset.”
Sarah Chen
Head of AI Security, TechFlow
“We rolled org policy packs into per-asset tool rules. A call that used to squeak through on an old allow list finally got stopped at the gateway, and you can see the policy call right on the event. Feels like security actually matches how we ship agents.”
Marcus Johnson
CTO, DataSecure
“Intent alignment flagged our support bot nosing into data it shouldn't touch. The user-facing answer still looked totally fine. Best part is we could line it up with agent runs and gateway hops in one screen instead of chasing three separate logs.”
Elena Rodriguez
Director of ML Platform, NovaSoft
“We don't ship without looking at chain of thought. Intertrace showed the model reasoning its way toward pulling credentials before it ever printed a polite, harmless reply. We couldn't have wired that kind of runtime check in house.”
James Park
Staff Engineer, Meridian AI
“Those per-agent baselines turned a gut feeling into something real. When tools drifted from what we learned off production traffic, we had a ticket we could hand to engineering. No more mystery risk score nobody owns.”
Aisha Patel
Lead AI Security, FinGuard Technologies
“Red Team presets and scheduled probe assessments are in our release checklist now. After every deploy we rerun the same profiles. Summaries and artifacts sit next to gateway and runtime stuff, so nothing lives in a spreadsheet nobody opens.”
Ryan O'Brien
CISO, Atlas Health
“We aimed traffic at the gateway URL and hooked up MCP for our internal tools. The Simulation Lab meant we could break things on purpose before prod. And yeah, we benchmarked the latency ourselves. It really is in that sub-50ms ballpark.”
Lisa Nakamura
Engineering Lead, AI Labs
“Auditors wanted proof from real traffic: how we handle PII, what tool policy decided, run IDs, the whole story. The exports and PDFs out of Intertrace gave our board and compliance something they could sign off on without us living in Word for a quarter.”
Daniel Foster
VP of Compliance, Apex Financial
╔═══════════════════════════════════════════════════════════╗
║                                                           ║
║                  THE AI WATCHING YOUR AI                  ║
║                                                           ║
╚═══════════════════════════════════════════════════════════╝

Enterprise AI security, deployed in your environment.