AWS Marketplace·Enterprise deployment — listing in progress.Contact sales·View pricing

INTERTRACE — RUNTIME AI SECURITY • GATEWAY PROTECTION • RUNTIME VERIFICATION • BEHAVIORAL INTELLIGENCE • PROMPT INJECTION DEFENSE • PII REDACTION • SUB-50MS CLASSIFICATION • COMPLIANCE REPORTING • MANAGED AGENTS • OWASP LLM TOP 10 • INTERTRACE — RUNTIME AI SECURITY • GATEWAY PROTECTION • RUNTIME VERIFICATION • BEHAVIORAL INTELLIGENCE • PROMPT INJECTION DEFENSE • PII REDACTION • SUB-50MS CLASSIFICATION • COMPLIANCE REPORTING • MANAGED AGENTS • OWASP LLM TOP 10 • INTERTRACE — RUNTIME AI SECURITY • GATEWAY PROTECTION • RUNTIME VERIFICATION • BEHAVIORAL INTELLIGENCE • PROMPT INJECTION DEFENSE • PII REDACTION • SUB-50MS CLASSIFICATION • COMPLIANCE REPORTING • MANAGED AGENTS • OWASP LLM TOP 10 •

The AI Security Platform

Every prompt.
Every agent.
Secured.

Intertrace deploys purpose-built AI models between your applications and LLM providers. Real-time classification, runtime verification, and behavioral intelligence — running in your cloud, at machine speed. The runtime layer of enterprise AI security.

30-MINUTE RISK ASSESSMENT · DEPLOYED IN YOUR CLOUD · SOC 2 TYPE II IN PROGRESS

main.py
# One line. Full protection.
client = OpenAI(
    base_url="https://gateway.intertrace.ai/v1"
)
═══════════════════════════════════════════════════════════════════
「 LIVE DEMO 」

Try it. Right now.

Type any prompt. Watch Intertrace classify it in real-time.

intertrace — simulation labLIVE
Intertrace3ms

Type a prompt or pick a scenario. Responses use Intertrace Unified Detection Engine (IUDE) — same Simulation Lab stack as the full page.

PASS
Metrics
Evaluated1
Passed1
Flagged0
Blocked0
「 WHY AI PROTECTING AI 」

Rules can't protect what
rules can't understand.

The fight moved from packets and file hashes to language and behavior. "Why AI?" is simple: the threat and the system are both non-deterministic. You need a control plane that can reason at that layer—and sit where the traffic already flows.

Attacks are language, not CVEs
LLM abuse doesn't ship as a versioned vulnerability—it's new phrasing, new tool chains, and new social games every week. You can't build a static list of "known bad" strings and expect next month's jailbreaks to look like last month's. The defense has to be in the same class of capability as the model.
Semantic threats, not signature lists
Control has to sit in the request path
The privileged moment is the API call: prompt in, model work, text or tools out. If your security is mostly offline review or a dashboard nobody watches in real time, you're not controlling the blast radius—you're filing incidents after users already saw a bad answer.
Decisions at gateway latency, not post hoc
Policy is scope, not a keyword blocklist
Enterprise policy is who the agent is for, what data it may touch, and which tools it may use—not fifty thousand disallowed substrings. Enforcing that story takes reading meaning, role, and tool intent. That's the gap rules and regexes can't close on their own.
Intent-shaped control, not infinite if-statements

The Platform

Three layers.
One integration.

0138ms

AI Security Gateway

REAL-TIME CLASSIFICATION

Every API call flows through Intertrace's AI classifiers for injection, jailbreaks, PII, and anomalies. Optional tool-call policy, versioned policy packs, and rich gateway events (tooling summaries, rail timings) feed the same control plane your security team uses in the UI.

PROMPT_INJECTIONBLOCKED
PII_DETECTEDREDACTED
POLICY_CHECKPASSED
0212ms

Runtime Verification

AGENT BEHAVIOR MONITORING

Runtime watches behavior end-to-end: intent alignment, guardrails, and tool usage. Correlate traffic with agent run IDs, inspect spans and gateway hops, and fail closed when verification demands it.

INTENT_ALIGNMENTVERIFIED
COT_INSPECTIONFLAGGED
GUARDRAIL_HITBLOCKED
036ms

Behavioral Intelligence

ANOMALY BASELINES PER AGENT

Per-agent profiles from real telemetry—tools, data touchpoints, output patterns. Deviation feeds findings and incidents alongside classifier output, Red Team regressions, and assessment artifacts.

SUPPORT_AGENTNORMAL
DATA_AGENTDRIFT
SALES_AGENTBLOCKED
「 AI-POWERED DETECTION 」

This isn't regex.
It's intelligence.

Intertrace's AI classifiers understand the meaning of prompts — detecting novel attacks, semantic jailbreaks, and adversarial patterns no rule engine can catch.


  ┌─ SCAN RESULTS ───────────────┐
  │                               │
  │  ▶ Semantic Injection   [██] │
  │  ▶ PII Detection        [░░] │
  │  ▶ Jailbreak Analysis   [██] │
  │  ▶ Policy Compliance    [██] │
  │  ▶ Credential Harvest   [██] │
  │  ▶ Identity Spoofing    [░░] │
  │                               │
  │  CONFIDENCE: ████████░░  82%  │
  │  LATENCY:    38ms             │
  └───────────────────────────────┘
┌──────────────────────────────────────────┐
│  $ intertrace gateway --status           │
│                                          │
│  ╔══════════════════════════════════╗    │
│  ║  GATEWAY      ████████░░  82%   ║    │
│  ║  RUNTIME      ██████████  100%  ║    │
│  ║  INTEL        ███████░░░  74%   ║    │
│  ╚══════════════════════════════════╝    │
│                                          │
│  > 12,847 requests classified            │
│  > 23 threats blocked                    │
│  > 4 agents monitored                    │
│  > avg latency: 38ms                     │
│                                          │
│  STATUS: ALL SYSTEMS OPERATIONAL    ●    │
└──────────────────────────────────────────┘
「 RUNTIME VERIFICATION 」

Your AI passed every test.
Then it went rogue in production.

Runtime verification is where you prove what the agent is doing—not just what it once scored on a test: how it reasons, whether it stays inside intent, and what happens the instant it doesn't. That's chain-of-thought, alignment, and enforcement in continuous traffic—not a one-off red-team slide.

Chain-of-thought verification
A safe-looking answer can follow dangerous intermediate reasoning. Intertrace inspects the chain—not only the string you show users. If the model is steering toward exfiltration, tool misuse, or policy bypass in its reasoning, that signal is raised before a polished response ships downstream.
Dangerous thinking flagged before it becomes action
Intent alignment monitoring
You define what the agent is for. Intertrace scores live tools, data access, and outputs against that purpose—per baseline from real traffic. When behavior drifts from what you signed off on, you get a defensible, run-correlated event instead of a vague "feels off" in a log.
Drift from declared intent, tied to runs
Autonomous guardrail enforcement
When verification fails or policy trips, the next hop can't wait in a triage queue. Intertrace can block, quarantine, or clamp access at machine speed—leaving a structured record (who, what, when, verdict) your SOC and compliance workflows can use without reconstructing the story by hand.
Intervention + evidence in one path
「 HOW IT WORKS 」

One integration. Three AI-powered layers.

Change one URL. Intertrace's AI starts protecting every call immediately.

01
AI Security Gateway
Real-time AI-powered threat detection
Every API call flows through Intertrace's AI classifiers. Purpose-built models scan for prompt injection, jailbreaks, PII exposure, and behavioral anomalies — catching threats regex can't see. Layer in tool execution policy, optional sidecars, and structured findings on each hop.
Neural prompt injection & jailbreak classifiers
PII detection & redaction (NER model)
Tool-call policy + policy pack merge (org & asset)
Gateway events with tooling summaries & run correlation
$ intertrace init --provider openai

  + Gateway endpoint configured
  + Neural classifiers: ACTIVE
  + PII redaction: ENABLED
  + Tool policy / packs: LOADED

  Ready. All traffic protected.
02
Observability & posture
Findings, runs, and proactive testing
Telemetry becomes evidence: structured findings, incidents, posture scoring, and threat intel. Run curated Red Team scenarios (with presets and export), queue allowlisted probe assessments against your gateway, and compare detection trends run-over-run.
Assurance hub & optional sidebar entry (Settings)
Red Team scanner + local run history & JSON export
Assessment jobs with artifacts & summaries
Agent run timeline + SIEM-style hop exports
$ intertrace posture --summary

  ┌─ POSTURE & ASSURANCE ────────┐
  │ Overall:  B+ (82/100)        │
  │ Red Team:  ████████░░  81%    │
  │ Last assessment: COMPLETE    │
  │ Open findings: 23             │
  └──────────────────────────────┘

  4 critical · 7 high · runs correlated
03
Compliance & audit trail
Reports and defensible exports
Every detection, classification, and verdict is logged for review. Generate framework-aligned PDF reports and pull JSON exports for agent runs or assessment outputs when auditors or SOCs need primary evidence.
OWASP LLM Top 10 & MITRE ATLAS mapping
NIST AI RMF-oriented evidence
PDF reports from dashboard
Programmatic exports for runs & assessments
$ intertrace report --format pdf

  ┌─ COMPLIANCE REPORT ──────────┐
  │ Period: Last 30 days         │
  │ Requests analyzed: 12,847    │
  │ Assessments: 2 completed     │
  │ Red Team exports: attached   │
  └──────────────────────────────┘

  → report-2026-Q2.pdf generated
「 CAPABILITIES 」

Gateway through assurance.

Classify traffic, enforce tool policy, verify runtime behavior, run Red Team and assessments, and ship evidence your team can stand behind.

Gateway & policy

Prompt injection defense

Novel attacks surfaced on first contact.

PII detection & redaction

Broad entity coverage; redact before forward.

Tool execution policy

Org and per-asset rules; merged policy packs.

Runtime verification

Intent alignment

Surface drift when behavior leaves declared intent.

Reasoning verification

Inspect chain-of-thought, not only final text.

Behavioral intelligence

Behavioral baselines

Per-agent profiles; flag meaningful change.

Risk scoring

Posture signals mapped to common frameworks.

Assurance & evidence

Red Team scenarios

Curated probes, presets, trend vs prior run.

Probe assessments

Queued profiles against your gateway; summaries and artifacts.

Managed agents

Intel, posture, monitoring, incident workflows.

Reports & exports

PDF and structured exports for audit.

「 INTEGRATION 」

One integration. Full protection.

Change one URL. Intertrace deploys in your environment and starts classifying traffic immediately.

MCP Config
$ intertrace connect --mcp

  + Connected to gateway.intertrace.ai
  + MCP server: api.intertrace.ai/mcp
  + Compatible: Claude, Cursor, Codex
OpenAIAnthropicGoogle GeminiAWS BedrockLangChainAzure OpenAI
「 AGENT ATTACK GRAPH 」

Follow the attack path—not a wall of logs.

What it is. The agent attack graph is a live map of your AI system: nodes (agents, tools, models, and data) and edges (how control and data moved). The attack path is the highlighted route that shows how abuse propagates from first touch to impact—one coherent chain instead of a pile of disconnected log lines.

What it helps with. Faster triage, clearer blast radius, and a defensible story for security, legal, and leadership—what connected to what, in order, with the framework tags your auditors and SOC expect. The Simulation Lab uses the same graph-on-top, story-below layout with multiple mock scenarios and full readouts.

intertrace — attack graph (mock inc_001)LIVE

Poisoned PDF → RAG → support & billing agents → 1.2M PII (Customers DB)

React Flow mini map
ATTACK PATH DETECTED · 14:02:11
External AI surface Identity Resource Findingattack path

Sample attack path

Poisoned PDF → RAG → agents → sensitive customer data

Bad content entered the RAG context, then steered support and billing agents toward the customer database. The red path is that chain in one view—hops a wall of log lines usually won’t connect.

What it helps with: Helps you brief IR, legal, and leadership on what actually ran, in order—without rebuilding the story from tickets and raw telemetry.

Open the Simulation Lab for four mock scenarios, full risk readouts, and outcome detail.

  • Nodes

    Entities in the run: agents, tools, models, data stores.

  • Edges

    Flow between them—the attack path is the highlighted red chain.

  • Findings

    OWASP / MITRE-style tags on the risky surfaces.

Intertrace fuses gateway telemetry, agent behavior, and data touch so you get one defensible view—not tickets plus raw logs. The canvas above is the real product graph component, not a screenshot.

「 LEADERSHIP 」

Built by operators,
not observers.

Intertrace is founded by an engineer who has secured systems at the highest levels of government and enterprise.

SO

Samuel Oyan

Founder, CEO & Principal Engineer

NetScoutGeneral DynamicsRaytheonNASA
「 TRUSTED BY AI TEAMS 」

See what teams building with AI
say about Intertrace.

A semantic jailbreak walked straight through our regex. The gateway classifier caught it on first contact: same intent, totally different words. We wanted AI sitting in front of the model for exactly that reason, not one more static ruleset.

SC

Sarah Chen

Head of AI Security, TechFlow

We rolled org policy packs into per-asset tool rules. A call that used to squeak through on an old allow list finally got stopped at the gateway, and you can see the policy call right on the event. Feels like security actually matches how we ship agents.

MJ

Marcus Johnson

CTO, DataSecure

Intent alignment flagged our support bot nosing into data it shouldn't touch. The user-facing answer still looked totally fine. Best part is we could line it up with agent runs and gateway hops in one screen instead of chasing three separate logs.

ER

Elena Rodriguez

Director of ML Platform, NovaSoft

We don't ship without looking at chain of thought. Intertrace showed the model reasoning its way toward pulling credentials before it ever printed a polite, harmless reply. We couldn't have wired that kind of runtime check in house.

JP

James Park

Staff Engineer, Meridian AI

Those per-agent baselines turned a gut feeling into something real. When tools drifted from what we learned off production traffic, we had a ticket we could hand to engineering. No more mystery risk score nobody owns.

AP

Aisha Patel

Lead AI Security, FinGuard Technologies

Red Team presets and scheduled probe assessments are in our release checklist now. After every deploy we rerun the same profiles. Summaries and artifacts sit next to gateway and runtime stuff, so nothing lives in a spreadsheet nobody opens.

RO

Ryan O'Brien

CISO, Atlas Health

We aimed traffic at the gateway URL and hooked up MCP for our internal tools. The Simulation Lab meant we could break things on purpose before prod. And yeah, we benchmarked the latency ourselves. It really is in that sub-50ms ballpark.

LN

Lisa Nakamura

Engineering Lead, AI Labs

Auditors wanted proof from real traffic: how we handle PII, what tool policy decided, run IDs, the whole story. The exports and PDFs out of Intertrace gave our board and compliance something they could sign off on without us living in Word for a quarter.

DF

Daniel Foster

VP of Compliance, Apex Financial

╔═══════════════════════════════════════════════════════════╗
║                                                           ║
║              THE AI WATCHING YOUR AI                      ║
║                                                           ║
╚═══════════════════════════════════════════════════════════╝
Intertrace

Your AI is only as secure
as what watches it.

Enterprise AI security, deployed in your environment.