59+

LLM Providers Supported

Deny-by-Default

Policy Engine

<1ms

Gateway Overhead

Real-Time

Anomaly Detection

Igris Sentinel

Runtime MCP Governance

Igris Sentinel is the industry's first SDK-first governance platform for the Model Context Protocol (MCP). It sits between your AI agents and every MCP server they connect to, enforcing RBAC policies with metadata conditions, injecting upstream credentials securely, blocking unauthorized tool calls in real time, and logging every interaction, all through a single SDK function call.

Encrypted Credential Vault
Metadata-Based Policy Conditions
Deny-by-Default Policy Engine
Credential Injection
Token Usage Tracking
User Management
Learn More About Igris Sentinel
igris-sdk.ts
Technical Preview
import { Igris } from "@igris/sdk";

const igris = new Igris({ apiKey: "ig_..." });

// Resolve a governed MCP endpoint for this user via a virtual key
const config = igris.getMcpConfig("vk_github", {
  user: "alice@company.com",
  metadata: { role: "developer" },
});

// Works with any MCP client
const client = new McpClient({
  transport: new StreamableHttp(config.url, {
    headers: config.headers,
  }),
});
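The deny-by-default evaluation behind the snippet above can be sketched as a pure function: a tool call passes only if some rule explicitly matches it, and everything else is refused. This is an illustrative model only; the `Policy` and `ToolCall` shapes and the `evaluate` helper are hypothetical, not the Sentinel API.

```typescript
// Hypothetical policy shape: an allow rule names a tool and may attach
// metadata conditions that the caller must satisfy.
type Policy = {
  tool: string;
  conditions?: Record<string, string>;
};

type ToolCall = {
  tool: string;
  metadata: Record<string, string>;
};

// Deny-by-default: a call is allowed only if some policy explicitly
// matches the tool name AND every metadata condition.
function evaluate(policies: Policy[], call: ToolCall): "allow" | "deny" {
  const matched = policies.some(
    (p) =>
      p.tool === call.tool &&
      Object.entries(p.conditions ?? {}).every(
        ([key, value]) => call.metadata[key] === value,
      ),
  );
  return matched ? "allow" : "deny";
}

const policies: Policy[] = [
  { tool: "github.create_issue", conditions: { role: "developer" } },
];

evaluate(policies, {
  tool: "github.create_issue",
  metadata: { role: "developer" },
}); // → "allow"

evaluate(policies, {
  tool: "github.delete_repo",
  metadata: { role: "developer" },
}); // → "deny": no rule mentions this tool, so the default applies
```

Note the second call: nothing has to say "deny" for the call to be blocked; the absence of a matching rule is enough.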

Igris Guard

LLM Firewall & Conversation Protection

Igris Guard sits transparently between your users and any LLM — OpenAI, Anthropic Claude, Google Gemini, or a self-hosted model. Every user prompt is inspected before it reaches the model; every LLM response is validated before it reaches the user. Apply content policies, prevent PII leakage, block prompt injection attacks, and enforce per-user controls — without switching LLM providers or refactoring your application.

Prompt Inspection
Response Inspection
User-Level RBAC
Token Usage Controls
No-Code Policy Builder
Full Interaction Logs
Learn More About Igris Guard
guard-policy.json
Technical Preview
{
  "name": "Content safety policy",
  "model": "gpt-4o",
  "rules": [
    {
      "type": "prompt",
      "action": "block",
      "conditions": { "contains_pii": true }
    },
    { "type": "response", "action": "redact" }
  ]
}
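The policy above reads as two rules: block prompts that contain PII, and redact PII out of responses. A toy version of those checks follows, assuming a pair of regex detectors (emails and US SSNs only); `inspectPrompt` and `redactResponse` are illustrative names, not the Guard API, and real PII detection goes well beyond two patterns.

```typescript
// Minimal PII patterns for illustration: email addresses and US SSNs.
const PII_PATTERNS = [
  /[\w.+-]+@[\w-]+\.[\w.]+/g, // email
  /\b\d{3}-\d{2}-\d{4}\b/g,   // US SSN
];

function containsPii(text: string): boolean {
  // Re-create each regex without the /g flag so .test() is stateless.
  return PII_PATTERNS.some((p) => new RegExp(p.source).test(text));
}

// "block" rule for prompts: refuse the request before it reaches the model.
function inspectPrompt(prompt: string): { action: "forward" | "block" } {
  return containsPii(prompt) ? { action: "block" } : { action: "forward" };
}

// "redact" rule for responses: mask matches before returning to the user.
function redactResponse(response: string): string {
  return PII_PATTERNS.reduce(
    (text, p) => text.replace(p, "[REDACTED]"),
    response,
  );
}

redactResponse("Reach me at bob@example.com"); // → "Reach me at [REDACTED]"
```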

Individually Powerful. Together Unstoppable.

All three Igris Security products share a common telemetry layer. Sentinel governance events and Guard firewall decisions flow into Igris Lens. Your security posture gets stronger with every product you add.

Igris Lens

CISO Dashboard & Unified Observability

Igris Lens turns raw AI security events into actionable intelligence. It centralizes every policy decision, tool call, prompt, response, and anomaly from Igris Sentinel and Igris Guard into one high-fidelity audit trail. Executives get risk heatmaps. Compliance teams get one-click PDF reports. Security engineers get trace-level debugging.

Unified Log Stream

Unified log stream from all Igris Security products.

Executive Risk Heatmaps

Executive risk heatmaps across departments and AI sub-systems.

Immutable Audit Trails

Immutable, verifiable audit trails for compliance investigations.

Automated PDF Reports

Automated PDF report generation for compliance and security audits.

Real-Time Anomaly Alerting

Real-time anomaly alerting via Slack, email, and webhooks.

SIEM Integration

SIEM integration: export logs via webhooks, REST API, or streaming.

Learn More About Igris Lens
Igris Lens Dashboard
LIVE

Policy Decisions

Real-Time

Anomaly Alerts

Instant

Log Retention

3 Years

Report Export

One-Click PDF

What You See in Lens

Unified event stream from Sentinel + Guard
Risk heatmaps by department and AI sub-system
Alert dispatch via Slack and Discord webhooks
Immutable audit trail for compliance investigations

◆ Platform FAQ

Technical Deep Dive

Common questions about the Igris Security platform architecture.

How does Igris Sentinel work?
Igris Sentinel acts as a termination point for every MCP session. It inspects the incoming request, evaluates the tool call against your active policy set in real time, and forwards the request to the target MCP server with the correct upstream credentials injected. End-to-end security with real-time inspection and <1ms overhead.
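To illustrate the injection step, a gateway along these lines strips the client-facing virtual key from the request and substitutes the real upstream credential before forwarding. The header name and in-memory vault below are hypothetical, not the Sentinel wire format; in production the credentials would live in the encrypted vault, never in client-visible config.

```typescript
// Hypothetical map of virtual keys to real upstream credentials.
const vault: Record<string, string> = {
  vk_github: "ghp_real_upstream_token",
};

// Swap the client-facing virtual key for the real credential before
// forwarding the request headers to the target MCP server.
function injectCredentials(
  headers: Record<string, string>,
): Record<string, string> {
  const { "x-igris-virtual-key": vk, ...rest } = headers;
  const upstream = vault[vk];
  if (!upstream) throw new Error(`unknown virtual key: ${vk}`);
  return { ...rest, authorization: `Bearer ${upstream}` };
}
```

The point of the shape: the agent only ever sees the virtual key, and the upstream server only ever sees the real credential; neither side holds both.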
What is the difference between Igris Sentinel and Igris Guard?
Igris Sentinel sits between your AI agent and MCP servers — it governs what tools your agents can call. Igris Guard sits between your users and your LLM — it governs what users can ask and what the LLM can respond with. Together they cover the full AI interaction surface.
Does Igris Sentinel work with custom or third-party MCP servers?
Yes. Igris Sentinel is fully framework-agnostic. It works with any server that adheres to the Model Context Protocol — whether it is a built-in server from Anthropic, a partner integration, or a custom internal tool your team built. If it speaks MCP, Igris Sentinel can govern it.
Which LLM providers does Igris Guard support?
Igris Guard works as a transparent proxy in front of any LLM via API — OpenAI (GPT-4o, o1), Anthropic Claude, Google Gemini, Meta Llama (self-hosted or via API), Mistral, and custom or fine-tuned models. No provider-specific code changes needed.
How does real-time alerting work?
Igris dispatches real-time alerts via Slack and Discord webhooks. Alerts fire on policy denials, anomaly detections, and session suspensions. Slack alerts use Block Kit formatting, Discord uses rich embeds. Alert dispatch is asynchronous and never blocks the request path. All alerts are also persisted as audit events.
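For reference, a Block Kit payload for a policy-denial alert looks roughly like this. The event fields are illustrative (the Igris-side event schema is not shown here); the `blocks` / `section` / `mrkdwn` structure is Slack's standard message format.

```typescript
type DenialEvent = {
  user: string;
  tool: string;
  policy: string;
};

// Build a Slack Block Kit message body for a policy-denial alert.
function slackAlert(event: DenialEvent) {
  return {
    blocks: [
      {
        type: "section",
        text: {
          type: "mrkdwn",
          text:
            `:no_entry: *Policy denial*\n` +
            `*User:* ${event.user}\n` +
            `*Tool:* \`${event.tool}\`\n` +
            `*Policy:* ${event.policy}`,
        },
      },
    ],
  };
}

// Fire-and-forget POST to the webhook URL, so alerting never sits on
// the request path.
function dispatch(webhookUrl: string, event: DenialEvent): void {
  void fetch(webhookUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(slackAlert(event)),
  });
}
```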
What are virtual keys?
Virtual keys are the credential abstraction layer in Igris. Each virtual key maps a user-facing slug to an upstream provider credential — whether that's an MCP server or one of 59+ LLM providers. Credentials are AES-256 encrypted at rest, rotatable without downtime, and support per-key model allowlists, logging controls, and anomaly detection thresholds.
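Encryption at rest in that style can be done with Node's built-in crypto module. The FAQ specifies AES-256 but not the mode, so AES-256-GCM is an assumption here, chosen because it authenticates the ciphertext as well as encrypting it; the helper names are illustrative.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt an upstream credential with AES-256-GCM. The random IV and
// the auth tag are stored alongside the ciphertext so the value can be
// decrypted and integrity-checked later.
function encryptCredential(plaintext: string, key: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptCredential(
  box: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  key: Buffer,
): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag);
  return Buffer.concat([
    decipher.update(box.ciphertext),
    decipher.final(),
  ]).toString("utf8");
}

const key = randomBytes(32); // 256-bit key
const box = encryptCredential("ghp_real_upstream_token", key);
decryptCredential(box, key); // → "ghp_real_upstream_token"
```

Zero-downtime rotation falls out of this shape: decrypt each stored credential with the old key, re-encrypt with the new one, and swap the records atomically.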

SDK Integration Targets

Integrate governance into any application built with modern frameworks

TypeScript SDK

Ready to Govern Your AI?

Join teams using Igris to enforce runtime policies and maintain full visibility over their AI operations.