59+

LLM Providers Supported

Deny-by-Default

Policy Engine

<1ms

Gateway Overhead

Real-Time

Anomaly Detection

◆ Integrations

Works with Every Major LLM Provider

Igris Security integrates with every major LLM provider. Route requests through a single unified gateway with full policy enforcement.

  • OpenAI

    LLM Provider

  • Anthropic

    LLM Provider

  • Google

    LLM Provider

  • Groq

    LLM Provider

  • Mistral AI

    LLM Provider

  • DeepSeek

    LLM Provider

  • Cohere

    LLM Provider

  • 50+

    & Many More

◆ Platform

Complete AI Security. Three Integrated Layers.

Igris Security is designed so each product works standalone — and becomes dramatically more powerful when used together. Every product shares a common telemetry layer that feeds into Igris Lens.

Igris Sentinel

Runtime Governance Between Agent and MCP

Add the Igris Security SDK to your AI agent application and route every MCP tool call through Igris Sentinel. Define RBAC policies, inject encrypted upstream credentials, block unauthorized calls in real time, and maintain a complete audit trail — all with a single function call.

  • SDK-first: one function call
  • RBAC + metadata policy engine
  • Deny-by-default first-match-wins
  • Encrypted credential vault (AES-256-GCM)
  • Real-time blocking + session kill switch
  • Token usage tracking
  • Complete audit trail
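To make the policy model concrete, here is a toy sketch of deny-by-default, first-match-wins evaluation. The rule shape and field names are illustrative assumptions, not the actual Igris SDK policy format:

```typescript
// Hypothetical rule shape for illustration only -- not the real SDK schema.
type PolicyRule = {
  tool: string;            // tool name, or "*" wildcard
  role: string;            // caller role, or "*" wildcard
  action: "allow" | "deny";
};

// First-match-wins: scan rules top to bottom; the first rule whose
// tool and role patterns both match decides. No match => deny,
// which is what "deny-by-default" means in practice.
function evaluate(rules: PolicyRule[], tool: string, role: string): "allow" | "deny" {
  for (const rule of rules) {
    const toolMatch = rule.tool === "*" || rule.tool === tool;
    const roleMatch = rule.role === "*" || rule.role === role;
    if (toolMatch && roleMatch) return rule.action;
  }
  return "deny"; // deny-by-default: nothing passes without an explicit allow
}

const rules: PolicyRule[] = [
  { tool: "create_issue", role: "developer", action: "allow" },
  { tool: "delete_repo", role: "*", action: "deny" },
];

console.log(evaluate(rules, "create_issue", "developer")); // "allow"
console.log(evaluate(rules, "delete_repo", "admin"));      // "deny"
console.log(evaluate(rules, "unknown_tool", "developer")); // "deny" (no match)
```

Rule order matters under first-match-wins: a broad deny placed above a narrow allow wins, so the most specific rules belong at the top.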

Igris Guard

Real-Time Firewall Between Users and LLMs

Igris Guard sits transparently between your users and any LLM — OpenAI, Anthropic, Gemini, or your own. Every prompt is inspected before reaching the model; every response is validated before reaching the user. Enforce content policies, prevent PII leakage, apply user-level rate limits, and log every interaction.

  • Transparent proxy — no LLM provider changes
  • Prompt inspection: PII, injection, content filtering
  • Response inspection: hallucination flagging, compliance
  • User-level RBAC
  • Token usage + budget enforcement
  • Full interaction logs
  • No-code policy builder
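The inspect-before-forward flow can be illustrated with a deliberately tiny example: redacting email addresses from a prompt before it reaches the model. This is a toy sketch of the concept only; the actual Guard detectors for PII, injection, and content policy are far more sophisticated than a single regex:

```typescript
// Toy prompt inspector: redact email addresses before forwarding.
// Illustrates the inspect-before-forward pattern, not the real detector.
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function inspectPrompt(prompt: string): { allowed: boolean; redacted: string } {
  const redacted = prompt.replace(EMAIL, "[REDACTED_EMAIL]");
  // A real pipeline would also score for injection, toxicity, etc.,
  // and could set allowed=false to block the request outright.
  return { allowed: true, redacted };
}

const result = inspectPrompt("Contact alice@company.com about the outage");
console.log(result.redacted); // "Contact [REDACTED_EMAIL] about the outage"
```

The same pattern runs in reverse on the response path: the model's output is validated before it ever reaches the user.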

Igris Lens

Unified Observability Across Your Entire AI Stack

Igris Lens aggregates all security events, logs, traces, and anomalies from Igris Sentinel and Igris Guard into one real-time dashboard. CISOs get executive risk heatmaps. Compliance teams get one-click audit reports. Engineering teams get trace-level debugging. Everyone gets what they need.

  • Unified log stream
  • Real-time anomaly alerts
  • Executive risk heatmaps
  • CISO-ready compliance reports
  • Immutable audit trail
  • Role-based dashboard access
  • Alert dispatch via Slack and Discord
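As an illustration of alert dispatch, here is a sketch of turning an anomaly event into a Slack incoming-webhook payload. The `Anomaly` fields are assumptions for this example, not the actual Igris Lens event schema:

```typescript
// Hypothetical anomaly event shape -- for illustration only.
type Anomaly = { severity: "low" | "medium" | "high"; message: string; source: string };

// Build a minimal Slack incoming-webhook payload ({ text: ... }).
function toSlackPayload(a: Anomaly): { text: string } {
  return { text: `[${a.severity.toUpperCase()}] ${a.source}: ${a.message}` };
}

const payload = toSlackPayload({
  severity: "high",
  message: "Spike in denied tool calls",
  source: "igris-sentinel",
});
console.log(payload.text); // "[HIGH] igris-sentinel: Spike in denied tool calls"

// Dispatch would be a single POST to the webhook URL, e.g.:
// await fetch(process.env.SLACK_WEBHOOK_URL!, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(payload),
// });
```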

How Igris Security Protects Your AI Stack

From the first connection to a live LLM conversation in production, Igris Security works at every layer.

Connect

Set up connections for your MCP servers and LLM providers. Credentials are encrypted with AES-256 and never exposed to your application code.

Govern

Define your policies — deny-by-default rules, content guard, token limits, rate limiting. Configure via SDK or the web dashboard.

Observe

See everything in Igris Lens — audit trail, anomaly alerts, risk heatmaps, and compliance reports in real time.

◆ Why Igris Security

Why Security Teams Choose Igris Security

Built for teams shipping AI at scale, in environments where a single misconfigured MCP tool or unguarded LLM response can become a critical incident.

Full-Stack AI Coverage

The only platform that covers MCP runtime governance, LLM prompt and response protection, and unified observability — in one product family with a shared telemetry layer.

Designed for Zero-Trust AI

Igris Sentinel enforces deny-by-default on every MCP tool call. Igris Guard blocks every non-compliant prompt and response. Nothing passes without explicit authorization.

SDK-First, <1ms Overhead

Governance shouldn't slow your agents down. The Igris Sentinel and Guard SDKs add less than 1ms of overhead. One function call handles policy enforcement, credential injection, and audit logging.

Compliance-Ready Out of the Box

Igris Security maps every security event to industry compliance requirements. Generate audit-ready PDF reports in one click directly from Igris Lens.

Enterprise-Grade, Self-Hostable

SSO (SAML/OIDC), role-based access control, API key scoping, anomaly detection, and 3-year log retention. Deploy as a managed cloud service or on your own infrastructure.

Connections — One Credential Layer for 59+ Providers

Manage credentials for 59+ LLM providers through a single connection abstraction layer. AES-256 encrypted at rest, rotatable without downtime, with per-key model allowlists and logging controls.
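Encrypt-at-rest with AES-256-GCM can be sketched with Node's built-in `crypto` module. This toy example generates a throwaway key inline; real deployments keep keys in a KMS and rotate them, and this is not the actual Igris vault implementation:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Throwaway 256-bit key, for illustration only -- production keys
// live in a KMS and are rotatable without downtime.
const key = randomBytes(32);

function encrypt(plaintext: string) {
  const iv = randomBytes(12); // 96-bit nonce, the GCM standard
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // GCM produces an auth tag; tampering with iv/ciphertext/tag fails decryption.
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt(box: ReturnType<typeof encrypt>): string {
  const d = createDecipheriv("aes-256-gcm", key, box.iv);
  d.setAuthTag(box.tag);
  return Buffer.concat([d.update(box.ciphertext), d.final()]).toString("utf8");
}

const box = encrypt("ghp_example_token");
console.log(decrypt(box)); // "ghp_example_token"
```

Because the credential is decrypted only inside the gateway at request time, application code never sees the plaintext token.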

◆ Quick Start

Get Started in 3 Steps

From zero to governed MCP calls in under 5 minutes.

1

Install the SDK

One command to add governance

npm install @igris/sdk

import { Igris } from "@igris/sdk";

const igris = new Igris({
  apiKey: process.env.IGRIS_API_KEY,
});
2

Create a Connection

Store upstream credentials securely

// In the Igris dashboard
{
  "slug": "github-prod",
  "name": "GitHub Production",
  "upstreamUrl": "https://mcp.github.com",
  "credential": "ghp_..." // encrypted at rest
}
3

Make Governed Tool Calls

Every call checked, logged, protected

const result = await igris.mcp.callTool(
  "github-prod",
  "create_issue",
  { repo: "acme/igris", title: "..." },
  { user: "alice@company.com", metadata: { role: "developer" } },
);
// → Policy enforced
// → Credentials injected
// → Audit trail logged

◆ FAQ

Frequently Asked Questions

What is Igris Security?

Igris Security is a three-product AI governance platform. Igris Sentinel is the MCP governance proxy — it sits between your AI agents and MCP servers, enforcing policies, injecting credentials, and logging every tool call. Igris Guard is the LLM firewall — it inspects prompts and responses in real time, blocking jailbreaks, data leakage, and policy violations. Igris Lens is the observability layer — a unified dashboard giving security and compliance teams full visibility into every AI interaction across your organization.

What is MCP and why does it need security?

MCP (Model Context Protocol) is an open standard that lets AI agents interact with external tools and services — reading files, calling APIs, executing code. Because MCP gives AI agents real-world capabilities, a misconfigured or compromised MCP connection can lead to data exfiltration, unauthorized actions, or compliance violations. Igris Sentinel governs every MCP interaction with deny-by-default policies, credential isolation, and a full audit trail.

Who is Igris Security built for?

Igris Security is built for two groups. Developers and AI engineers who build agentic applications use Igris Sentinel and Guard to add governance to MCP tool calls with a single function call via the TypeScript SDK. Security teams and enterprises that run AI products at scale use Igris Lens to monitor, audit, and report on every AI interaction — whether it was built in-house or adopted from a third party.

How does Igris Security integrate with existing systems?

Integration is lightweight by design. Install the TypeScript SDK with npm install @igris/sdk and wrap your existing MCP calls with one function. Alert webhooks connect to Slack and Discord out of the box. The full REST API lets you automate policy management and export audit data into any downstream system. No infrastructure changes, no agents to deploy.

What compliance frameworks does Igris support?

Igris generates compliance-ready audit reports from your runtime data. Every proxied request is logged with full context — tool name, model, user, cost, policy action — making it straightforward to produce evidence for compliance reviews. Reports are exportable as PDF.

Ready to Govern Your AI?

Join teams using Igris to enforce runtime policies and maintain full visibility over their AI operations.