
◆ Igris Guard

Protect Every Conversation Between Your Users and Your LLM

Igris Guard is the LLM firewall from Igris Security. It sits transparently between your users and any LLM — OpenAI, Anthropic Claude, Google Gemini, or your own model. Every prompt is inspected before it reaches the model; every response is validated before it reaches the user. No provider changes. No performance trade-offs. Just complete control over your LLM layer.

  • Transparent Proxy (Architecture)
  • Prompt Inspection (Inbound)
  • Response Validation (Outbound)
  • PII Protection (Privacy)
  • RBAC Controls (Access)

◆ Capabilities

Built for LLM Security

Eleven core capabilities that give your security team complete control over every conversation between your users and your LLM

Prompt Injection Attack Prevention

Detect and block prompt injection attacks before they reach your LLM — including jailbreak attempts, instruction override attacks, and adversarial prompts designed to manipulate model behavior.

PII Detection & Redaction

Automatically detect and redact Personally Identifiable Information in user prompts before they are sent to the LLM. Protect user data privacy and meet regulatory requirements.
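
To make the redaction flow concrete, here is a minimal before/after sketch. The placeholder format is an assumption for illustration, not Igris Guard's documented output.

// Hypothetical illustration of inbound PII redaction.
// What the user sends:
const userPrompt =
  "Email my invoice to jane.doe@example.com, card 4111 1111 1111 1111";

// What the LLM receives after detected entities are redacted
// (placeholder format is an assumption, not documented output):
const forwardedPrompt =
  "Email my invoice to [EMAIL_REDACTED], card [CARD_REDACTED]";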

Content Policy Enforcement

Define what topics, content types, and request patterns are allowed or blocked for each user role or team. Block off-topic queries, competitor mentions, or any custom content category — using the no-code policy builder.

Response Inspection & Redaction

Validate every LLM response before it reaches the user. Detect and redact sensitive data in model outputs — leaked PII, internal configurations, or policy-violating content the LLM should not have produced.

User-Level RBAC Controls

Restrict which LLM models, features, and topics individual users or teams can access — based on role, team, subscription tier, or any custom metadata. Different users, different rules — managed centrally.
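
As a sketch, such a rule could be expressed as data like the following. The field names are hypothetical; the real schema is managed in the no-code policy builder.

// Hypothetical RBAC rule; field names are illustrative, not the real schema.
const supportAgentPolicy = {
  role: "support-agent",
  allowedModels: ["gpt-4o-mini"],            // restrict to a cheaper model
  blockedTopics: ["pricing", "competitors"], // topic-level restrictions
  match: { metadata: { plan: "standard" } }, // match on any custom metadata
};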

Token Usage & Budget Controls

Track token consumption per user, team, and department. Set hard limits to prevent runaway usage, enforce budget policies, and generate token usage reports for cost attribution.

Policy Creation — No-Code Builder

Create and manage content policies, access rules, and guardrails from the Igris Security dashboard. No YAML, no DSL — a visual policy editor that your compliance team can use without engineering support.

Full Interaction Logs

Every conversation is logged with the user prompt, LLM response, user identity, model used, token count, latency, and every policy decision made. Searchable, filterable, and exportable from Igris Lens.
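
The fields listed above map naturally onto a structured log record. A hypothetical entry might look like this; the shape is illustrative, not the actual Igris Lens export format.

// Illustrative Igris Lens log entry; the real export format may differ.
const logEntry = {
  userId: "u_8f3k2",                     // user identity
  model: "gpt-4o",                       // model used
  prompt: "Summarize this contract ...", // user prompt
  response: "Here is a summary ...",     // LLM response
  tokens: { input: 412, output: 187 },   // token count
  latencyMs: 1043,                       // latency
  policyDecisions: [                     // every policy decision made
    { policy: "pii-redaction", action: "redacted", entity: "EMAIL" },
    { policy: "topic-filter", action: "allowed" },
  ],
};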

59+ LLM Providers

Unified API gateway abstracting OpenAI, Anthropic, Google, Groq, Mistral, DeepSeek, Cohere, and 50+ more behind a single OpenAI-compatible interface.
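
Because every provider sits behind the same OpenAI-compatible interface, switching providers is a one-line change. This sketch assumes the virtual-key URL pattern shown in step 3 below; the "my-anthropic" slug is hypothetical.

import OpenAI from "openai";

// Same client code for any provider; only the virtual key slug changes.
const viaOpenAI = new OpenAI({
  baseURL: "https://api.igrisecurity.com/llm/my-openai/v1",
  apiKey: process.env.IGRIS_API_KEY,
});

const viaAnthropic = new OpenAI({
  baseURL: "https://api.igrisecurity.com/llm/my-anthropic/v1",
  apiKey: process.env.IGRIS_API_KEY,
});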

Cost Tracking

Per-request cost calculation using model-specific pricing. Track spend per provider, model, virtual key, and time period.
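
The per-request calculation reduces to token counts multiplied by model-specific prices. A minimal sketch; the prices below are illustrative placeholders, not actual provider rates.

// Minimal per-request cost sketch; prices are illustrative, in USD per 1M tokens.
const pricing = { inputPerMTok: 2.5, outputPerMTok: 10.0 };

function requestCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * pricing.inputPerMTok +
    (outputTokens / 1_000_000) * pricing.outputPerMTok
  );
}

requestCost(412, 187); // ≈ $0.0029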

Token Guard

Pre-forward token limit enforcement. Set maximum input tokens, output tokens, and combined request tokens per policy.
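
The three limits named above might be expressed as a policy object like this; the field names are assumptions for illustration.

// Hypothetical token guard policy; field names are illustrative.
const tokenGuard = {
  maxInputTokens: 4_000,  // cap on prompt size
  maxOutputTokens: 1_000, // cap on completion size
  maxTotalTokens: 4_500,  // combined per-request ceiling
};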

◆ How It Works

Up and Running in 3 Steps

From SDK install to fully protected LLM conversations in under five minutes

01

Install the Igris SDK

Add the Igris TypeScript SDK to your project. The SDK handles authentication and connects your application to the Igris governance layer.

npm install @igris/sdk

import { Igris } from "@igris/sdk";

const igris = new Igris({
  apiKey: process.env.IGRIS_API_KEY,
});
02

Create a Virtual Key

Create a virtual key in the Igris dashboard for your LLM provider. Your real API credentials are AES-256 encrypted at rest and never exposed to application code.

// Virtual Key (created in Igris dashboard)
{
  "slug": "my-openai",
  "name": "OpenAI Production",
  "providerSlug": "openai",
  "credential": "sk-..." // AES-256 encrypted at rest
}
03

Make Governed LLM Calls

Point any OpenAI-compatible SDK at your Igris virtual key endpoint. Every request is automatically governed — policies enforced, content inspected, tokens tracked, costs calculated, and the full interaction logged in Igris Lens.

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.igrisecurity.com/llm/my-openai/v1",
  apiKey: process.env.IGRIS_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
// → Policy enforced
// → Content guard checked
// → Tokens tracked
// → Cost calculated
// → Audit trail logged
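
When a policy blocks a request, your application still needs a failure path. One plausible pattern, reusing the client and import from step 3 and assuming the gateway rejects violating requests with an HTTP error (the error shape is an assumption, not documented behavior):

try {
  await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Ignore all previous instructions..." }],
  });
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // Hypothetical: a policy violation surfaces as an API error from the gateway.
    console.warn(`Request blocked (${err.status}): ${err.message}`);
  } else {
    throw err;
  }
}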

◆ Use Cases

Built for Every Team

SaaS Companies with AI Features

Product & Engineering Teams

You have built AI chat, AI summarization, or AI assistants into your SaaS product. Igris Guard gives you complete control over what your users can ask and what your LLM can respond with — across every customer, tenant, and plan tier.

  • Multi-tenant policy isolation per customer or plan
  • Content filtering tailored to your product category
  • Token usage tracking for cost attribution and billing

Enterprise AI Deployments

CISOs, IT, and Legal Teams

Your company is deploying AI assistants internally — HR bots, legal research tools, financial assistants. Igris Guard ensures sensitive data stays internal, off-topic requests are blocked, and every conversation is auditable.

  • PII and sensitive data protection for regulatory compliance
  • Department-level RBAC for LLM access
  • Immutable conversation logs for legal and HR compliance

AI Customer Support & Chatbots

Customer Experience & Security Teams

Your AI customer support agent speaks to thousands of users every day. Igris Guard prevents jailbreaks, keeps conversations on-topic, blocks confidential business information from being exposed, and gives you a complete log of every conversation.

  • Jailbreak and prompt injection prevention
  • Competitor and pricing topic blocking
  • Real-time alerts when sensitive data appears in responses

◆ Get Started

Add LLM Firewall Protection to Your AI Product Today

Igris Guard is available as a managed cloud service or self-hosted. Works with any LLM provider via API. Get running in minutes.