# SDK Quickstart

The official JavaScript/TypeScript SDK provides a typed client for all IndigiArmor API endpoints. Zero dependencies — works in Node.js 18+, browsers, Deno, Cloudflare Workers, and Edge runtimes.
## Installation

```bash
npm install indigiarmor
# or
yarn add indigiarmor
# or
pnpm add indigiarmor
```

Using React? Add indigiarmor-react for the WarningOverlay component — it matches the browser extension's scan result design with risk scores, signal breakdowns, and action buttons.

```bash
npm install indigiarmor indigiarmor-react
```

See the Next.js Integration Guide for a complete walkthrough with server-side scanning, error handling, and bypass flows.
## Initialize

```typescript
import { IndigiArmor } from 'indigiarmor';

const ia = new IndigiArmor('ia_sk_your_key_here');
```

API keys must start with `ia_sk_`. Get yours from the dashboard under Settings → API Keys.
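Since the key format is fixed, a lightweight client-side sanity check can catch configuration mistakes (for example, an empty environment variable) before the first request. A minimal sketch; `looksLikeApiKey` is a local helper, not an SDK export:

```typescript
// Sanity-check the documented `ia_sk_` key format before
// constructing the client. Local helper, not part of the SDK.
function looksLikeApiKey(key: string): boolean {
  return key.startsWith('ia_sk_') && key.length > 'ia_sk_'.length;
}
```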
## Configuration

```typescript
const ia = new IndigiArmor('ia_sk_...', {
  // Base URL (default: 'https://indigiarmor.com')
  baseUrl: 'https://indigiarmor.com',

  // Request timeout in ms (default: 30000)
  timeout: 10_000,

  // Custom fetch implementation (default: globalThis.fetch)
  fetch: customFetch,

  // Lifecycle callbacks — triggered after every scan() call
  callbacks: {
    onBlock: (result) => console.error('Blocked:', result.explanation),
    onFlag: (result) => console.warn('Flagged:', result.explanation),
    onAllow: (result) => {},
  },
});
```

Callbacks are invoked based on the result tier. Errors thrown inside callbacks are swallowed and never break the scan flow. Async callbacks are awaited.
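The dispatch semantics described above (the result tier selects the callback, async callbacks are awaited, callback errors are swallowed) can be illustrated with a standalone sketch. This is our illustration of the documented behavior, not the SDK's source, and it assumes the green tier maps to `onAllow`:

```typescript
type Tier = 'red' | 'yellow' | 'green';

interface ScanCallbacks {
  onBlock?: (r: { tier: Tier }) => void | Promise<void>;
  onFlag?: (r: { tier: Tier }) => void | Promise<void>;
  onAllow?: (r: { tier: Tier }) => void | Promise<void>;
}

// Invoke the callback matching the result tier; await it, and
// swallow any error so the scan flow is never broken.
async function dispatchCallbacks(
  result: { tier: Tier },
  cb: ScanCallbacks,
): Promise<void> {
  const fn =
    result.tier === 'red' ? cb.onBlock :
    result.tier === 'yellow' ? cb.onFlag :
    cb.onAllow;
  try {
    await fn?.(result);
  } catch {
    // Callback errors never propagate to the caller.
  }
}
```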
## Core Scanning

### Scan a Prompt

```typescript
const result = await ia.scan('Tell me about John Smith, SSN 123-45-6789');

console.log(result.tier);             // 'red'
console.log(result.action);           // 'block'
console.log(result.risk_score);       // 9.5
console.log(result.signals);          // [{ domain: 'pii', type: 'ssn', ... }]
console.log(result.sanitized_prompt); // 'Tell me about [REDACTED], SSN [REDACTED]'
console.log(result.explanation);      // 'Detected SSN and full name...'
console.log(result.latency_ms);       // 12
```

### Scan a Document
Scan text extracted from PDFs, DOCX files, and other documents.

```typescript
const result = await ia.scanDocument('Student ID 12345 IEP records', {
  filename: 'report.pdf',
  mime_type: 'application/pdf',
});

console.log(result.tier);    // 'red'
console.log(result.signals); // [{ domain: 'education', type: 'student_id', ... }]
```

### Scan an LLM Response
Scan LLM output for PII leaks before returning it to the user.

```typescript
const check = await ia.scanResponse(llmOutput);

if (!check.clean) {
  console.log(check.signals);
  // Use the sanitized version instead
  const safe = check.sanitized_response;
}
```

### Human-in-the-Loop (Yellow Tier)
Yellow-tier results are flagged but not blocked. Use token-based confirmation for human approval.
```typescript
const result = await ia.scan(userPrompt);

if (result.action === 'flag' && result.token_id) {
  // Show the user a warning, then confirm if approved
  const confirmed = await ia.confirm(result.token_id);
  if (confirmed.confirmed) {
    // Use confirmed.sanitized_prompt to proceed
  }
}
```

## Convenience Layers
Higher-level APIs that handle scanning, decision-making, and integration in one call. Every convenience layer accepts a `yellowStrategy` option:

| Strategy | Behavior |
|---|---|
| `'sanitize'` | Allow, using the sanitized prompt (default) |
| `'allow'` | Allow, using the original prompt |
| `'block'` | Reject the prompt |
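The strategy table above amounts to a small decision rule. A minimal sketch of that logic, written as a standalone resolver for illustration (the real SDK applies this internally; the types here are simplified):

```typescript
type YellowStrategy = 'sanitize' | 'allow' | 'block';

interface YellowScan {
  prompt: string;
  sanitized_prompt: string;
}

// Resolve what happens to a yellow-tier prompt under each strategy,
// mirroring the table above.
function resolveYellow(
  scan: YellowScan,
  strategy: YellowStrategy = 'sanitize',
): { safe: boolean; prompt: string | null } {
  switch (strategy) {
    case 'sanitize':
      return { safe: true, prompt: scan.sanitized_prompt };
    case 'allow':
      return { safe: true, prompt: scan.prompt };
    case 'block':
      return { safe: false, prompt: null };
  }
}
```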
### guard() — Scan and Decide

Scan a prompt and get a simple safe/unsafe decision in one call.

```typescript
const { safe, prompt, result } = await ia.guard('user input here');

if (safe) {
  // `prompt` is the sanitized version when applicable
  await callLLM(prompt);
} else {
  console.log('Blocked:', result.explanation);
}

// Override the yellow strategy per call
const { safe: strict } = await ia.guard('user input', { yellowStrategy: 'block' });
```

### protect() — Wrap Any Function
Scan the input, call your function with the safe prompt, and optionally scan the output.

```typescript
const { output, inputScan, outputScan } = await ia.protect(
  'user input here',
  async (safePrompt) => {
    return await callLLM(safePrompt);
  },
  { scanOutput: true },
);
```

Throws `IndigiArmorError` with code `PROMPT_BLOCKED` if the input is blocked, or `RESPONSE_BLOCKED` if output scanning detects sensitive data.
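Callers typically distinguish these two codes when surfacing errors to users. A minimal sketch of that mapping; the message strings are illustrative, and in a real `try/catch` around `protect()` you would read the caught error's `code` property and pass it through a mapper like this:

```typescript
// Map the convenience-layer error codes documented above to
// user-facing messages. The strings are illustrative.
function messageForCode(code: string): string {
  switch (code) {
    case 'PROMPT_BLOCKED':
      return 'Your message contained sensitive data and was not sent.';
    case 'RESPONSE_BLOCKED':
      return 'The model reply was withheld because it contained sensitive data.';
    default:
      return 'Something went wrong. Please try again.';
  }
}
```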
### middleware() — Express
Drop-in Express middleware that scans request bodies before your route handler runs.

```typescript
import express from 'express';
import { IndigiArmor } from 'indigiarmor';

const app = express();
const ia = new IndigiArmor(process.env.INDIGIARMOR_API_KEY!);

app.use(express.json());

app.post('/api/chat', ia.middleware({ promptField: 'prompt' }), (req, res) => {
  // req.body.prompt is now sanitized
  // req.scanResult contains the full scan result
  res.json({ reply: 'ok' });
});
```

Blocked prompts receive a 400 response with `{ error: 'blocked', explanation, tier }`. Requests without the prompt field pass through unscanned.
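On the client side, the 400 shape above can be handled with a small response interpreter. A sketch under the assumption that successful replies come back as `{ reply }`, as in the route above:

```typescript
// Interpret a response from the guarded /api/chat route.
// A 400 with `error: 'blocked'` carries the scan explanation;
// anything else is treated as a normal reply.
function interpretChatResponse(
  status: number,
  body: Record<string, unknown>,
): { blocked: boolean; text: string } {
  if (status === 400 && body.error === 'blocked') {
    return { blocked: true, text: `Blocked: ${String(body.explanation)}` };
  }
  return { blocked: false, text: String(body.reply ?? '') };
}
```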
### nextHandler() — Next.js App Router
Wrap a Next.js route handler with automatic prompt scanning.

```typescript
// app/api/chat/route.ts
import { IndigiArmor } from 'indigiarmor';

const ia = new IndigiArmor(process.env.INDIGIARMOR_API_KEY!);

export const POST = ia.nextHandler(
  async (req, scanResult) => {
    const { prompt } = await req.json();
    return Response.json({ answer: await callLLM(prompt) });
  },
  { promptField: 'prompt' },
);
```

Uses the Web-standard Request/Response types — no Next.js dependency required.

### wrapOpenAI() — OpenAI Client
Proxy-wrap an OpenAI client so every chat completion is automatically scanned.

```typescript
import OpenAI from 'openai';
import { IndigiArmor } from 'indigiarmor';

const ia = new IndigiArmor(process.env.INDIGIARMOR_API_KEY!);

const openai = ia.wrapOpenAI(new OpenAI(), {
  yellowStrategy: 'sanitize',
  scanOutput: true,
});

// All user messages are scanned before reaching OpenAI
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
});
```

Each user message is scanned individually. Both string and ContentPart[] message formats are handled. Works with OpenAI-compatible providers (Groq, Together, Azure OpenAI).

### wrapAnthropic() — Anthropic Client
Proxy-wrap an Anthropic client so every message call is scanned, including the `system` parameter.

```typescript
import Anthropic from '@anthropic-ai/sdk';
import { IndigiArmor } from 'indigiarmor';

const ia = new IndigiArmor(process.env.INDIGIARMOR_API_KEY!);

const anthropic = ia.wrapAnthropic(new Anthropic(), { scanOutput: true });

const message = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  system: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'Hello' }],
});
```

### wrapGemini() — Google GenAI Client
Proxy-wrap a Google GenAI client so every generateContent call is scanned, including `systemInstruction`.

```typescript
import { GoogleGenAI } from '@google/genai';
import { IndigiArmor } from 'indigiarmor';

const ia = new IndigiArmor(process.env.INDIGIARMOR_API_KEY!);

const genai = ia.wrapGemini(new GoogleGenAI({ apiKey: '...' }), {
  scanOutput: true,
});

const response = await genai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'Hello',
});
```

Output scanning works only with non-streaming responses. Only text content is scanned; images pass through unchanged.

## Management APIs

### Policy Management
```typescript
// List policies
const policies = await ia.listPolicies();

// Create a strict policy
const policy = await ia.createPolicy({
  name: 'Strict PII Detection',
  action: 'block',
  sensitivity_threshold: 0.3,
  detection_categories: ['pii', 'education'],
  is_default: true,
});

// Update
await ia.updatePolicy(policy.id, { enabled: false });

// Delete
await ia.deletePolicy(policy.id);

// Use compliance templates (Education+)
const templates = await ia.listPolicyTemplates();
const ferpaPolicy = await ia.createPolicyFromTemplate({
  template_id: templates[0].id,
  name: 'FERPA Policy',
});
```

### API Key Management
```typescript
// Create a new key (the raw key is returned only once)
const { raw_key, key } = await ia.createKey({
  name: 'Production',
  rate_limit_per_minute: 120,
});
console.log(raw_key); // ia_sk_... — store this securely

// List keys (hashes are never exposed)
const keys = await ia.listKeys();

// Revoke a key
await ia.revokeKey(key.id);
```

### Usage & Audit
```typescript
// Get usage stats
const usage = await ia.getUsage({ from: '2026-01-01' });
console.log(usage.totals.scan_count);

// Query audit logs
const logs = await ia.listAuditLogs({
  tier: 'red',
  limit: 10,
});

// Get a single audit entry
const entry = await ia.getAuditLog(logs.logs[0].id);
```

### Allowlist
```typescript
// Add a term that should bypass detection
await ia.addAllowlistEntry({
  term: 'Acme Corp',
  domain: 'pii',
});

// List entries
const list = await ia.listAllowlist();

// Remove an entry
await ia.removeAllowlistEntry(list[0].id);
```

### Webhooks (Professional+)
Receive real-time notifications when scans trigger specific events.

```typescript
// Create a webhook
const webhook = await ia.createWebhook({
  url: 'https://your-app.com/webhooks/indigiarmor',
  events: ['scan.blocked', 'scan.flagged'],
  format: 'json',
});

// List, update, delete
const webhooks = await ia.listWebhooks();
await ia.updateWebhook(webhook.id, { enabled: false });
await ia.deleteWebhook(webhook.id);

// Send a test event
const { success } = await ia.testWebhook(webhook.id);
```

### Teams (Professional+)
Organize users into teams with scoped access.

```typescript
// Create a team
const team = await ia.createTeam({ name: 'Engineering' });

// Manage members
await ia.addTeamMember(team.id, userId);
const members = await ia.listTeamMembers(team.id);

// List, update, delete
const teams = await ia.listTeams();
await ia.updateTeam(team.id, { name: 'Platform Engineering' });
await ia.deleteTeam(team.id);
```

### Custom Entities (Enterprise)
Define custom detection patterns and keywords for your organization.

```typescript
// Create a custom entity type
const entity = await ia.createCustomEntity({
  name: 'Employee ID',
  domain: 'pii',
  patterns: ['EMP-\\d{6}'],
  keywords: ['employee id', 'emp id'],
  weight: 0.8,
});

// List, update, delete
const entities = await ia.listCustomEntities();
await ia.updateCustomEntity(entity.id, { enabled: false });
await ia.deleteCustomEntity(entity.id);
```

### Reports (Education+)
Generate audit exports and compliance summaries.

```typescript
// Export audit logs as CSV
const exported = await ia.exportAuditLogs({
  from: '2026-01-01',
  to: '2026-03-31',
  format: 'csv',
  domain: 'education',
});

// Generate a compliance summary
const report = await ia.getComplianceReport('2026-01-01', '2026-03-31');
console.log(report.total_scans);
console.log(report.block_rate);
console.log(report.domain_distribution);
```

### Red Team Simulation (Enterprise)
Run attack simulations against your current policy to find detection gaps.

```typescript
// Run a simulation
const { summary, results } = await ia.runRedTeam();
console.log(summary.detection_rate); // 0.95
console.log(summary.missed);         // 2

// List available attack cases
const attacks = await ia.listAttackCases();
```

## Error Handling
All API errors are thrown as typed subclasses of `IndigiArmorError`:

```typescript
import {
  IndigiArmorError,
  AuthenticationError,
  RateLimitError,
  ValidationError,
  NotFoundError,
} from 'indigiarmor';

try {
  await ia.scan(prompt);
} catch (err) {
  if (err instanceof RateLimitError) {
    // err.retryAfter — seconds until retry
    console.log(`Rate limited. Retry after ${err.retryAfter}s`);
  } else if (err instanceof AuthenticationError) {
    // Invalid or revoked API key (401)
  } else if (err instanceof ValidationError) {
    // Bad request payload (400)
  } else if (err instanceof NotFoundError) {
    // Resource not found (404)
  } else if (err instanceof IndigiArmorError) {
    // err.code — machine-readable error code
    // err.status — HTTP status (0 for network/timeout errors)
    // err.requestId — server request ID for support
  }
}
```

Error codes from the convenience layers:

| Code | Meaning |
|---|---|
| `PROMPT_BLOCKED` | Input blocked (red tier, or yellow with the block strategy) |
| `RESPONSE_BLOCKED` | LLM output failed response scanning |
| `TIMEOUT` | Request exceeded the configured timeout |
| `NETWORK_ERROR` | Network-level failure (DNS, connection refused, etc.) |
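A common pattern is to retry once after a rate limit, using the `retryAfter` value shown above. A minimal generic sketch: it checks for a numeric `retryAfter` property structurally rather than importing `RateLimitError`, so it works with any error carrying that field; the helper names are ours:

```typescript
// Sleep helper.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retry `fn` once if it fails with an error exposing a numeric
// `retryAfter` (seconds), as RateLimitError does.
async function withRateLimitRetry<T>(fn: () => Promise<T>): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    const retryAfter = (err as { retryAfter?: unknown }).retryAfter;
    if (typeof retryAfter === 'number') {
      await sleep(retryAfter * 1000);
      return await fn(); // a second failure propagates to the caller
    }
    throw err;
  }
}
```

Usage would look like `const result = await withRateLimitRetry(() => ia.scan(prompt));`.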
## Complete Method Reference

### Scanning

| Method | Description |
|---|---|
| `scan(prompt)` | Scan a prompt for risk signals |
| `scanDocument(content, metadata?)` | Scan document text (PDF, DOCX, etc.) |
| `scanResponse(response)` | Scan LLM output for PII leaks |
| `confirm(tokenId)` | Confirm a yellow-tier token |
### Convenience Layers

| Method | Description |
|---|---|
| `guard(prompt, options?)` | Scan and get a safe/unsafe decision |
| `protect(prompt, fn, options?)` | Scan input, call a function, optionally scan output |
| `middleware(options?)` | Express middleware for request scanning |
| `nextHandler(handler, options?)` | Next.js App Router handler wrapper |
| `wrapOpenAI(client, options?)` | Proxy-wrap an OpenAI client |
| `wrapAnthropic(client, options?)` | Proxy-wrap an Anthropic client |
| `wrapGemini(client, options?)` | Proxy-wrap a Google GenAI client |
### Management

| Method | Description |
|---|---|
| `listPolicies()` | List all policies |
| `createPolicy(data)` | Create a detection policy |
| `updatePolicy(id, data)` | Update a policy |
| `deletePolicy(id)` | Delete a policy |
| `listPolicyTemplates()` | List compliance templates (Education+) |
| `createPolicyFromTemplate(data)` | Create a policy from a template (Education+) |
| `listKeys()` | List API keys |
| `createKey(data)` | Generate a new API key |
| `revokeKey(id)` | Revoke an API key |
| `getUsage(query?)` | Get scan usage statistics |
| `listAuditLogs(query?)` | Query paginated audit logs |
| `getAuditLog(id)` | Get a single audit entry |
| `listAllowlist()` | List allowlisted terms |
| `addAllowlistEntry(data)` | Add an allowlist entry |
| `removeAllowlistEntry(id)` | Remove an allowlist entry |
| `listWebhooks()` | List webhooks (Professional+) |
| `createWebhook(data)` | Create a webhook (Professional+) |
| `updateWebhook(id, data)` | Update a webhook (Professional+) |
| `deleteWebhook(id)` | Delete a webhook (Professional+) |
| `testWebhook(id)` | Send a test event (Professional+) |
| `listTeams()` | List teams (Professional+) |
| `createTeam(data)` | Create a team (Professional+) |
| `updateTeam(id, data)` | Update a team (Professional+) |
| `deleteTeam(id)` | Delete a team (Professional+) |
| `listTeamMembers(teamId)` | List team members (Professional+) |
| `addTeamMember(teamId, userId)` | Add a team member (Professional+) |
| `listCustomEntities()` | List custom entities (Enterprise) |
| `createCustomEntity(data)` | Create a custom entity (Enterprise) |
| `updateCustomEntity(id, data)` | Update a custom entity (Enterprise) |
| `deleteCustomEntity(id)` | Delete a custom entity (Enterprise) |
| `exportAuditLogs(query)` | Export audit logs (Education+) |
| `getComplianceReport(from, to)` | Compliance summary (Education+) |
| `runRedTeam()` | Run an attack simulation (Enterprise) |
| `listAttackCases()` | List attack cases (Enterprise) |
## TypeScript Types

All request and response types are exported for full type safety:

```typescript
import type {
  ScanResult,
  ScanResponseResult,
  Signal,
  Policy,
  CreatePolicyRequest,
  Webhook,
  CreateWebhookRequest,
  Team,
  CustomEntity,
  ComplianceReport,
  GuardResult,
  ProtectResult,
  YellowStrategy,
  // ... and more
} from 'indigiarmor';
```