The evidence and governance layer for AI
Govern runtime policy, keep a tamper-evident Ledger, and export signed Evidence Bundles with offline verification.
Gateway — enforce policy at prompts, tools, and outputs
Enforce guardrails with IBM Granite Guardian (LLM) advisories, plus a rules-only fallback. Latency targets are fit for production, and every decision is exportable for audits.
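As an illustration, the sketch below shows the general shape of an advisory-plus-fallback check. The function names, blocklist patterns, and the `granite_guardian_advisory` callable are hypothetical stand-ins, not the Gateway's actual API.

```python
import re

# Hypothetical sketch: ask an LLM judge for an advisory and fall back to
# deterministic rules if the judge is unavailable or errors out.
BLOCKLIST = [re.compile(p, re.I) for p in (r"\bssn\b", r"\bcredit card\b")]

def rules_only_verdict(text: str) -> dict:
    """Deterministic fallback: flag text matching any blocklisted pattern."""
    hits = [p.pattern for p in BLOCKLIST if p.search(text)]
    return {"allowed": not hits, "source": "rules", "reasons": hits}

def gateway_check(text: str, granite_guardian_advisory=None) -> dict:
    """Prefer the LLM advisory; degrade to rules-only if it fails or is absent."""
    if granite_guardian_advisory is not None:
        try:
            advisory = granite_guardian_advisory(text)  # e.g. {"allowed": False, "reasons": [...]}
            return {**advisory, "source": "llm_advisory"}
        except Exception:
            pass  # judge unavailable or timed out
    return rules_only_verdict(text)

print(gateway_check("My credit card number is ..."))
# -> {'allowed': False, 'source': 'rules', 'reasons': ['\\bcredit card\\b']}
```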
See runtime demo
Ledger — tamper-evident event log
Append-only events with minimal payloads. Export slices tied to Seals for auditors.
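For intuition, here is a minimal hash-chained log in the same spirit: each event commits to the previous event's hash, so tampering with any earlier entry is detectable. The field names are illustrative, not the Ledger's actual schema.

```python
import hashlib, json, time

def append_event(chain: list, event_type: str, payload_ref: str) -> dict:
    """Append an event that commits to the hash of the previous event."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"ts": time.time(), "type": event_type, "payload_ref": payload_ref, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain: list) -> bool:
    """Recompute every hash and check the prev-hash links."""
    prev = "0" * 64
    for entry in chain:
        expected = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_event(log, "gateway.decision", "sha256:ab12...")  # minimal payload: a reference, not the artefact
append_event(log, "sentinel.incident", "sha256:cd34...")
assert verify_chain(log)
```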
View audit trail sample
Dossier — living conformity file
Map EU AI Act and ISO/IEC 42001 controls, approvals, and version history.
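A sketch of the kind of record such a file might hold, purely for illustration; the field names are assumptions, not the Dossier's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ControlEntry:
    framework: str             # e.g. "EU AI Act" or "ISO/IEC 42001"
    clause: str                # e.g. "Art. 9" or "8.2"
    evidence_refs: list[str]   # references to Ledger events or Vault artefacts
    approved_by: str | None = None
    versions: list[str] = field(default_factory=list)

entry = ControlEntry(
    framework="ISO/IEC 42001",
    clause="8.2",
    evidence_refs=["vault://artefacts/risk-assessment-v3"],  # hypothetical reference format
    approved_by="compliance-lead",
    versions=["v1", "v2", "v3"],
)
```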
See template
Rulepacks — versioned obligations
Version, diff, and pin obligations (EU AI Act, ISO/IEC 42001, NIST AI RMF, NYC AEDT, CPPA ADMT).
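The sketch below illustrates what versioning, diffing, and pinning mean in practice with a toy rulepack structure; the identifiers and schema are invented for the example.

```python
rulepack_v1 = {"id": "eu-ai-act", "version": "1.4.0",
               "obligations": {"art9-risk-mgmt": "required", "art12-logging": "required"}}
rulepack_v2 = {"id": "eu-ai-act", "version": "1.5.0",
               "obligations": {"art9-risk-mgmt": "required", "art12-logging": "required",
                               "art14-human-oversight": "required"}}

def diff_obligations(old: dict, new: dict) -> dict:
    """Report obligations added, removed, or changed between two versions."""
    o, n = old["obligations"], new["obligations"]
    return {
        "added": sorted(set(n) - set(o)),
        "removed": sorted(set(o) - set(n)),
        "changed": sorted(k for k in set(o) & set(n) if o[k] != n[k]),
    }

# Pinning: record the exact version a deployment is evaluated against.
pinned = {"rulepack": rulepack_v1["id"], "version": rulepack_v1["version"]}

print(diff_obligations(rulepack_v1, rulepack_v2))
# -> {'added': ['art14-human-oversight'], 'removed': [], 'changed': []}
```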
Browse mappings
Sentinel — post-market monitoring
Configure sampling, file incidents, and assign corrective actions.
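A minimal sketch of the sampling-and-incident flow, with a placeholder check and an in-memory incident list standing in for Sentinel's actual configuration surface.

```python
import random

SAMPLE_RATE = 0.05  # review roughly 5% of production traffic

def maybe_sample(output: str, check, incidents: list) -> None:
    """Sample an output, run a check, and file an incident with a corrective action if flagged."""
    if random.random() >= SAMPLE_RATE:
        return
    finding = check(output)  # e.g. a bias, drift, toxicity, or privacy-leak detector returning a dict
    if finding["flagged"]:
        incidents.append({
            "output_excerpt": output[:200],
            "finding": finding,
            "corrective_action": {"owner": "ml-oncall", "status": "open"},
        })

incidents = []
maybe_sample("some model output", lambda s: {"flagged": False}, incidents)
```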
Configure monitor
Vault — secure artefacts under WORM retention
Store artefacts with WORM (Write Once Read Many) retention and fine-grained access controls.
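As one point of reference, WORM retention is commonly implemented with S3 Object Lock in compliance mode; the bucket, key, and choice of backend below are assumptions for illustration, not a statement about Vault's storage.

```python
from datetime import datetime, timedelta, timezone
import boto3

# Placeholder bucket, key, and file; the bucket must have been created with Object Lock enabled.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-vault-bucket",
    Key="artefacts/model-card-v2.pdf",
    Body=open("model-card-v2.pdf", "rb"),
    ObjectLockMode="COMPLIANCE",  # retention cannot be shortened or removed, even by administrators
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=365 * 6),
)
```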
See retention defaults
Seal & Verify — signed evidence, offline checks
Create DSSE-signed Seals with STH (Signed Tree Head) inclusion proofs and optional TSA timestamps. Verify offline via the Portal or CLI. Share with auditors and procurement.
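Conceptually, an offline check of a Seal boils down to verifying the DSSE signature over the payload and recomputing the Merkle root from an inclusion proof against the STH. The sketch below assumes an Ed25519 key and an RFC 6962-style proof with simplified index handling; the envelope and proof field names are illustrative, and the optional TSA check is omitted.

```python
import base64, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def dsse_pae(payload_type: str, payload: bytes) -> bytes:
    """Pre-Authentication Encoding defined by the DSSE spec."""
    t = payload_type.encode()
    return b"DSSEv1 %d %s %d %s" % (len(t), t, len(payload), payload)

def verify_envelope(envelope: dict, public_key_bytes: bytes) -> None:
    """Check the Ed25519 signature over the PAE-encoded payload; raises InvalidSignature on failure."""
    payload = base64.b64decode(envelope["payload"])
    sig = base64.b64decode(envelope["signatures"][0]["sig"])
    Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(
        sig, dsse_pae(envelope["payloadType"], payload)
    )

def verify_inclusion(leaf_hash: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    """Recompute the Merkle root from the leaf and its audit path.

    Simplified RFC 6962-style walk: assumes a tree whose leaf count is a power of two.
    """
    h = leaf_hash
    for sibling in proof:
        if index % 2 == 1:  # current node is a right child
            h = hashlib.sha256(b"\x01" + sibling + h).digest()
        else:               # current node is a left child
            h = hashlib.sha256(b"\x01" + h + sibling).digest()
        index //= 2
    return h == root
```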
Verify a sample Seal
If you enable the Responsible AI Pack, your Verify report includes an optional RAI Annex showing latest evaluations and approvals. Verification remains fully offline with DSSE, STH, and optional TSA checks.
Responsible AI Pack
Operationalise Responsible AI without the sprawl. Turn on pre-built Rulepacks, one-click Sentinel monitors (bias, drift, toxicity, privacy-leak), and an optional RAI Annex in your Seal. Reviewers still verify everything offline in seconds — no new accounts required.
