
The evidence and governance layer for AI

Govern runtime policy, keep a tamper-evident Ledger, and export signed Evidence Bundles with offline verification.

Gateway — enforce policy at prompts, tools, and outputs

Enforce guardrails with IBM Granite Guardian (LLM) advisories, plus a rules-only fallback. Latency targets are fit for production, and every decision is exportable for audits.

See runtime demo
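As a rough illustration of what a rules-only fallback can look like (this is a hypothetical sketch, not Gateway's actual rule engine; the pattern list and function name are invented for the example), a denylist check can gate a prompt or output when the LLM advisory is unavailable:

```python
import re

# Hypothetical denylist for the rules-only fallback path.
# Real rulepacks would be versioned and far richer than this.
DENY_PATTERNS = [
    re.compile(r"\bssn\b", re.IGNORECASE),
    re.compile(r"ignore previous instructions", re.IGNORECASE),
]

def rules_only_check(text: str) -> dict:
    """Return an allow/deny decision plus the rules that fired,
    so the decision itself can be exported for audit."""
    hits = [p.pattern for p in DENY_PATTERNS if p.search(text)]
    return {"allowed": not hits, "matched_rules": hits}
```

Because the decision record includes the matched rules, it can be appended to an audit log as-is.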

Ledger — tamper-evident event log

Append-only events with minimal payloads. Export slices tied to Seals for auditors.

View audit trail sample
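A tamper-evident, append-only log is commonly built as a hash chain, where each entry commits to its predecessor, so editing any past event breaks every later link. The sketch below illustrates the idea only; it is not the Ledger's actual storage format, and the field names are assumptions:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

def append_event(chain: list, payload: dict) -> list:
    """Append an event whose hash covers its payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    chain.append({
        "prev": prev_hash,
        "payload": payload,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edited or reordered entry fails the check."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps({"prev": prev, "payload": entry["payload"]}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

An exported slice stays verifiable as long as the verifier also receives the hash of the entry just before the slice.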

Dossier — living conformity file

Map EU AI Act and ISO/IEC 42001 controls, approvals, and version history.

See template

Rulepacks — versioned obligations

Version, diff, and pin obligations (EU AI Act, ISO/IEC 42001, NIST AI RMF, NYC AEDT, CPPA ADMT).

Browse mappings

Sentinel — post-market monitoring

Configure sampling, file incidents, and assign corrective actions.

Configure monitor

Vault — secure artefacts under WORM retention

Store artefacts with WORM (Write Once Read Many) retention and fine-grained access controls.

See retention defaults

Seal & Verify — signed evidence, offline checks

Create DSSE-signed Seals with STH inclusion and optional TSA. Verify offline via Portal or CLI. Share with auditors and procurement.

Verify a sample Seal
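Checking that an entry is included under a Signed Tree Head is an offline operation: the verifier walks a Merkle audit path from the leaf up to the tree head and compares roots, in the style of RFC 6962. The sketch below is a simplified illustration for perfect (power-of-two) trees, not the product's actual verifier:

```python
import hashlib

def _sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(payload: bytes) -> bytes:
    # RFC 6962-style domain separation: 0x00 prefix for leaves...
    return _sha256(b"\x00" + payload)

def node_hash(left: bytes, right: bytes) -> bytes:
    # ...and 0x01 prefix for interior nodes.
    return _sha256(b"\x01" + left + right)

def verify_inclusion(leaf: bytes, proof: list, index: int, root: bytes) -> bool:
    """Fold the sibling hashes in the audit path onto the leaf,
    choosing left/right at each level from the leaf's index bits."""
    h = leaf
    for sibling in proof:
        if index % 2 == 1:
            h = node_hash(sibling, h)  # leaf side is the right child here
        else:
            h = node_hash(h, sibling)  # leaf side is the left child here
        index //= 2
    return h == root
```

Because the check needs only the leaf, the audit path, and the signed root, an auditor can run it with no network access at all.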

If you enable the Responsible AI Pack, your Verify report includes an optional RAI Annex showing latest evaluations and approvals. Verification remains fully offline with DSSE, STH, and optional TSA checks.

Responsible AI Pack

Operationalise Responsible AI without the sprawl. Turn on pre-built Rulepacks, one-click Sentinel monitors (bias, drift, toxicity, privacy-leak), and an optional RAI Annex in your Seal. Reviewers still verify everything offline in seconds — no new accounts required.