Sentinel wraps any autonomous decision system and records tamper-resistant decision traces to local sovereign storage. Works with LLMs, ML classifiers, rule engines, and robotic systems. Zero cloud. Zero US CLOUD Act exposure. 110 days to EU AI Act enforcement.
The Auditor Release. Signed PDF evidence packs. One-stop CI check. Honest-scope framing. Runtime briefing for technical evaluators.
sentinel evidence-pack — one command produces a cover page, executive summary, EU AI Act / DORA / NIS2 coverage, trace samples, SHA-256 hash manifest, and a sovereign attestation. Reproducible, offline-verifiable, suitable as an audit binder artefact.
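The SHA-256 hash manifest is what makes the pack verifiable offline. As an illustration of the principle only (the functions below are hypothetical, not the actual pack format or Sentinel API), a minimal sketch in Python:

```python
import hashlib
from pathlib import Path

def build_manifest(files: list[Path]) -> dict[str, str]:
    """Map each file name to the SHA-256 digest of its contents."""
    return {f.name: hashlib.sha256(f.read_bytes()).hexdigest() for f in files}

def verify_manifest(manifest: dict[str, str], directory: Path) -> bool:
    """Re-hash every listed file and compare. No network, no service."""
    return all(
        hashlib.sha256((directory / name).read_bytes()).hexdigest() == digest
        for name, digest in manifest.items()
    )
```

Any auditor with the manifest and the files can rerun the comparison on an air-gapped machine; a single changed byte in any artefact flips the result to `False`.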
sentinel ci-check — aggregates the EU AI Act snapshot, runtime sovereignty scan, and an optional manifesto check into a single exit code. Fully in-process. No subprocesses. No network. Works air-gapped.
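Collapsing several checks into one exit code is the standard CI pattern. A hedged sketch of the idea (the function name is illustrative, not Sentinel's internal API):

```python
from collections.abc import Callable

def aggregate_checks(checks: list[Callable[[], bool]]) -> int:
    """Run every check in-process; exit 0 only if all pass.

    Running all checks instead of failing fast lets one CI run
    report every violation at once.
    """
    results = [check() for check in checks]
    return 0 if all(results) else 1
```

In a CI entry point this reduces to `sys.exit(aggregate_checks(checks))`, which any pipeline runner can gate on.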
New operator-grade briefing page. Operating picture, runtime walkthrough, decision record, evidence route, deployment posture, and scope. Dark and light mode, keyboard navigable, no framework, no tracking.
README, CLI --help, and docs/eu-ai-act.md now consistently name Sentinel as the decision-trace and policy-enforcement layer for EU AI Act Art. 12 / 13 / 14 / 17. Not a full compliance solution, and no middleware kernel can be.
PDF output is available via the optional `[pdf]` extra.
Three ways to log autonomous decisions. Only one passes the EU AI Act check, avoids US CLOUD Act exposure, and survives the air-gap test.
| Requirement | Cloud observability | Proprietary platforms | Sentinel |
|---|---|---|---|
| Decision records | ✓ | ✓ | ✓ |
| EU AI Act Art. 12 | Partial | Partial | ✓ Full |
| US CLOUD Act exposure | ✗ Applies | ✗ Applies | ✓ None |
| Air-gapped capable | ✗ | ✗ | ✓ |
| Open source | Some | ✗ | ✓ Apache 2.0 |
| On-premise | ✗ | Expensive | ✓ Default |
| BSI path | ✗ | ✗ | ✓ v3.0 ready |
| Quantum-safe signing | ✗ | Server-side | ✓ ML-DSA-65, client-side |
| Manifesto-as-code CI | ✗ | ✗ | ✓ 5 theses, every PR |
| ML classifier governance | ✗ | ✗ | ✓ |
| Rule engine audit trail | ✗ | ✗ | ✓ |
Three layers between your business logic and any autonomous decision system. One thin kernel you can read end-to-end.

1. What was decided. EU AI Act Art. 12, automated.
2. What may be decided. Policy-as-code, kill switch, preflight.
3. Which model decides. Coming in v4.0 (RFC-002 in discussion).
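The policy layer's contract can be pictured as a set of named predicates evaluated over the decision context. A conceptual sketch of that contract (this is not the shipped `SimpleRuleEvaluator`, whose exact semantics are Sentinel's own):

```python
from typing import Any, Callable

Rule = Callable[[dict[str, Any]], bool]

def evaluate(rules: dict[str, Rule], ctx: dict[str, Any]) -> tuple[str, list[str]]:
    """Return ("ALLOW", []) if every rule passes, else ("DENY", failed_rules)."""
    failed = [name for name, rule in rules.items() if not rule(ctx)]
    return ("ALLOW", []) if not failed else ("DENY", failed)
```

Naming each rule matters: a DENY that carries the list of failed rule names is itself an audit artefact, not just a refusal.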
Live data from a sample deployment. Every chart is inline SVG — zero external resources.
| Time | Agent | Result | Latency (ms) |
|---|---|---|---|
| 12:34:51 | procurement_agent | ALLOW | 3 |
| 12:34:52 | access_control | DENY | 2 |
| 12:34:53 | doc_classifier | ALLOW | 4 |
| 12:34:54 | procurement_agent | ALLOW | 3 |
| 12:34:55 | mission_eval | EXCEPTION | 8 |
| 12:34:56 | access_control | ALLOW | 2 |
| 12:34:57 | doc_classifier | DENY | 3 |
| 12:34:58 | procurement_agent | ALLOW | 4 |
Evaluate the full sovereignty stack — or embed it in your code.
```shell
# Install + full end-to-end demo (no code required)
$ pipx install sentinel-kernel
$ sentinel demo

# Sovereignty scan of your environment
$ sentinel scan

# EU AI Act compliance check
$ sentinel compliance check

# Generate a self-contained HTML sovereignty report
$ sentinel report --output sovereignty.html

# Generate a portable governance attestation
$ sentinel attestation generate --output governance.json
```
```python
from sentinel import Sentinel

sentinel = Sentinel()  # SQLite, zero config

@sentinel.trace
async def my_agent(context: dict) -> dict:
    return {"decision": "approved"}

# Every call produces a sovereign trace
result = await my_agent({"amount": 5000})
print(result)  # {"decision": "approved"}

# Query traces
traces = sentinel.query(limit=1)
print(traces[0].policy_result)  # ALLOW
```
```python
from sentinel import Sentinel
from sentinel.policy.evaluator import SimpleRuleEvaluator
from sentinel.storage.filesystem import FilesystemStorage

sentinel = Sentinel(
    policy_evaluator=SimpleRuleEvaluator({
        "threshold": lambda ctx: ctx["amount"] <= 10_000
    }),
    storage=FilesystemStorage("/mnt/traces"),
    sovereign_scope="EU",
    data_residency="on-premise-de",
)

@sentinel.trace
async def approve_procurement(ctx: dict) -> dict:
    return {"approved": ctx["amount"] <= 10_000}

# DENY recorded automatically for high-value requests
await approve_procurement({"amount": 50_000})
```
```python
from sentinel import Sentinel
from sentinel.manifesto import SentinelManifesto
from sentinel.manifesto.requirements import (
    EUOnly,
    Required,
    AcknowledgedGap,
)
from sentinel.compliance.euaiact import EUAIActChecker

class OurPolicy(SentinelManifesto):
    name = "Production Sovereignty Policy v1"
    jurisdiction = EUOnly()
    kill_switch = Required()
    ci_cd = AcknowledgedGap(
        provider="GitHub Actions (Microsoft/US)",
        migrating_to="Self-hosted Forgejo",
        by="2027-Q2",
        reason="No EU-sovereign CI with comparable UX",
    )

sentinel = Sentinel()

# Check EU AI Act compliance
report = EUAIActChecker().check(sentinel)
print(report.diff())

# Generate self-contained HTML report
report.save_html("sovereignty_report.html")

# Check manifesto vs reality
manifesto_report = OurPolicy().check(sentinel_instance=sentinel)
print(f"Score: {manifesto_report.overall_score:.0%}")
```
```python
from sentinel import (
    Sentinel,
    BudgetTracker,
    generate_attestation,
    verify_attestation,
)
from sentinel.crypto import QuantumSafeSigner

# Quantum-safe signing — keys stay on your infrastructure
signer = QuantumSafeSigner(
    key_path="/etc/sentinel/keys/signing.key",
    public_key_path="/etc/sentinel/keys/signing.pub",
)
sentinel = Sentinel(signer=signer)

# Preflight — check before you act, no trace written
result = sentinel.preflight("data:delete:production")
if not result.cleared:
    raise RuntimeError(result.reasons)

# BudgetTracker — every cost entry is a sovereign trace
budget = BudgetTracker(sentinel=sentinel, limit=10.0)
check = budget.check(estimated_cost=0.25)
budget.record("api:mistral", actual_cost=0.23)

# Portable attestation — verifiable offline, no service needed
att = generate_attestation(sentinel=sentinel)
assert verify_attestation(att).valid
```
Four scenarios where a missing trace is worse than a crash.
**Defence.** Autonomous go/no-go decisions with mission policy evaluation. Kill switch for immediate halt (Art. 14). Air-gapped deployment verified by a dedicated test suite. VS-NfD roadmap.
**Healthcare.** Treatment recommendation audit trail. GDPR-compliant data residency. Every clinical AI decision recorded with a SHA-256 hash. Art. 14 human oversight for escalation workflows.
**Financial services.** Transaction approval automation with DORA-aligned logging. Append-only tamper-resistant records. Regulators get the full trace: what, when, which model, which policy.
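Append-only tamper resistance is typically achieved by hash-chaining: each record's hash covers its predecessor's hash, so editing any past entry invalidates every later one. A minimal illustration of that principle (Sentinel's actual on-disk trace format may differ):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first record

def append(chain: list[dict], record: dict) -> None:
    """Append a record whose hash commits to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Because each link depends on the one before it, a regulator only needs the final hash to detect retroactive edits anywhere in the log.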
**Public sector.** Government AI transparency requirements met by default. Sovereign deployment: no foreign jurisdiction access possible. EU AI Act compliance diff for internal auditors.
Every v1 → v3 capability. Eleven articles mapped. One honest compliance story.
| Article | Requirement | Sentinel | What to do |
|---|---|---|---|
| Art. 12 | Automatic logging | ✓ Full | Nothing — automated |
| Art. 13 | Transparency | ✓ Full | Nothing — automated |
| Art. 14 | Human oversight | ✓ Full | Name the operator of the kill switch |
| Art. 9 | Risk management | ~ Partial | Document risk categories and plan |
| Art. 11 | Technical documentation | → Human action | Write the Annex IV tech doc package |
| Art. 17 | Quality management | ~ Partial | Define change control and QMS procedures |
| Art. 16 | Provider obligations | ~ Partial | Register, CE mark, conformity assessment |
| Art. 26 | Deployer obligations | ~ Partial | Staff training, oversight procedures |
| Art. 10 | Data governance | → Human action | Document training data provenance |
| Art. 15 | Accuracy & robustness | → Human action | Accuracy metrics and pen testing |
| Art. 72 | GPAI post-market | ~ Conditional | Model card if deploying GPAI as high-risk |
Phase 1 done. Phase 2 in motion. Phase 3 designed. Every version reflects shipped code, not plans.
```shell
git clone https://github.com/sebastianweiss83/sentinel-kernel
cd sentinel-kernel/demo
docker compose -f docker-compose.minimal.yml up
```