Security at OriginChain.
OriginChain is a managed AI-native database with single-tenant compute and storage, region-isolated by default, encrypted in transit and at rest, with every privileged action audit-logged. Trading positions, patient vitals, and inventory ledgers run on it today.
Single-tenant by construction.
One customer per EC2 instance. One EBS volume. One bearer token. One DNS record. Cross-tenant reads are not blocked by a permission check — there is no shared compute or storage to read across in the first place.
Dedicated compute
Every customer instance runs on its own EC2 box. No shared process, no shared memory, no shared kernel. Cross-tenant reads are not blocked by a permission check; they are physically impossible.
Dedicated storage
Each instance gets its own EBS volume. No shared block device, no shared file system, no shared cache. Customer data and customer WAL never touch another tenant's bytes.
Dedicated bearer + DNS
One bearer token, one TenantId, one DNS record per instance. Tokens are bound to a single tenant at issue time. A slug or route mismatch returns 403 — it never returns a row.
BLAKE3-prefixed key shape
Every storage key is prefixed with a BLAKE3 hash of the TenantId. Even inside the substrate's own indices, lookups outside a tenant's prefix are unreachable by construction. Defense in depth, behind the physical boundary.
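A minimal sketch of the key shape, assuming the `blake3` PyPI package; the prefix layout (hash of TenantId, then table, then primary key) is illustrative, not the substrate's real on-disk encoding.

```python
# Sketch: tenant-prefixed storage keys. Assumes the `blake3` PyPI package;
# the exact key layout below is illustrative only.
from blake3 import blake3

def storage_key(tenant_id: str, table: str, primary_key: str) -> bytes:
    # Every key starts with the BLAKE3 hash of the TenantId, so a lookup
    # constructed for tenant A can only ever land inside A's prefix.
    tenant_prefix = blake3(tenant_id.encode()).digest()
    return tenant_prefix + b"/" + table.encode() + b"/" + primary_key.encode()

key_a = storage_key("tenant-a", "orders", "42")
key_b = storage_key("tenant-b", "orders", "42")
assert key_a[:32] != key_b[:32]  # different tenants, disjoint key ranges
```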
Encrypted, end to end.
TLS on every wire. AES-256 on every disk. Customer-managed keys available on Enterprise.
TLS 1.2 minimum, TLS 1.3 preferred, on every external endpoint and every replication hop. Per-region auto-renewing wildcard certificates. New certs are picked up without restart.
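For illustration, a client can enforce the same floor when connecting; a minimal sketch with Python's standard ssl module, where the endpoint hostname is a placeholder.

```python
# Sketch: enforce TLS 1.2 as a floor (TLS 1.3 negotiated when available).
# The hostname below is a placeholder, not a real endpoint.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older

host = "db.example.originchain.ai"
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # e.g. "TLSv1.3"
```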
All EBS volumes encrypted with AES-256 by the managed-disk service. Keys rotated automatically by the cloud KMS on managed tiers.
Bring your own KMS key. Customer-managed keys wrap the data key; revocation severs decrypt instantly. Available on Enterprise.
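"Customer-managed keys wrap the data key" is the standard envelope-encryption pattern; a hedged sketch with boto3's KMS client shows why revoking the CMK severs decryption. The key ARN is a placeholder and the `cryptography` library is used only to stand in for the engine's own cipher.

```python
# Sketch: envelope encryption with a customer-managed KMS key (BYOK).
# Assumes boto3 and the `cryptography` package; the key ARN is a placeholder.
import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")
CMK_ARN = "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE"  # placeholder CMK

# KMS hands back a plaintext data key plus the same key wrapped by the CMK.
dk = kms.generate_data_key(KeyId=CMK_ARN, KeySpec="AES_256")
data_key = base64.urlsafe_b64encode(dk["Plaintext"])
wrapped_key = dk["CiphertextBlob"]  # stored alongside the encrypted data

ciphertext = Fernet(data_key).encrypt(b"customer row bytes")

# Decryption requires KMS to unwrap the data key. If the customer revokes
# or disables the CMK, this call fails and the data stays sealed.
plain = kms.decrypt(CiphertextBlob=wrapped_key)["Plaintext"]
row = Fernet(base64.urlsafe_b64encode(plain)).decrypt(ciphertext)
```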
WAL archives, sealed-segment uploads, replica streams, and OTLP exports all flow over TLS to encrypted storage. The backup channel never leaves your region.
One bearer, one tenant, constant-time check.
Tokens are scoped to a single tenant at issue time. Comparison is constant-time. Rate limits are per-key, fairness is enforced inside the commit window, and every write supports idempotency.
Long-lived API keys (oc_live_sk_…) and short-lived JWTs both resolve to the same (TenantId, roles) tuple at the ingress. Token comparison is constant-time. Mismatch returns 403, never a row from the wrong table.
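Constant-time comparison means the check takes the same time whether the first byte or the last byte differs, so timing cannot be used to guess a token byte by byte. A minimal ingress-side sketch; the store lookup that supplies the expected token is hypothetical.

```python
# Sketch: constant-time bearer comparison at the ingress. How the expected
# token is looked up (by token id) is outside this sketch.
import hmac

def authorize(presented_token: str, expected_token: str, tenant_id: str):
    # compare_digest runs in time independent of where the strings differ.
    if not hmac.compare_digest(presented_token.encode(), expected_token.encode()):
        return 403, None          # never a row from the wrong table
    return 200, tenant_id         # downstream queries are scoped to this tenant
```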
A four-dimension token bucket per API key — bytes/s, ops/s, ask/s, concurrent_queries — with per-key configurability. Group commit runs on a 100 µs window with 256-writer fairness, so a loud key cannot starve quiet ones inside the same tenant.
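A four-dimension bucket is four independent token buckets that must all admit the request. A minimal single-process sketch; the refill rates are placeholders, and concurrent_queries is omitted because it behaves like a semaphore (released when the query finishes) rather than a refillable bucket.

```python
# Sketch: per-API-key, multi-dimension token bucket. Rates are placeholders;
# the real limiter's refill policy and persistence are not described here.
import time

class Bucket:
    """One dimension of the per-key limiter: a classic token bucket."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst            # refill per second, capacity
        self.tokens, self.last = burst, time.monotonic()  # starts full

    def refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

limits = {
    "bytes_s": Bucket(rate=8_000_000, burst=16_000_000),
    "ops_s":   Bucket(rate=2_000, burst=4_000),
    "ask_s":   Bucket(rate=5, burst=10),
}

def admit(cost: dict) -> bool:
    # Admit only if every dimension has capacity; otherwise consume nothing
    # and the request is throttled instead.
    for b in limits.values():
        b.refill()
    if any(limits[d].tokens < c for d, c in cost.items()):
        return False
    for d, c in cost.items():
        limits[d].tokens -= c
    return True
```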
Every mutating request accepts an Idempotency-Key header. Replays return the original response, so retries are safe across timeouts, network blips, and proxy hiccups.
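Why that makes retries safe: the client generates one key per logical write and reuses it on every attempt, so a write that already landed is never applied twice. A hedged client-side sketch with the requests library; the endpoint and payload shape are hypothetical.

```python
# Sketch: safe retries with an Idempotency-Key. Endpoint and payload are
# hypothetical; the point is that the same key is reused across attempts.
import time
import uuid
import requests

def put_row_with_retries(url: str, token: str, row: dict, attempts: int = 5):
    idem_key = str(uuid.uuid4())                 # one key per logical write
    headers = {
        "Authorization": f"Bearer {token}",
        "Idempotency-Key": idem_key,
    }
    for attempt in range(attempts):
        try:
            resp = requests.put(url, json=row, headers=headers, timeout=5)
            if resp.status_code < 500:
                return resp                      # replays return the original response
        except requests.RequestException:
            pass                                 # timeout or network blip: retry
        time.sleep(min(2 ** attempt, 10))        # capped exponential backoff
    raise RuntimeError("write did not succeed after retries")
```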
Mint, list, and revoke bearer tokens directly from the console. Rotation honors the previous key for 60 seconds — long enough for a rolling deploy, short enough to close a leaked token fast.
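A minimal sketch of how a 60-second grace window can be checked at the ingress; the record shape and field names are hypothetical, not the control plane's actual bookkeeping.

```python
# Sketch: honor the previous bearer for 60 s after rotation.
# The key-store record shape below is hypothetical.
import hmac
import time

GRACE_SECONDS = 60

def is_valid(presented: str, record: dict) -> bool:
    # record: {"current": str, "previous": str | None, "rotated_at": float}
    if hmac.compare_digest(presented, record["current"]):
        return True
    within_grace = time.time() - record["rotated_at"] < GRACE_SECONDS
    return (within_grace
            and record["previous"] is not None
            and hmac.compare_digest(presented, record["previous"]))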
Every privileged action, append-only.
Console actions and administrative API calls — bearer mint, rotate, revoke, schema register, manifest edit, PITR run, lease takeover — land in an append-only events table. Retention follows your tier; export to your SIEM at any time.
Recorded fields
- Timestamp: RFC 3339, microsecond precision, UTC.
- Actor: bearer token id, console user id, or system principal.
- Resource path: tenant, schema, manifest, key, or job id.
- Action: mint, rotate, revoke, register, edit, restore, takeover, export.
- Source IP: the IP of the API or console request, captured at the ingress.
- Result: 200 / 403 / 409 / 500 with the engine error code, when applicable.
Retention by tier
- 7 days
- 90 days
- 365 days
- Custom (contractual)
See /pricing for the full tier breakdown.
RPO = 0. RTO ~ 25 s. Restore to any second.
Sync replication on paid tiers. Failover promotes a hot follower in around 25 seconds. WAL archives ship continuously to S3, so you can restore to any timestamp inside your retention window.
Sync replication on paid tiers. A write is acked only after it has been durable on a follower. Zero data loss on failover.
Failover promotes a hot follower in around 25 seconds. The S3 lease primitive guarantees at most one writer at a time, so promotion is atomic and split-brain-free.
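One way to get an at-most-one-writer lease out of S3 is a conditional put that only succeeds when the lease object does not already exist. A hedged sketch with boto3; the bucket, key, and the use of If-None-Match are assumptions about the mechanism, not a description of the actual lease primitive.

```python
# Sketch: acquire a single-writer lease with an S3 conditional put.
# Bucket/key names are placeholders; whether the real primitive uses
# If-None-Match or another conditional write is an assumption here.
import json
import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET, LEASE_KEY = "oc-region-leases", "tenant-a/writer.lease"

def try_acquire_lease(node_id: str) -> bool:
    body = json.dumps({"holder": node_id, "acquired_at": time.time()})
    try:
        # IfNoneMatch="*" makes the put fail if the lease object already
        # exists, so only one candidate follower can win the promotion.
        s3.put_object(Bucket=BUCKET, Key=LEASE_KEY, Body=body, IfNoneMatch="*")
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "PreconditionFailed":
            return False          # someone else already holds the lease
        raise
```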
WAL is archived to S3 with sealed-segment ship + continuous tail-shipping. Restore to any sealed-segment boundary, with second-level granularity inside the tail window.
Every 24 hours the latest snapshot mounts on a throwaway EC2 and runs check-integrity end-to-end. A failure pages on-call before it ever reaches a customer restore.
Backup retention
Compliance posture.
Certifications live and underway, plus the contractual instruments customers ask for first. Need an audit timeline, a letter of intent, or a security questionnaire? Talk to us.
Optimistic CAS — predictable, never silent.
Concurrent edits to the same row are protected by single-row compare-and-set on a server-managed _oc_row_version column. A stale write fails fast with a version mismatch instead of silently overwriting the other client's edit. Conflict resolution is explicit and predictable: the winner is the writer that read the latest version.
Single-row CAS
put_row_cas, get_row_versioned, delete_row_cas. Every row carries a server-managed _oc_row_version; stale writes return a version mismatch, never a silent overwrite.
Always-on, minimal overhead, every API surface.
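The usual client-side pattern is a read-modify-write loop that retries on a version mismatch. A sketch using the operations named above; the `client` object, its argument shapes, and the VersionMismatch error type are assumptions, not the SDK's actual surface.

```python
# Sketch: optimistic-CAS update loop. `client` is a hypothetical OriginChain
# client exposing the operations named above; shapes are assumptions.
class VersionMismatch(Exception):
    """Placeholder for the client's real version-mismatch error."""

def increment_quantity(client, table: str, key: str, delta: int, retries: int = 5):
    for _ in range(retries):
        row, version = client.get_row_versioned(table, key)  # read _oc_row_version
        row["quantity"] += delta
        try:
            # The write only lands if the row still has the version we read.
            client.put_row_cas(table, key, row, expected_version=version)
            return row
        except VersionMismatch:
            continue          # someone else won the race: re-read and retry
    raise RuntimeError("row kept changing; giving up after retries")
```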
Email security@originchain.ai.
Send a minimal reproduction, the affected version, and your assessment of severity. For an encrypted channel, request our PGP key and we'll respond signed.
Reporters are credited by name in release notes unless they prefer to remain anonymous.
Scope is the OriginChain managed cloud, the engine binary, and the customer-facing console. The static marketing site is out of scope.
- 24 hours: first human response acknowledging your report.
- 7 days: triage of severity, reproducibility, and a fix plan.
- 30 days: patch released or a public advisory, whichever comes first.
In scope
- Remote code execution in any ingress path (HTTPS, SSE, the bearer-auth layer)
- Auth or access-control bypass in the engine or the cloud control plane
- Tenant-to-tenant crossover on the managed cloud (VPC, IAM, or process boundary)
- Data exfiltration or corruption via a crafted query or row write
- LLM prompt-injection that escapes the plan compiler and reaches storage
- Cryptographic or integrity flaws in the WAL, checkpoint, FTS postings, vector graph, or backup format
- Bypass of per-tenant rate-limit / quota or per-API-key bucket accounting
- Concurrency hazards that violate single-row CAS or schema-cutover atomicity
Out of scope
- Denial-of-service from obviously abusive query volume within the tier's quota
- Issues requiring a compromised host or physical access to disk
- Vulnerabilities in third-party dependencies already tracked by their maintainers
- Social engineering of OriginChain staff or support
- The marketing site (originchain.ai), which is a static deployment
Want the technical depth?
The architecture page covers the substrate, key shapes, replication topology, and recovery in detail. For audit timelines, BAA / DPA questions, or a security questionnaire, write to security@originchain.ai directly.