OriginChain
Industries · trading, risk, settlement

AI database for financial services. One store for ticks, books, and the counterparty graph.

The problem

Trading desks reconcile fills, walk counterparty exposure, and search past patterns against three different stores — a Postgres database, an Elasticsearch cluster, and a vector index — and pay the consistency tax every time.

The OriginChain answer

OriginChain holds tick data, positions, the counterparty graph, and pattern embeddings in a single hash-keyed store. One bearer token. One HTTPS endpoint. SQL with OUTER JOIN reconciles unmatched trades. HNSW finds similar 24h windows at recall@10 = 0.96 with p99 109 ms (or p99 37 ms in fast mode). Graph BFS walks exposure paths over 50k counterparties in ~25 ms, with weighted Dijkstra for exposure-weighted routes. BM25 searches the audit log. All of it single-tenant on a region-isolated instance.

p99 read · < 8 ms
vector recall@10 at 100k · 0.96
graph BFS over 50k nodes · ~25 ms
tenancy · single-tenant, region-isolated
what they use OriginChain for

One bearer token. One endpoint. Every query shape.

Each example below is a real call against the public HTTP API. Copy the curl, set $OC_TOKEN, and you'll see the same shape of response your app gets in production. Latency numbers are measured against a Storm-tier instance in ap-south-1.

Schemas you'd register

Register these once via oc schema put or the /v1/schema endpoint, and every example below resolves against them.

schema · purpose · key fields
market_ticks · Per-symbol tick stream · symbol, ts, price, volume
book_positions · Open positions per trading book · book_id, symbol, qty, avg_price
trades_executed · Executed trades (our side) · trade_id, ts, symbol, qty, counterparty_id
trades_confirmed · Counterparty-confirmed trades · confirm_id, ts, symbol, qty, counterparty_id
counterparties · Counterparty graph (exposure edges) · cp_id, name, rating, exposure_to[]
patterns_embed · Price-pattern embeddings (24h windows) · window_id, symbol, embedding[768]
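The exact request body for /v1/schema isn't documented above, so here is a hypothetical sketch of what a registration payload for trades_executed might look like; the name/fields/type/key keys are assumptions, not confirmed API:

```python
import json

# Hypothetical payload shape for POST /v1/schema. The key names below
# ("name", "fields", "type", "key") are illustrative assumptions only.
schema = {
    "name": "trades_executed",
    "fields": [
        {"name": "trade_id",        "type": "string", "key": True},
        {"name": "ts",              "type": "timestamp"},
        {"name": "symbol",          "type": "string"},
        {"name": "qty",             "type": "int"},
        {"name": "counterparty_id", "type": "string"},
    ],
}

payload = json.dumps(schema)
print(payload)
```

Register once, then every query below resolves field names against it.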

SQL for analytics and reconciliation

Standard SQL with JOIN, GROUP BY, HAVING, and window functions against the same store.

sql POST /v1/sql

Reconcile unmatched trades with one OUTER JOIN

request: SELECT t.trade_id, t.symbol, t.qty FROM trades_executed t LEFT JOIN trades_confirmed c ON t.trade_id = c.confirm_id WHERE c.confirm_id IS NULL AND t.ts > now() - interval '4 hours'
p99 < 60 ms across 4-hour scan window · schemas: trades_executed · trades_confirmed
curl
curl -X POST https://oc-acme.ap-south-1.originchain.ai/v1/sql \
  -H "Authorization: Bearer $OC_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"sql":"SELECT t.trade_id, t.symbol, t.qty FROM trades_executed t LEFT JOIN trades_confirmed c ON t.trade_id = c.confirm_id WHERE c.confirm_id IS NULL AND t.ts > now() - interval ''4 hours''"}'
response · application/json
{
  "rows": [
    { "trade_id": "T-91002", "symbol": "RELIANCE", "qty":  100 },
    { "trade_id": "T-91018", "symbol": "INFY",     "qty":  250 }
  ],
  "meta": { "latency_ms": 47, "rows_scanned": 184320 }
}
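Once that response lands, routing the breaks downstream is a couple of lines. A minimal stdlib-only Python sketch that parses the example body above into a break queue:

```python
import json

# The example /v1/sql response body from above, verbatim.
resp = json.loads("""{
  "rows": [
    { "trade_id": "T-91002", "symbol": "RELIANCE", "qty": 100 },
    { "trade_id": "T-91018", "symbol": "INFY", "qty": 250 }
  ],
  "meta": { "latency_ms": 47, "rows_scanned": 184320 }
}""")

# Every row is a trade we executed with no matching confirm:
# route each one straight to the break queue.
breaks = [(r["trade_id"], r["symbol"], r["qty"]) for r in resp["rows"]]
print(breaks)
```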

Vector search for similarity

HNSW with a tunable speed/recall trade-off. Default high_recall mode: recall@10 = 0.96 at 100k vectors, p99 109 ms. Fast mode: p99 37 ms at recall 0.69. Metadata filters are applied during graph traversal.

vector · hnsw POST /v1/vector/topk

Find the 5 most-similar 24h price patterns

request: topk against patterns_embed.embedding for today's NIFTY window
recall@10 = 0.96 · p99 109 ms at 100k windows (high_recall) · schemas: patterns_embed
curl
curl -X POST https://oc-acme.ap-south-1.originchain.ai/v1/vector/topk \
  -H "Authorization: Bearer $OC_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "schema": "patterns_embed",
    "field":  "embedding",
    "query":  "@today_nifty_24h",
    "k":      5,
    "metric": "cosine",
    "filter": { "symbol": "NIFTY" }
  }'
response · application/json
{
  "rows": [
    { "window_id": "NIFTY-2024-09-14T0900", "score": 0.973, "outcome_5d": "+1.8%" },
    { "window_id": "NIFTY-2023-11-22T0900", "score": 0.961, "outcome_5d": "+0.4%" }
  ],
  "meta": { "latency_ms": 109, "index_size": 100000, "mode": "high_recall" }
}
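The "metric": "cosine" in the request is ordinary cosine similarity. A toy sketch with 4-dim vectors standing in for the 768-dim pattern embeddings, to show what the score column measures:

```python
import math

def cosine_score(a, b):
    # Cosine similarity: 1.0 means identical direction. This is the
    # "metric": "cosine" the /v1/vector/topk call above requests.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 4-dim windows; real patterns_embed vectors are 768-dim.
today   = [0.90, 0.10, 0.40, 0.20]
sept_14 = [0.88, 0.12, 0.41, 0.19]
nov_22  = [0.10, 0.90, 0.20, 0.40]

print(round(cosine_score(today, sept_14), 3))  # near-duplicate pattern: close to 1
print(round(cosine_score(today, nov_22), 3))   # dissimilar pattern: much lower
```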

Graph traversal

BFS, DFS, and weighted Dijkstra over reference edges (such as exposure_to[]) already present in the data.

graph traversal POST /v1/graph/{op}

Walk counterparty exposure within 3 hops

request: BFS from defaulted entity X across the exposure graph
~25 ms BFS · 50k-node graph · schemas: counterparties
curl
curl -X POST https://oc-acme.ap-south-1.originchain.ai/v1/graph/bfs \
  -H "Authorization: Bearer $OC_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "schema":    "counterparties",
    "source":    "CP-X",
    "edge":      "exposure_to",
    "max_depth": 3
  }'
response · application/json
{
  "rows": [
    { "cp_id": "CP-2841", "name": "Acme Hedge",       "hops": 1, "rating": "A-"  },
    { "cp_id": "CP-3019", "name": "Borealis Capital", "hops": 2, "rating": "BBB" },
    { "cp_id": "CP-3104", "name": "Crescent Funds",   "hops": 3, "rating": "BB+" }
  ],
  "meta": { "latency_ms": 25, "nodes_visited": 1842 }
}
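The traversal itself is plain breadth-first search over exposure_to edges. A local sketch against a toy four-node graph (the IDs are illustrative) that returns the same node-plus-hops shape as the response above:

```python
from collections import deque

# Toy counterparty graph: cp_id -> exposure_to[] edges, mirroring the
# counterparties schema. IDs here are made up for illustration.
exposure_to = {
    "CP-X":    ["CP-2841"],
    "CP-2841": ["CP-3019"],
    "CP-3019": ["CP-3104"],
    "CP-3104": [],
}

def bfs(source, max_depth):
    # Same answer shape as /v1/graph/bfs: each reachable node with its hop count.
    seen, out = {source}, []
    queue = deque([(source, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't expand past the requested depth
        for nxt in exposure_to.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                out.append((nxt, depth + 1))
                queue.append((nxt, depth + 1))
    return out

print(bfs("CP-X", 3))
```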

Natural-language questions

Plain English in. JSON out. Compiled plan cached after first touch.

natural language POST /v1/ask

Running delta of the options book

request: running delta of my options book, grouped by expiry
compiled plan cached after first touch · schemas: book_positions
curl
curl -X POST https://oc-acme.ap-south-1.originchain.ai/v1/ask \
  -H "Authorization: Bearer $OC_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"q":"running delta of my options book, grouped by expiry"}'
response · application/json
{
  "rows": [
    { "expiry": "2026-04-24", "book_delta":  2415.3 },
    { "expiry": "2026-05-29", "book_delta": -1180.7 },
    { "expiry": "2026-06-26", "book_delta":   304.5 }
  ],
  "meta": { "latency_ms": 51, "plan": "join(book_positions, options_legs) · group_by(expiry) · sum(delta * qty)" }
}
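The plan string spells out the aggregation the compiled query runs: group by expiry, sum delta times qty. A toy reproduction in Python, with made-up leg data (the real query would read these fields from the store):

```python
from collections import defaultdict

# Illustrative option legs; delta and qty values are invented, grouped
# the way the plan string describes: group_by(expiry), sum(delta * qty).
legs = [
    {"expiry": "2026-04-24", "delta":  0.55, "qty": 4000},
    {"expiry": "2026-04-24", "delta": -0.31, "qty":  500},
    {"expiry": "2026-05-29", "delta": -0.42, "qty": 2000},
]

book_delta = defaultdict(float)
for leg in legs:
    book_delta[leg["expiry"]] += leg["delta"] * leg["qty"]

print(dict(book_delta))
```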
why one substrate

Cross-shape consistency, by construction.

When SQL, vector, full-text, and graph all read from the same hash-keyed k/v store, a row written at 09:14:02.118 is visible to every shape on the next read. No ETL window, no replication lag, no consistency tax across vendors.

single-tenant

Region-isolated dedicated instance

Your data sits in your region, on a dedicated instance with its own keys and its own resource budget. No noisy neighbours. No shared control plane.

durable

PITR + cross-AZ replication

Every write goes to a durable WAL, replicated to a hot standby in a second AZ. Restore to any second in your retention window.

observable

OTLP metrics + audit log

Per-key latency histograms, hit rate on the plan cache, and an append-only audit log of every privileged action — exported via OTLP to your observability stack.

ready when you are

Ninety seconds to an endpoint. No stack to wire up.

Pick a region, pick a tier, and we provision a single-tenant instance on AWS. The first query you send is the first query we'll show you how to write — in English.

talk to a human