<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>OriginChain Engineering Blog</title><description>Engineering notes, design decisions, and customer stories from the team behind OriginChain — the AI-native database for SQL, vector, full-text, and graph.</description><link>https://originchain.ai</link><language>en-us</language><atom:link href="https://originchain.ai/api/blogs/rss.xml" rel="self" type="application/rss+xml"/><item><title>How a 24-hour fuzzer found two production-grade OOM bugs</title><link>https://originchain.ai/blogs/fuzzing-found-two-bugs</link><guid isPermaLink="false">originchain-blog:fuzzing-found-two-bugs</guid><description>Yesterday our nightly fuzz pipeline caught two real OOM-by-allocator bugs in OriginChain&apos;s snapshot and WAL-batch decoders. Both came from trusting a u32 count header before checking it against the input length. Crash inputs were under 200 bytes; both would crash a follower instantly. Here&apos;s the story.</description><pubDate>Wed, 06 May 2026 10:26:12 GMT</pubDate><category>fuzzing</category><category>engineering</category><category>wal</category><category>case-study</category><category>fault-injection</category><author>OriginChain Team (OriginChain)</author></item><item><title>OriginChain vs Redis: when each fits</title><link>https://originchain.ai/blogs/vs-redis</link><guid isPermaLink="false">originchain-blog:vs-redis</guid><description>Redis is the gold standard for in-memory KV with sub-ms reads. OriginChain matches Redis at small scale and exceeds it on durability semantics, atomic multi-shape writes, and built-in vector search. 
Redis is right when your data fits in RAM; OriginChain when you want a primary store with the same speed and AI-native shapes.</description><pubDate>Wed, 06 May 2026 10:26:12 GMT</pubDate><category>redis</category><category>comparison</category><category>key-value</category><category>performance</category><category>architecture</category><author>OriginChain Team (OriginChain)</author></item><item><title>OriginChain vs Supabase: bundled stack vs focused database</title><link>https://originchain.ai/blogs/vs-supabase</link><guid isPermaLink="false">originchain-blog:vs-supabase</guid><description>Supabase bundles Postgres + Auth + Storage + Realtime + Edge Functions for prototype velocity. OriginChain is a focused database substrate purpose-built for AI workloads. If you want batteries-included, pick Supabase; if you want the database to be excellent at AI shapes and you&apos;ll bring your own auth/storage, pick OriginChain.</description><pubDate>Wed, 06 May 2026 10:26:12 GMT</pubDate><category>supabase</category><category>comparison</category><category>postgres</category><category>architecture</category><category>ai-native</category><author>OriginChain Team (OriginChain)</author></item><item><title>Backpressure done right: 429 + Retry-After in OriginChain</title><link>https://originchain.ai/blogs/backpressure-429</link><guid isPermaLink="false">originchain-blog:backpressure-429</guid><description>When a database can&apos;t keep up with writes, it has two choices: refuse cleanly (graceful) or accept and silently fail later (catastrophic). OriginChain&apos;s per-API-key backpressure is HTTP 429 with a precise Retry-After. Polite clients recover within seconds; impolite ones never crash the substrate. 
Here&apos;s how it works.</description><pubDate>Wed, 06 May 2026 10:26:12 GMT</pubDate><category>backpressure</category><category>rate-limiting</category><category>design</category><category>reliability</category><category>api</category><author>OriginChain Team (OriginChain)</author></item><item><title>The economics of agent memory at 100M tool calls/month</title><link>https://originchain.ai/blogs/agent-memory-economics</link><guid isPermaLink="false">originchain-blog:agent-memory-economics</guid><description>An autonomous agent at 100M tool calls/month produces ~50 GB of trace data and ~75 GB of embeddings. Storage is the small line; what dominates is retention policy and re-embedding. The LLM call itself is 99.8% of the bill. Here&apos;s the actual math, with the four levers that move it.</description><pubDate>Wed, 06 May 2026 10:26:12 GMT</pubDate><category>cost</category><category>agent-memory</category><category>scaling</category><category>ttl</category><category>tutorial</category><author>OriginChain Team (OriginChain)</author></item><item><title>OriginChain vs Postgres + pgvector: when each fits</title><link>https://originchain.ai/blogs/vs-postgres-pgvector</link><guid isPermaLink="false">originchain-blog:vs-postgres-pgvector</guid><description>Postgres + pgvector is great for teams already running Postgres and adding vectors as a feature. OriginChain is purpose-built for AI-agent workloads with thousands of writes per session, atomic multi-shape transactions, and sub-millisecond reads. 
Honest comparison.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>postgres</category><category>pgvector</category><category>comparison</category><category>architecture</category><category>vector-database</category><author>OriginChain Team (OriginChain)</author></item><item><title>OriginChain vs DynamoDB: when each fits</title><link>https://originchain.ai/blogs/vs-dynamodb</link><guid isPermaLink="false">originchain-blog:vs-dynamodb</guid><description>DynamoDB is brilliant operational KV — flat fees, infinite scale, zero servers. OriginChain is right when vectors are first-class shapes, you need atomic multi-shape writes, and you&apos;re comfortable with a less-mature operational footprint in exchange for AI-shaped throughput.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>dynamodb</category><category>comparison</category><category>key-value</category><category>architecture</category><category>aws</category><author>OriginChain Team (OriginChain)</author></item><item><title>OriginChain vs Pinecone: vectors with payloads vs vectors with sidecars</title><link>https://originchain.ai/blogs/vs-pinecone</link><guid isPermaLink="false">originchain-blog:vs-pinecone</guid><description>Pinecone stores vectors and assumes your entity data lives elsewhere — you dual-write. OriginChain stores vectors as shapes co-located with parent entities, atomic in one WAL frame. 
If you&apos;re paying the dual-write architecture&apos;s tax today, here&apos;s the alternative.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>pinecone</category><category>vector-database</category><category>comparison</category><category>architecture</category><category>ann</category><author>OriginChain Team (OriginChain)</author></item><item><title>Why we don&apos;t need a separate vector database</title><link>https://originchain.ai/blogs/no-separate-vector-database</link><guid isPermaLink="false">originchain-blog:no-separate-vector-database</guid><description>A separate vector database is the wrong shape for AI-native applications. Vectors describe entities; entities have lifecycles. Splitting them across two systems creates dual-write bugs and operational overhead. The correct architecture is one substrate, multiple shapes.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>architecture</category><category>vector-database</category><category>design</category><category>ai-native</category><author>OriginChain Team (OriginChain)</author></item><item><title>Per-key TTL: building agent memory that forgets</title><link>https://originchain.ai/blogs/per-key-ttl-agent-memory</link><guid isPermaLink="false">originchain-blog:per-key-ttl-agent-memory</guid><description>AI agent memory is full of records that should expire on their own — tool-call traces, session caches, embedding refreshes. Per-key TTL on a fast K/V substrate gets you that pattern without sweeper jobs. 
Here&apos;s how OriginChain handles it and what it lets you build.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>ttl</category><category>agent-memory</category><category>ephemeral</category><category>design</category><category>tutorial</category><author>OriginChain Team (OriginChain)</author></item><item><title>Atomic compare-and-swap on a hash-keyed substrate</title><link>https://originchain.ai/blogs/atomic-cas</link><guid isPermaLink="false">originchain-blog:atomic-cas</guid><description>OriginChain&apos;s commit window resolves concurrent writes with last-writer-wins by default. For counters, locks, and optimistic concurrency, every write accepts an if_match predicate that bypasses LWW and atomically fails if state has changed. Here&apos;s how it works.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>cas</category><category>concurrency</category><category>wal</category><category>design</category><category>consistency</category><author>OriginChain Team (OriginChain)</author></item><item><title>RAG at production scale: where the latency goes</title><link>https://originchain.ai/blogs/rag-latency-budget</link><guid isPermaLink="false">originchain-blog:rag-latency-budget</guid><description>Production RAG has a 1-3 second latency budget for retrieval-and-generation. Most is the LLM. Database round-trips add up surprisingly fast — and are the easiest place to lose 200ms you didn&apos;t have. 
Walking through where the time goes with realistic numbers.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>rag</category><category>performance</category><category>latency</category><category>ann</category><category>tutorial</category><author>OriginChain Team (OriginChain)</author></item><item><title>Why we picked Rust for the database substrate</title><link>https://originchain.ai/blogs/why-rust</link><guid isPermaLink="false">originchain-blog:why-rust</guid><description>We picked Rust because the bugs we&apos;d fear in C++ and the throughput we&apos;d lose in Go and Java were both unacceptable for an AI-native database. Honest tradeoff analysis — what Rust costs us, what it earns us, and what we&apos;d have done with the alternatives.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>rust</category><category>engineering</category><category>tech-stack</category><category>design</category><category>performance</category><author>OriginChain Team (OriginChain)</author></item><item><title>The cost of an AI feature at 100K, 1M, and 10M users</title><link>https://originchain.ai/blogs/ai-feature-cost-model</link><guid isPermaLink="false">originchain-blog:ai-feature-cost-model</guid><description>AI features look free in the prototype phase, then the bill hits. LLM tokens dominate (95%), embeddings are small (3%), database is rounding error (2%). 
Here&apos;s a realistic per-user cost model with line items at each scale, and what to optimize first.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>pricing</category><category>cost</category><category>tutorial</category><category>scaling</category><category>llm</category><author>OriginChain Team (OriginChain)</author></item><item><title>Idempotent tool calls: building reliable agent loops</title><link>https://originchain.ai/blogs/idempotent-tool-calls</link><guid isPermaLink="false">originchain-blog:idempotent-tool-calls</guid><description>Production AI agents make tool calls that have side effects: emails, charges, webhooks. Network retries and LLM hallucinations mean you&apos;ll get the same call twice. Here&apos;s the idempotency-key playbook on OriginChain — atomic CAS + per-key TTL + the wait-for-result path.</description><pubDate>Tue, 05 May 2026 06:20:43 GMT</pubDate><category>agents</category><category>tool-calls</category><category>idempotency</category><category>reliability</category><category>tutorial</category><author>OriginChain Team (OriginChain)</author></item><item><title>HA snapshot bootstrap: failover with zero committed-write loss</title><link>https://originchain.ai/blogs/ha-snapshot-bootstrap</link><guid isPermaLink="false">originchain-blog:ha-snapshot-bootstrap</guid><description>Most database failovers silently drop writes that landed during the cutover window. 
Here&apos;s how OriginChain&apos;s snapshot-then-tail follower bootstrap design closes that window — verified by chaos drill on every release.</description><pubDate>Mon, 04 May 2026 20:59:15 GMT</pubDate><category>ha</category><category>replication</category><category>architecture</category><category>wal</category><category>consensus</category><author>OriginChain Team (OriginChain)</author></item><item><title>Key shapes, not schemas: the OriginChain data model</title><link>https://originchain.ai/blogs/key-shapes-not-schemas</link><guid isPermaLink="false">originchain-blog:key-shapes-not-schemas</guid><description>OriginChain stores everything as hash-keyed key-value pairs underneath, but the developer-facing API exposes typed key shapes — declarative templates for how a logical entity maps to keys. You write shapes, not schemas.</description><pubDate>Mon, 04 May 2026 20:59:15 GMT</pubDate><category>data-model</category><category>schema</category><category>key-value</category><category>architecture</category><category>design</category><author>OriginChain Team (OriginChain)</author></item><item><title>The 100 microsecond commit window: throughput vs isolation</title><link>https://originchain.ai/blogs/commit-window-design</link><guid isPermaLink="false">originchain-blog:commit-window-design</guid><description>OriginChain batches writes from up to 256 concurrent writers into a single fsync every 100 microseconds. 
Here&apos;s why we picked those numbers, what last-writer-wins resolution costs, and what it buys you in throughput.</description><pubDate>Mon, 04 May 2026 20:59:15 GMT</pubDate><category>wal</category><category>throughput</category><category>design</category><category>fsync</category><category>performance</category><author>OriginChain Team (OriginChain)</author></item><item><title>OriginChain quickstart: zero to your first AI feature in 5 minutes</title><link>https://originchain.ai/blogs/quickstart</link><guid isPermaLink="false">originchain-blog:quickstart</guid><description>Provision a managed instance, write a JSON record + vector embedding atomically, run a similarity search, hydrate the matched record. Five minutes, no schema migration, no glue code.</description><pubDate>Mon, 04 May 2026 20:59:15 GMT</pubDate><category>tutorial</category><category>quickstart</category><category>sdk</category><category>getting-started</category><category>vector-search</category><author>OriginChain Team (OriginChain)</author></item><item><title>Why our roadmap to 1.0 is depth-first (and what&apos;s not on it)</title><link>https://originchain.ai/blogs/depth-first-roadmap-to-1-0</link><guid isPermaLink="false">originchain-blog:depth-first-roadmap-to-1-0</guid><description>OriginChain&apos;s roadmap is HA → fuzzing → optimiser → EXPLAIN → multi-writer → online-schema. Six months. No graph, no FTS, no time-series specialty types until that&apos;s done. 
Here&apos;s why depth-first is the right bet for an AI-native database.</description><pubDate>Mon, 04 May 2026 20:59:15 GMT</pubDate><category>roadmap</category><category>product</category><category>design</category><category>architecture</category><author>OriginChain Team (OriginChain)</author></item><item><title>Eight competitor comparison pages, three parallel agents, one afternoon</title><link>https://originchain.ai/blogs/eight-vs-pages-three-agents-one-afternoon</link><guid isPermaLink="false">originchain-blog:eight-vs-pages-three-agents-one-afternoon</guid><description>We shipped /vs/postgres, /vs/pinecone, /vs/weaviate, /vs/qdrant, /vs/milvus, /vs/supabase, /vs/neon, and /vs/mongodb in under an afternoon by running three Claude Code agents in parallel. The pattern, the prompt, and the part agents don&apos;t replace.</description><pubDate>Mon, 04 May 2026 16:00:00 GMT</pubDate><category>engineering</category><category>ai-tools</category><category>agents</category><category>marketing-site</category><author>OriginChain (OriginChain)</author></item><item><title>Why we shipped a vanilla OpenAPI spec when nobody asked for one</title><link>https://originchain.ai/blogs/openapi-spec-and-the-ai-coding-loop</link><guid isPermaLink="false">originchain-blog:openapi-spec-and-the-ai-coding-loop</guid><description>In 2026, engineers don&apos;t write SDKs — they ask their AI agent to. The agent fetches your OpenAPI spec, runs openapi-generator, and ships a working client in 30 seconds. 
If you don&apos;t have a spec, you don&apos;t exist in that loop.</description><pubDate>Mon, 04 May 2026 13:00:00 GMT</pubDate><category>openapi</category><category>ai-tools</category><category>sdk</category><category>developer-experience</category><author>OriginChain (OriginChain)</author></item><item><title>One MCP server, every AI IDE: OriginChain inside Claude Desktop and Cursor</title><link>https://originchain.ai/blogs/mcp-server-launch</link><guid isPermaLink="false">originchain-blog:mcp-server-launch</guid><description>Database vendors who don&apos;t ship an MCP server in 2026 are invisible to the agents writing the code. Here&apos;s what @originchain/mcp-server exposes — five tools, stdio transport, env-var config — and what MCP doesn&apos;t yet solve.</description><pubDate>Mon, 04 May 2026 10:00:00 GMT</pubDate><category>mcp</category><category>ai-tools</category><category>claude</category><category>cursor</category><author>OriginChain (OriginChain)</author></item><item><title>Schedule a panic at every fsync boundary: WAL crash testing in OriginChain</title><link>https://originchain.ai/blogs/schedule-a-panic-at-every-fsync-boundary</link><guid isPermaLink="false">originchain-blog:schedule-a-panic-at-every-fsync-boundary</guid><description>Most databases say their WAL is crash-correct. We schedule panics at four WAL boundaries and prove every recovered state equals a prefix of the op stream. 
Here&apos;s the harness, the recovery invariant, and why it&apos;s the cheapest credibility purchase a database team can make.</description><pubDate>Sun, 03 May 2026 18:00:00 GMT</pubDate><category>wal</category><category>crash-testing</category><category>fault-injection</category><category>engineering</category><author>OriginChain (OriginChain)</author></item><item><title>One WAL frame, every shape: atomic writes across rows, vectors, full-text, and graph</title><link>https://originchain.ai/blogs/atomic-multi-shape-writes</link><guid isPermaLink="false">originchain-blog:atomic-multi-shape-writes</guid><description>A single INSERT in OriginChain commits the row, every secondary index, the FTS postings, the vector embedding, and the relation edges on one WAL frame. Here&apos;s what that looks like under the hood, and why it removes a class of bugs other AI stacks treat as eventual.</description><pubDate>Sun, 03 May 2026 10:00:00 GMT</pubDate><category>atomic-writes</category><category>wal</category><category>substrate</category><category>ai-native</category><author>OriginChain (OriginChain)</author></item><item><title>Building an AI-native database: SQL, vectors, full-text, and graph on one substrate</title><link>https://originchain.ai/blogs/building-an-ai-native-database</link><guid isPermaLink="false">originchain-blog:building-an-ai-native-database</guid><description>OriginChain runs SQL with JOINs, HNSW vector search, BM25 full-text, and graph traversal against the same store — one bearer, one endpoint, one round-trip. Here&apos;s how the architecture makes that possible.</description><pubDate>Tue, 28 Apr 2026 10:00:00 GMT</pubDate><category>ai-native</category><category>vector-search</category><category>database</category><author>OriginChain (OriginChain)</author></item></channel></rss>