While vector databases still have many legitimate use cases, organizations including OpenAI are leaning on PostgreSQL to get things done.
In a blog post on Thursday, OpenAI detailed how it's using the open-source PostgreSQL database.
OpenAI runs ChatGPT and its API platform for 800 million users on a single-primary PostgreSQL instance: not a distributed database, not a sharded cluster. One Azure Database for PostgreSQL Flexible Server handles all writes. Nearly 50 read replicas spread across multiple regions handle reads. The system processes millions of queries per second while maintaining low double-digit millisecond p99 latency and five-nines availability.
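The blog post describes the pattern rather than publishing code, but the read/write split is straightforward to picture. The sketch below illustrates it with the psycopg driver and made-up connection strings (OpenAI has not said which client library it uses): every write goes to the single primary, while reads fan out across replica endpoints.

```python
# Illustrative primary/replica routing; DSNs are hypothetical and the driver
# choice (psycopg) is an assumption, not something OpenAI has confirmed.
import random
import psycopg

PRIMARY_DSN = "host=pg-primary.example.internal dbname=app"      # the one writer
REPLICA_DSNS = [                                                  # read replicas across regions
    "host=pg-replica-eastus.example.internal dbname=app",
    "host=pg-replica-westeu.example.internal dbname=app",
]

def run_write(sql: str, params: tuple = ()) -> None:
    # All writes funnel to the single primary instance.
    with psycopg.connect(PRIMARY_DSN) as conn:
        conn.execute(sql, params)

def run_read(sql: str, params: tuple = ()) -> list:
    # Reads can go to any replica; callers must tolerate slight replication lag.
    with psycopg.connect(random.choice(REPLICA_DSNS)) as conn:
        return conn.execute(sql, params).fetchall()
```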
The setup challenges conventional scaling wisdom and gives enterprise architects insight into what actually works at massive scale.
The lesson here isn't to copy OpenAI's stack. It's that architectural decisions should be driven by workload patterns and operational constraints, not by scale panic or trendy infrastructure choices. OpenAI's PostgreSQL setup shows how far proven systems can stretch when teams optimize deliberately instead of re-architecting prematurely.
"For years, PostgreSQL has been one of the important, under-the-hood information methods powering core merchandise like ChatGPT and OpenAI’s API," OpenAI engineer Bohan Zhang wrote in a technical disclosure. "Over the previous 12 months, our PostgreSQL load has grown by greater than 10x, and it continues to rise shortly."
The company achieved this scale through targeted optimizations, including connection pooling that cut connection time from 50 milliseconds to 5 milliseconds and cache locking to prevent 'thundering herd' problems, where cache misses trigger database overload.
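OpenAI has not published its pooling setup (a proxy such as PgBouncer is a common choice), but the benefit is easy to show with a client-side pool. Here is a minimal sketch using psycopg_pool, with the connection string and pool sizes invented for the example: reusing warm connections removes the per-request setup cost that pooling is meant to eliminate.

```python
# Minimal connection-pooling sketch with psycopg_pool; DSN and sizes are illustrative.
# Borrowing an already-open connection skips TCP/TLS/auth setup on every request,
# the kind of overhead OpenAI says it cut from roughly 50ms to 5ms.
from psycopg_pool import ConnectionPool

pool = ConnectionPool(
    conninfo="host=pg-primary.example.internal dbname=app",
    min_size=4,    # connections kept warm at all times
    max_size=32,   # hard cap protects the database from connection storms
)

def fetch_user(user_id: int):
    # Borrow a pooled connection instead of dialing a new one.
    with pool.connection() as conn:
        return conn.execute(
            "SELECT id, email FROM users WHERE id = %s", (user_id,)
        ).fetchone()
```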
Why PostgreSQL matters for enterprises
PostgreSQL handles operational data for ChatGPT and OpenAI's API platform. The workload is heavily read-oriented, which makes PostgreSQL a good fit. However, PostgreSQL's multiversion concurrency control (MVCC) creates challenges under heavy write loads.
When updating data, PostgreSQL copies entire rows to create new versions, causing write amplification and forcing queries to scan through multiple row versions to find current data.
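This cost is visible in PostgreSQL's own statistics. As a quick illustration (table names will differ in any real schema), the standard pg_stat_user_tables view shows how many dead row versions are waiting for VACUUM relative to live rows:

```python
# Sketch: inspect MVCC bloat signals via the standard pg_stat_user_tables view.
# A high n_dead_tup count relative to n_live_tup means many obsolete row versions
# are piling up, which is the write-amplification effect described above.
import psycopg

DSN = "host=pg-primary.example.internal dbname=app"  # illustrative connection string

QUERY = """
    SELECT relname, n_live_tup, n_dead_tup, n_tup_upd, n_tup_hot_upd
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10
"""

with psycopg.connect(DSN) as conn:
    for row in conn.execute(QUERY):
        print(row)
```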
Rather than fighting this limitation, OpenAI built its strategy around it. At OpenAI's scale, these tradeoffs aren't theoretical; they determine which workloads stay on PostgreSQL and which ones must move elsewhere.
How OpenAI is optimizing PostgreSQL
At massive scale, conventional database wisdom points to one of two paths: shard PostgreSQL across multiple primary instances so writes can be distributed, or migrate to a distributed SQL database like CockroachDB or YugabyteDB designed to handle large scale from the start. Most organizations would have taken one of these paths years ago, well before reaching 800 million users.
Sharding or moving to a distributed SQL database eliminates the single-writer bottleneck. A distributed SQL database handles that coordination automatically, but both approaches introduce significant complexity: application code must route queries to the correct shard, distributed transactions become harder to manage and operational overhead increases considerably.
Instead of sharding PostgreSQL, OpenAI established a hybrid strategy: no new tables in PostgreSQL. New workloads default to sharded systems like Azure Cosmos DB. Existing write-heavy workloads that can be horizontally partitioned get migrated out. Everything else stays in PostgreSQL with aggressive optimization.
This approach gives enterprises a practical alternative to wholesale re-architecture. Rather than spending years rewriting hundreds of endpoints, teams can identify specific bottlenecks and move only those workloads to purpose-built systems.
Why this matters
OpenAI's experience scaling PostgreSQL reveals several practices that enterprises can adopt regardless of their scale.
Build operational defenses at multiple layers. OpenAI's approach combines cache locking to prevent "thundering herd" problems, connection pooling (which dropped their connection time from 50ms to 5ms), and rate limiting at the application, proxy and query levels. Workload isolation routes low-priority and high-priority traffic to separate instances, ensuring a poorly optimized new feature can't degrade core services.
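OpenAI has not shared its cache code, but the cache-locking idea itself is simple: on a miss, exactly one caller recomputes the value while everyone else waits for it. A minimal in-process sketch of that single-flight pattern, not OpenAI's implementation:

```python
# Illustrative single-flight cache lock: one loader per missing key, so a burst
# of identical cache misses produces a single database query instead of a herd.
import threading

_cache: dict = {}
_locks: dict = {}
_guard = threading.Lock()

def _lock_for(key) -> threading.Lock:
    # Hand out exactly one lock object per cache key.
    with _guard:
        return _locks.setdefault(key, threading.Lock())

def get_or_load(key, load_from_db):
    value = _cache.get(key)
    if value is not None:
        return value
    with _lock_for(key):
        value = _cache.get(key)        # re-check: another thread may have filled it
        if value is None:
            value = load_from_db(key)  # single database hit for the whole herd
            _cache[key] = value
        return value
```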
Review and monitor ORM-generated SQL in production. Object-Relational Mapping (ORM) frameworks like Django, SQLAlchemy, and Hibernate automatically generate database queries from application code, which is convenient for developers. However, OpenAI found one ORM-generated query joining 12 tables that caused several high-severity incidents when traffic spiked. The convenience of letting frameworks generate SQL creates hidden scaling risks that only surface under production load. Make reviewing these queries a standard practice.
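The article names Django, SQLAlchemy and Hibernate but does not describe OpenAI's review tooling. As one hedged example of what such a review hook could look like with SQLAlchemy, an engine-level listener can flag generated statements that join an unusual number of tables; the threshold and logging destination here are arbitrary.

```python
# Sketch: flag ORM-generated statements with many JOINs before they bite in production.
# The SQLAlchemy engine event hook is real; the join-count heuristic is just an example.
import logging
from sqlalchemy import create_engine, event

log = logging.getLogger("sql-review")
engine = create_engine("postgresql+psycopg://app@pg-primary.example.internal/app")

@event.listens_for(engine, "before_cursor_execute")
def flag_wide_joins(conn, cursor, statement, parameters, context, executemany):
    join_count = statement.upper().count(" JOIN ")
    if join_count >= 6:  # arbitrary review threshold
        log.warning("ORM query with %d joins: %s", join_count, statement[:500])
```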
Enforce strict operational discipline. OpenAI allows only lightweight schema changes; anything that triggers a full table rewrite is prohibited. Schema changes have a five-second timeout. Long-running queries get automatically terminated to prevent them from blocking database maintenance operations. When backfilling data, the team enforces rate limits so aggressive that operations can take over a week.
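OpenAI has not published its migration tooling, but the guardrail it describes maps directly onto PostgreSQL's timeout settings. Here is a sketch of a schema change applied under a strict ceiling, with a made-up table and column, so the DDL gives up quickly rather than sitting on a lock:

```python
# Sketch: run a schema change under hard timeouts so it cannot block live traffic.
# The table and column are invented; the 5-second ceiling mirrors the policy above.
import psycopg

DSN = "host=pg-primary.example.internal dbname=app"

with psycopg.connect(DSN) as conn:
    with conn.transaction():
        # Fail fast instead of queueing behind (or in front of) live queries.
        conn.execute("SET LOCAL lock_timeout = '5s'")
        conn.execute("SET LOCAL statement_timeout = '5s'")
        # Adding a column with a constant default is metadata-only in PostgreSQL 11+,
        # so it avoids the full table rewrite the policy prohibits.
        conn.execute(
            "ALTER TABLE conversations ADD COLUMN archived boolean DEFAULT false"
        )
```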
Read-heavy workloads with bursty writes can run on single-primary PostgreSQL longer than commonly assumed. The decision to shard should depend on workload patterns rather than user counts.
This approach is particularly relevant for AI applications, which often have heavily read-oriented workloads with unpredictable traffic spikes. Those characteristics align with the pattern where single-primary PostgreSQL scales effectively.
The lesson is simple: identify actual bottlenecks, optimize proven infrastructure where possible, and migrate selectively when necessary. Wholesale re-architecture isn't always the answer to scaling challenges.

