Postgres is enough.
We've shipped 18 production apps. Postgres has been the right choice 16 times. Here's when it isn't.
We've shipped eighteen production applications at Microsive.
Sixteen of them run on Postgres. One runs on a combination of Postgres and DynamoDB. One runs on Postgres and Mongo. The Mongo one was a mistake we made in year one that we still think about.
This post is about why Postgres keeps being the right answer and what the two exceptions actually taught us.
The arguments against Postgres for new projects go roughly like this: it won't scale past a single writer, you'll need to shard eventually, and a flexible schema will save you time.
The underlying assumption in all of these is that your application is going to hit scale problems that require horizontal write distribution. Most applications never get there. And by the time they do, you have the engineering resources and the usage data to make a good decision. Making that decision in week one, for a product that hasn't launched, is optimising for a problem you don't have.
When people say "just use Postgres," what they're really saying is: use something that gives you transactions, foreign key constraints, a rich query language, a mature ecosystem, and decades of operational knowledge from people who've run it at scale.
Postgres gives you all of those things, plus JSONB for semi-structured data, full-text search, pgvector for embeddings, and logical replication for read replicas when you get there.
On the Kestrel AI project, we used pgvector to store and query document embeddings. Average query time for a semantic search across ten thousand document chunks was under 200ms on a Railway-hosted Postgres instance. We didn't need Pinecone until the document volume hit a threshold that took eight months to reach. By then we had the data to justify the infrastructure cost.
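The shape of that setup is simple. A sketch, with illustrative table and column names rather than the actual Kestrel AI schema (the embedding dimension depends on your model):

```sql
-- Requires the pgvector extension.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE document_chunks (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(1536)  -- dimension must match your embedding model
);

-- An HNSW index keeps nearest-neighbour search fast as volume grows.
CREATE INDEX document_chunks_embedding_idx
    ON document_chunks USING hnsw (embedding vector_cosine_ops);

-- Semantic search: rank chunks by cosine distance to a query
-- embedding passed in as a parameter ($1).
SELECT id, content
FROM document_chunks
ORDER BY embedding <=> $1::vector
LIMIT 10;
```

At ten-thousand-chunk scale you don't even need the index to stay under a reasonable latency budget; it's there for when the volume grows.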
Case 1 — DynamoDB: This was on a project with genuine multi-region write requirements. A client with users in India, Singapore, and Germany who needed writes to land locally. DynamoDB Global Tables solved this cleanly. We still used Postgres for the parts of the system that didn't need global distribution — billing, user accounts, audit logs. DynamoDB handled the session state and event stream.
The lesson: DynamoDB is the right answer when you have genuine global write distribution requirements and you know it at the start. If you're adding it because it sounds scalable, you're borrowing complexity you don't need.
Case 2 — MongoDB: This was the mistake. Early 2023. We were building an internal tool for a client that needed to store highly variable document structures — configurations that were completely different shapes per customer. MongoDB seemed like the natural fit. Flexible schema, easy to prototype.
Six months in, the codebase was full of defensive code checking for the presence of fields that might or might not exist. Queries that should have been joins were application-level loops. A simple report took three minutes to generate because the data model didn't support the aggregation we needed.
We migrated to Postgres. It took two weeks. The JSONB column handled the variable document structures. The rest of the data went into proper tables. The report took four seconds.
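The pattern that replaced the Mongo documents is worth spelling out: stable, queryable fields become real columns, and the per-customer variable shape lives in one JSONB column. A sketch with made-up names:

```sql
-- Stable fields are columns with real constraints; the
-- customer-specific configuration shape goes in JSONB.
CREATE TABLE customer_configs (
    id          bigserial PRIMARY KEY,
    customer_id bigint NOT NULL,
    updated_at  timestamptz NOT NULL DEFAULT now(),
    config      jsonb NOT NULL
);

-- A GIN index makes containment queries on the JSONB fast.
CREATE INDEX customer_configs_config_idx
    ON customer_configs USING gin (config);

-- Find every customer whose config enables a given feature,
-- regardless of what else their config contains.
SELECT customer_id
FROM customer_configs
WHERE config @> '{"features": {"exports": true}}';
```

The containment query replaces the defensive field-existence checks: rows that lack the key simply don't match, and the aggregation that took three minutes in application code becomes a single indexed query.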
Before reaching for a non-Postgres database, ask yourself honestly: is my problem a data storage problem, or is it an application design problem?
Most of the time, the answer is the second one. A poorly designed relational schema is not evidence that you need a document store. It's evidence that the schema needs work.
For everything else: use Postgres. Learn to use it well. Use connection pooling. Understand your query plans. Add the right indexes. Run read replicas when you need them.
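"Understand your query plans" is concrete advice, not a platitude. EXPLAIN ANALYZE shows you what Postgres actually did, timings included (table and column names here are made up):

```sql
-- Show the real execution plan, with timings, for a
-- query you suspect is slow.
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE customer_id = 42
  AND created_at > now() - interval '30 days';

-- If the plan shows a sequential scan on a large table,
-- a composite index on the filter columns usually fixes it.
CREATE INDEX orders_customer_created_idx
    ON orders (customer_id, created_at);
```

Rerun the EXPLAIN after creating the index and confirm the plan changed; an index the planner ignores is just write overhead.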
The boring choice has shipped more production software than all the exciting alternatives combined.