Engineering·Jan 30, 2026·7 min

Postgres is enough.

We've shipped 18 production apps. Postgres has been the right choice 16 times. Here's when it isn't.

We've shipped eighteen production applications at Microsive.

Sixteen of them run on Postgres. One runs on a combination of Postgres and DynamoDB. One runs on Postgres and Mongo. The Mongo one was a mistake we made in year one that we still think about.

This post is about why Postgres keeps being the right answer and what the two exceptions actually taught us.

The case you keep hearing against Postgres

The arguments against Postgres for new projects go roughly like this:

"It doesn't scale horizontally." True for writes. Rarely the actual problem. → "NoSQL is more flexible for unstructured data." True. Also largely unnecessary. → "You'll need to shard eventually." Eventually is doing a lot of work in that sentence. → "Mongo is faster to prototype with." This one has some merit, and we'll get to it.

The underlying assumption in all of these is that your application is going to hit scale problems that require horizontal write distribution. Most applications never get there. And by the time they do, you have the engineering resources and the usage data to make a good decision. Making that decision in week one, for a product that hasn't launched, is optimising for a problem you don't have.

What Postgres actually gives you

When people say "just use Postgres," what they're really saying is: use something that gives you transactions, foreign key constraints, a rich query language, a mature ecosystem, and decades of operational knowledge from people who've run it at scale.

Postgres gives you all of those things, plus:

- JSONB columns when your data is genuinely semi-structured, without giving up relational integrity on the rest
- Full-text search good enough to ship without Elasticsearch on most products
- pgvector for embedding storage and similarity search, which we now use on every AI-adjacent project
- Row-level security, which lets you build multi-tenant access control inside the database rather than in application code (sketched below)
- Logical replication for read replicas, event streaming, and change data capture
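To make the row-level security point concrete, here's a minimal sketch of tenant isolation done inside the database. The table, the policy name, and the app.tenant_id setting are invented for illustration, not lifted from one of our projects:

    -- Hypothetical multi-tenant table: every row belongs to one tenant.
    CREATE TABLE documents (
        id        bigserial PRIMARY KEY,
        tenant_id uuid NOT NULL,
        title     text NOT NULL,
        body      jsonb NOT NULL DEFAULT '{}'::jsonb
    );

    -- Enable RLS and add a policy that only exposes rows whose
    -- tenant_id matches a per-connection setting. Note: policies
    -- don't apply to the table owner unless you also run
    -- ALTER TABLE ... FORCE ROW LEVEL SECURITY.
    ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

    CREATE POLICY tenant_isolation ON documents
        USING (tenant_id = current_setting('app.tenant_id')::uuid);

    -- The application sets the tenant once per connection or
    -- transaction; every query after that is filtered automatically.
    SET app.tenant_id = '5b3f5b1e-7d7a-4c55-9d1b-2f0a6c1e9f00';
    SELECT id, title FROM documents;  -- only this tenant's rows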

On the Kestrel AI project, we used pgvector to store and query document embeddings. Average query time for a semantic search across ten thousand document chunks was under 200ms on a Railway-hosted Postgres instance. We didn't need to introduce Pinecone until the document volume hit a threshold that took eight months to reach. By then we had the data to justify the infrastructure cost.
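The query shape, roughly. The schema below is a simplified stand-in, not Kestrel's actual tables; the 1536-dimension column assumes an OpenAI-style embedding model, and the HNSW index assumes pgvector 0.5 or later:

    -- Requires the pgvector extension.
    CREATE EXTENSION IF NOT EXISTS vector;

    -- Simplified stand-in for a document-chunk table.
    CREATE TABLE chunks (
        id        bigserial PRIMARY KEY,
        doc_id    bigint NOT NULL,
        content   text NOT NULL,
        embedding vector(1536) NOT NULL
    );

    -- Approximate-nearest-neighbour index so queries stay fast
    -- as the table grows.
    CREATE INDEX ON chunks USING hnsw (embedding vector_cosine_ops);

    -- Top ten chunks by cosine distance to the query embedding;
    -- $1 is bound by the application to the embedded search string.
    SELECT id, doc_id, content
    FROM chunks
    ORDER BY embedding <=> $1
    LIMIT 10;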

The two cases where we didn't use Postgres

Case 1 — DynamoDB: This was a project with genuine multi-region write requirements: a client with users in India, Singapore, and Germany who needed writes to land locally. DynamoDB Global Tables solved this cleanly. We still used Postgres for the parts of the system that didn't need global distribution: billing, user accounts, audit logs. DynamoDB handled the session state and the event stream.

The lesson: DynamoDB is the right answer when you have genuine global write distribution requirements and you know it at the start. If you're adding it because it sounds scalable, you're borrowing complexity you don't need.

Case 2 — MongoDB: This was the mistake. Early 2023. We were building an internal tool for a client that needed to store highly variable document structures — configurations with a completely different shape per customer. MongoDB seemed like the natural fit: flexible schema, easy to prototype.

Six months in, the codebase was full of defensive code checking for the presence of fields that might or might not exist. Queries that should have been joins were application-level loops. A simple report took three minutes to generate because the data model didn't support the aggregation we needed.

We migrated to Postgres. It took two weeks. A JSONB column handled the variable document structures; the rest of the data went into proper tables. The report that had taken three minutes took four seconds.
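The shape of the fix, roughly sketched. The table and field names are invented; the pattern is real columns for the stable data, one JSONB column for the per-customer shapes, and a GIN index so you can still query into the JSON:

    CREATE TABLE customers (
        id   bigserial PRIMARY KEY,
        name text NOT NULL
    );

    -- Stable, queryable fields get real columns; the per-customer
    -- shapes live in a single jsonb column.
    CREATE TABLE customer_configs (
        id          bigserial PRIMARY KEY,
        customer_id bigint NOT NULL REFERENCES customers(id),
        updated_at  timestamptz NOT NULL DEFAULT now(),
        config      jsonb NOT NULL DEFAULT '{}'::jsonb
    );

    -- A GIN index makes containment queries on the jsonb fast.
    CREATE INDEX ON customer_configs USING gin (config);

    -- "Which customers have export enabled?" becomes one query,
    -- with no field-existence checks in application code.
    SELECT customer_id
    FROM customer_configs
    WHERE config @> '{"features": {"export": true}}';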

The actual question to ask

Before reaching for a non-Postgres database, ask yourself honestly: is my problem a data storage problem, or is it an application design problem?

Most of the time, the answer is the second one. A poorly designed relational schema is not evidence that you need a document store. It's evidence that the schema needs work.

When Postgres genuinely isn't the answer

- You need global multi-region writes from day one (DynamoDB, CockroachDB, PlanetScale)
- Your data is a pure graph with complex traversal requirements (Neo4j, but check whether Postgres recursive CTEs actually solve it first; see the sketch after this list)
- You need sub-millisecond time-series writes at very high volume (TimescaleDB is a Postgres extension, so it counts; InfluxDB if you're serious about it)
- You're building something with genuinely document-oriented data, where schema flexibility is a core product requirement and you have a team that will manage the trade-offs
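On the recursive CTE point: this is what a graph traversal looks like in plain Postgres, against a hypothetical adjacency-list table. If your traversal needs fit this shape, you may not need a graph database:

    -- Hypothetical adjacency-list graph.
    CREATE TABLE edges (
        src bigint NOT NULL,
        dst bigint NOT NULL,
        PRIMARY KEY (src, dst)
    );

    -- Every node reachable from node 1, with hop count.
    -- The visited-path array guards against cycles.
    WITH RECURSIVE reachable AS (
        SELECT dst, 1 AS depth, ARRAY[1::bigint, dst] AS path
        FROM edges
        WHERE src = 1
      UNION ALL
        SELECT e.dst, r.depth + 1, r.path || e.dst
        FROM edges e
        JOIN reachable r ON e.src = r.dst
        WHERE NOT (e.dst = ANY(r.path))  -- skip already-visited nodes
    )
    SELECT dst, min(depth) AS hops
    FROM reachable
    GROUP BY dst;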

For everything else: use Postgres. Learn to use it well. Use connection pooling. Understand your query plans. Add the right indexes. Run read replicas when you need them.
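"Understand your query plans" cashes out to something like this: run EXPLAIN ANALYZE on the slow query, read what the planner actually did, then add the index it's missing. The orders table here is a generic example, not from one of our apps:

    -- See what the planner actually does, with real timings
    -- and buffer usage.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM orders
    WHERE customer_id = 42 AND status = 'open';

    -- If the plan shows a sequential scan on a large table, a
    -- composite index on the filtered columns usually fixes it.
    CREATE INDEX idx_orders_customer_status
        ON orders (customer_id, status);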

The boring choice has shipped more production software than all the exciting alternatives combined.

Written by
Microsive Studio