The most useful database advice anyone can give in 2026 is still “just use Postgres” — but that shortcut is doing a lot of work, and knowing when it fails matters as much as knowing why it is usually right. Choosing among Postgres, SQLite, and Redis looks deceptively simple until you have spent six months migrating away from the wrong one. Switching databases is expensive, disruptive, and rarely cheap to reverse. The real cost of a poor database choice is not the migration weekend — it is the eighteen months of accumulated workarounds before you finally commit to changing it.
This is a practitioner-level breakdown of when each database wins, when the default answer breaks down, and how to reason through the decision before you write a line of schema code.
Postgres: Why It Is the Default and When That Is Correct
Postgres earned its default status through genuine breadth. It is not merely a relational database that happens to be open source — it is an extensible data platform that handles structured data, semi-structured data, full-text search, geospatial queries, and vector similarity in a single coherent system. For most applications, this breadth means you never need to reach for a second database.
The specific capabilities that make Postgres the right choice for the majority of production workloads:
- JSONB storage and indexing. Postgres stores JSON as binary, enables GIN indexes on JSON fields, and lets you query nested JSON with operators that are fast at scale. The argument “I need a document store” is no longer sufficient justification for MongoDB when Postgres handles mixed relational-document models with a single index type and full ACID guarantees.
- Full-text search. The built-in `tsvector` and `tsquery` types cover the majority of search use cases without reaching for Elasticsearch. For applications where a basic search feature serves users adequately, adding a separate search index is operational overhead that Postgres eliminates.
- pgvector. Semantic search and retrieval-augmented generation require vector similarity lookups. The `pgvector` extension turns your existing Postgres instance into a vector store. For applications that do not require extreme query-per-second vector throughput, this eliminates Pinecone, Weaviate, or a dedicated vector database entirely.
- Foreign data wrappers, logical replication, and a mature extension ecosystem. The extension model means Postgres adapts to requirements that would force a database change on most other systems.
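The capabilities above can be sketched in a single schema. This is illustrative SQL against a hypothetical `events` table, assuming the `pgvector` extension is installed; it is a sketch of the syntax, not a recommended production schema:

```sql
-- Hypothetical table mixing relational columns with document-shaped data
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE events (
    id        bigserial PRIMARY KEY,
    user_id   bigint NOT NULL,
    payload   jsonb NOT NULL,      -- semi-structured data, stored as binary
    body      text,                -- free text for full-text search
    embedding vector(3)            -- tiny dimension for illustration only
);

-- A GIN index makes containment queries on the JSONB payload fast
CREATE INDEX events_payload_idx ON events USING gin (payload);
SELECT id FROM events WHERE payload @> '{"type": "signup"}';

-- Built-in full-text search, no Elasticsearch required
SELECT id FROM events
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'welcome & email');

-- pgvector similarity lookup (<=> is cosine distance)
SELECT id FROM events ORDER BY embedding <=> '[0.1, 0.2, 0.3]' LIMIT 10;
```

In production the full-text query would typically be backed by an expression index or a stored `tsvector` column, and the vector column would use a realistic embedding dimension.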
The honest caveat: Postgres scales vertically well and horizontally awkwardly. Read replicas are straightforward. Horizontal write sharding requires tools like Citus, careful application-layer partitioning, or a move to a distributed Postgres service. If you are building toward hundreds of thousands of concurrent writes across partitioned data, vanilla Postgres will require architecture work. For the other 95% of applications, vertical scaling with a read replica is sufficient for years.
SQLite: The Underestimated Option That 2026 Made Legitimate
SQLite spent most of its life as the database for embedded applications, mobile apps, and development environments. That characterization was accurate and limiting. The combination of Litestream, Turso, and a broader shift toward edge computing has made SQLite a genuine production option for a class of applications where it is not a compromise but the right answer.
SQLite wins when you are building a single-server application where the operational simplicity of a file-based database outweighs the flexibility of a client-server one. A SQLite database is a file. Backup is a file copy. Restore is overwriting a file. The database process is your application process. There is no connection pooling, no network latency between application and database, and no separate service to monitor. For applications that do not need multi-server concurrency, this simplicity is not a limitation — it is a feature that reduces infrastructure surface area meaningfully.
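A minimal sketch of that operational model using Python's bundled `sqlite3` module: WAL mode for concurrent readers, and the online backup API, which really does just produce another file. Table and file names are illustrative:

```python
import os
import sqlite3
import tempfile

# The database is a single file inside the application's own process
path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(path)

# WAL mode lets readers proceed concurrently with a single writer
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]

con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO users (name) VALUES (?)", ("ada",))
con.commit()

# "Backup is a file copy" -- sqlite3 exposes an online backup API that
# produces a consistent copy even while the source database is in use
dst = sqlite3.connect(path + ".bak")
con.backup(dst)
name = dst.execute("SELECT name FROM users").fetchone()[0]
```

Litestream builds on the same WAL mechanism, shipping frames to object storage instead of a local copy.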
The specific tools that expanded SQLite’s legitimacy:
- Litestream. Streams SQLite changes to S3 (or compatible object storage) continuously, enabling near-zero-RPO disaster recovery without a secondary database server. The replication model is append-only WAL shipping. For a solo developer or small team running a single application server, Litestream makes the “but what about data loss?” objection to SQLite tractable.
- Turso. Distributed SQLite based on libSQL, designed for edge deployments. Turso gives you a SQLite-compatible database with replicas distributed across regions, making it relevant for latency-sensitive applications that want data close to users without the operational weight of a globally distributed Postgres setup.
Where SQLite is the wrong choice: any application requiring concurrent writes from multiple processes or servers, high-throughput write workloads, or complex access control patterns that benefit from database-level user management. SQLite’s write concurrency model is file-level locking; it handles concurrent reads well and serialized writes adequately, but falls apart under write contention from multiple connections.
The practical sweet spot: SaaS tools, internal applications, personal projects, and any deployment where “one server handles this load” is the honest capacity plan.
Redis: What It Is For and What It Is Not
Redis is consistently misunderstood in two directions: underused as a caching layer by teams that reinvent it manually, and overused as a primary database by teams who discover it is fast and conclude it is general-purpose. Neither error is rare.
Redis is a data structure server. It excels at operations that map cleanly to its in-memory data structures and expire naturally or have tolerable loss semantics. The canonical correct uses:
- Caching. Database query results, rendered HTML fragments, computed aggregations — anything expensive to produce and cheaper to cache. Redis’s TTL support and LRU eviction make it purpose-built for this. The pattern of cache-aside (check Redis, fallback to Postgres, write result to Redis) is simple, correct, and the right answer for most high-read applications.
- Session storage. Session data is small, read-heavy, expiry-native, and does not need the durability guarantees of a relational database. Redis handles millions of sessions with negligible memory overhead and sub-millisecond reads.
- Rate limiting. The `INCR` and `EXPIRE` primitives implement a token bucket or sliding window counter atomically. Redis-based rate limiting is simple to implement correctly and works across distributed application servers without coordination overhead.
- Queues and pub/sub. Redis Streams and the List-based queue pattern (BullMQ uses this in the Node.js ecosystem) handle background job queues, event buses, and lightweight pub/sub well. For applications that do not need the durability guarantees of Kafka or RabbitMQ, Redis queues are operationally simpler and adequate.
- Leaderboards and counters. Sorted sets make real-time leaderboards and rank lookups trivial in ways that would require expensive queries against a relational database.
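The cache-aside pattern named above can be sketched in a few lines. To keep the sketch runnable without a server, `FakeRedis` is a hypothetical in-process stand-in modeling Redis `GET`/`SETEX` semantics; in production the same logic calls a real Redis client, and `expensive_db_query` stands in for the Postgres fallback:

```python
import time

class FakeRedis:
    """In-process stand-in for Redis GET/SETEX, for illustration only."""
    def __init__(self):
        self._store = {}

    def setex(self, key, ttl, value):
        # store the value with an absolute expiry time
        self._store[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires = item
        if time.monotonic() >= expires:
            del self._store[key]  # lazily expire stale entries
            return None
        return value

calls = {"db": 0}

def expensive_db_query(user_id):
    calls["db"] += 1  # count trips to the primary database
    return f"profile:{user_id}"

cache = FakeRedis()

def get_profile(user_id, ttl=60):
    # cache-aside: check the cache, fall back to the DB, populate the cache
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = expensive_db_query(user_id)
    cache.setex(key, ttl, value)
    return value
```

The second call for the same user hits the cache and never reaches the database, which is the entire value proposition for read-heavy workloads.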
What Redis is not: a primary database for application data you cannot afford to lose. Redis’s persistence modes (RDB snapshots and AOF logging) reduce data loss risk, but the operational model is fundamentally in-memory with optional durability, not durable storage with optional caching. Teams that route user records, financial transactions, or primary application state through Redis as a primary store are accumulating a category of risk that is difficult to quantify until it materializes.
MongoDB in 2026: Where It Still Wins
MongoDB absorbed years of criticism, much of it earned, and emerged as a more mature database than it was in the early “web scale” era. The honest assessment of where it still belongs in the decision matrix:
MongoDB is the right choice when your data is genuinely document-centric, deeply nested, and schema-flexible in ways that relational normalization actively fights. Content management systems with highly variable document structures, product catalogs with heterogeneous attribute sets, and event logs with evolving payload shapes are cases where a document model is genuinely better, not just familiar.
The honest limitation: Postgres’s JSONB has absorbed many of the workloads that once justified MongoDB. If your application is primarily relational with a few document-shaped edges, JSONB columns in Postgres handle those edges without a separate operational dependency. MongoDB earns its place when the majority of queries are document-oriented and the schema variability is real, not theoretical.
MongoDB Atlas has also become a competent managed offering. If your team is already operating in the MongoDB mental model and the workload is genuinely document-oriented, Atlas is a reasonable hosting choice. The argument for switching to Postgres primarily to reduce operational complexity is weaker when Atlas handles that complexity for you.
The New Contenders: SurrealDB, Turso, PlanetScale, Neon
Several databases positioned themselves aggressively in 2024 and 2025. Their actual production readiness varies.
SurrealDB is a multi-model database attempting to unify relational, document, and graph models under a single SQL-like query language. The ambition is real; the production maturity is still catching up. Early adopters report it as promising for graph-heavy applications where the join complexity in Postgres becomes unwieldy. It is not a safe choice for teams that cannot absorb breaking changes or limited community support when debugging obscure issues.
Turso (distributed SQLite, mentioned above) is the most practically useful of the new entrants for a specific use case. If edge deployments with SQLite compatibility are your requirement, Turso is production-viable. Outside that use case, it is a solution to a problem you may not have.
PlanetScale disrupted MySQL hosting with its branching workflow (database schema changes as pull requests) and serverless scaling model. The 2024 removal of its free tier reduced its appeal for early-stage projects, and the Vitess-based sharding model introduces operational complexity that most applications do not need until they are at significant scale. For high-scale MySQL workloads with teams that value the branching workflow, it remains a strong option.
Neon is serverless Postgres with a separation of storage and compute, enabling scale-to-zero and branching workflows similar to PlanetScale’s. For development workflows where database branching is valuable and cost efficiency on low-traffic instances matters, Neon is worth evaluating. For high-throughput production workloads, the serverless model introduces latency characteristics that are difficult to predict.
Managed vs. Self-Hosted: The Decision Framework
The managed vs. self-hosted question is not primarily technical — it is economic and operational. The real question is: what is the per-hour cost of your team’s time when something breaks at 3 AM, and how does that compare to the monthly premium of a managed service?
For early-stage products and small teams, managed databases are almost always correct. The operational expertise required to run Postgres reliably — configuration tuning, WAL management, vacuum analysis, backup validation, failover testing — takes years to develop. Paying a managed provider to maintain that expertise is efficient use of capital. The monthly premium for RDS or Supabase over a raw VPS running Postgres self-managed is real; the hidden cost of the alternative is the engineering hours diverted from product work to database operations.
Self-hosted Postgres becomes economically rational at the point where either the managed service premium is material relative to your infrastructure budget, or you have the in-house expertise to operate it reliably. The crossover point is higher than most teams assume, because the failure modes of self-hosted Postgres at scale (replication lag during write spikes, autovacuum interference with query plans, connection pool exhaustion) are non-obvious and difficult to debug without experience.
The Cost Trap: RDS vs. Self-Managed Postgres
AWS RDS is the most common production Postgres deployment and also one of the most expensive ways to run Postgres relative to what you are getting. RDS charges for the compute instance, storage, provisioned IOPS on some volume types, data transfer out, and Multi-AZ standby instances. A moderately sized RDS instance with Multi-AZ and adequate storage costs meaningfully more than the equivalent compute on EC2 running self-managed Postgres with streaming replication to a replica.
The RDS premium is justified when your team lacks database operations expertise, when the managed failover is the feature you are actually paying for, or when the operational simplicity of not owning the database is worth the margin. It is not justified when your team has the skills to manage Postgres, your workload is predictable, and you are in a cost optimization phase. At scale, moving from RDS to self-managed Postgres on EC2 or Hetzner (for European deployments) is one of the higher-leverage infrastructure cost reductions available.
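The break-even reasoning is simple arithmetic. The figures below are illustrative assumptions (in the ballpark of the comparison later in this article), not quotes:

```python
# Illustrative break-even: managed premium vs. engineer time (assumed figures)
rds_monthly = 175.0       # mid-range Multi-AZ db.t3.medium estimate, USD
ec2_monthly = 50.0        # equivalent self-managed EC2 estimate, USD
engineer_hourly = 100.0   # fully loaded engineering cost, assumption

premium = rds_monthly - ec2_monthly
# Self-managing only pays off if database operations stay under this
# many engineer-hours per month
breakeven_hours = premium / engineer_hourly
```

At a single instance the premium buys barely more than one engineer-hour per month, which is why self-managing rarely pays off until instance counts or sizes grow.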
Alternatives to RDS worth evaluating: Supabase for teams that want Postgres plus a built-in auth layer and object storage; Railway for teams that want simple managed deployments without AWS complexity; Fly.io with the built-in Postgres offering for globally distributed applications.
Decision Matrix by Use Case
| Use Case | Primary Database | Supporting | Avoid |
|---|---|---|---|
| Standard SaaS application | Postgres | Redis (cache, sessions) | MongoDB unless document-heavy |
| Single-server indie app / internal tool | SQLite + Litestream | — | Managed services (cost overhead) |
| Edge / globally distributed | Turso or Neon | Redis at edge for caching | Single-region Postgres as primary |
| AI / RAG application with vector search | Postgres + pgvector | Redis (rate limiting) | Separate vector DB unless QPS demands it |
| Document-heavy, schema-variable | MongoDB Atlas | Redis (cache) | Postgres if schema variability is theoretical |
| Real-time leaderboards / queues | Postgres | Redis (queues, sorted sets) | Postgres for ephemeral queue state |
| High-scale write-heavy | Postgres + Citus or PlanetScale | Redis (rate limiting) | SQLite, single-node Postgres |
Migration Difficulty: The Underestimated Cost of Getting It Wrong
Database migrations are not like library upgrades. They cannot be done incrementally in a weekend, and they rarely stay contained to the data layer. A migration from MongoDB to Postgres requires schema design (a discipline MongoDB avoided), data transformation, query rewrites, application-layer changes, index rebuilding, and a cutover strategy that keeps the application running during the transition. At any meaningful data volume, the migration itself requires a dual-write period where both databases stay in sync, which is operationally fragile and difficult to test exhaustively.
The specific costs that teams consistently underestimate:
- Query semantics diverge. MongoDB’s aggregation pipeline does not map cleanly to SQL. Redis commands have no SQL equivalent. A migration does not just move data — it rewrites the data access layer, which touches every feature that reads or writes the database.
- Performance characteristics differ. A query that ran fast on MongoDB’s document model may require a join strategy and specific index design in Postgres. The migration uncovers this only when you run the new queries against production data volumes.
- Migration windows are risky. A large-scale data migration over a live production database, with writes happening continuously, is one of the highest-risk operations a production engineering team can run. Every migration plan looks cleaner than the execution.
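The dual-write period mentioned above is worth sketching, because its fragility comes from a simple structure: every write must land in both stores, while reads stay on the old store and shadow-check the new one. This is a deliberately crude sketch with plain dicts standing in for the two databases; a real implementation needs queuing, retries, and idempotent writes:

```python
class DualWriteRepo:
    """Sketch of the dual-write migration phase: writes go to both stores,
    reads stay on the old store until the new one is validated."""

    def __init__(self, old_store, new_store):
        self.old = old_store
        self.new = new_store
        self.mismatches = []  # keys where the stores diverged

    def write(self, key, value):
        self.old[key] = value
        self.new[key] = value  # in production: retried, idempotent, queued

    def read(self, key):
        value = self.old[key]
        # shadow-read the new store and record divergence for later audit
        if self.new.get(key) != value:
            self.mismatches.append(key)
        return value

old_db, new_db = {}, {}
repo = DualWriteRepo(old_db, new_db)
repo.write("user:1", {"name": "ada"})
```

Every failure mode hides in the comment on the write path: if the second write can fail independently, the audit list is the only thing standing between you and silent divergence.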
The practical implication: treat your initial database choice with the seriousness of a decision you will live with for three to five years. The “just use Postgres” default is not conservative — it is based on Postgres’s track record of growing with applications that outgrow their initial requirements. Choosing a database that fits the prototype stage but creates migration pressure as the application matures is a technical debt that compounds in ways that are difficult to predict from the beginning.
The Actual Framework: How to Choose
Strip the decision down to four questions:
- Is your data primarily relational, with defined relationships between entities? Postgres. This covers the majority of applications built around users, accounts, transactions, content, and similar entities with foreign key relationships.
- Are you building a single-server application where operational simplicity matters more than distribution? SQLite with Litestream. The file-based model eliminates a service dependency and the operational overhead that comes with it.
- Do you have a specific caching, session, queue, or rate-limiting requirement? Add Redis as a supporting service alongside your primary database, not instead of it.
- Is your data genuinely document-centric with real schema variability at the field level? Evaluate MongoDB. If the schema variability is mostly theoretical and the actual data is relational, stay with Postgres and use JSONB columns for the variable portions.
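The four questions compress into a first-pass sketch. This is a deliberately crude decision function, not a substitute for the analysis above; the argument names and recommendation strings are illustrative:

```python
def choose_database(relational, single_server, needs_cache_or_queue,
                    document_centric):
    """First-pass recommendation from the four questions (sketch only)."""
    stack = []
    if document_centric and not relational:
        # only if the schema variability is real, not theoretical
        stack.append("MongoDB")
    elif single_server:
        stack.append("SQLite + Litestream")
    else:
        stack.append("Postgres")
    if needs_cache_or_queue:
        # Redis joins the stack as a supporting service, never the primary
        stack.append("Redis (supporting)")
    return stack
```

The structure mirrors the prose: the primary store is an either/or decision, while Redis is additive regardless of which primary you land on.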
The new contenders (SurrealDB, Turso, PlanetScale, Neon) are worth evaluating when they directly address a specific constraint — edge distribution, cost structure, or the branching workflow — rather than as general-purpose alternatives. They solve real problems for specific profiles; they are not improvements on Postgres for typical web applications.
The database choice is one of the few infrastructure decisions that genuinely constrains your future options. It deserves more deliberate analysis than most teams give it before a project starts, and significantly less revisiting once the choice is made and production traffic validates it.
Frequently Asked Questions
Is “just use Postgres” still good advice in 2026?
For the majority of web applications and SaaS products, yes. Postgres handles relational data, semi-structured JSON, full-text search, and vector similarity through pgvector in a single system with strong ACID guarantees and a mature operations ecosystem. The cases where Postgres is the wrong answer — edge distribution requirements, single-server simplicity favoring SQLite, or genuine document-model data — are real but represent a minority of typical application workloads.
When should I add Redis to a Postgres-backed application?
Add Redis when you have a specific, concrete requirement that maps to its data structures: caching expensive queries or rendered fragments, storing session data, implementing rate limiting across distributed servers, or running background job queues. Do not add Redis speculatively. The operational complexity of running a second data store is only justified when the requirement is real and the Redis model solves it cleanly.
Is SQLite actually production-ready in 2026?
For single-server applications, yes — particularly with Litestream providing continuous replication to object storage. SQLite running embedded in an application process with WAL mode enabled handles read-heavy workloads and moderate write rates without issue. It is not appropriate for applications requiring concurrent writes from multiple servers or horizontal scaling. The Turso project has also made distributed SQLite viable for edge deployments where Postgres would require more infrastructure complexity.
What is the actual cost difference between RDS and self-managed Postgres?
A representative comparison: an RDS db.t3.medium Multi-AZ instance with 100GB gp3 storage runs roughly $150-200 per month. An equivalent EC2 t3.medium with the same storage running self-managed Postgres runs approximately $40-60 per month plus your team’s operational time. The managed premium is substantial and justified for teams without database operations expertise or when managed failover is the core value. For teams with the skills to manage Postgres, the self-managed option becomes economically rational at scale, particularly when combined with tools like pgBackRest for backup management.
How difficult is migrating from MongoDB to Postgres?
Significantly more difficult than most teams anticipate. Beyond the data transformation itself, the migration requires redesigning the schema relationally, rewriting all queries from aggregation pipeline syntax to SQL, rebuilding indexes, and validating query performance against production data volumes. A reasonable estimate for a medium-scale application is four to eight weeks of dedicated engineering time, including the dual-write transition period. The migration is worth doing if the original MongoDB choice was a poor fit for the data model — it is not worth doing for operational or reputational reasons alone.