Microservices won the marketing war. Every conference talk, every architecture blog, every job posting seems to assume that splitting your application into dozens of independently deployed services is the natural endpoint of software evolution. But after a decade of industry-wide adoption, the evidence is clear: for most small-to-medium teams, microservices cost more than they deliver. The modular monolith — a well-structured single deployable with clear internal boundaries — deserves serious consideration as your default architecture.

Editorial Comment: The pendulum is finally swinging back. After years of premature microservices adoption driving complexity in teams that did not need it, the industry is having a more honest conversation. Michael lays out the real tradeoffs with numbers, not dogma.

Why Microservices Became the Default Answer

The microservices pitch is compelling: independent deployability, technology diversity, team autonomy, and horizontal scaling of individual components. These are real benefits — at a certain scale. Netflix, Amazon, and Uber genuinely need microservices because they have hundreds of engineering teams that cannot coordinate deployments across a single codebase.

The problem is that most teams adopted microservices not because they had these scaling challenges, but because microservices became synonymous with “modern architecture.” A five-person team building a SaaS product does not have Netflix’s coordination problems. They have a coordination surface they can manage with a shared codebase, a Slack channel, and a standup meeting.

The Costs Nobody Mentions in Conference Talks

Operational Overhead

Every microservice you add is another thing to deploy, monitor, log, scale, and debug. A monolith has one deployment pipeline. Ten microservices have ten deployment pipelines, each with its own CI configuration, Docker images, health checks, and rollback procedures. For a small team, this operational surface area is not free — it is measured in hours per week spent on infrastructure that is not your product.

Consider a concrete example: a typical SaaS application split into user service, billing service, notification service, API gateway, and a frontend BFF. That is five services to maintain. Each needs its own:

  • Dockerfile and build pipeline (5x CI config maintenance)
  • Health check endpoint and liveness probes
  • Logging configuration and log aggregation
  • Error tracking setup
  • Database or schema management (if using database-per-service)
  • Network policies and service mesh configuration

A conservative estimate: each service adds 2-4 hours per week of operational overhead. For five services, that is 10-20 hours per week — one to two full engineering days — spent on infrastructure instead of features.

Distributed System Complexity

The moment you split a monolith into services communicating over a network, you inherit every problem in distributed systems. Network calls fail. Services go down independently. Data consistency across services requires careful orchestration. A database transaction that was trivial in a monolith becomes a saga pattern with compensating transactions, dead letter queues, and eventual consistency semantics that your users may not appreciate.

Debugging a request that spans five services is qualitatively harder than debugging one that stays in a single process. You need distributed tracing (Jaeger, Zipkin), correlated logging, and the mental model to follow a request across service boundaries. These tools exist, but they add another layer of infrastructure to maintain.

Data Management Headaches

The microservices orthodoxy says each service should own its data. In practice, this means you cannot join across service boundaries. Need user information alongside billing data? That is an API call, not a SQL join. Need to run a report that crosses three domains? You are building a data pipeline or an API aggregation layer.

Many teams compromise by sharing a database across services. This works, but it couples your services at the data layer, defeating much of the architectural benefit while keeping all of the operational cost.

The Modular Monolith Alternative

A modular monolith is a single deployable application with well-defined internal modules that have clear boundaries, explicit interfaces, and minimal coupling. Think of it as microservices architecture without the network boundary — you get the organizational benefits of separation without the operational costs of distribution.

What This Looks Like

Your codebase is organized into modules: users/, billing/, notifications/, each with a public interface and private internals. Modules communicate through function calls or an internal event bus, not HTTP requests. The database can be shared but with schema ownership — the billing module owns its tables and exposes data through its public API, not through direct table access.
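An in-process event bus like the one described can be very small. Here is a minimal sketch in Python; the event names and payload fields are illustrative, not prescribed by any particular library:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """A toy in-process event bus: modules publish domain events
    without importing each other directly."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event].append(handler)

    def publish(self, event: str, payload: dict) -> None:
        for handler in self._handlers[event]:
            handler(payload)  # a plain function call — no network, no serialization

bus = EventBus()

# The notifications module reacts to billing events without a direct import
# of billing internals — only the event name couples them.
received = []
bus.subscribe("invoice.paid", lambda payload: received.append(payload["invoice_id"]))
bus.publish("invoice.paid", {"invoice_id": "inv_42", "amount_cents": 1999})
```

Because publish is a synchronous function call, failures surface immediately in a stack trace — exactly the debugging property the next sections argue for.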

In languages with strong module systems (Elixir, Rust, Go), the compiler enforces boundaries. In languages without them (Node.js, Python), you enforce them through convention, linting rules, or tools like dependency-cruiser.

What You Keep

  • Clear boundaries: Modules have defined interfaces. Changes inside a module do not leak to other modules.
  • Team ownership: Different people or teams can own different modules.
  • Independent development: Modules can evolve at their own pace as long as their interfaces remain stable.
  • Testability: Each module can be tested in isolation with its dependencies mocked at the interface level.
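The last point — mocking at the interface level — looks like this in practice. A sketch, assuming a hypothetical `users_api.get_user` interface that the billing code receives as a dependency:

```python
from unittest.mock import Mock

# Hypothetical billing function that depends on the users module only through
# its public interface, injected as a parameter so tests can stub it.
def invoice_summary(user_id: str, users_api, line_items: list[int]) -> dict:
    user = users_api.get_user(user_id)
    return {"email": user["email"], "total_cents": sum(line_items)}

# Test the billing module in isolation: stub the users interface — no database,
# no HTTP, no second service running.
fake_users = Mock()
fake_users.get_user.return_value = {"email": "a@example.com"}
result = invoice_summary("u1", fake_users, [500, 250])
```

Compare this to testing the same interaction across a service boundary, which typically requires contract tests, recorded HTTP fixtures, or a running test environment.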

What You Gain

  • Single deployment: One pipeline, one Docker image, one rollback.
  • Local function calls: No network latency, no serialization overhead, no retry logic.
  • Database transactions: ACID guarantees across module boundaries when you need them.
  • Simpler debugging: A stack trace, not a distributed trace. A log file, not a log aggregation system.
  • Lower infrastructure cost: One server process instead of five. One load balancer. One monitoring target.

When Microservices Actually Make Sense

Microservices are not wrong — they are a solution to specific problems. Consider them when:

Your team exceeds 30-50 engineers. At this size, coordination overhead in a single codebase becomes a genuine bottleneck. Teams step on each other, CI times become painful, and deployment risk increases because every deployment contains changes from many teams.

You have genuinely different scaling requirements. If your image processing service needs GPU instances while your API server needs CPU, running them in the same process is wasteful. Different scaling profiles justify different services.

You need technology diversity for good reasons. If your ML pipeline is Python but your API is Go, a service boundary makes sense. But “we want to try Rust for this one service” is not a good reason — it is a maintenance burden.

You need independent deployment for regulatory or reliability reasons. If a failure in your payment system must not affect your user-facing application, a hard service boundary with independent deployment and failure isolation is justified.

The Migration Path

The beauty of starting with a modular monolith is that the migration path to microservices is straightforward. If a module genuinely needs to become a service — because of scaling, deployment, or team autonomy requirements — you extract it. The interface is already defined. The data ownership is already clear. You are adding a network boundary to an existing module boundary.
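To make the mechanics concrete: extracting a module is mostly writing an adapter that serializes calls to the interface you already have. A framework-agnostic sketch, where the `get_invoice` function and the route are illustrative stand-ins for an existing module API:

```python
import json

# Hypothetical existing module interface — already defined inside the monolith.
def get_invoice(invoice_id: str) -> dict:
    return {"id": invoice_id, "amount_cents": 1999}

# The new service layer is a thin adapter over that interface. A real service
# would use Flask/FastAPI/etc.; the shape of the work is the same.
def handle_request(path: str) -> tuple[int, str]:
    prefix = "/invoices/"
    if path.startswith(prefix):
        return 200, json.dumps(get_invoice(path[len(prefix):]))
    return 404, json.dumps({"error": "not found"})

status, body = handle_request("/invoices/inv_42")
```

The business logic does not change; you add serialization, routing, and error handling at an already-stable boundary. Going the other way means unwinding all three from code that was written assuming the network exists.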

Going the other direction — from microservices back to a monolith — is much harder. You are removing network boundaries, merging databases, consolidating deployment pipelines, and untangling distributed transaction logic. Several prominent teams have done this (Segment, the Istio project, Amazon's Prime Video team), and they all describe it as painful but worth it.

A Decision Framework

Before you split something into a service, ask these questions:

  1. Is our team large enough that we cannot coordinate deployments? If everyone fits in one Slack channel, the answer is probably no.
  2. Do we have different scaling requirements? Not theoretical — actual. Are you running out of capacity on a specific workload today?
  3. Is the operational cost justified by the architectural benefit? Add up the hours spent on service infrastructure and compare it to the time saved by independent deployment.
  4. Have we exhausted simpler alternatives? Module boundaries, background job queues, and read replicas solve many of the problems that microservices are used for, at a fraction of the cost.

If you answer “no” to most of these, a modular monolith is your better choice. Not because microservices are bad, but because they solve problems you do not have yet — and the overhead of solving problems you do not have is very real.

My Take

I have worked with both architectures. The best systems I have seen were modular monoliths that extracted services only when the pain of not doing so was undeniable. The worst systems I have seen were premature microservices architectures where a three-person team spent more time on infrastructure than on their product.

Start with a monolith. Make it modular. Extract services when you have evidence — not speculation — that you need them. This is not a conservative position; it is an efficient one.

Key Takeaways

  • Microservices solve coordination problems at scale (30+ engineers), but most small teams adopt them prematurely, trading product development time for infrastructure overhead.
  • Each microservice adds 2-4 hours/week of operational cost — deployment pipelines, monitoring, logging, debugging across network boundaries. Five services can consume two engineering days per week.
  • A modular monolith gives you clear boundaries, team ownership, and testability while keeping single-deployment simplicity, database transactions, and straightforward debugging.
  • The monolith-to-microservices migration path is clean if your modules have well-defined interfaces. The reverse direction (microservices to monolith) is painful and expensive.
  • Before extracting a service, demand evidence: actual scaling bottlenecks, team coordination failures, or regulatory requirements — not architectural fashion.

By Michael Sun

Founder and Editor-in-Chief of NovVista. Software engineer with hands-on experience in cloud infrastructure, full-stack development, and DevOps. Writes about AI tools, developer workflows, server architecture, and the practical side of technology. Based in China.
