Cloud Cost Optimization for Indie Projects: Real Numbers, Real Savings
Most indie SaaS founders I know have had the same moment: staring at a cloud bill that climbed from $40 to $220 over six months without any corresponding growth in revenue. The product never changed architecturally. Nobody made a deliberate choice. Services just accumulated, each one added for a reasonable-sounding reason at the time, until the monthly statement started looking like a funded startup’s infrastructure tab. Effective cloud cost optimization for indie projects is not about obsessive penny-pinching — it is about building deliberate awareness of where every dollar goes and having a principled strategy for each category of spend.
I have run production workloads ranging from solo side projects to products with several thousand monthly active users, across AWS, GCP, Hetzner, and Vultr. The patterns that cause overspending are remarkably consistent, and so are the fixes. The following breakdown starts with the structural causes, moves through every major cost category with specific numbers, and ends with a tiered budget framework you can apply directly.
The “Just Add a Service” Trap
Cloud providers are designed for incremental adoption. Adding a managed service takes three minutes and a few clicks. Evaluating the long-term cost of that service is optional — the bill arrives thirty days later. This asymmetry between the effort of adoption and the effort of evaluation is the root cause of most indie cloud overspending.
The trap compounds because each individual service seems cheap in isolation. A CloudWatch alarm is $0.10 per alarm per month. An Application Load Balancer is $0.008 per LCU-hour. AWS Secrets Manager is $0.40 per secret per month plus $0.05 per 10,000 API calls. None of these numbers looks alarming on its own. Collectively, they represent $25 to $45 of monthly spend on infrastructure plumbing before any actual compute or storage is measured. On a $40 MRR product, that is a meaningful fraction of revenue going to auxiliary services the project would run fine without.
The discipline required for effective cloud cost optimization is reviewing your bill with genuine skepticism every month rather than treating it as a fixed expense. For most indie projects, two or three of the largest line items will have cheaper or free alternatives that impose no meaningful trade-off.
What a Typical Indie SaaS Actually Costs
Abstract percentages are easy to dismiss. The following breakdown covers a realistic production stack for a bootstrapped web application with 1,500 to 5,000 monthly active users: a Next.js application server, PostgreSQL database, Redis for session caching, basic monitoring, and file storage for user uploads totaling around 15GB.
AWS Baseline (us-east-1, on-demand)
| Category | Service | Monthly Cost |
|---|---|---|
| Compute | EC2 t3.small (2 vCPU / 2GB RAM) | ~$17 |
| Database | RDS PostgreSQL db.t3.micro (1 vCPU / 1GB RAM) | ~$28 |
| Cache | ElastiCache Redis cache.t3.micro | ~$18 |
| Storage | S3 (15GB stored + 50GB egress/month) | ~$5 |
| Monitoring | CloudWatch (basic metrics + 3 dashboards) | ~$10 |
| Networking | ALB + 100GB data transfer out | ~$27 |
| Total | | ~$105/month |
That $105 figure assumes no NAT Gateway, no AWS Backup, no multi-AZ RDS, and no CloudTrail logging. Add those as the product matures and you are looking at $160 to $200 per month before the product generates $500 MRR. That ratio — infrastructure costs at 20 to 40 percent of revenue at early stage — is what cloud cost optimization targets.
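The table above is easy to sanity-check. The figures below are this article's rounded approximations, not live AWS pricing, and the "matured" additions are illustrative estimates:

```python
# Approximate monthly costs from the baseline table (us-east-1, on-demand).
# These are the article's rounded figures, not live AWS pricing.
baseline = {
    "EC2 t3.small": 17,
    "RDS db.t3.micro": 28,
    "ElastiCache cache.t3.micro": 18,
    "S3 (15GB + 50GB egress)": 5,
    "CloudWatch": 10,
    "ALB + 100GB transfer": 27,
}

total = sum(baseline.values())
print(f"Baseline total: ~${total}/month")  # ~$105/month

# Rough deltas as the product matures: NAT Gateway, multi-AZ RDS,
# backups/CloudTrail. Illustrative estimates, not quotes.
matured = total + 32 + 28 + 10
print(f"With NAT, multi-AZ, backups: ~${matured}/month")
```

Running this lands the matured stack inside the $160 to $200 range quoted above, which is the point: the plumbing, not the product, drives the growth of the bill.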
Compute: Reserved, Spot, or Get Off AWS Entirely
Compute is the most visible line item and the most mismanaged. The on-demand pricing most developers default to is the most expensive option by a substantial margin.
A 1-year reserved EC2 t3.small in us-east-1 costs approximately $10.40 per month, compared to $16.79 on-demand — a 38 percent saving for a commitment you will almost certainly keep for a product that is running in production. A 3-year reserved instance drops to roughly $6.50 per month. For any compute instance running continuously, committing to reserved pricing is one of the highest-ROI changes available on pure AWS.
Spot instances go further: up to 90 percent below on-demand pricing, with the trade-off of interruption risk. For stateless workers — image processing jobs, email queue consumers, bulk data export tasks — spot instances are the correct default. Running them via an Auto Scaling Group with a mix-and-match spot strategy (multiple instance families, multiple availability zones) makes the interruption risk manageable for most async workloads. For anything stateful or user-facing, spot introduces more operational complexity than the savings justify for a small team.
The third option is the one most overlooked by developers who started on AWS: leave AWS entirely for non-enterprise workloads. A Hetzner CX22 (2 vCPU / 4GB RAM) costs $5.47 per month. A Vultr High Frequency instance at 2 vCPU / 2GB RAM costs $12. A DigitalOcean Basic Droplet at 2 vCPU / 2GB RAM costs $18. All three have significantly less ancillary pricing overhead than AWS — no charges for API calls, no egress complexity, no separate charges for the load balancer you may not even need at your current traffic level.
The GCP equivalent framing: one e2-micro per month falls under GCP's always-free tier in select US regions and runs roughly $6.11/month elsewhere, which makes it useful for lightweight monitoring or proxy instances. GCP Spot VMs for preemptible workloads run 60 to 80 percent below standard pricing.
The Bandwidth Trap
AWS egress pricing is the line item that most consistently surprises developers. Outbound data transfer from EC2 to the internet costs $0.09 per GB after the first 100GB per month. That sounds small. At 1TB of monthly egress — not unrealistic for a product serving media files or large API responses to several thousand users — you are paying roughly $80 in transfer fees alone, on top of every other service cost. At 5TB, egress costs more than the compute running the application.
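Plugging the quoted rate into a quick function shows how the cost scales (the 100GB free allowance trims the 1TB figure slightly below a flat $0.09 × 1,000; real AWS pricing has further volume tiers this sketch ignores):

```python
def aws_egress_cost(gb_per_month: float, rate: float = 0.09,
                    free_gb: float = 100) -> float:
    """EC2-to-internet egress using the tiered figure quoted above:
    $0.09/GB after the first 100GB. Higher-volume tier discounts
    are ignored in this simple model."""
    return max(gb_per_month - free_gb, 0) * rate

for gb in (50, 1000, 5000):
    print(f"{gb}GB/month egress: ${aws_egress_cost(gb):.2f}")
```

At 5TB the function returns $441, comfortably more than the ~$17 t3.small running the application, which is the whole argument for moving traffic behind a CDN.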
GCP has comparable egress pricing. Azure is slightly cheaper at $0.087/GB, but the structural problem is the same.
The fix is architectural. Assets, media, and any user-downloaded content should not be served directly from your EC2 instance or from S3 with direct bucket access. They should be served through Cloudflare, which has zero egress fees from its CDN — traffic is absorbed at Cloudflare’s edge and never charged to your origin. Cloudflare’s free tier covers unlimited bandwidth. The only cost is the origin request that populates the cache on first load.
For storage specifically, the comparison is stark:
- AWS S3: $0.023/GB storage + $0.09/GB egress to internet
- Backblaze B2: $0.006/GB storage + $0.01/GB egress (free to Cloudflare via the Bandwidth Alliance)
- Cloudflare R2: $0.015/GB storage + zero egress fees
- Hetzner Object Storage: ~$0.014/GB storage + free within Hetzner network
For a product storing 100GB with 200GB monthly downloads: AWS S3 costs approximately $20.60 per month. Cloudflare R2 costs $1.50 per month plus small per-operation (Class A/B) charges. Backblaze B2 behind Cloudflare costs approximately $0.60 per month. The migration involves updating one environment variable — your S3 endpoint — since both Cloudflare R2 and Backblaze B2 expose S3-compatible APIs.
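Those figures can be reproduced with a two-term cost model; small differences from the quoted numbers come from per-request charges the model ignores:

```python
def monthly_storage_cost(stored_gb: float, egress_gb: float,
                         storage_rate: float, egress_rate: float) -> float:
    """Storage + egress only; per-request (Class A/B) charges ignored."""
    return stored_gb * storage_rate + egress_gb * egress_rate

scenario = dict(stored_gb=100, egress_gb=200)
providers = {
    "AWS S3":             dict(storage_rate=0.023, egress_rate=0.09),
    "Cloudflare R2":      dict(storage_rate=0.015, egress_rate=0.0),
    "Backblaze B2 + CDN": dict(storage_rate=0.006, egress_rate=0.0),  # free egress to Cloudflare
}
for name, rates in providers.items():
    print(f"{name}: ${monthly_storage_cost(**scenario, **rates):.2f}/month")
```

The dominant term for S3 is the $18 of egress, not the $2.30 of storage, which is why the zero-egress providers win by more than an order of magnitude.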
Database Costs: The Managed Premium and Its Alternatives
Database spend is the second largest controllable cost category after compute, and the one where the pricing model differences between providers are sharpest.
AWS RDS PostgreSQL on a db.t3.micro (1 vCPU / 1GB RAM) costs $28.69 per month on-demand in us-east-1. For comparison, a Supabase free tier project includes a Postgres database, auth, object storage, and a REST API. The free tier is limited to 500MB database storage and 2 CPU hours per day, which is adequate for early-stage products and side projects under light load. Supabase Pro at $25 per month includes an 8GB database and 100GB of file storage, and removes the project pause — meaningfully cheaper than RDS for comparable specs.
PlanetScale offers horizontal scaling with a MySQL-compatible Vitess layer. Its free tier includes 5GB storage and 1 billion row reads per month — generous for most indie SaaS data volumes. The trade-off is MySQL semantics and the absence of foreign key enforcement in the Vitess sharding model, which matters for complex relational schemas.
Turso, built on LibSQL (a SQLite fork), is worth attention for products with read-heavy workloads distributed across regions. Its free tier covers 500 databases and 9GB total storage. The edge replica model reduces read latency dramatically for globally distributed users, at a price point that starts free and scales cheaply. It is not the right choice for write-heavy applications but is compelling for content-heavy products where most queries are reads.
Self-managed Postgres on a Hetzner or Vultr VPS remains the highest-ROI option for products where the team has any operational database comfort. PostgreSQL running on a $12/month Hetzner CX32 alongside the application and Redis handles most indie SaaS loads comfortably, with proper backups via pgBackRest or Borgmatic to object storage costing an additional $1 to $3 per month.
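For the backup side, a minimal pgBackRest configuration pushing to S3-compatible object storage looks roughly like the following. The bucket, endpoint, credentials, and stanza name are placeholders, and key names should be checked against the pgBackRest documentation for your version:

```ini
# /etc/pgbackrest/pgbackrest.conf — minimal sketch, placeholder values
[global]
repo1-type=s3
repo1-s3-bucket=my-backups                        ; hypothetical bucket
repo1-s3-endpoint=s3.us-west-004.backblazeb2.com  ; B2's S3-compatible endpoint
repo1-s3-region=us-west-004
repo1-s3-key=KEY_ID
repo1-s3-key-secret=SECRET
repo1-path=/pgbackrest
repo1-retention-full=2                            ; keep two full backups

[main]
pg1-path=/var/lib/postgresql/16/main
```

Full and incremental backups then become cron entries running `pgbackrest --stanza=main backup`, and the object storage cost for a typical indie database stays in the $1 to $3 per month range mentioned above.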
The correct database strategy for cloud cost optimization at indie scale is: Supabase free tier or self-hosted Postgres until $500 MRR, then re-evaluate based on actual growth trajectory. Do not pay for RDS until your database size or availability requirements justify a managed SLA you are actually relying on.
Monitoring Costs: Datadog’s Per-Host Model vs Open-Source Alternatives
Datadog is genuinely excellent observability software. Its per-host pricing of $15 to $23 per host per month (Infrastructure Pro plan) is also genuinely punishing for small projects. A modest four-server setup costs $60 to $90 per month in Datadog agent fees alone, before APM traces, log ingestion, or custom metrics — each of which is priced separately and can double or triple the bill if enabled carelessly.
For indie projects, the open-source monitoring stack — Prometheus, Grafana, and Alertmanager — provides equivalent infrastructure visibility for the cost of the compute running it. On a single-server setup where all services expose metrics locally, a Prometheus scrape configuration and a few Grafana dashboards take an afternoon to configure and require minimal maintenance afterward. Uptime Kuma handles external HTTP checks, TCP monitoring, and webhook alerting in under 200MB of RAM.
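For the single-server case, the Prometheus side really is a short scrape configuration. The sketch below assumes the standard node_exporter port and an application that exposes a `/metrics` endpoint; job names and ports are illustrative:

```yaml
# prometheus.yml — single-server sketch; targets assume default exporter ports
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["localhost:9100"]   # node_exporter: CPU, memory, disk
  - job_name: app
    static_configs:
      - targets: ["localhost:3000"]   # assumes the app exposes /metrics
```

Everything else — dashboards in Grafana, alert rules in Alertmanager — layers on top of this file without per-host or per-series fees.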
Grafana Cloud’s free tier covers 10,000 active series in Prometheus, 50GB of logs, and 50GB of traces per month. For most indie products, that ceiling covers the entire observability stack without charges. The hosted option removes the maintenance overhead of running Prometheus yourself while staying free at indie-scale data volumes.
The comparison for a two-server production setup:
- Datadog Infrastructure Pro: $46/month for 2 hosts
- New Relic (full-stack observability): Free up to 100GB data/month, then usage-based
- Grafana Cloud free tier: $0 up to limits above
- Self-hosted Prometheus + Grafana on a ~$6 Hetzner CX22: ~$6/month, unlimited retention on local disk
Self-hosted Prometheus is the most powerful option when you want full control over retention, custom recording rules, and complex alert logic without per-series pricing. Grafana Cloud free tier is the right choice if you want zero infrastructure overhead and stay within its limits. Datadog and New Relic’s paid tiers earn their cost only when you need unified APM, distributed tracing, and support SLAs — meaningful for teams, not typically for solo indie products.
Free Tier Stacking: The Art of Zero-Cost Infrastructure Primitives
Several services offer genuinely useful free tiers that, combined thoughtfully, cover the entire auxiliary infrastructure of an early-stage product.
Cloudflare free plan includes CDN with unlimited bandwidth, DNS hosting, DDoS mitigation, SSL termination, and Cloudflare Workers (100,000 requests per day). Workers run JavaScript and TypeScript at the edge, enabling geolocation-based routing, A/B testing, request transformations, and lightweight API endpoints — all without a server, at zero cost for most indie traffic levels. Workers KV adds distributed key-value storage, and Cloudflare D1 adds a globally replicated SQLite database on the same free plan (5GB storage, 5 million row reads per day).
Supabase free tier provides Postgres (500MB), auth with magic links and social OAuth, object storage (1GB), and a REST/GraphQL API over your database schema. The project pauses after 7 days of inactivity in the free tier — acceptable for side projects, not for production products. Upgrading to Pro at $25/month removes that restriction while staying significantly cheaper than AWS equivalents.
Vercel hobby plan provides serverless function hosting, CDN, preview deployments, and analytics for non-commercial projects. The commercial restriction is the relevant constraint: any project generating revenue requires the Pro plan at $20/month per seat. For pre-launch or non-revenue side projects, it is a useful zero-cost deployment platform.
Railway free tier provides $5 monthly credit for compute, which covers a lightweight application server running continuously if you stay within the memory limits. Neon’s free tier provides serverless Postgres with 0.5GB storage and automatic scale-to-zero — suitable for development and early-stage production with infrequent queries.
The effective strategy is to stack these at early stage: Cloudflare for CDN and Workers, Supabase for database and auth, Vercel or Railway for application hosting. A product at this configuration can run for months with no infrastructure bill while validating the business model. The migration path to paid infrastructure is straightforward once revenue justifies the upgrade.
Monthly Budget Framework by Stage
$0–$50/month: Pre-Revenue and Early Validation
At this tier, the goal is to run a production-quality stack on free tiers and cheap primitives. A viable configuration:
- Cloudflare free plan (CDN, Workers, D1 for lightweight data)
- Supabase free tier (Postgres, auth, storage)
- Railway $5 credit or Fly.io free tier (application server)
- Uptime Kuma on a shared VPS or free tier cloud (monitoring)
- Backblaze B2 free tier (10GB storage)
This configuration handles pre-launch traffic and early paying users with zero monthly bill in many cases. The trade-offs are project pause risk on Supabase free tier and cold starts on serverless platforms — acceptable at this stage.
$50–$200/month: Early Revenue, Established Product
With a monthly infrastructure budget of $50 to $200, the priority shifts from free tiers to reliability and operational simplicity. A recommended stack:
- Hetzner CX22 or CX32 ($5–12/month): application + self-hosted Postgres + Redis
- Cloudflare Pro ($20/month if WAF rules are needed, otherwise free tier)
- Cloudflare R2 or Backblaze B2 (near-zero for typical indie file storage)
- Grafana Cloud free tier (monitoring)
- Resend free tier (transactional email up to ~3,000/month) or Postmark’s free developer tier (100 emails/month)
- Clerk or Auth0 free tier (up to 10,000 MAU)
Total realistic cost: $25 to $60/month, with most of the budget on compute. This is the configuration that makes the strongest argument for cloud cost optimization at indie scale — meaningful savings versus AWS with no meaningful reliability trade-off.
$200–$500/month: Growth Stage, Multiple Services
At this stage, time-to-fix becomes more valuable and some managed services earn their cost. Recommended adjustments:
- Hetzner CX42 or CX52 ($20–35/month) or a dedicated Hetzner server for larger databases
- Supabase Pro ($25/month) or DigitalOcean Managed Postgres ($15/month) if separating database from compute is preferable
- Backblaze B2 for primary file storage with Cloudflare CDN in front
- Better Uptime or Freshping paid tier for multi-region uptime monitoring
- Sentry developer plan ($26/month) for error tracking with meaningful retention
Total realistic cost: $80 to $130/month. Revenue at this stage should comfortably absorb the infrastructure cost while leaving meaningful margin. The managed database question is worth re-evaluating here — if the team does not have clear operational ownership of the self-hosted database, migrating to a managed option at this revenue level is prudent.
When to Stop Optimizing and Start Earning
Cloud cost optimization has a diminishing-returns problem that is worth naming explicitly. Saving $15 per month by migrating from S3 to Cloudflare R2 is a good use of two hours. Spending eight hours migrating authentication infrastructure to save $12 per month is not — especially if that time could be spent on a feature driving user retention or conversion.
The right threshold for optimization effort is roughly: spend time on cost reduction when the annual saving exceeds three times your hourly opportunity cost. At $100/hour opportunity cost, a change saving $360/year ($30/month) justifies up to three hours of migration work. Changes below that threshold belong on a low-priority list, not on this month’s sprint.
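That rule of thumb is easy to encode. The three-times multiplier is the heuristic from the paragraph above, not a universal constant, and the hours budget here is the simple break-even (annual saving divided by hourly rate), which the article rounds down conservatively:

```python
def optimization_verdict(monthly_saving: float,
                         hourly_rate: float) -> tuple[bool, float]:
    """Rule of thumb: optimize when the annual saving exceeds
    3x your hourly opportunity cost. The hours budget is the
    one-year break-even on migration time."""
    annual_saving = monthly_saving * 12
    worth_doing = annual_saving > 3 * hourly_rate
    hours_budget = annual_saving / hourly_rate
    return worth_doing, hours_budget

worth, hours = optimization_verdict(monthly_saving=30, hourly_rate=100)
print(worth, f"up to {hours:.1f}h of migration work")
```

A $12/month saving at the same rate fails the threshold ($144/year versus a $300 bar), which is exactly the auth-migration example above: it belongs on the low-priority list.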
There is also a category of optimization that actively harms the product. Over-optimizing a database to self-hosted Postgres to save $25/month on Supabase, then experiencing a data loss incident because backups were not configured correctly, costs far more than the managed service ever would. Infrastructure that touches user data deserves conservative choices, not aggressive cost minimization.
The practical framework: audit your cloud bill once per quarter. For each line item above $20/month, answer three questions. Is there a cheaper alternative with equivalent reliability for my workload? Does migrating require skills my team has? Does the annual saving justify the migration time? Act on the items that clear all three. Leave the rest alone and focus on revenue.
The developers who run the most cost-efficient infrastructure are not the ones who optimize obsessively — they are the ones who made good initial decisions, review their bills with genuine attention, and resist the incremental service additions that seem free until the invoice arrives.