Most server hardening guides are written for teams. They assume you have a dedicated DevOps person, a change management process, and enough bandwidth to read 40-page security policies. If you’re a solo developer running one or two VPS instances for your own projects — maybe a WordPress site, a Node.js API, a Docker-based side project — those guides will either overwhelm you or send you down a rabbit hole of configuration that you’ll abandon by the third step.

This guide is different. It’s built around what a solo developer actually needs to do in the first 30 minutes after spinning up a new VPS, what ongoing hardening actually protects you versus what’s just security theater, and how to set up enough automated monitoring that you don’t need to stare at logs every night. It reflects lessons learned the hard way from running lean infrastructure for multiple projects simultaneously.

Why Solo Developers Are a Uniquely Attractive Target

There’s a common misconception that solo developers aren’t worth attacking. The reality is almost the opposite. Your servers are often more vulnerable than enterprise servers, not because you’re less skilled, but because you’re time-constrained. You’re balancing feature development, customer support, and infrastructure maintenance alone. Security tasks get deferred.

Automated scanners don’t care whether you’re a Fortune 500 company or a one-person operation. Within minutes of provisioning a new DigitalOcean droplet or Linode instance, bots are already probing your SSH port. A server left with default settings and a weak root password will typically be compromised within hours. Brute-force attacks on SSH are not targeted — they’re automated and relentless.

Generic hardening guides usually fail solo developers in two specific ways. First, they assume you have time for defense-in-depth layering across multiple systems. Second, they often recommend security measures that matter in enterprise contexts but add real operational friction for a solo operator without proportionate benefit. The goal here is to identify the 20% of measures that stop 80% of real threats.

The First 30 Minutes: What to Do Before Anything Else

The moment you spin up a new VPS, the clock is running. Before you install your application, before you configure your web server, before you do anything else, run through this sequence.

Step 1: Create a non-root user immediately

Log in as root just long enough to create a non-privileged user that you’ll use for everything else.

adduser deploy
usermod -aG sudo deploy

Then copy your SSH key to that user:

rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy

Step 2: Update the system before touching anything

Before installing any packages:

apt update && apt upgrade -y

On a fresh Debian or Ubuntu system, this frequently patches 20 to 40 known vulnerabilities that shipped with the base image.
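Some of those patches (kernel, libc) only take effect after a reboot. Debian and Ubuntu drop a marker file when one is pending; a quick check, wrapped in a throwaway helper (the function name is my own) so it is easy to test:

```shell
# Debian/Ubuntu create this marker file when an applied update
# needs a reboot to take effect
reboot_needed() {
    [ -f "${1:-/var/run/reboot-required}" ]
}

if reboot_needed; then
    echo "Reboot required to finish applying updates"
fi
```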

Step 3: Lock down SSH right away

Open /etc/ssh/sshd_config and change these values:

PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
Port 2222

Changing the SSH port to something other than 22 will not stop a determined attacker, but it will eliminate roughly 90% of the automated noise in your auth logs. That makes real threat signals much easier to spot. After editing, validate the file with sshd -t, then restart the service:

sshd -t && systemctl restart sshd

On recent Ubuntu releases the unit is named ssh rather than sshd, and on Ubuntu 24.04 SSH is socket-activated by default, so the Port setting may be ignored until you adjust or disable ssh.socket. Do not close your existing SSH session until you’ve verified the new configuration works from a second terminal window.

Step 4: Set up automatic security updates

Install unattended-upgrades and configure it to auto-apply security patches:

apt install unattended-upgrades
dpkg-reconfigure unattended-upgrades

Edit /etc/apt/apt.conf.d/50unattended-upgrades and make sure the security line is uncommented. This is not optional. Unpatched servers are the single most common entry point for real attacks on small infrastructure.
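On Ubuntu, the relevant stanza looks roughly like this — the exact origin strings vary by release (Debian uses Origins-Pattern instead), so check against the file your distribution ships rather than copying blindly:

```
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
```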

Step 5: Install and configure fail2ban

apt install fail2ban

Create /etc/fail2ban/jail.local with:

[sshd]
enabled = true
port = 2222
maxretry = 3
bantime = 3600
findtime = 600

This bans any IP that fails SSH authentication three times within 10 minutes for one hour. Adjust bantime upward if you want longer bans. For most solo dev setups, this configuration alone stops the vast majority of automated brute-force attempts.
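One addition worth considering if you connect from a static IP: whitelist your own address so that fumbling a key passphrase during an outage can't get you banned from your own server. In the same jail.local — 203.0.113.10 is a documentation placeholder, substitute your actual address:

```
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 203.0.113.10
```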

SSH Hardening: What Actually Matters vs Security Theater

SSH is where most solo developers spend disproportionate time on security measures that don’t move the needle. Let’s be direct about what matters.

What actually protects you

  • Disabling password authentication entirely. If you’re still using passwords for SSH, this is your single highest-priority fix. Key-based authentication means brute force against SSH becomes computationally infeasible.
  • Ed25519 keys instead of RSA. If you’re generating new SSH keys, use ssh-keygen -t ed25519. Shorter, faster, and more resistant to side-channel attacks than older RSA 2048-bit keys.
  • AllowUsers directive. Add AllowUsers deploy to your sshd_config. This means even if someone creates another user account through some other vulnerability, they can’t SSH in.
  • SSH key passphrases. If your laptop gets stolen, your private key is exposed. A passphrase on the key adds a second factor without requiring any server-side configuration changes.
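Generating a fresh Ed25519 key takes one command. A sketch — the file path and comment are arbitrary choices, and in real use you would omit -N "" so ssh-keygen prompts you for a passphrase:

```shell
# Generate an Ed25519 keypair; -C attaches a comment, -f picks the path.
# -N "" means no passphrase — fine for a demo, not for a real deploy key.
ssh-keygen -t ed25519 -C "vps deploy key" -f /tmp/id_ed25519_demo -N "" -q

# The public half is what goes into ~/.ssh/authorized_keys on the server
cat /tmp/id_ed25519_demo.pub
```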

What is mostly security theater for solo devs

  • Port knocking. Legitimate in some contexts, but the operational overhead of configuring knock sequences across multiple client machines makes this more painful than beneficial for a solo setup.
  • Two-factor authentication for SSH via TOTP. The configuration complexity and the risk of locking yourself out — especially when you’re troubleshooting an outage at 2am — often outweighs the benefit when you’re already using key-based auth with a passphrase.
  • Hiding your SSH port on a very high, obscure port number. Bots now scan full port ranges. Obscuring the port reduces log noise but provides very little actual security.

Firewall Rules for a Typical Solo Dev Stack

A typical solo developer’s VPS runs some combination of WordPress, Node.js, and Docker containers. Here’s a practical UFW configuration for that stack.

Install UFW and set its default policies:

apt install ufw
ufw default deny incoming
ufw default allow outgoing

Then open only what you actually need:

ufw allow 2222/tcp    # SSH on your custom port
ufw allow 80/tcp      # HTTP
ufw allow 443/tcp     # HTTPS
ufw enable

If you’re running a Node.js app that listens on, say, port 3000, do not open that port publicly. Let Nginx proxy to it and only expose 80 and 443. Your Nginx config for this would look like:

location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

Docker and the firewall problem you might not know about

This is a specific and serious issue that catches solo developers off guard. Docker modifies iptables rules directly and bypasses UFW. If you run a Docker container with -p 3000:3000, Docker opens that port to the world regardless of your UFW rules.

To fix this, edit /etc/docker/daemon.json:

{
  "iptables": false
}

Be aware that disabling Docker's iptables management also disables the NAT rules that give containers outbound connectivity, so you would then have to manage iptables entirely yourself. The alternative is binding containers to localhost only: -p 127.0.0.1:3000:3000. The localhost binding approach keeps Docker's networking intact, is simpler for most solo dev setups, and doesn’t require touching iptables directly.
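If you use Docker Compose, the same localhost binding is expressed in the ports mapping — the service and image names here are placeholders:

```
services:
  api:
    image: your-app:latest
    ports:
      - "127.0.0.1:3000:3000"   # loopback only; Nginx proxies public traffic
```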

Automated Monitoring: What to Set Up When You’re the Only Person On Call

When you’re a team of one, you cannot watch logs. The solution is to make logs watch themselves and notify you about things that matter.

Logwatch for daily digests

apt install logwatch
logwatch --output mail --mailto your@email.com --detail med

Configure it to send a daily summary. You’ll get a readable digest of authentication attempts, sudo usage, service failures, and disk space. It takes less than two minutes to scan each morning and surfaces anomalies you’d otherwise miss entirely.

Monit for service health

Monit is a lightweight process monitor that can restart services automatically and alert you when something dies:

apt install monit

A minimal /etc/monit/monitrc entry for Nginx looks like:

check process nginx with pidfile /run/nginx.pid
  start program = "/bin/systemctl start nginx"
  stop  program = "/bin/systemctl stop nginx"
  if failed port 80 protocol http then restart
  if 3 restarts within 5 cycles then timeout

Set up similar blocks for your Node.js process, MySQL or Postgres, and any other critical services. Monit will attempt to restart the service automatically and email you if it can’t recover within the threshold you specify.
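A similar block for a Node.js app assumes you run it as a systemd unit — myapp is a placeholder unit name here, and port 3000 matches the Nginx proxy example earlier:

```
check process myapp matching "node"
  start program = "/bin/systemctl start myapp"
  stop  program = "/bin/systemctl stop myapp"
  if failed port 3000 protocol http then restart
  if 3 restarts within 5 cycles then timeout
```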

Simple uptime monitoring with an external service

Monit watches processes on the server, but it can’t tell you when the entire server is unreachable. Use an external free-tier service like UptimeRobot or Better Uptime to ping your domain every five minutes. The free tier on both is more than sufficient for most solo dev projects and takes about 10 minutes to configure.

Disk space alerts before they become crises

One of the most common causes of unexpected outages in solo dev environments is a full disk. A simple cron job handles this:

0 8 * * * df -h | awk '0+$5 > 80 {print}' | mail -s "Disk usage alert" your@email.com

This runs at 8am daily and emails you if any partition is over 80% full — the 0+$5 forces awk to compare the "85%"-style field numerically rather than as a string. Primitive, but effective.
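One subtlety worth knowing: df reports usage as "85%", and awk compares that field as a string unless you coerce it to a number (0+$5 does this). A naive string comparison can miss a genuinely full disk, since "100%" sorts before "80" lexicographically. Here is a numeric-safe version of the filter, written to read df output on stdin so it's easy to sanity-check with fabricated data — check_disk is a throwaway helper name:

```shell
# Numeric-safe disk filter: 0+$5 coerces "85%" to 85 so the comparison
# is numeric. Reads `df -h`-style output on stdin; threshold in percent.
check_disk() {
    awk -v limit="${1:-80}" '0+$5 > limit {print}'
}

# Feed it live data:
df -h | check_disk 80
```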

Backup Strategy That Doesn’t Rely on Getting Around to It

The honest thing to say about solo developer backups is this: if it requires manual steps, it will eventually not get done. The only reliable backup is the one that runs without you thinking about it.

The 3-2-1 baseline for a solo project

  • 3 copies of your data
  • 2 different storage types
  • 1 copy offsite

For a typical solo setup, this means: the live data on your VPS, automated snapshots within your hosting provider (DigitalOcean, Linode, and Vultr all offer scheduled snapshots), and a weekly or daily backup to an offsite location like Backblaze B2 or AWS S3.

Automating MySQL backups to S3

Install the AWS CLI and configure it with a bucket-specific IAM user. Then create /usr/local/bin/backup-db.sh:

#!/bin/bash
set -euo pipefail   # abort on any failure so a broken or partial dump is never uploaded
DATE=$(date +%Y%m%d_%H%M%S)
mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases | gzip > /tmp/backup_$DATE.sql.gz
aws s3 cp /tmp/backup_$DATE.sql.gz s3://your-backup-bucket/mysql/
rm /tmp/backup_$DATE.sql.gz

Make it executable and add to crontab:

chmod +x /usr/local/bin/backup-db.sh
0 2 * * * /usr/local/bin/backup-db.sh

Store MYSQL_ROOT_PASSWORD in /etc/environment rather than hardcoding it in the script. The IAM user you create for this S3 bucket should have only write access to that specific bucket and nothing else.
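One failure mode the script above doesn't fully guard against on its own: a dump that dies mid-write and leaves a truncated file that uploads "successfully". A cheap integrity check to run before the upload — verify_dump is my own helper name, and the demo fabricates a tiny dump just to exercise it:

```shell
#!/bin/bash
set -eu

# Reject truncated or empty dump files before they reach S3:
# a valid gzip stream plus a non-zero size are cheap sanity checks.
verify_dump() {
    gzip -t "$1" && [ -s "$1" ]
}

# Demo with a fabricated dump file
f=$(mktemp)
echo "CREATE TABLE t (id INT);" | gzip > "$f"
verify_dump "$f" && echo "dump looks intact"
rm "$f"
```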

Test your backups

This is the part almost everyone skips. Once a month, restore a backup to a test instance and verify the data is intact. A backup you’ve never tested is not a backup you can trust.

The Personal Setup and What It Took to Get There

The configuration described in this guide is close to what runs across several production servers handling real traffic. It was not always this clean. An earlier setup included WordPress installs that hadn’t been updated in months, MySQL listening on a public port because the application had been misconfigured during a late-night deployment, and log files that had grown to fill the entire disk, bringing down services without any warning.

The lessons were expensive in time if not in money. A few that proved most useful:

The first attack vector is almost always something old. In every incident worth examining, the entry point was an outdated plugin, an unpatched library, or a service left running that should have been disabled. Automated updates and regular apt upgrade runs eliminate a large percentage of exploitable surface area.

Audit what’s actually listening. Run ss -tlnp periodically. It’s common to find services listening on 0.0.0.0 that should only be accessible on 127.0.0.1. Redis, in particular, has a long history of being exposed publicly by developers who didn’t realize it was accessible.
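That periodic audit is easy to script. A sketch that filters ss output for sockets bound to all interfaces — public_listeners is my own helper name, and it reads stdin so you can also feed it captured output:

```shell
# Print local addresses bound to all interfaces (0.0.0.0, *, or [::])
# from `ss -tlnp`-style output; these deserve a second look.
public_listeners() {
    awk 'NR > 1 && ($4 ~ /^(0\.0\.0\.0|\*|\[::\]):/) {print $4}'
}

if command -v ss >/dev/null; then
    ss -tln | public_listeners
fi
```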

Keep a server setup checklist in version control. Maintaining a short Markdown file that documents what’s installed, why, and how it’s configured is not bureaucracy — it’s essential when you’re troubleshooting at midnight three months after you set something up.

The monitoring overhead is front-loaded. Setting up Logwatch, fail2ban, Monit, and UptimeRobot takes about two hours total. After that, they require almost no ongoing attention. That two-hour investment has repeatedly surfaced issues that would otherwise have become multi-hour outages.

What to Prioritize If You Only Have an Hour

If you’re reading this article with a VPS already running and only have limited time, the priority order is:

  1. Disable root SSH login and password authentication
  2. Enable automatic security updates
  3. Install and configure fail2ban
  4. Review what ports are publicly accessible with ss -tlnp and ufw status
  5. Set up at least one external uptime monitor
  6. Confirm automated database backups are running and test one

Steps 1 through 3 stop the most common automated attacks. Steps 4 and 5 give you visibility. Step 6 is what saves you when everything else fails.

Server hardening for solo developers is not about achieving perfect security. It’s about raising the cost of attack above the value of your target, removing the low-hanging fruit that automated scanners rely on, and ensuring that when something does go wrong — and eventually something will — you have the data and backups needed to recover quickly. Start with the first 30 minutes checklist, build from there, and automate everything that would otherwise get skipped.

By Michael Sun

Founder and Editor-in-Chief of NovVista. Software engineer with hands-on experience in cloud infrastructure, full-stack development, and DevOps. Writes about AI tools, developer workflows, server architecture, and the practical side of technology. Based in China.
