Server & Hosting Setup

We set up VPS hosting on Hetzner, Contabo, or DigitalOcean with Ubuntu LTS, Nginx, PHP-FPM, and PostgreSQL or MySQL. Zero-downtime deploys via rsync or Deployer, Let's Encrypt on a cron, UFW firewalls, and daily offsite backups. Typical project: migrate a merchant off shared hosting onto a 20 EUR/month Hetzner box and watch page load times halve because the bottleneck was always the shared CPU credit.

Best fit: teams on shared hosting who have outgrown it, anyone migrating off a managed WordPress host that's getting expensive, or workloads with GDPR data-residency requirements.
Not a fit: globally distributed apps with multi-region needs (AWS or GCP does that better), or teams who want a fully managed PaaS with zero ops (use Fly.io or Render).

What's actually different about how we do this

We pick boring infrastructure on purpose. A Hetzner CX22 (4 GB RAM, ~6 EUR/month) or CX42 (16 GB RAM, ~20 EUR/month) running Ubuntu 24.04 LTS handles most small-to-mid web workloads better than a managed PaaS at five times the cost, and we'd rather spend the saved budget on actually tuning the stack than on a nicer dashboard. For most of our clients the right answer is one VPS, not three, and we'll tell you when you don't need the architecture you came in asking for.

The stack we install almost always reads the same way: Nginx as the web server, PHP-FPM with OPcache (opcache.validate_timestamps=0 in production), PostgreSQL 16 or MySQL 8 depending on the app, and ufw for the firewall plus fail2ban against brute-force login attempts. We don't use Docker on the VPS for the web tier, because for a single-VPS setup it adds complexity without much benefit and the deploy story gets worse, not better. Containers are great for some workloads. They're rarely the right answer for a 5,000-visit-a-day Laravel site.
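The OPcache timestamp setting is the one that bites people most often. A minimal production drop-in might look like this (the path and PHP version are illustrative, and with timestamp validation off, every deploy must reload PHP-FPM or the old code keeps running):

```ini
; /etc/php/8.3/fpm/conf.d/99-opcache.ini -- sketch of a production drop-in
opcache.enable=1
opcache.memory_consumption=192
; never stat source files in production; a deploy reloads FPM instead
opcache.validate_timestamps=0
```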

Deploys go out via Deployer or a small custom rsync script, with a symlinked current/ directory so a deploy is atomic and a rollback is one symlink swap. SSL is Let's Encrypt with a cron-based renewal, not Cloudflare's flexible mode. Backups go to Backblaze B2 nightly with a 30-day retention, plus a weekly verification script that restores the latest dump into a throwaway PostgreSQL container and runs a row count. A backup you've never restored is a backup you don't have.
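The symlinked-current deploy described above can be sketched in a few lines of shell. This is a minimal illustration, not our actual Deployer recipe: the layout (releases/<timestamp>/ plus a current symlink the web server points at) and the function name are assumptions, and production would use rsync -a rather than cp for incremental copies.

```shell
# deploy: copy a build into its own release directory, then switch the
# `current` symlink atomically (symlink to a temp name, then rename over).
deploy() {
  app_root="$1"; src="$2"
  ts=$(date +%Y%m%d%H%M%S)
  release="$app_root/releases/$ts"
  mkdir -p "$release"
  # production would use rsync -a here for incremental transfers
  cp -a "$src/." "$release/"
  # build the new symlink beside the old one, then rename over it --
  # the rename is atomic, so no request ever sees a half-switched tree
  ln -sfn "$release" "$app_root/current.tmp"
  mv -T "$app_root/current.tmp" "$app_root/current"
  echo "deployed $ts"
}
# rollback is one symlink swap back to the previous releases/<ts> directory
```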

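The weekly backup check can start smaller than a full restore. The sketch below (function name and paths are illustrative) only verifies that the newest gzipped pg_dump is intact and actually contains table data; the real check should still restore into a throwaway container and count rows, as described above.

```shell
# verify_dump: sanity-check a nightly plain-format pg_dump archive.
# A plain-format dump carries one `COPY <table> ... FROM stdin;` block per
# table with data, so a dump with zero COPY lines is almost certainly empty.
verify_dump() {
  dump="$1"
  # archive must not be truncated or corrupt
  gzip -t "$dump" 2>/dev/null || { echo "FAIL: $dump is corrupt"; return 1; }
  tables=$(zcat "$dump" | grep -c '^COPY ')
  [ "$tables" -gt 0 ] || { echo "FAIL: no table data in $dump"; return 1; }
  echo "OK: $dump ($tables tables with data)"
}
```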
What can go wrong (and what we do about it)

The failure mode we see most often is a migration that goes well on day one and then breaks something on day three because of a config detail that wasn't in the runbook: PHP's upload_max_filesize and post_max_size, Nginx's client_max_body_size, the WordPress admin upload limit. Each one has a different default and lives in a different config file, and a merchant trying to upload a 12 MB product photo on a Tuesday morning won't care which layer is wrong. Before we hand over a server we run a written checklist of about 40 settings that we've collected from previous incidents, including the upload chain.
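The upload chain is a minimum-wins problem: whichever layer has the smallest limit decides what the merchant can upload. A small sketch of that arithmetic (function names are made up for illustration; the real checklist reads the values out of php.ini and the Nginx vhost rather than taking them as arguments):

```shell
# to_bytes: convert php.ini/nginx style size limits ("12M", "64K") to bytes
to_bytes() {
  n="${1%[KMGkmg]}"
  case "$1" in
    *[Kk]) echo $((n * 1024)) ;;
    *[Mm]) echo $((n * 1024 * 1024)) ;;
    *[Gg]) echo $((n * 1024 * 1024 * 1024)) ;;
    *)     echo "$n" ;;                # already a bare byte count
  esac
}

# min_limit: the effective upload ceiling is the smallest of
# upload_max_filesize, post_max_size, and client_max_body_size
min_limit() {
  min=$(to_bytes "$1")
  for v in "$2" "$3"; do
    b=$(to_bytes "$v")
    if [ "$b" -lt "$min" ]; then min=$b; fi
  done
  echo "$min"
}
```

So with upload_max_filesize=12M but post_max_size=8M, the 12 MB product photo still fails, and the error surfaces from whichever layer happened to be smallest.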

The second one is DNS. We've watched a flawless migration get blown up by a zone file that nobody updated, or by a third-party service (a payment gateway, an ESP, a webmail provider) that has the old IP cached past its TTL. We schedule cutovers for early Sunday morning, drop the DNS TTL two days ahead of the move (some zones ship with TTLs as long as 7 days), and keep the old VPS running for at least 10 days after the switch. If something breaks on day eight, we want the rollback to be instant, not "let me re-provision the box."
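Part of the pre-cutover check is mechanical: confirm no A record still carries a long TTL. A sketch that parses `dig +noall +answer <domain> A` output (the 300-second threshold is an assumption for illustration, not a rule; in practice this reads a live dig answer piped in):

```shell
# check_ttl: read dig answer lines on stdin and flag A records whose TTL
# is still above the given ceiling. Answer lines look like:
#   example.com.  86400  IN  A  203.0.113.10
check_ttl() {
  max="$1"
  while read -r name ttl class type data; do
    [ "$type" = "A" ] || continue        # skip CNAME/NS/etc. for this check
    if [ "$ttl" -gt "$max" ]; then
      echo "WAIT: $name TTL ${ttl}s > ${max}s ($data)"
    else
      echo "OK: $name TTL ${ttl}s ($data)"
    fi
  done
}
```

Usage would be `dig +noall +answer shop.example A | check_ttl 300`, run a couple of days before the cutover and again the morning of.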

The third one is monitoring blind spots. A new VPS without monitoring is a server you only hear about when it falls over. We install Netdata or a small Prometheus + Grafana setup before we hand over, with alerts to email and (if the client wants) Slack. Disk fills up before CPU does on most workloads, so the disk-usage alert is the one we tune most carefully. 80% triggers a warning, 90% pages.
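The disk thresholds above reduce to a tiny classifier; in a Netdata or Prometheus setup they live in alert config rather than a script, so this is purely illustrative of the tuning described:

```shell
# disk_status: classify used-disk percentage the way the alert is tuned --
# 80% triggers a warning, 90% pages.
disk_status() {
  pct="$1"                     # used percentage as a bare number
  if [ "$pct" -ge 90 ]; then echo "PAGE"
  elif [ "$pct" -ge 80 ]; then echo "WARN"
  else echo "OK"
  fi
}
# fed from e.g.: df -P / | awk 'NR==2 {gsub("%","",$5); print $5}'
```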

Got a project that needs sorting?

Tell us what's broken and we'll tell you whether we can help.