Jun 7, 2020

Fast deployment of Nginx/PHP/MySQL with Let's Encrypt, HTTP/2 and IPv6 using Docker Swarm.

Periodically I deploy simple websites with Nginx, PHP and MySQL. It is usually one or two virtual servers, all with similar requirements and configs. Over time I converged on a "setup-and-forget" set of configs that lets me deploy a site and upgrade it a few years later.

Here is my "Infrastructure as Code" for LEMP sites that you can deploy pretty fast.

Functional requirements:
  1. HTTPS with an ACME-issued certificate (Let's Encrypt, BuyPass, and others)
  2. A self-signed certificate for local deployments on a local domain
  3. Nginx with HTTP/2
  4. IPv6 support with a single IP address
  5. Services are deployed in containers, so they can be easily replaced and migrated
  6. Use environment variables for database passwords and other secrets
  7. Keepalive checks for services
  8. Works on Linux as the production environment
  9. Works in Docker Desktop on both Mac and Windows as the development environment
Non-functional requirements:
  1. Lightweight enough to fit a cheap virtual server with 1 GB of RAM.
  2. Vendor-agnostic, use official public repositories, no third-party dependencies
  3. Single site per server; VPSes are cheap enough. One server can serve multiple domains, of course.


Why Docker Swarm?

Kubernetes could be a great choice, but it consumes gigabytes of RAM, which makes it a non-option for tiny sites. Ansible and Vagrant are popular tools among system administrators, but they don't provide service decoupling. I like being able to upgrade PHP, Nginx and MySQL instantly with a single command, and to switch back just as easily.

Problems:
  1. Docker Swarm does not provide cron job scheduling, and I need to run ACME renewal every two months. Nginx has no ACME plugin to renew certificates, so I need a simple way to renew Let's Encrypt certificates inside a Docker container.
  2. Swarm does not support IPv6 options from a docker-compose file.
  3. Nginx needs a certificate file to start, and ACME needs a web server to confirm a request. A chicken-and-egg problem for the first production run.
  4. Local deployment is done with a self-signed certificate, and no renewal tasks should run for it.
Key points:

* A decent solution for issuing and renewing an ACME certificate inside a Docker container is found in the nginx-le image. It bundles an outdated Nginx version, though, so I reuse its approach with the official "nginx:alpine" image.

* You can see the "acme.sh" package used instead of Certbot. In its "Nginx mode", acme.sh temporarily adjusts the Nginx configuration to answer validation challenges. This way certificates can be obtained before Nginx starts with the production configs. A local deployment generates a self-signed certificate instead.
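As a rough sketch of what the issue-and-install step looks like (the domain and file paths here are placeholders, and the exact invocation in the real setup may differ):

```shell
# In "Nginx mode" acme.sh finds the matching server block and serves the
# HTTP-01 challenge through the running Nginx, so no separate webroot is needed.
acme.sh --issue -d example.com --nginx

# Copy the issued certificate to where the production vhost expects it,
# and reload Nginx whenever the certificate is renewed.
acme.sh --install-cert -d example.com \
    --key-file       /etc/nginx/ssl/example.com.key \
    --fullchain-file /etc/nginx/ssl/example.com.crt \
    --reloadcmd      "nginx -s reload"
```

The same `--install-cert` hook fires on every renewal, which is what makes the periodic renewal job a one-liner.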

* Recent Nginx images run init scripts from "/docker-entrypoint.d/". Scripts mounted into the Nginx container install the openssl, curl and acme.sh packages when the container starts. This way I avoid maintaining a custom image. If your container does not try to generate certificates, remove the certificates volume and pull a fresh Nginx image.
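A hypothetical compose fragment showing the idea (script and volume names are illustrative, not the exact ones from my setup):

```yaml
# Anything mounted into /docker-entrypoint.d/ is executed by the official
# image's entrypoint before Nginx itself starts.
services:
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/30-install-acme.sh:/docker-entrypoint.d/30-install-acme.sh:ro
      - certs:/etc/nginx/ssl
volumes:
  certs:
```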

* In the same way, I initialize the MySQL database from an "init.sql" file by mounting it into the container.
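A sketch of that mount, assuming the official MySQL image (which runs any `*.sql` files found in "/docker-entrypoint-initdb.d/" on the first start against an empty data directory):

```yaml
services:
  mysql:
    image: mysql:8
    env_file:
      - ./mysql/db.env          # local defaults; overridden by env vars in production
    volumes:
      - ./mysql/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
      - dbdata:/var/lib/mysql   # persist data across container upgrades
volumes:
  dbdata:
```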

* Official PHP images do not support init scripts, so I substitute the entrypoint script with a custom one.
In most cases I use images from Docker Hub with extensions compiled on top of the official PHP images: grigori/phpextensions.
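The substitution itself can look like this hypothetical fragment (the script name is a placeholder; the custom entrypoint does its one-time setup and then execs php-fpm, the image's usual command):

```yaml
services:
  php:
    image: grigori/phpextensions
    entrypoint: /usr/local/bin/custom-entrypoint.sh
    volumes:
      - ./php/custom-entrypoint.sh:/usr/local/bin/custom-entrypoint.sh:ro
```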

* There is no simple solution to support IPv6 in Docker Swarm. Docker uses NAT to route traffic, and IPv6 was designed to avoid NAT.
One possible way is to use a Compose file version 2, ask the provider to grant an IPv6 /80 range, and set the "enable_ipv6" flag for the Docker engine. This is what the Docker documentation suggests, but it feels too complicated for such a small task.
Another way is to run an IPv6 NAT service in a Docker container. I did not try it because I found an easier way.

* A "clear_env = no" clause in the "fpm.conf" file allows passing environment variables to PHP scripts. Only the variables listed in "docker-compose.yml" are passed, so it's safe.
Environment variables, if defined, take precedence over values in the "mysql/db.env" file: values from the file can be used in a local deployment, while environment variables should be defined in production.
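For reference, the relevant pool setting looks like this (pool name is the conventional default; your fpm.conf may differ):

```ini
; By default PHP-FPM clears the environment for its workers, so getenv()
; and $_ENV see nothing from the container. Disabling that lets the
; variables declared in docker-compose.yml reach the PHP scripts.
[www]
clear_env = no
```

Only variables explicitly listed under the service's `environment:` key reach the workers, which keeps the exposure surface small.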

* I run Nginx in the "host" network, which means it is not isolated from the server. I can't replicate or scale it, make a failover, or migrate it among servers, but I don't need to. I just want to run the Nginx web service like I did for years. As a bonus, the host network bypasses Docker's NAT, so Nginx simply listens on the server's IPv6 address, which is the easier way mentioned above.

* New problem: the host network can't be used in Docker Desktop on Windows and Mac. This is solved with YAML inheritance by overriding the network settings in a "docker-compose.override.yml" file.
A local deployment with "docker-compose up" reads both "docker-compose.yml" and "docker-compose.override.yml". In production, the command "docker stack up -c docker-compose.yml mystack" ignores the override file and deploys Nginx to the host network.
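A minimal sketch of such an override, assuming the base file attaches Nginx to the host network (port numbers are the obvious defaults, not necessarily my exact config):

```yaml
# docker-compose.override.yml: read only by "docker-compose up" locally.
# Instead of the host network used in production, the local run publishes
# ports through Docker Desktop's proxy.
services:
  nginx:
    ports:
      - "80:80"
      - "443:443"
```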

* PHP scripts access the MySQL service over a separate network, while Nginx communicates with PHP over a Unix socket. The database can easily be moved or replicated to separate servers in a cluster with minor configuration changes.
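The Nginx side of that socket link can be sketched like this (the socket path is a placeholder; the socket itself lives on a volume shared between the Nginx and PHP containers):

```nginx
# Hand .php requests to PHP-FPM over a Unix socket instead of TCP:
# no network hop between the two containers.
location ~ \.php$ {
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass  unix:/var/run/php/php-fpm.sock;
}
```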


A single-instance PHP site is quite performant if done right: it can handle traffic at the scale of a million daily users and is very cost-effective. Just keep in mind that a single-instance architecture has single points of failure, provides no failover, and complicates blue-green deployment.

That's it: now I can deploy simple PHP sites with Nginx, MySQL, TLS with ACME certificates, HTTP/2 and IPv6 using a single command.