Let's Encrypt's Certbot is a great way to obtain free SSL certificates, but renewal can be quite a pain, especially if you're maintaining several servers and renewing manually. Since Let's Encrypt certificates expire every 90 days, your mailbox can quickly become inundated with emails warning about upcoming expirations!
Of course, you can create a cron job to schedule automatic renewal of certificates, but what if you also want to run Certbot's Docker container and use a web server like Nginx in Docker as well?
I've come up with a scheme that will incorporate all of these features, and I've packaged them into a format that allows anyone on my team to deploy Certbot for *any web service!
(*Updates will be made to handle services that don't fit well into the scheme.)
The previous examples I've seen that use Certbot and Docker are a bit kludgy, to say the least. The most useful one I've read was by
@pentacent on Medium: Nginx and Let’s Encrypt with Docker in Less Than 5 Minutes
I had an idea to improve on this scheme. One of the major pain points with automating Certbot's
HTTP-01 challenge in tandem with Nginx is that Nginx needs to host the challenge file, but the certificates specified in its config must already exist in order for Nginx to even start.
@pentacent's solution was to create fake certificates, start Nginx, delete the fake certs, then get real ones with Certbot.
Basically, this strategy hinges on executing a script that does all of this for you. That's all well and good, presuming everything works, but I wondered if I could omit the script step altogether. An interesting challenge.
My solution was to create two Dockerfiles and initially run one with Nginx configured to serve only the
/.well-known/acme-challenge/ path. Once that was done, I could theoretically add anything else to the Nginx config after the fact, as long as all the containers mount the same directory that holds the certificates.
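As a sketch, the challenge-only server block might look something like this (the domain and webroot path are placeholders, not verbatim from my repo):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    # Serve ACME HTTP-01 challenge files from the shared webroot
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
```

Since this block references no certificates, Nginx starts cleanly before any certs exist.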
Implementing the solution
As mentioned, I created two separate Dockerfiles to solve this issue. The two are nearly identical, except that the first runs
certbot certonly and the other runs
certbot renew. Additionally, for convenience, I created two sets of
nginx/conf.d directories to mount. They could technically share the same directory, but I also wanted to check the Nginx configurations for different projects into their respective branches. By keeping
/init-data/nginx/conf.d separate, I can leave it with just the webroot (ACME challenge) configuration while having any additional configurations checked into the repo under
/data/nginx/conf.d.
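For illustration, the two Dockerfiles could be as simple as the following sketch (the webroot path, domain, and email are assumptions for the example; the certbot/certbot image's entrypoint is already certbot, so only the command is overridden):

```dockerfile
# Dockerfile for the initial issue step (sketch)
FROM certbot/certbot
CMD ["certonly", "--webroot", "-w", "/var/www/certbot", \
     "--email", "admin@example.com", "--agree-tos", "-d", "example.com"]
```

```dockerfile
# Dockerfile for the renewal step (sketch)
FROM certbot/certbot
CMD ["renew", "--webroot", "-w", "/var/www/certbot"]
```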
Certbot will request certificates and store them in a mounted directory, which is read by the Nginx container. Once the entire system is up and running, you can just call
docker-compose up certbot-renew again at any time to update the certs.
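Putting it together, a docker-compose.yml along these lines (service names and host paths are illustrative) shares the certificate and webroot directories between Nginx and both Certbot containers:

```yaml
version: "3"

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./data/nginx/conf.d:/etc/nginx/conf.d   # site configs
      - ./data/certbot/conf:/etc/letsencrypt    # certificates
      - ./data/certbot/www:/var/www/certbot     # ACME webroot

  certbot:
    build: ./certbot            # the "certonly" Dockerfile
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot

  certbot-renew:
    build: ./certbot-renew      # the "renew" Dockerfile
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
```

Because all three services mount the same certificate directory, anything Certbot writes is immediately visible to Nginx.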
Instead of changing the entrypoint script for the Certbot container, I added a crontab generator that starts up the stopped
certbot-renew container, which runs again and checks whether any certificates need to be renewed. This ensures the container only runs when it needs to, which keeps things efficient. Once the renew step completes, the next step in the crontab entry restarts the Nginx service (to reload with the new certs).
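A minimal crontab-generator sketch (the schedule, compose-file location, and service names are assumptions) just has to emit a line that re-runs the renew container and then restarts Nginx:

```shell
#!/bin/sh
# Hypothetical crontab generator: prints a cron entry that runs the
# certbot-renew container weekly, then restarts Nginx to pick up new certs.
COMPOSE_DIR="/opt/my-stack"   # assumption: where docker-compose.yml lives

CRON_LINE="0 3 * * 0 cd ${COMPOSE_DIR} && docker-compose up certbot-renew && docker-compose restart nginx"

# Print it; pipe into `crontab -` (or append to an existing crontab) to install.
echo "${CRON_LINE}"
```

Note that `certbot renew` is a no-op when no certificate is close to expiry, so running this weekly is safe.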
To make use of all of this, the end user (whoever is setting up a new web server) pulls my base project, writes an Nginx config in the
/data/nginx/conf.d directory, and modifies the
docker-compose.yml file to include their project, making sure to add a
depends_on line to the Nginx service in the docker-compose spec so that their container is linked properly. This way, we can refer to the new service by name in places like the Nginx configuration.
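For example, adding a hypothetical service called myapp would mean appending it to the compose file and listing it under the Nginx service's depends_on, so Nginx can reach it by name:

```yaml
services:
  myapp:                      # hypothetical new web service
    build: ./myapp
    expose:
      - "3000"

  nginx:
    # ...existing nginx config from the base project...
    depends_on:
      - myapp                 # lets nginx.conf use proxy_pass http://myapp:3000;
```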
Once that's done, they start the services with
docker-compose up -d and execute the crontab generator.
Caveats of interest
Why mount only the
/etc/nginx/conf.d directory for Nginx and not the entire
/etc/nginx directory? The reason isn't very apparent; it comes down to how Docker handles mounted directories. If something already exists at the mount point, it doesn't get deleted; it just gets hidden under the new mount (i.e., it still exists, but is inaccessible while the mount is in place). This is good because you can recover what was there before, but it's also not quite intuitive...
If you expected Nginx to work when mounting
/etc/nginx and adding your own files, you'll find that there are a bunch of files missing! These files were previously in
/etc/nginx but were hidden when you told Docker to mount. Luckily, the installed configuration for Nginx's Docker image includes all configurations in
/etc/nginx/conf.d! Therefore, you can safely mount
conf.d and add all of your custom configurations there :)
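So, in the compose spec, the Nginx volume mount stays scoped to conf.d; something like this (the host path is illustrative):

```yaml
services:
  nginx:
    volumes:
      # Mount only conf.d; the image's default /etc/nginx/nginx.conf, which
      # includes /etc/nginx/conf.d/*.conf, remains visible and intact.
      - ./data/nginx/conf.d:/etc/nginx/conf.d
```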
My implementation restarts the entire Nginx container using a
docker-compose command. Why not just restart the service inside the container? This one's pretty simple!
When the Nginx container starts, its entrypoint is the Nginx server command itself. If that command exits (in this case, it would be terminated in order to do the restart), Docker considers the task "complete" and terminates the container. Because of this, a service restart ends in container termination! Better to just use
docker-compose to restart the container in the first place ;]