In my previous post I showed how I dockerized my blog and served it via the Traefik reverse proxy. In this episode I’m going to show you how I automated the build of this blog in Jenkins and how I trigger those builds from Bitbucket Pipelines. Also make sure you check out the next episode, which shows how to deploy a Spring Boot application, and the final episode, where I show how I enabled HTTPS.
Jenkins
Jenkins is the leading automation server and it’s also the server we use on the project I’m currently working on to deploy our microservices to our Kubernetes cluster, so it was a logical choice. I also briefly considered Drone because it seems a natural fit, but Jenkins being the de facto standard made me choose Jenkins. Unexpectedly, getting Jenkins up and running turned out to be quite a challenge: I had to jump through quite a few hurdles to get it to run docker and docker-compose correctly. Jenkins does not fit into the world of containers as well as I had hoped.
So I went through quite a few iterations to get Jenkins running. A vanilla Jenkins container is easy to start:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
As long as you make sure that you can mount that path (Docker on macOS seems to default to sharing /private/var/* instead of /var) you should see the set-up screen when you navigate to http://localhost:8080. Unfortunately (or fortunately, if you like puzzles ;)) it’s not that simple if you want Jenkins to build Docker containers.
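If the /var mount trips you up on macOS, a friendlier invocation is to bind-mount a host path that Docker for Mac shares by default; a sketch (the exact host path is my own pick, use whatever suits you):

# Docker for Mac only shares a handful of host paths out of the box,
# and a directory under $HOME is always among them.
docker run --name myjenkins -p 8080:8080 -p 50000:50000 \
  -v "$HOME/jenkins_home:/var/jenkins_home" jenkins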
Docker-in-docker
Since we’re running Jenkins inside a Docker container (which gets exposed to the outside via Traefik) we need to find a mechanism to run docker build and docker-compose inside that container. So we’ll just install Docker inside of it, right? Nope!
So first and foremost: do NOT run a Docker daemon inside a Docker container! It’s very likely you’ll run into problems. The solution instead is to have a Docker client inside your container talk to the Docker daemon outside of it. This makes sense when you think about it: the daemon that is actually going to start these containers lives on the host; it’s the same daemon that spun up our Jenkins container.
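In docker run terms the trick is simply to mount the host’s Docker socket into the container; a minimal sketch (the compose file later in this post does exactly the same thing):

# The docker CLI inside the container talks to /var/run/docker.sock,
# which is now the host daemon's socket rather than a local daemon.
docker run --name myjenkins -p 8080:8080 -p 50000:50000 \
  -v /var/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins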
So aside from Jenkins I also need the Docker CLI and docker-compose inside my Jenkins image. And since I use JBake to build the blog I’m adding that as well (I could also use the Maven plugin instead). To satisfy these requirements I’m going to build my own Docker image on top of the Jenkins base image.
Dockerfile
So let’s take a look at the Dockerfile for my personal Jenkins image:
FROM jenkins

ADD http://jbake.org/files/jbake-2.5.1-bin.zip /opt/jbake.zip

USER root
RUN apt-get update \
 && apt-get install -y sudo \
 && rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

RUN cd /opt && unzip -o jbake.zip
RUN cd /opt && mv jbake-2.5.1 jbake
RUN cd /opt && rm jbake.zip
RUN chown -R jenkins:jenkins /opt/jbake

RUN wget -qO- https://get.docker.com/gpg | apt-key add -
RUN wget -qO- https://get.docker.com/ | sh
RUN usermod -aG docker jenkins && usermod -aG users jenkins

RUN curl -L https://github.com/docker/compose/releases/download/1.11.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose

USER jenkins
I will break it down step by step. FROM jenkins grabs the latest jenkins image to build upon. I have not tried the (much smaller) Alpine Linux version yet; in theory it should work fine, just replace the apt-get commands with the corresponding apk instructions (a rough sketch follows after the note below). The ADD line downloads JBake and adds it to the image as /opt/jbake.zip.
Note: yes, Docker can just download files like that. It’s awesome :)
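As for that Alpine route: purely as an untested sketch, the package steps would presumably look something like this (the jenkins:alpine tag and the exact apk invocation are assumptions on my part):

FROM jenkins:alpine
USER root
# apk is Alpine's package manager; --no-cache skips the package index
# cache so there is nothing to clean up afterwards
RUN apk add --no-cache sudo
USER jenkins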
I ran into an issue where, no matter what I tried, I could not get docker to connect to the outside Docker daemon via its socket. Someone suggested just using sudo, which meant I first needed to install it:
RUN apt-get update \
 && apt-get install -y sudo \
 && rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
This managed to solve the problem of letting docker talk to the outside daemon. The four RUN lines after that install the JBake binaries in /opt/jbake. Then I install Docker:
RUN wget -qO- https://get.docker.com/gpg | apt-key add -
RUN wget -qO- https://get.docker.com/ | sh
RUN usermod -aG docker jenkins && usermod -aG users jenkins
And Docker-compose:
RUN curl -L https://github.com/docker/compose/releases/download/1.11.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
If someone has suggestions to improve the Dockerfile I’d be much obliged! I’m not too happy about having to run docker via sudo and frankly this still feels rather cumbersome. According to the guides online the usermod -aG docker jenkins && usermod -aG users jenkins bit should make that unnecessary, but in my situation it unfortunately doesn’t.
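My best guess is a group id mismatch: the docker group inside the container doesn’t share the gid of /var/run/docker.sock on the host, so membership in the container’s group buys the jenkins user nothing. An approach I’ve seen suggested, but have not tried myself, is to pass the socket’s gid as an extra group at run time:

# Look up the group id that owns the Docker socket on the host...
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
# ...and grant the container membership of that gid, so the jenkins
# user can write to the mounted socket without sudo.
docker run --group-add "$DOCKER_GID" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  nielsutrecht/jenkins:latest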
The last line, USER jenkins, switches from root back to the jenkins user, which is a best practice.
Note: you can also pre-install plugins from a Dockerfile if you want to.
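If memory serves, the official image ships an install-plugins.sh helper for exactly this; a minimal sketch (the plugin ids are just examples, not what I run):

FROM jenkins
# install-plugins.sh comes with the official image and resolves plugin
# dependencies at build time.
RUN /usr/local/bin/install-plugins.sh git workflow-aggregator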
Running Jenkins
So after a docker build and a docker push I now have my own special snowflake Jenkins image! Let’s now add it to my Traefik 'cluster' through docker-compose:
version: '2'
services:
  jenkins:
    image: nielsutrecht/jenkins:latest
    container_name: jenkins
    restart: always
    volumes:
      - /var/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - web
    labels:
      - "traefik.backend=jenkins"
      - "traefik.frontend.rule=Host: jenkins.localhost, jenkins.niels.nu"
      - "traefik.port=8080"
networks:
  web:
    external:
      name: traefik_webgateway
A docker-compose up -d and it starts up. If you wait a bit and then visit http://jenkins.niels.nu it will prompt you for an admin password that it wrote to an initialAdminPassword file. This file is stored inside your container and you can read it via docker: first find the container ID for Jenkins (docker ps | grep jenkins) and then grab the password with docker exec:
docker exec -it <containerid> cat /var/jenkins_home/secrets/initialAdminPassword
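Since the compose file above pins container_name to jenkins, you can also skip the docker ps lookup and address the container by name:

docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword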
You can now install any plugins you want and start using Jenkins.
Creating a Docker build item
So now that we have Jenkins up and running it’s time for the next step. Let’s first check if we can run Docker and Docker-compose from a build script. Click on "New item" and select "Freestyle project". In that project, add an "Execute shell" build step with the following lines:
docker version
docker-compose version
Save it and run it. You should get console output similar to this:
+ sudo docker version
Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb 8 06:36:34 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb 8 06:50:14 2017
 OS/Arch:      linux/amd64
 Experimental: false

+ sudo docker-compose version
docker-compose version 1.11.0, build 6de1806
docker-py version: 2.0.2
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
Finished: SUCCESS
If you instead get a failure like this:
+ docker version
Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb 8 06:36:34 2017
 OS/Arch:      linux/amd64
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.26/version: dial unix /var/run/docker.sock: connect: permission denied
It means that the Jenkins user doesn’t have the rights to connect to the Docker daemon. I solved that by prepending all docker and docker-compose calls with sudo (which is why I installed it in the Dockerfile).
Building the blog
So let’s now add a build item for my blog. Here I ran into a little snag: Docker is rather peculiar about the :latest image tag. In my naivety I assumed that pushing a new :latest image and then running docker-compose up would always grab the latest image. This does not seem to be the case; if it already has a 'latest' image, docker-compose seems to go 'meh, good enough' and typically won’t replace the running instance with the newer version.
This can be solved by building and deploying specific tags. For this I had to make a small change to the docker-compose.yml file in my blog:
version: '2'
services:
  blog:
    image: "nielsutrecht/app-blog:${TAG}"
    networks:
      - web
    labels:
      - "traefik.backend=blog"
      - "traefik.frontend.rule=Host: localhost, niels.nu, www.niels.nu, nibado.com, www.nibado.com, vps2.niels.nu"
      - "traefik.port=80"
networks:
  web:
    external:
      name: traefik_webgateway
Docker-compose can take environment variables (TAG in this case) and substitute them inside a docker-compose.yml file. It can take them from the environment it’s running in or from a local .env file. I’m using this to force it to replace the container with a newer version, but it’s also a great way to make parametrized docker-compose.yml files with per-environment parameter sets.
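A quick way to see the substitution happen without starting anything: write a .env file and let docker-compose render the resolved configuration. A small sketch, with a made-up tag:

# Write the variable docker-compose should substitute...
echo "TAG=0.2.42" > .env
# ...then print the resolved configuration; the image line should
# now read nielsutrecht/app-blog:0.2.42
docker-compose config | grep image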
My build item starts with pulling the Bitbucket repo that contains my blog. I won’t go into this since it’s typical Jenkins fare for most builds. The next steps are where I bake the blog using JBake and then let Docker do its magic:
/opt/jbake/bin/jbake -b

TAG=0.2.$BUILD_NUMBER
IMG=nielsutrecht/app-blog

sudo docker login -u $DOCKER_USER -p $DOCKER_PASS
sudo docker build -t $IMG:$TAG -t $IMG:latest .
sudo docker push $IMG:$TAG

sudo docker-compose pull blog
echo "TAG=$TAG" > .env
sudo docker-compose up -d
So the script starts by creating the static HTML from AsciiDoc via JBake. I then create two environment variables that contain the TAG (based on a version and Jenkins build number) and IMG name. These then get used to build, tag, push and run the image.
You’ll notice that I echo the TAG to an .env file. The TAG variable is local to the script; docker-compose can’t see it. It will however pick up an .env file if one exists in the same directory. This is how the TAG variable gets set so that it can be used inside the docker-compose.yml file.
Running this build pulls the latest master for my blog, creates the HTML, builds an image and restarts it. All automatically! Seeing it go through the entire chain the first time made me a lot happier than I expected :)
Enabling remote triggers
Jenkins makes automatic triggering of builds pretty easy. In your build, check the "Trigger builds remotely" checkbox. This lets you input a token (it can be any string; I just use 16 random bytes: openssl rand -hex 16) which needs to be appended to the URL. You can then trigger a build via a GET on http://<hostname>/job/<jobname>/build?token=<token>.
I did run into a few snags again though. That bit of explanation in the build setup does not mention that Jenkins has CSRF protection enabled by default and also doesn’t allow anonymous logins. I solved this by (for now) disabling CSRF protection and creating a separate user that can be used to trigger builds. So the URL becomes http://<user>:<pass>@<hostname>/job/<jobname>/build?token=<token>, and a GET on it now triggers the build.
I am going to revisit this in the near future to find a solution that does not involve me disabling important security features.
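For what it’s worth, the direction I expect to take (a sketch of the idea, not what I run today): keep CSRF protection enabled, fetch Jenkins’ crumb first and send it along with an authenticated POST.

# Fetch a crumb in "HeaderName:value" form from the crumb issuer...
CRUMB=$(curl -s -u <user>:<pass> \
  "http://jenkins.niels.nu/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,%22:%22,//crumb)")
# ...and pass it along when triggering the job.
curl -X POST -u <user>:<pass> -H "$CRUMB" \
  "http://jenkins.niels.nu/job/Blog/build?token=<token>"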
Continuous Delivery with Bitbucket Pipelines
I have my blog in a private Bitbucket repository (I was considering a move to GitLab until recently ಠ_ಠ) and I want it to trigger a build whenever I push a new version. Bitbucket has had webhooks for quite some time, but they had a rather annoying downside: you could not specify which branch they should trigger on, so they would fire on any change. There are some plugins that claim to solve this but they are unstable at best.
Fortunately, last year they released Bitbucket Pipelines, which can do this for you very easily. You create a small bitbucket-pipelines.yml file in your project root with the configuration. Mine looks like this:
pipelines:
  branches:
    master:
      - step:
          script:
            - curl http://<user>:<pass>@jenkins.niels.nu/job/Blog/build?token=<token>
It’s a simple script that only triggers on a change to master, which then does a GET on a URL on Jenkins that triggers the job remotely.
As a test I pushed a change to master and it started a build and pushed a new release to Docker. That’s a bingo! \(• ◡ •)/
Conclusion
I’m making a note here: huge success! Not only is my system now fully running on Docker containers, it also always stays up to date with the latest version. Soon I am also going to create a separate release path so that I have a 'dev' environment I can use to review my blog before releasing it. This would involve deploying a separate instance that triggers on changes to a 'dev' branch instead of 'master'; see the sketch below.
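On the Bitbucket side that would only need a second branch entry in the pipelines file, something like this (the Blog-dev job name is hypothetical):

pipelines:
  branches:
    master:
      - step:
          script:
            - curl http://<user>:<pass>@jenkins.niels.nu/job/Blog/build?token=<token>
    dev:
      - step:
          script:
            - curl http://<user>:<pass>@jenkins.niels.nu/job/Blog-dev/build?token=<token>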
This also concludes this episode. In the previous episode I dockerized my blog and put it online behind a Traefik reverse proxy. In this episode I set up a continuous delivery pipeline with Jenkins and Bitbucket. In the next episode I will show how to deploy a Java service in this environment.
Thank you for reading! And any feedback is more than welcome!