Recently I have been experimenting with different ways of building multi-architecture Docker images. As part of that process I wrote about Docker image manifests and the different ways you can package multi-architecture builds into a single Docker image. Packaging the images is only half the problem though: you first need to build a separate Docker image for each architecture before you can package them into a manifest.
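For context, the packaging half looks roughly like the sketch below. This is a hedged example, not taken from this post's config: the image names are hypothetical, the per-architecture images must already be pushed to the registry, and at the time of writing `docker manifest` required the experimental CLI mode.

```shell
# Sketch: stitch per-arch images into one multi-arch tag.
# Assumes jmreicha/freeipa-server:amd64 and :arm64 were already pushed.
export DOCKER_CLI_EXPERIMENTAL=enabled   # docker manifest is experimental

docker manifest create jmreicha/freeipa-server:latest \
  jmreicha/freeipa-server:amd64 \
  jmreicha/freeipa-server:arm64

# Make sure the arm64 entry advertises the right platform
docker manifest annotate jmreicha/freeipa-server:latest \
  jmreicha/freeipa-server:arm64 --os linux --arch arm64

docker manifest push jmreicha/freeipa-server:latest
```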
There are several ways to go about building Docker images for various architectures. In the remainder of this post I will show how you can build Docker images natively for arm64 only, as well as for amd64 and arm64 simultaneously, using some slick features provided by the folks at Shippable. Being able to automate multi-architecture builds in CI is really powerful because it avoids other tools and tricks that can complicate the process.
Shippable recently announced integrated support for arm64 builds. The steps for creating these cross-platform builds are fairly straightforward and are documented on their website. The only downside to this method is that you currently must contact Shippable explicitly and request access to the arm64 pool of nodes for running jobs, but after that multi-arch builds should be available.
For reference, here is the full shippable.yml file I used to test out the various types of builds and their options.
Arm64-only builds
After enabling the shippable_shared_aarch64 node pool (from the instructions above) you should have access to arm64 builds; just add the following block to your shippable.yml file.
```yaml
runtime:
  nodePool: shippable_shared_aarch64
```
The only other change that needs to be made is to point the shippable.yml file at the newly added node pool, and you should be ready to build on arm64. You can use the default "managed" build type in Shippable to create builds.
Below is a very simple example shippable.yml file that builds a Dockerfile and pushes the resulting image to my Docker Hub account. The shippable.yml file for this build lives in the GitHub repo I configured Shippable to track.
```yaml
language: none

runtime:
  nodePool:
    - shippable_shared_aarch64
    - default_node_pool

build:
  ci:
    - sed -i 's|registry.fedoraproject.org/||' Dockerfile.fedora-28
    - docker build -t local/freeipa-server -f Dockerfile.fedora-28 .
    - tests/run-master-and-replica.sh local/freeipa-server
  post_ci:
    - docker tag local/freeipa-server jmreicha/freeipa-server:test
    - docker push jmreicha/freeipa-server:test

integrations:
  hub:
    - integrationName: dockerhub
      type: dockerRegistryLogin
```
Once you have a shippable.yml file in a repo that you would like to track, and things are set up on the Shippable side, an arm64 Docker image gets built and pushed to Docker Hub every time a commit or merge happens on the master branch (or whichever branch you set up Shippable to track).
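If you only want certain branches to trigger builds, Shippable also supports Travis-compatible branch filtering. A minimal sketch, assuming the `branches` syntax from their runCI docs:

```yaml
branches:
  only:
    - master
```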
Docs for setting up this CI-style job can be found here. There are many other configuration settings available to tune, so I would encourage you to read the docs and play around with the various options.
Parallel arm64 and amd64 builds
The approach for doing simultaneous parallel builds is a little different and adds some complexity, but I think it is worth it for the ability to automate cross-platform builds. There are a few things to note about the configuration below. You can use templates in either style of job. Also, notice the use of the shipctl command. This tool lets you mimic some of the functionality that exists in the default runCI jobs, including logging in to Docker registries via shell commands, and handles other tricky parts of the build pipeline, like moving into the correct directory to build from.
Most of the rest of the config is straightforward. The top-level jobs directive lets you create multiple jobs, which in turn allows you to set the runtime to use different node pools; that is how we build against both amd64 and arm64. Jobs also allow for setting different environment variables, among other things. The full docs for jobs show all of their capabilities.
```yaml
templates: &build-test-push
  - export HUB_USERNAME=$(shipctl get_integration_field "dockerhub" "username")
  - export HUB_PASSWORD=$(shipctl get_integration_field "dockerhub" "password")
  - docker login --username $HUB_USERNAME --password $HUB_PASSWORD
  - cd $(shipctl get_resource_state "freeipa-container-gitRepo")
  - sed -i 's|registry.fedoraproject.org/||' Dockerfile.fedora-27
  - sed -i 's/^# debug:\s*//' Dockerfile.fedora-27
  - docker build -t local/freeipa-server -f Dockerfile.fedora-27 .
  - tests/run-master-and-replica.sh local/freeipa-server
  - docker tag local/freeipa-server jmreicha/freeipa-server:$arch
  - docker push jmreicha/freeipa-server:$arch

resources:
  - name: freeipa-container-gitRepo
    type: gitRepo
    integration: freeipa-container-gitRepo
    versionTemplate:
      sourceName: jmreicha/freeipa-container
      branch: master

jobs:
  - name: build_amd64
    type: runSh
    runtime:
      nodePool: default_node_pool
      container: true
    integrations:
      - dockerhub
    steps:
      - IN: freeipa-container-gitRepo
      - TASK:
          runtime:
            options:
              env:
                - privileged: --privileged
                # Also look at using SHIPPABLE_NODE_ARCHITECTURE env var
                - arch: amd64
          script:
            - *build-test-push

  - name: build_arm64
    type: runSh
    runtime:
      nodePool: shippable_shared_aarch64
      container: true
    integrations:
      - dockerhub
    steps:
      - IN: freeipa-container-gitRepo
      - TASK:
          runtime:
            options:
              env:
                - privileged: --privileged
                - arch: arm64
          script:
            - *build-test-push
```
As you can see, there is a lot more manual configuration going on here than in the first job.
I decided to use the top-level templates directive to DRY up the configuration so that it can be reused. I am also setting environment variables per job to ensure the correct architecture gets built and pushed for each platform. Otherwise the configuration is mostly straightforward. If you haven't set these types of jobs up before, the confusion mostly comes from figuring out where things get configured in the Shippable UI.
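Since the per-job `arch` variable has to be kept in sync with each node pool by hand, the SHIPPABLE_NODE_ARCHITECTURE variable mentioned in the config comment is a nice alternative. Here is a hedged sketch using my own helper function (not part of Shippable), assuming the variable reports kernel-style names like x86_64 and aarch64:

```shell
# Hypothetical helper: map a kernel arch name (uname -m style) to the
# Docker tag used in the push step.
arch_tag() {
  case "$1" in
    x86_64)  echo amd64 ;;
    aarch64) echo arm64 ;;
    *)       echo "$1" ;;    # pass anything unexpected through unchanged
  esac
}

# In a build step you could then do something like:
#   docker push jmreicha/freeipa-server:$(arch_tag "$SHIPPABLE_NODE_ARCHITECTURE")
arch_tag "${SHIPPABLE_NODE_ARCHITECTURE:-$(uname -m)}"
```

This removes the duplicated `arch` env block, at the cost of a small shell helper in the template.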
I must admit, Shippable is really easy to get started with and has good support and documentation. I am definitely a fan and will recommend and use their products whenever I get a chance. If you are familiar with Travis then using Shippable is easy; Shippable even supports Travis-compatible environment variables, which makes porting Travis configs over really easy. I hope to see more platforms and architectures supported in the future, but for now arm64 is a great start.
There are some downsides to using parallel builds for multi-architecture images. Namely, there is more overhead in setting up the job initially. With runSh (and other unmanaged jobs) you don't really have access to some of the top-level YAML declarations that come with managed jobs, so you will need to spend more time figuring out how to wire up the logic manually using shell commands and the shipctl tool, as shown in my example above. This ends up being more flexible in the long run, but it is also harder to understand and get working at first.
Another downside of assembly-line style jobs like runSh is that they currently can't leverage all of the features that runCI jobs can, including matrix generation (though there is a feature request to add it in the future) and report parsing.
The last downside of unmanaged jobs is figuring out how to wire up the different components on the Shippable side of things. For example, you don't just create a runCI job as in the first example. You first have to create an integration with the repo you are configuring so that Shippable can create an rSync job and several runSh jobs to connect to the repo and work correctly.
Overall though, I love both the runSh and runCI jobs. Both types of jobs are flexible, composable, and very easy to work with. I'd also like to mention that the support has been excellent, which is a big deal to me. The support team was super responsive and helpful in sorting out my issues; they even opened some PRs on my test repo to fix things. And as far as I know, no other CI system currently offers native arm64 builds, which I believe will become more important as the arm architecture continues to gain momentum.