
Faster multi-arch container builds


In this blog, I’ll walk you through the main options you have to build multi-arch Docker containers, and how you can take advantage of native cross-compilation to make your builds run faster.

In the dark days of yore, when everything was x86 and you could set your pants on fire simply by daring to open Chrome on your MacBook, container life was simpler; you’d build your images on your laptop or CI system and they’d run everywhere, because everything serious was x86¹. These days ARM is prevalent both on the desktop and in the data center, and we need multi-arch containers. These let us embed metadata into a container image that provides architecture-specific variants of the layers - concretely, I can pull ubuntu:latest on my ARM MacBook and on an x86 server, and get something that works more or less the same.

This is well and good, but it turns out that building these is still kind of a pain. Let’s go through the different options we have to make this work, seen through the lens of a Rust service I’ve been working on.

Easy mode - let buildx do it

Buildx has been built into Docker Desktop for a while now, and includes a helpful incantation - --platform - to set the target platform of the build. If you use this on your ordinary Docker Desktop, with your ordinary Dockerfile², most of the time everything will just work, and an image for your target architecture will shake out. If you list multiple architectures, you’ll even get a multi-arch Docker image:

docker buildx build . -f Dockerfile --platform linux/amd64,linux/arm64 -t my-multiarch-image
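
If you want to check what came out, buildx ships an imagetools subcommand that prints the manifest list, with one entry per architecture. This assumes the image has been pushed to a registry - for example with buildx’s --push flag - since that’s where the manifest list ends up:

docker buildx imagetools inspect my-multiarch-image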

You can also pass this through easily on GitHub Actions, if that’s where you happen to be doing your CI:

- name: Build and push Docker image
  id: push
  uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
  with:
    context: .
    platforms: linux/amd64,linux/arm64
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
    build-args: |
      GIT_COMMIT_SHA=${{ env.COMMIT_SHA_FULL }}
      GIT_REPOSITORY_URL=${{ env.REPOSITORY_URL }}

But - what is this strange magic? Well, buildx is going to run the build under emulation of the target architecture (QEMU) in the background. This is great - setting one simple flag to get a multi-arch image is a straightforward workflow.
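
One wrinkle: the emulation relies on QEMU being registered with the kernel’s binfmt_misc handler. Docker Desktop ships with this pre-configured, and on GitHub Actions the docker/setup-qemu-action step takes care of it; on a bare Linux host you may need to register the emulators yourself, for example with the binfmt helper image:

docker run --privileged --rm tonistiigi/binfmt --install all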

There is one downside, though, and that’s speed - in my case, my GitHub Actions builds were something like 5x slower for the ARM64 builds - which is often going to be slow enough to want to improve things.

Harder mode - cross-compile within buildx

Lots of compilers support cross-compilation these days - that is, building natively on one platform, for another target. For example, you can use Rust on your x86-64 desktop to build an ARM64 binary without emulation - the native compiler just happens to output for a different architecture. If you are dockerizing this, you then take that binary and drop it into a native image for the target architecture. This is cool, because the compilation happens at native speed as it is not emulated, and the bit that needs to run on the target architecture is limited to installing some packages and copying the build in. Buildx supports this by letting you explicitly set the architecture that different stages of a build should run on:

#
# buildx sets a number of arguments for us, including:
#
# - BUILDPLATFORM - the platform the build is currently running on
# - TARGETPLATFORM - the platform the build is targeting
# - TARGETARCH - the target architecture
#
# "...PLATFORM" looks like "linux/amd64"
# "...ARCH" is just the "amd64" part
#
# Make sure that our build is running on _our current platform_, not the target platform. This forces buildx not to emulate this stage.
FROM --platform=${BUILDPLATFORM} rust:1.82-slim-bookworm AS builder
#
# Do the very serious building stuff
#
# If we have specified a different target platform
# with --platform, use that now. If we haven't, fall back to the
# platform we're building on.
# This will cause buildx to kick back into emulation mode, but only
# as we assemble the runtime image.
FROM --platform=${TARGETPLATFORM:-$BUILDPLATFORM} debian:bookworm-slim
# Install all the native runtime deps we need
# ....
# Copy in our compiled app
COPY --from=builder /app/my-app .

We could build a Dockerfile structured like this exactly like we built the previous one; it’ll just be faster, because we’ve told buildx to stick with the native architecture of the machine it’s running on for the compilation phase:

docker buildx build . -f Dockerfile --platform linux/amd64,linux/arm64 -t my-multiarch-image

But there’s still a catch - the “Do the very serious building stuff” comment here is doing a lot of heavy lifting - you need to line up a few different things to make this work.

cross-compilation: the less great bits

The rest of this blog is going to focus on the nuances of setting up cross-compilation for a Rust service, using code I’ve written over in Datadog/sdlc-gitops-sample-stack as an example. You can view the whole Dockerfile in that repo!

Target Toolchain

Rust needs the GCC toolchain to be installed for the architecture it is targeting. If we are building for ARM64 and x86-64, we can make sure we have the toolchains installed for both - this way our build will work regardless of whether we are running on an ARM Mac laptop or an x86-64 cloud build environment.

RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    pkgconf \
    gcc-aarch64-linux-gnu \
    gcc-x86-64-linux-gnu \
    libc6-dev-arm64-cross \
    libc6-dev-amd64-cross \
    && rm -rf /var/lib/apt/lists/*
# To access the buildx-specified arguments for platform and arch in a
# RUN command, we need to include an 'ARG' for the argument. buildx
# will automatically supply the value for it.
ARG TARGETPLATFORM

Cargo Target Toolchain

We also need to make sure cargo has what it needs for the target’s compilation:

# Note the leading '.' - this pulls the environment variables from the
# target.sh script 'up', making them available to the subsequent
# rustup invocation.
# This is a bit of a hacky way to move the platform mapping out
# into the script!
RUN . ./scripts/target.sh && rustup target add $RUST_TARGET

RUST_TARGET takes target triple values - in our case these are chosen for Debian images linked against glibc. The script scripts/target.sh converts the buildx-style target platform specifier to a target triple:

#!/bin/bash
set -e

# Map TARGETPLATFORM to a Rust target triple
case "$TARGETPLATFORM" in
  "linux/amd64")
    export RUST_TARGET="x86_64-unknown-linux-gnu"
    ;;
  "linux/arm64")
    export RUST_TARGET="aarch64-unknown-linux-gnu"
    ;;
  *)
    echo "Unsupported platform: $TARGETPLATFORM"
    exit 1
    ;;
esac
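
You can exercise the mapping outside of Docker, too - source the script with TARGETPLATFORM set, as buildx would set it, and RUST_TARGET lands in your shell. A quick sanity check, assuming you run it from the repo root:

export TARGETPLATFORM=linux/arm64
. ./scripts/target.sh && echo "$RUST_TARGET" # prints aarch64-unknown-linux-gnu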

Next, we need to tell cargo to use the right linker - not just cc - for our target platform. For this we rely on some magic environment variables:

ENV CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
ENV CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER=x86_64-linux-gnu-gcc
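
These follow cargo’s CARGO_TARGET_<TRIPLE>_LINKER convention, where <TRIPLE> is the target triple upper-cased with dashes swapped for underscores. If you want to confirm cargo has picked the linker up, a verbose build should echo the full rustc invocations, -C linker flag included:

cargo build --release --target aarch64-unknown-linux-gnu -v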

Finally, when we run the build, we tell cargo which architecture to build for, again using our script from above:

RUN . scripts/target.sh && cargo build --release --target $RUST_TARGET
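
One gotcha here: with --target set, cargo drops the binary under target/<triple>/release/ rather than the usual target/release/, so the runtime stage’s COPY path needs to account for the triple. One way to keep that path architecture-agnostic - a sketch, assuming the my-app binary and /app working directory from the earlier Dockerfile - is to move the binary to a fixed location at the end of the build stage:

# Copy the build out of its triple-specific directory so the
# runtime stage doesn't need to know which triple we used
RUN . scripts/target.sh && cp target/$RUST_TARGET/release/my-app /app/my-app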

You can drop into your build environment and check that the cross-linkers are there - we installed them earlier along with the target compilation toolchains!
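
For example, you can build just the compilation stage and poke at it directly - something like the below, where --target picks out the Dockerfile stage (not the CPU architecture) and my-app-builder is a throwaway tag:

docker buildx build . --target builder --load -t my-app-builder
docker run --rm my-app-builder aarch64-linux-gnu-gcc --version
docker run --rm my-app-builder rustup target list --installed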

Native Libraries

If you can avoid native library dependencies, your life will be easier. A classic one is subbing in rustls for openssl - some crates, like reqwest, will take a dependency on the latter by default, but can be configured via Cargo.toml to depend on the former:

reqwest = { version = "0.12.8", features = [ "rustls-tls", "json" ], default-features = false }
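
If it’s not obvious what is dragging a native dependency in, cargo can tell you - the -i flag inverts the dependency tree, showing what depends on the crate in question (using openssl-sys as the example here):

cargo tree -i openssl-sys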

If you find yourself hitting -lssl not found and similar errors at the linking stage - the final step of your build - it’s worth trying to get rid of the native dep first. If you can’t, there’s a bit of a rabbit hole I will leave for another day³!

Footnotes

  1. Even in the early days of Docker, some enthusiastic folks were using Raspberry Pis and the like, but they suffered for it.

  2. Without any architecture-specific bits in it, and using base images and tools that are available for your target platform. If you aren’t doing anything too weird, this is getting easier and easier to line up.

  3. You have to decide between static linking of the library - that is, compiling it directly into your application - and dynamic linking, which requires lining the library up the same way in both the build and runtime stages. You might also want to switch to a different libc such as musl.