In this post I describe how I improved the build performance of my ASP.NET Core Docker containers when building on a serverless host that doesn't provide any layer caching. I used a combination of multi-stage builds and caching from a remote repository to improve performance by avoiding repetitive work.
tl;dr: Use `--target` to build specific stages of your multi-stage builds, and push these images to a remote repository. In subsequent builds, pull these images and use them as the build cache with `--cache-from`. See below for a complete script.
Building applications in Docker
One of the big selling points of Docker containers for application hosting is their reliability and immutability. You can run an image on any host, and it will run the same (within reason), regardless of the underlying operating system. It's also incredibly useful for building applications.
Applications often require many more dependencies to build than they do to run. Take an ASP.NET Core application for example. To build it you need the .NET Core SDK, but depending on your application, you may also need various extra tools like Node.js for front-end building and minification, or Cake for writing your build scripts. In comparison, you only need the .NET Core runtime to run an ASP.NET Core application, or if you're building a standalone app, not even that!
Using Docker to build your applications allows you to tame these dependencies, ensuring you don't end up with clashes between different applications. Without Docker you have to keep a careful eye on the version of Node used by all your applications and installed on your build server. Instead, you can happily upgrade an application's Docker image without affecting any other app on the build server.
As well as isolation, building apps in Docker containers can bring performance benefits. I've written many posts about building ASP.NET Core apps in Docker, but one of the common themes is trying to optimise the amount of layer caching Docker uses. The more caching, the less work your build process has to do, and the faster the build.
Optimising ASP.NET Core app Docker files
In previous posts I've used an example of an optimised Dockerfile for building ASP.NET Core apps. I typically use Cake for building my apps, even in Docker, but for simplicity the example below uses raw `dotnet` commands:
```dockerfile
# Builder image
FROM microsoft/dotnet:2.1.402-sdk AS builder
WORKDIR /sln

COPY ./*.sln ./NuGet.config /*.props /*.targets ./

# Copy the main source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done

# Copy the test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done

RUN dotnet restore

# Copy across the rest of the source files
COPY ./test ./test
COPY ./src ./src

RUN dotnet build -c Release

RUN dotnet test "./test/AspNetCoreInDocker.Web.Tests/AspNetCoreInDocker.Web.Tests.csproj" \
    -c Release --no-build --no-restore

RUN dotnet publish "./src/AspNetCoreInDocker.Web/AspNetCoreInDocker.Web.csproj" \
    -c Release -o "../../dist" --no-restore

# App image
FROM microsoft/dotnet:2.1.3-aspnetcore-runtime-alpine
WORKDIR /app
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]
COPY --from=builder /sln/dist .
```
This Dockerfile has multiple optimisations:
- It's a multi-stage build. The `builder` stage uses the SDK image to build and publish the app. The final app output is copied into the tiny alpine-based runtime image.
- Each `dotnet` command (`restore`, `build`, `publish`) is run individually, instead of letting `dotnet publish` run all the stages at once. This allows Docker to cache the output of each command if nothing has changed since it was last run. This is the layer caching.
- We manually copy across the .csproj and .sln files and run `dotnet restore` before copying across the rest of the source code. That way, if none of the .csproj files have changed since the last build, we can use the cached output of the `dotnet restore` layer.
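The `RUN` loop that scatters the project files back into their folders can be tried outside Docker: `${file%.*}` strips the extension, so each `.csproj` file is moved into a matching directory. A small sketch (the `App.Web`/`App.Tests` project names are made up):

```shell
# Recreate the Dockerfile's csproj scatter/gather trick in a scratch directory
# (App.Web / App.Tests are hypothetical project names)
mkdir -p /tmp/csproj-demo && cd /tmp/csproj-demo
touch App.Web.csproj App.Tests.csproj

# ${file%.*} strips the .csproj extension, so each file lands in src/<ProjectName>/
for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done

ls src   # now contains App.Tests/ and App.Web/
```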
In reality, I've found the `dotnet restore` layer caching is the most important. If anything meaningful has changed about your app (e.g. source code files or test files), then the layer cache for `dotnet build` will be invalid. This will generally also be true if you're embedding version numbers in your output dlls, especially if you're embedding a unique per-build version.
I've found that Dockerfiles like this (that rely on Docker's layer caching) work really well when you're building locally, or if you have a single build server you're using for your apps. Where it falls down is when you're building using a hosted platform, where build agents are ephemeral and provisioned on demand.
The upside of building Docker images on hosted agents
I experienced both the highs and lows of moving to a hosted build platform recently. I was tasked with moving an ASP.NET Core application from building on a self-hosted Jenkins agent to using AWS's CodeBuild platform. CodeBuild, like many other CI products, allows you to provision a build agent in response to demand, e.g. a PR request, or a push to `master` in your GitHub repo.
The process of migrating to CodeBuild had the inevitable hurdles associated with moving to any new service. But the process of building the ASP.NET Core application was fundamentally identical to building with Jenkins, as it was encapsulated in a Dockerfile. The actual build script was essentially nothing more than:
```bash
# DOCKER_IMAGE_VERSION calculated elsewhere and passed in
DOCKER_IMAGE_VERSION=1.2.3_someversion

docker build \
  -t my-images/AspNetCoreInDocker.Web:$DOCKER_IMAGE_VERSION \
  -t my-images/AspNetCoreInDocker.Web:latest \
  -f "path/to/AspNetCoreInDocker.Web.Dockerfile" \
  "."

# Push to remote repository
docker push my-images/AspNetCoreInDocker.Web:$DOCKER_IMAGE_VERSION
docker push my-images/AspNetCoreInDocker.Web:latest
```
This bash script builds the Docker image from the Dockerfile `AspNetCoreInDocker.Web.Dockerfile`. It tags the output image with both a commit-specific version number `$DOCKER_IMAGE_VERSION` and the special `latest` tag. It then pushes the image to our private repository, and the build process is done!
Our actual build script does a lot more than this, but this is all that's relevant for this post.
The downside of building Docker images on hosted agents
While the build was working, one thing was bugging me about the solution. In using a hosted agent, we'd completely lost the advantages of layer caching that the Dockerfiles are designed to take advantage of. Every build used a new agent that had none of the layers cached from previous builds. The builds would still succeed (it's only a "cache" after all), they just took longer than they would have done if caching was available.
Unfortunately, CodeBuild doesn't have anything built-in to take advantage of this Docker feature. While you can cache files to an S3 bucket, that's not so useful here. You can use `docker save` and `docker load` to save an image to a tar file and rehydrate it later, but that didn't provide much time benefit in my case. The best solution for me (based on the scripts in this issue) was to leverage two Docker features I didn't previously know about: the `--cache-from` and `--target` arguments.
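For completeness, the `docker save`/`docker load` approach I tried looks something like this (a sketch; caching the tar file itself between builds, e.g. via S3, is omitted):

```bash
# At the end of a build: persist the image and its layers to a tar file
docker save -o image-cache.tar my-images/AspNetCoreInDocker.Web:latest

# At the start of the next build: rehydrate the image so its layers are available
docker load -i image-cache.tar
```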
The `--target` argument
When you create a multi-stage build, you can provide names for each stage. For example, in the Dockerfile I showed earlier, I used the name `builder` for the first stage:
```dockerfile
# This stage is called 'builder'
FROM microsoft/dotnet:2.1.402-sdk AS builder

# ...

# This stage doesn't have a name
FROM microsoft/dotnet:2.1.3-aspnetcore-runtime-alpine
WORKDIR /app
ENTRYPOINT ["dotnet", "AspNetCoreInDocker.Web.dll"]

# Copy files from the 'builder' stage
COPY --from=builder /sln/dist .
```
By providing a name for your stages you can reference them later in the Dockerfile. In the previous example I copy the contents of `/sln/dist` from the output of the `builder` stage to the final alpine runtime image.
What I didn't realise is that you can tell Docker to only build some of the stages by using the `--target` argument. For example, to build only the `builder` stage (and not the final runtime image stage) you could run:
```bash
docker build \
  --target builder \
  -t my-images/AspNetCoreInDocker.Web:builder \
  -f "path/to/AspNetCoreInDocker.Web.Dockerfile" \
  "."
```
The output of this command would contain only the builder stage, not the runtime stage. Notice I've also tagged this stage using the `:builder` tag - I'll come back to this later when we put together the final script.
The `--cache-from` argument
By default, when you build Docker images, Docker uses its build cache to check whether it can skip any of the steps in your Dockerfile. On a hosted agent, that build cache will be empty, as a new host is spun up for every request. The `--cache-from` argument allows you to tell Docker to also consider a specific image as part of its build cache. If the provided image and your current build have layers in common, you get the same speed-up as if the image had been built on the same machine.
For example, imagine briefly that we're not using multi-stage builds, so the final image pushed to the remote repository contains all the build layers. Without using `--cache-from`, our build script would always have to execute every command in the Dockerfile, as the build cache would be empty:
```bash
# As the build cache is empty, this docker build command has to execute every layer
docker build \
  -t my-images/AspNetCoreInDocker.Web:latest \
  -f "path/to/AspNetCoreInDocker.Web.Dockerfile" \
  "."
```
Instead, we can use `--cache-from` in combination with an explicit `docker pull`:
```bash
# Pull the image from the remote repository (|| true avoids errors if the image hasn't been pushed before)
docker pull my-images/AspNetCoreInDocker.Web:latest || true

# Use the pulled image as the build cache for the next build
docker build \
  --cache-from my-images/AspNetCoreInDocker.Web:latest \
  -t my-images/AspNetCoreInDocker.Web:latest \
  -f "path/to/AspNetCoreInDocker.Web.Dockerfile" \
  "."

# Push the image to the repository. Subsequent builds can pull this and use it as the cache
docker push my-images/AspNetCoreInDocker.Web:latest
```
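The `|| true` is more than cosmetic if your build runs under `set -e`, as CI build scripts often do: without it, the very first build - where the image doesn't exist in the repository yet - would abort at the pull step. A minimal illustration, with `false` standing in for a failing `docker pull`:

```shell
set -e

# `false` stands in for a `docker pull` of an image that hasn't been pushed yet
false || true

# Without `|| true`, `set -e` would have aborted the script above
echo "build continues"
```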
This simple approach works well if your final built image contains all your Docker build layers, but if you're using multi-stage builds (and you should be!) then there's a problem. The final image that is pushed to (and pulled from) the remote repository contains only the runtime stage. That's fundamentally the point of multi-stage builds - we don't want our build layers in our runtime image. So how can we get round this? By using `--target` and `--cache-from` together!
Using `--cache-from` and `--target` with multi-stage builds
Currently we have a problem - we want to reuse the build layers of the `builder` stage in our multi-stage Dockerfile, but we don't push those layers to a repository, so we can't pull them in subsequent builds.

The solution is to explicitly build and tag the `builder` stage of the multi-stage Dockerfile, so we can push it to the remote repository for subsequent builds. We can then build the runtime stage of the Dockerfile and push that too.
```bash
DOCKER_IMAGE_VERSION=1.2.3_someversion

# Pull the latest builder image from the remote repository
docker pull my-images/AspNetCoreInDocker.Web:builder || true

# Only build the 'builder' stage, using the pulled image as cache
docker build \
  --target builder \
  --cache-from my-images/AspNetCoreInDocker.Web:builder \
  -t my-images/AspNetCoreInDocker.Web:builder \
  -f "path/to/AspNetCoreInDocker.Web.Dockerfile" \
  "."

# Pull the latest runtime image from the remote repository
# (This may or may not be worthwhile, depending on your exact image)
docker pull my-images/AspNetCoreInDocker.Web:latest || true

# Don't specify a target (build the whole Dockerfile)
# Uses the just-built builder image and the pulled runtime image as cache
docker build \
  --cache-from my-images/AspNetCoreInDocker.Web:builder \
  --cache-from my-images/AspNetCoreInDocker.Web:latest \
  -t my-images/AspNetCoreInDocker.Web:$DOCKER_IMAGE_VERSION \
  -t my-images/AspNetCoreInDocker.Web:latest \
  -f "path/to/AspNetCoreInDocker.Web.Dockerfile" \
  "."

# Push runtime images to the remote repository
docker push my-images/AspNetCoreInDocker.Web:$DOCKER_IMAGE_VERSION
docker push my-images/AspNetCoreInDocker.Web:latest

# Push the builder image to the remote repository for the next build
docker push my-images/AspNetCoreInDocker.Web:builder
```
With this approach, you keep your runtime images small by using multi-stage builds, but you also benefit from the build cache by building the `builder` stage separately.
Bonus: toggling between build approaches
As with many things, the exact speed-up you see will depend on the particulars of your app and its Dockerfile. If you're doing a lot of setup at the start of your Dockerfile (installing tools etc.) then you may well see a significant speed-up. In my case, using `--cache-from` to cache the install of Cake and the `dotnet restore` on a modest-sized application shaved about 2 minutes off a 10-minute build time. At $0.005 per minute, that means my efforts saved the company a whopping 1¢ per build. Ooh yeah, time to crack out the champagne.
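For the sceptical, the arithmetic (using my build's numbers; yours will differ):

```shell
# 2 minutes saved on a 10-minute build, at $0.005 per build-minute
awk -v cost_per_min=0.005 -v saved=2 -v total=10 'BEGIN {
    printf "saved $%.2f per build (%.0f%% of build time)\n",
           cost_per_min * saved, 100 * saved / total
}'
# prints: saved $0.01 per build (20% of build time)
```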
A 20% reduction in build time isn't to be sniffed at, but your mileage may vary. I wanted to be able to test my build with and without the explicit caching. I also wanted to be able to just use the standard build cache when building locally. Consequently, I created the following bash script, which either uses the built-in build cache or uses `--cache-from`, based on the presence of the variable `USE_REMOTE_DOCKER_CACHE`:
```bash
#!/bin/bash -eu

# If USE_REMOTE_DOCKER_CACHE is not set, set it to an empty variable
USE_REMOTE_DOCKER_CACHE="${USE_REMOTE_DOCKER_CACHE:-""}"

DOCKER_IMAGE_VERSION=1.2.3_someversion

if [[ -z "${USE_REMOTE_DOCKER_CACHE}" ]]; then
    # Use multi-stage build and built-in layer caching
    docker build \
      -t my-images/AspNetCoreInDocker.Web:$DOCKER_IMAGE_VERSION \
      -t my-images/AspNetCoreInDocker.Web:latest \
      -f "path/to/AspNetCoreInDocker.Web.Dockerfile" \
      "."
else
    # Use the remote cache
    docker pull my-images/AspNetCoreInDocker.Web:builder || true
    docker build \
      --target builder \
      --cache-from my-images/AspNetCoreInDocker.Web:builder \
      -t my-images/AspNetCoreInDocker.Web:builder \
      -f "path/to/AspNetCoreInDocker.Web.Dockerfile" \
      "."

    docker pull my-images/AspNetCoreInDocker.Web:latest || true
    docker build \
      --cache-from my-images/AspNetCoreInDocker.Web:builder \
      --cache-from my-images/AspNetCoreInDocker.Web:latest \
      -t my-images/AspNetCoreInDocker.Web:$DOCKER_IMAGE_VERSION \
      -t my-images/AspNetCoreInDocker.Web:latest \
      -f "path/to/AspNetCoreInDocker.Web.Dockerfile" \
      "."
fi

# Push runtime images to the remote repository
docker push my-images/AspNetCoreInDocker.Web:$DOCKER_IMAGE_VERSION
docker push my-images/AspNetCoreInDocker.Web:latest

if [[ -z "${USE_REMOTE_DOCKER_CACHE}" ]]; then
    echo 'Skipping builder push as not using remote docker cache'
else
    docker push my-images/AspNetCoreInDocker.Web:builder
fi
```
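One detail worth calling out: the script runs with `bash -eu`, and under `-u` referencing an unset variable is a fatal error. The `${VAR:-""}` default expansion on the first line is what makes the toggle safe to leave unset locally. A quick demonstration (the `DEMO_FLAG` variable is hypothetical):

```shell
set -u

# ${VAR:-""} expands to the default when VAR is unset, instead of erroring under -u
unset DEMO_FLAG
MODE="${DEMO_FLAG:-""}"

if [[ -z "${MODE}" ]]; then
    echo "flag unset: using local layer cache"
else
    echo "flag set: using remote cache"
fi
```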
Summary
Moving your CI build process to a hosted provider makes a lot of sense compared to managing your own build agents, but you have to be aware of the trade-offs. One such trade-off for building Docker images is the loss of the build cache. In this post I showed how I worked around this problem by using `--target` and `--cache-from` with multi-stage builds to explicitly save the builder image layers to a remote repository, and to retrieve them on the next build. Depending on your specific Dockerfile and how well it is designed for layer caching, this can give a significant performance boost compared to building the image from scratch on every build.