10 Docker Security Best Practices

Written by:
Omer Levi Hevroni

March 6, 2019


Docker container security

The topic of Docker container security raises concerns ranging from Dockerfile security—relating to Docker base images and potential security misconfigurations—to Docker container security at runtime, which concerns network ports, user privileges, access to the Docker mounted filesystem, and more. In this article, we will focus on the Docker container security aspects related to building a Docker image: reducing the number of security vulnerabilities introduced by Docker base images, as well as Dockerfile security best practices.

What is Docker security?

Docker security refers to the build, runtime, and orchestration aspects of Docker containers. It includes the Dockerfile security aspects of Docker base images, as well as the Docker container security runtime aspects—such as user privileges, Docker daemon, proper CPU controls for a container, and further concerns around the orchestration of Docker containers at scale.

The state of Docker container security unfolds into four main Docker security issues:

  1. Dockerfile security and best practices

  2. Docker container security at runtime

  3. Supply chain security risks with Docker Hub and how they impact Docker container images

  4. Cloud native container orchestration security aspects related to Kubernetes and Helm

In this installment of our cheat sheets, we’d like to focus on Docker security and discuss Docker security best practices and guidelines that help you build more secure, higher-quality Docker images.

Also check out our Docker security report: Shifting Docker security left


Let’s get started with our list of 10 Docker security best practices

1. Prefer minimal base images

A common Docker container security issue is that you end up with big images for your Docker containers. Oftentimes, you might start a project with a generic Docker container image, such as writing a Dockerfile with FROM node as your "default". However, when specifying the node image, you should take into consideration that the fully installed Debian Stretch distribution is the underlying image used to build it. If your project doesn’t require any general system libraries or system utilities, then it is better to avoid using a full-blown operating system (OS) as a base image.

In Snyk’s State of open source security report 2020, we found that many of the popular Docker images featured on the Docker Hub website bundle many known vulnerabilities. For example, when you use a generic and popularly downloaded node image such as docker pull node, you are actually introducing a full-blown operating system into your application, one that is known to have 642 vulnerabilities in its system libraries. This ends up adding unnecessary Docker security issues from the get-go.

Top ten most popular Docker images each contain at least 30 vulnerabilities

As the open source security report 2020 shows, each of the top ten Docker images we inspected on Docker Hub contained known vulnerabilities, except for Ubuntu. By preferring minimal images that bundle only the necessary system tools and libraries required to run your project, you are also minimizing the attack surface and ensuring that you ship a secure OS.
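
As a quick illustration, here is a sketch of a Dockerfile that prefers a minimal Alpine-based Node.js image over the full Debian-based default (it assumes a typical Node.js project with a package.json; adjust to your own stack):

# instead of the full Debian-based default:
#   FROM node
# prefer a minimal variant that bundles only what the app needs
FROM node:10-alpine
WORKDIR /app
COPY package*.json ./
# install only production dependencies
RUN npm ci --only=production
COPY . .
CMD ["node", "index.js"]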

2. Least privileged user

When a Dockerfile doesn’t specify a USER, the container defaults to running as the root user. In practice, there are very few reasons why a container should have root privileges, and running as root can easily manifest as a Docker security issue. Because the container’s root user is, by default, mapped to the root user of the Docker host, a container running as root potentially has root access on the host. Running the application in the container as root further broadens the attack surface and enables an easy path to privilege escalation if the application itself is vulnerable to exploitation.

To minimize exposure, create a dedicated user and a dedicated group in the Docker image for the application, and use the USER directive in the Dockerfile to ensure the container runs the application with the least privileged access possible.

The user you need might not exist in the base image; create it with instructions in the Dockerfile. The following demonstrates a complete example of how to do this for a generic Ubuntu image:

FROM ubuntu
RUN mkdir /app
RUN groupadd -r lirantal && useradd -r -s /bin/false -g lirantal lirantal
WORKDIR /app
COPY . /app
RUN chown -R lirantal:lirantal /app
USER lirantal
CMD node index.js

The example above:

  • creates a system user (-r), with no password, no home directory set, and no shell

  • adds the user we created to an existing group that we created beforehand (using groupadd)

  • passes, as the final argument, the name of the user we want to create, associating it with the group we created

If you’re a fan of Node.js and alpine images, they already bundle a generic user for you called node. Here’s a Node.js example, making use of the generic node user:

FROM node:10-alpine
RUN mkdir /app
COPY . /app
RUN chown -R node:node /app
USER node
CMD ["node", "index.js"]

If you’re developing Node.js applications, you may want to consult the official Docker and Node.js Best Practices.

3. Sign and verify images to mitigate MITM attacks

Authenticity of Docker images is a challenge. We put a lot of trust into these images, as we are literally using them as the container that runs our code in production. Therefore, it is critical to make sure the image we pull is the one pushed by the publisher, and that no party has modified it. Tampering may occur over the wire, between the Docker client and the registry, or by compromising the registry owner’s account in order to push a malicious image.

Verify docker images

Docker defaults allow pulling Docker images without validating their authenticity, thus potentially exposing you to arbitrary Docker images whose origin and author aren’t verified.

Make it a best practice that you always verify images before pulling them in, regardless of policy. To experiment with verification, temporarily enable Docker Content Trust with the following command:

export DOCKER_CONTENT_TRUST=1

Now attempt to pull an image that you know is not signed—the request is denied and the image is not pulled.

Sign docker images

Prefer Docker Certified images that come from trusted partners who have been vetted and curated by Docker Hub rather than images whose origin and authenticity you can’t validate.

Docker allows signing images and, by doing so, provides another layer of protection. To sign images, use Docker Notary. Notary verifies the image signature for you and blocks you from running an image if its signature is invalid.

When Docker Content Trust is enabled, as we exhibited above, pushing a Docker image signs it. When an image is signed for the first time, Docker generates and saves a private key in ~/.docker/trust for your user. This private key is then used to sign any additional images as they are pushed.
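
As a quick illustration (the image name is hypothetical), you can also sign a specific tag explicitly and review its signatures with the docker trust commands:

# sign a tag with your Docker Content Trust keys
docker trust sign myorg/myapp:1.0.0
# list the signers and signed tags for the repository
docker trust inspect --pretty myorg/myapp:1.0.0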

For detailed instructions on setting up signed images, refer to Docker’s official documentation.

How is signing Docker images with Docker’s Content Trust and Notary different from using GPG? Diogo Mónica has a great talk on this, but essentially GPG helps you with verification, not with replay attacks; Notary, which is built on The Update Framework (TUF), is designed to protect against those as well.

4. Find, fix and monitor for open source vulnerabilities

When we choose a base image for our Docker container, we indirectly take on the risk of all the container security concerns that the base image is bundled with. These can be poorly configured defaults that don’t contribute to the security of the operating system, as well as system libraries that are bundled with the base image we chose and carry their own vulnerabilities.

A good first step is to make use of as minimal a base image as is possible while still being able to run your application without issues. This helps reduce the attack surface by limiting exposure to vulnerabilities; on the other hand, it doesn’t run any audits on its own, nor does it protect you from future vulnerabilities that may be disclosed for the version of the base image that you are using.

Therefore, one way of protecting against vulnerabilities introduced by open source software is to use a tool such as Snyk to add continuous Docker security scanning and monitoring of vulnerabilities across all of the Docker image layers in use.


Scan a Docker image for known vulnerabilities with these commands:

# fetch the image to be tested so it exists locally
$ docker pull node:10
# scan the image with snyk
$ snyk test --docker node:10 --file=path/to/Dockerfile

Monitor a Docker image for known vulnerabilities so that once newly discovered vulnerabilities are found in the image, Snyk can notify and provide remediation advice:

$ snyk monitor --docker node:10

"Based on scans performed by Snyk users, we found that 44% of Docker image scans had known vulnerabilities for which there were newer and more secure base images available. This remediation advice is unique to Snyk, based on which developers can take action and upgrade their Docker images."

Snyk also found that for 20% out of all Docker image scans, only a rebuild of the Docker image would be necessary in order to reduce the number of vulnerabilities.

How to audit docker container security?

It is an essential task to scan your Linux-based container project for known vulnerabilities to ensure the security of your environment. To achieve this, Snyk scans the base image for its dependencies: the operating system (OS) packages installed and managed by the package manager, and key binaries in layers that were not installed through the package manager.

Use Snyk, a free tool for container security. Based on the scan results, Snyk offers remediation advice and guidance for public Docker Hub images by indicating a recommended base image, the Dockerfile layer in which a vulnerability was found, and more.
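
As a sketch of how this looks in practice with the Snyk CLI’s container commands (the image name is illustrative, and flags may vary by CLI version):

# scan a container image, passing the Dockerfile for base image remediation advice
$ snyk container test myorg/myapp:latest --file=Dockerfile
# keep monitoring the image so newly disclosed vulnerabilities trigger notifications
$ snyk container monitor myorg/myapp:latest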

5. Don’t leak sensitive information to Docker images

Sometimes, when building an application inside a Docker image, you need secrets such as an SSH private key to pull code from a private repository, or you need tokens to install private packages. If you copy them into the Docker intermediate container they are cached on the layer to which they were added, even if you delete them later on. These tokens and keys must be kept outside of the Dockerfile.

Using multi-stage builds

Another aspect of improving Docker container security is the use of multi-stage builds. By leveraging Docker support for multi-stage builds, you can fetch and manage secrets in an intermediate image layer that is later disposed of, so that no sensitive data reaches the final image. Use code to add secrets to said intermediate layer, such as in the following example:

FROM ubuntu as intermediate

WORKDIR /app
COPY secret/key /tmp/
# use the key to fetch private files into the intermediate stage
RUN scp -i /tmp/key build@acme:files .

FROM ubuntu
WORKDIR /app
# only the fetched files are copied into the final image, never the key
COPY --from=intermediate /app .

Using Docker secret commands

Use an alpha feature in Docker for managing secrets to mount sensitive files without caching them (you can read more about Docker secrets on Docker’s site), similar to the following:

# syntax = docker/dockerfile:1.0-experimental
FROM alpine

# shows secret from default secret location
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

# shows secret from custom secret location
RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
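
To supply such a secret at build time, BuildKit must be enabled and the secret passed on the command line; a sketch, assuming the secret lives in a local file named mysecret.txt:

# enable BuildKit and mount the secret only for the build, without caching it in a layer
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt .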

Beware of recursive copy

You should also be mindful when copying files into the image that is being built. For example, the following command copies the entire build context folder, recursively, to the Docker image, which could end up copying sensitive files as well:

COPY . .

If you have sensitive files in your folder, either remove them or use .dockerignore to ignore them:

private.key
appsettings.json

How do you protect a docker container?

Ensure you use multi-stage builds so that the container image built for production is free of development assets and of any secrets or tokens.

Furthermore, ensure you are using a container security tool such as Snyk, which you can use for free to scan your Docker images from the CLI, directly from Docker Hub, or once deployed to production using Amazon ECR, Google GCR, or other registries.

6. Use fixed tags for immutability

Each Docker image can have multiple tags, which are variants of the same image. The most common tag is latest, which represents the latest version of the image. Image tags are not immutable, and the author of the image can publish the same tag multiple times.

This means that the base image for your Dockerfile might change between builds. This could result in inconsistent behavior because of changes made to the base image. There are multiple ways to mitigate this issue and improve your Docker security posture:

  • Prefer the most specific tag available. If the image has multiple tags, such as :8 and :8.0.1 or even :8.0.1-alpine, prefer the latter, as it is the most specific image reference. Avoid using the most generic tags, such as latest. Keep in mind that when pinning a specific tag, it might be deleted eventually.

  • To mitigate the issue of a specific image tag becoming unavailable and becoming a show-stopper for teams that rely on it, consider running a local mirror of the image in a registry or account that is under your own control. Replicating the image you want to use in a registry that you own is good practice to make sure the image you use does not change, but take into account the maintenance overhead this approach requires, because it means you need to maintain a registry.

  • Be very specific! Instead of pulling a tag, pull an image using its specific SHA256 digest, which guarantees you get the same image for every pull. Note, however, that pinning to a digest carries its own risk: if the image is updated and the old digest is removed from the registry, that digest can no longer be pulled (see the sketch after this list).
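
For illustration, here is a Dockerfile sketch of both approaches (the tag and the digest shown are placeholders; substitute the image reference you have verified):

# prefer the most specific tag available
FROM node:10.13.0-alpine
# or be even more specific and pin the exact content by digest
#   FROM node@sha256:<digest-you-verified>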

Are Docker images secure?

Docker images might be based on open source Linux distributions, and bundle within them open source software and libraries. A recent state of open source security research conducted by Snyk found that each of the ten most popular Docker images contains at least 30 vulnerabilities.

7. Use COPY instead of ADD

Docker provides two commands for copying files from the host into the Docker image when building it: COPY and ADD. The instructions are similar in nature, but differ in their functionality and can result in Docker container security issues for the image:

  • COPY — copies local files recursively, given explicit source and destination files or directories. With COPY, you must declare the locations.

  • ADD — copies local files recursively, implicitly creates the destination directory when it doesn’t exist, and accepts archives as local or remote URLs as its source, which it expands or downloads respectively into the destination directory.

While subtle, the differences between ADD and COPY are important. Be aware of these differences to avoid potential security issues:

  • When remote URLs are used to download data directly into a source location, they could result in man-in-the-middle attacks that modify the content of the file being downloaded. Moreover, the origin and authenticity of remote URLs need to be further validated. If you do need remote files, fetch them with an explicit RUN instruction over a secure TLS connection and validate their origin and integrity, rather than letting ADD download them implicitly.

  • Space and image layer considerations: using COPY allows separating the addition of an archive from remote locations and unpacking it as different layers, which optimizes the image cache. If remote files are needed, combining all of them into one RUN command that downloads, extracts, and cleans up afterwards optimizes a single layer operation over the several layers that ADD would require (see the sketch after this list).

  • When local archives are used, ADD automatically extracts them to the destination directory. While this may be acceptable, it adds the risk of zip bombs and Zip Slip vulnerabilities that could then be triggered automatically.
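
To make the remote-file case concrete, here is a sketch (the URL and checksum are placeholders, and it assumes wget and sha256sum are available in the base image) that downloads over TLS, verifies, extracts, and cleans up in a single RUN layer instead of relying on ADD:

# avoid: ADD https://example.com/release.tar.gz /app/
# prefer: fetch, verify, extract, and clean up in one explicit layer
RUN wget -O /tmp/release.tar.gz https://example.com/release.tar.gz \
    && echo "<expected-sha256>  /tmp/release.tar.gz" | sha256sum -c - \
    && tar -xzf /tmp/release.tar.gz -C /app \
    && rm /tmp/release.tar.gz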

8. Use metadata labels

Image labels provide metadata for the image you’re building. This helps users understand how to use the image. The most common label is “maintainer”, which specifies the email address and the name of the person maintaining the image. Add metadata with the following LABEL instruction:

LABEL maintainer="me@acme.com"

In addition to a maintainer contact, add any metadata that is important to you. This metadata could contain: a commit hash, a link to the relevant build, quality status (did all tests pass?), source code, a reference to your SECURITY.TXT file location and so on.
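
For instance, here is a sketch combining several labels in a single instruction (the values are illustrative, using OCI image annotation keys for the source, commit, and version metadata):

# metadata labels; values shown are placeholders for your own project details
LABEL maintainer="me@acme.com" \
      org.opencontainers.image.source="https://github.com/acme/myapp" \
      org.opencontainers.image.revision="<git-commit-sha>" \
      org.opencontainers.image.version="1.2.3"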

It is good practice to adopt a SECURITY.TXT (RFC5785) file that points to your responsible disclosure policy and to reference it in your Docker labels, such as the following:

LABEL securitytxt="https://www.example.com/.well-known/security.txt"

See more information about labels for Docker images: http://label-schema.org/rc1/

9. Use multi-stage build for small and secure docker images

While building your application with a Dockerfile, many artifacts are created that are required only during build-time. These can be packages such as development tooling and libraries that are required for compiling, or dependencies that are required for running unit tests, temporary files, secrets, and so on.

Keeping these artifacts in the base image, which may be used for production, results in an increased Docker image size, and this can badly affect the time spent downloading it as well as increase the attack surface because more packages are installed as a result. The same is true for the Docker image you’re using—you might need a specific Docker image for building, but not for running the code of your application.

Golang is a great example. To build a Golang application, you need the Go compiler. The compiler produces a self-contained executable with no runtime dependencies, which can run even in a scratch image.

This is precisely why Docker has the multi-stage build capability. This feature allows you to use multiple temporary images in the build process, keeping only the final image along with the information you copied into it. In this way, you end up with two images:

  • First image—a very big image size, bundled with many dependencies that are used in order to build your app and run tests.

  • Second image—a very thin image in terms of size and number of libraries, with only a copy of the artifacts required to run the app in production (see the sketch after this list).
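
Here is a minimal sketch of such a multi-stage Dockerfile for a Go application (it assumes a Go module in the build context; the image tag is illustrative):

# first image: full Go toolchain, used only to compile a static binary
FROM golang:1.19-alpine AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# second image: nothing but the compiled, dependency-free binary
FROM scratch
COPY --from=builder /bin/app /bin/app
USER 1000
ENTRYPOINT ["/bin/app"]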

10. Use a linter

Adopt the use of a linter to avoid common mistakes and establish best practice guidelines that engineers can follow in an automated way. This is a helpful docker security scanning task to statically analyze Dockerfile security issues.

One such linter is hadolint. It parses a Dockerfile and shows a warning for anything that does not match its best practice rules.
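
For example, you can run hadolint locally or through its official Docker image:

# lint a Dockerfile with a locally installed hadolint
hadolint Dockerfile
# or run it via the official Docker image, reading the Dockerfile from stdin
docker run --rm -i hadolint/hadolint < Dockerfile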

Hadolint is even more powerful when it is used inside an integrated development environment (IDE). For example, when using hadolint as a VSCode extension, linting errors appear as you type. This helps in writing better Dockerfiles faster.

Be sure to print out the cheat sheet and pin it up somewhere to remind you of some of the docker image security best practices you should follow when building and working with docker images!

How do you harden a docker container image?

You may use linters such as hadolint or dockle to ensure the Dockerfile has a secure configuration. Make sure you also scan your container images to avoid vulnerabilities with a severe security impact in your production containers. A recommended read is this list of 10 Docker image security best practices to ensure a secure image.

Is Docker a security risk?

Docker is a technology for software virtualization that has gained popularity and widespread adoption. When we build and deploy Docker containers, we need to do so with security best practices in mind, in order to mitigate concerns such as security vulnerabilities bundled with Docker base images or data breaches due to misconfigured Docker containers.

How do I secure a Docker container?

Follow Docker security best practices: use Docker base images with few or no known vulnerabilities, apply secure Dockerfile settings, and monitor your deployed containers to ensure there is no image drift between development and production. Furthermore, ensure you are following Infrastructure as Code best practices for your container orchestration solutions.

To wrap up, if you want to keep up with security best practices for building optimal Docker images for Node.js and Java applications:

  1. Are you a Java developer? You’ll find this resource valuable: Docker for Java developers: 5 things you need to know not to fail your security

  2. 10 best practices to build a Java container with Docker - A great in-depth cheat sheet on how to build production-grade containers for Java applications.

  3. 10 best practices to containerize Node.js web applications with Docker - If you’re a Node.js developer you are going to love this step by step walkthrough, showing you how to build performant and secure Docker base images for your Node.js applications.

