balenalib
is the central home for 26,000+ IoT-focused Docker images built specifically for balenaCloud and balenaOS. This set of images provides a way to get up and running quickly and easily, while still providing the option to deploy slim, secure images to the edge when you go to production.
Features Overview
- Multiple Architectures:
- armv5e
- armv6
- armv7hf
- aarch64
- amd64
- i386
- Multiple Distributions:
- Debian: jessie (8), stretch (9), buster (10), bullseye (11), and sid
- Alpine: 3.9, 3.10, 3.11, 3.12 and edge
- Ubuntu: xenial (16.04), bionic (18.04), cosmic (18.10), disco (19.04), eoan (19.10) and focal (20.04)
- Fedora: 30, 31, 32, 33 and 34
- Multiple language stacks:
- Node.js: 15.7.0, 14.16.0, 12.21.0 and 10.23.1
- Python: 2.7.18 (deprecated), 3.5.10, 3.6.12, 3.7.9, 3.8.6 and 3.9.1
- OpenJDK: 7-jdk/jre, 8-jdk/jre and 11-jdk/jre
- Golang: 1.16, 1.15.3 and 1.14.10
- Dotnet: 2.1-sdk/runtime/aspnet, 2.2-sdk/runtime/aspnet, 3.1-sdk/runtime/aspnet and 5.0-sdk/runtime/aspnet
- `run` and `build` variants designed for multistage builds.
- Cross-build functionality for building ARM containers on x86.
- Helpful package installer script called `install_packages`, inspired by minideb.
How to Pick a Base Image
When starting out a project, it's generally easier to have a 'fatter' image, which contains a lot of prebuilt dependencies and tools. These images help you get set up faster and work out the requirements for your project. For this reason, it's recommended to start with `-build` variants and, as your project progresses, switch to a `-run` variant with some Docker multistage build magic to slim your deploy image down. In most cases, your project can just use a Debian-based distribution, which is the default if none is specified, but if you know the requirements of your project or prefer specific distros, Ubuntu, Alpine, and Fedora images are available. The set of `balenalib` base images follows a simple naming scheme, described below, which will help you select a base image for your specific needs.
How the Image Naming Scheme Works
With over 26,000 `balenalib` base images to choose from, it can be overwhelming to decide which image and tag are correct for your project. To pick the correct image, it helps to understand how the images are named, as the name indicates what is installed in the image. In general, the naming scheme for the `balenalib` image set follows the pattern below:
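The pattern below is reconstructed from the field descriptions and worked examples in this section:

```
balenalib/<hw>-<distro>-<lang_stack>:<lang_ver>-<distro_ver>-(build|run)-<yyyymmdd>
```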
Image Names
- `<hw>` is either an architecture or a device type, and is mandatory. If using a `Dockerfile.template`, you can replace this with `%%BALENA_MACHINE_NAME%%` or `%%BALENA_ARCH%%`. For a list of available device names and architectures, see the Device types.
- `<distro>` is the Linux distribution. There are currently 4 distributions, namely Debian, Alpine, Ubuntu, and Fedora. This field is optional and defaults to Debian if left out.
- `<lang_stack>` is the programming language pack; currently Node.js, Python, OpenJDK, .NET, and Go are supported. This field is optional; if left out, no language pack is installed, so you have just the distribution and can later install and use any language in your image/container.
Image Tags
In the tags, all of the fields are optional; if they are left out, they default to their `latest` pointer.
- `<lang_ver>` is the version of the language stack, for example Node.js 10.10; it can also be substituted with `latest`.
- `<distro_ver>` is the version of the Linux distro; for example, in the case of Debian, the valid versions are `jessie`, `stretch`, `buster`, `bullseye`, and `sid`.
- For each combination of distro and stack, there are two variants called `run` and `build`. The `build` variant is much heavier, as it has a number of tools preinstalled to help with building source code. You can see an example of the tools that are included in the Debian Stretch variant here. The `run` variants are stripped down and only include a few useful runtime tools; see an example here. If no variant is specified, the image defaults to `run`.
- The last optional field in tags is the date tag `<yyyymmdd>`. If a date tag is specified, that pinned release will always be pulled from Docker Hub, even if a newer one is available.
Note: Pinning to a date-frozen base image is highly recommended if you are running a fleet in production. This ensures that all your dependencies have a fixed version and won't get randomly updated until you decide to pin the image to a newer release.
Examples
`balenalib/raspberrypi3-node:10.18`
- `<hw>`: raspberrypi3 - the Raspberry Pi 3 device type.
- `<distro>`: omitted, so it defaults to Debian.
- `<lang>`: node - the Node.js runtime and npm will be installed.
- `<lang_ver>`: 10.18 - this gives us Node.js version 10.18.x, whichever is the latest patch version provided on balenalib.
- `<distro_ver>`: omitted, so it defaults to `buster`.
- `(build|run)`: omitted, so the image defaults to the slimmed-down `run` variant.
- `<yyyymmdd>`: omitted - this is not a date-frozen image, so new updates pushed to the 10.18 tag (for example, Node.js patch versions) will automatically be inherited when they are available.
`balenalib/i386-ubuntu-python:latest-bionic-build-20191029`
- `<hw>`: i386 - the Intel 32-bit architecture that runs on the Intel Edison.
- `<distro>`: ubuntu.
- `<lang>`: python.
- `<lang_ver>`: `latest` - points to the latest Python 2 version, which is currently 2.7.17.
- `<distro_ver>`: bionic - Ubuntu 18.04.
- `(build|run)`: `build` - includes things like `build-essential` and `gcc`.
- `<yyyymmdd>`: 20191029 - a date-frozen image, so this image will never be updated on Docker Hub.
run vs. build
For each combination of `<hw>-<distro>-<lang>` there is both a `run` and a `build` variant. These variants are provided to allow for easier multistage builds.
The `run` variant is designed to be a slim and minimal variant, with only runtime essentials packaged into it. An example of the packages installed can be seen in the `Dockerfile` of `balenalib/armv7hf-debian:run`.
The `build` variant is a heavier image that includes many of the tools required for building from source, such as `build-essential`, `gcc`, etc. As an example, you can see the types of packages installed in the `balenalib/armv7hf-debian:build` variant here.
These variants make building multistage projects easier. Take, for example, installing an I2C Node.js package, which requires a number of build-time dependencies to compile the native `i2c` node module, but we don't want to send all of those down to our device. This is the perfect time for multistage builds using the `build` and `run` variants, as sketched below.
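A minimal sketch of that pattern, assuming a Raspberry Pi 3 project whose `package.json` pulls in a native I2C module (the names and versions here are illustrative):

```Dockerfile
# Build stage: use the heavier -build variant to compile the native module
FROM balenalib/raspberrypi3-node:10.18-build AS build
WORKDIR /usr/src/app
COPY package.json ./
# Native modules (e.g. i2c-bus) compile here using gcc, make, python, etc.
RUN JOBS=MAX npm install --production

# Runtime stage: copy only the installed modules into the slim -run variant
FROM balenalib/raspberrypi3-node:10.18-run
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY . ./
CMD ["node", "index.js"]
```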
Supported Architectures, Distros and Languages
Currently, balenalib supports the following OS distributions and language stacks. If you would like to see others added, create an issue on the balena base images repo.
| Distribution | Default (latest) | Supported Architectures |
| --- | --- | --- |
| Debian | Debian GNU/Linux 10 (buster) | armv5e, armv6, armv7hf, aarch64, amd64, i386 |
| Alpine | Alpine Linux v3.12 | armv6, armv7hf, aarch64, amd64, i386 |
| Ubuntu | 18.04 LTS (bionic) | armv7hf, aarch64, amd64, i386 |
| Fedora | Fedora 32 | armv7hf, aarch64, amd64, i386 |
| Language | Default (latest) | Supported Architectures |
| --- | --- | --- |
| Node.js | 15.7.0 | armv6, armv7hf, aarch64, amd64, i386 |
| Python | 3.9.1 | armv5e, armv6, armv7hf, aarch64, amd64, i386 |
| OpenJDK | 11-jdk | armv6, armv7hf, aarch64, amd64, i386 |
| Go | 1.16 | armv6, armv7hf, aarch64, amd64, i386 |
| Dotnet | 5.0-sdk | armv7hf, aarch64, amd64 |
Notes
Devices with a device type of `raspberry-pi` (Raspberry Pi 1 and Zero) are built from `balenalib/rpi-raspbian` and are Raspbian base images. The `raspberry-pi2` and `raspberrypi3` device type Debian base images have the Raspbian package source added and Raspbian userland preinstalled.
Not all OS distro and language stack versions are compatible with each other. Notice that there are some combinations that are not available in the `balenalib` base images.
- Node.js dropped official 32-bit builds a while ago, so for i386-based Node.js images (Debian, Fedora and Ubuntu), the v6.x and v8.x series use official builds while newer series (v10.x and v12.x) use unofficial builds.
- armv6 binaries were officially dropped from Node.js v12, so v12 armv6 support is now considered unofficial.
- The Node.js v6.x and v8.x series are not available for i386 Alpine Linux base images v3.9 and edge, as Node crashes with a segfault error; we are investigating the issue and will add them back as soon as it is resolved.
Installing Packages
Installing software packages in balenalib containers is very easy, and in most cases you can just use the base image operating system's package manager. However, to make things even easier, every balenalib image includes a small `install_packages` script that abstracts away the specifics of the underlying package managers and adds the following useful features:
- Install the named packages, skipping prompts etc.
- Clean up the package manager metadata afterward to keep the resulting image small.
- Retries if package install fails. Sometimes a package will fail to download due to a network issue, and retrying may fix this, which is particularly useful in an automated build pipeline.
An example of this in action is as follows:
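A minimal sketch, assuming a Debian-based image (the base image name is illustrative):

```Dockerfile
FROM balenalib/amd64-debian:buster

# install_packages handles the update, -y flags, retries, and cleanup for you
RUN install_packages wget git
```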
This will run an `apt-get update -qq`, then install `wget` and `git` via apt-get with the `-y --no-install-recommends` flags, and by default it will try this 2 times before failing. You can see the source of `install_packages` here.
How the Images Work at Runtime
Each `balenalib` base image has a default `ENTRYPOINT`, which is defined as `ENTRYPOINT ["/usr/bin/entry.sh"]`. This ensures that `entry.sh` is run before the code defined in the `CMD` of your `Dockerfile`.
On container startup, the `entry.sh` script first checks whether the `UDEV` flag is set to `true` or `false`. If it is `false`, the `CMD` is executed immediately. If it is `true` (or `1`), `entry.sh` checks whether the container is running privileged: if it is, it mounts `/dev` to a devtmpfs and then starts `udevd`; if the container is unprivileged, no mount is performed and `udevd` is still started (although it won't be of much use without the privilege).
At the end of a container's lifecycle, when a request for a container restart, reboot, or shutdown is sent to the supervisor, balenaEngine sends a `SIGTERM` (signal 15) to the container, and 10 seconds later issues a `SIGKILL` if the container is still running. This timeout can be configured via the `stop_grace_period` in your `docker-compose.yml`, as shown below.
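For example, a service that needs more time to shut down cleanly might set (a sketch; the service name and period are illustrative):

```yaml
version: "2.1"
services:
  main:
    build: ./main
    # extend the SIGTERM-to-SIGKILL window beyond the 10 second default
    stop_grace_period: 1m30s
```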
Working with Dynamically Plugged Devices
In many IoT projects, your containers will need to interact with hardware, and often that hardware is plugged in at runtime, as is the case with USB or serial devices. In these cases, you will want to enable `udevd` in your container. In `balenalib` images this can easily be done either by adding `ENV UDEV=1` to your `Dockerfile` or by setting an environment variable.
You will also need to run your container `privileged`. By default, any balenaCloud project that doesn't contain a `docker-compose.yml` will run its containers `privileged`. If you are using a multicontainer project, you will need to add `privileged: true` to the service definition of each service that needs hardware access, as in the sketch below.
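A minimal `docker-compose.yml` fragment (the service name and build context are illustrative):

```yaml
version: "2.1"
services:
  gps:
    build: ./gps
    privileged: true
    environment:
      - UDEV=1
```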
When a `balenalib` container runs with `UDEV=1`, it first detects whether it is running as a `privileged` container. If it is, it mounts the host OS `/dev` to a devtmpfs and then starts `udevd`. From then on, any time a new device is plugged in, the kernel will notify the container's `udevd` daemon, and the relevant device nodes will appear in the container's `/dev`.
Note: The new balenalib base images make sure `udevd` runs in its own network namespace, so as not to interfere with cellular modems. These images should not have any of the past udev restrictions of the `resin/` base images.
Major Changes
When moving from the legacy `resin/...` base images to the `balenalib` ones, there are a number of breaking changes that you should take note of, namely:
- `UDEV` now defaults to `off`, so if you have code that relies on detecting dynamically plugged devices, you will need to enable it either in your `Dockerfile` or via a device environment variable. See Working with Dynamically Plugged Devices.
- The `INITSYSTEM` functionality has been completely removed, so applications that rely on systemd or OpenRC should install and set up the init system in their apps. See Installing your own Initsystem.
- Mounting of `/dev` to a devtmpfs will now only occur when `UDEV=on` and the container is running as `privileged`. `1`, `true` and `on` are valid values for `UDEV` and will be evaluated as `UDEV=on`; all other values turn `UDEV` off.
- Support for Debian Wheezy has been dropped.
- The `armel` architecture has been renamed to `armv5e`.
Installing your own Initsystem
Since the release of multicontainer support on the balenaCloud platform, we now recommend using multiple containers and no longer recommend running an init system, particularly systemd, in the container, as it tends to cause a myriad of issues and undefined behavior and requires the container to run fully privileged.
However, if your application relies on init system features, it is fairly easy to add this functionality to a balenalib base image. We have provided some examples for systemd and OpenRC. Please note that different systemd versions require different implementations: for Debian Jessie and older, please refer to this example, and for Debian Stretch and later, please refer to this example.
Generally, for systemd, it just requires installing the systemd package, masking a number of services, and defining a new `entry.sh` and a `balena.service`. The `Dockerfile` below demonstrates this:
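A sketch of the approach for Debian Stretch and later; the exact list of services to mask and the contents of `entry.sh` and `balena.service` should follow the linked example (the specifics below are illustrative):

```Dockerfile
FROM balenalib/amd64-debian:buster

ENV container docker

RUN install_packages systemd

# Mask services that make no sense inside a container (illustrative subset)
RUN systemctl mask \
    dev-hugepages.mount \
    sys-fs-fuse-connections.mount \
    display-manager.service \
    getty@tty1.service \
    systemd-logind.service \
    systemd-remount-fs.service

COPY entry.sh /usr/bin/entry.sh
COPY balena.service /etc/systemd/system/balena.service
RUN systemctl enable balena.service

# systemd expects SIGRTMIN+3 for an orderly shutdown
STOPSIGNAL SIGRTMIN+3
ENTRYPOINT ["/usr/bin/entry.sh"]
```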
Building ARM Containers on x86 Machines
This is a unique feature of balenalib ARM base images that allows you to run them anywhere, i.e. to run an ARM image on x86/x86_64 machines. A tool called `resin-xbuild` and QEMU are installed inside every balenalib ARM base image, and they can be triggered by `RUN ["cross-build-start"]` and `RUN ["cross-build-end"]`. QEMU will emulate any instructions between `cross-build-start` and `cross-build-end`. So this Dockerfile (sketched below with an illustrative image and package):
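```Dockerfile
# Illustrative sketch: the base image and package are examples
FROM balenalib/raspberrypi3-debian:buster

RUN ["cross-build-start"]

# Any ARM instructions here are emulated by QEMU when built on x86
RUN install_packages build-essential

RUN ["cross-build-end"]
```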
can run on your x86 machine, and there will be no `Exec format error`, which is the error you get when you run an ARM binary on x86. This approach works only if the image is being built on x86 systems. Use the `--emulated` flag with `balena push` to trigger a QEMU-emulated build targeting the x86 architecture. More details can be found in our blog post here. You can find the full source code for the two cross-build scripts here.
Docker frequently asked questions (FAQ)
Does Docker run on Linux, macOS, and Windows?
You can run both Linux and Windows programs and executables in Docker containers. The Docker platform runs natively on Linux (on x86-64, ARM and many other CPU architectures) and on Windows (x86-64).
Docker Inc. builds products that let you build and run containers on Linux, Windows and macOS.
What does Docker technology add to just plain LXC?
Docker technology is not a replacement for LXC. "LXC" refers to capabilities of the Linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations. On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:
Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object called a container. This container can be transferred to any Docker-enabled machine. The container can be executed there with the guarantee that the execution environment exposed to the application is the same in development, testing, and production. LXC implements process sandboxing, which is an important pre-requisite for portable deployment, but is not sufficient for portable deployment. If you sent me a copy of your application installed in a custom LXC configuration, it would almost certainly not run on my machine the way it does on yours. The app you sent me is tied to your machine’s specific configuration: networking, storage, logging, etc. Docker defines an abstraction for these machine-specific settings. The exact same Docker container can run - unchanged - on many different machines, with many different configurations.
Application-centric. Docker is optimized for the deployment of applications, as opposed to machines. This is reflected in its API, user interface, design philosophy and documentation. By contrast, the `lxc` helper scripts focus on containers as lightweight machines - basically servers that boot faster and need less RAM. We think there's more to containers than just that.
Automatic build. Docker includes a tool for developers to automatically assemble a container from their source code, with full control over application dependencies, build tools, packaging etc. They are free to use `make`, `maven`, `chef`, `puppet`, `salt`, Debian packages, RPMs, source tarballs, or any combination of the above, regardless of the configuration of the machines.
Versioning. Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between versions, committing new versions, rolling back etc. The history also includes how a container was assembled and by whom, so you get full traceability from the production server all the way back to the upstream developer. Docker also implements incremental uploads and downloads, similar to `git pull`, so new versions of a container can be transferred by only sending diffs.
Component re-use. Any container can be used as a parent image to create more specialized components. This can be done manually or as part of an automated build. For example, you can prepare the ideal Python environment, and use it as a base for 10 different applications. Your ideal PostgreSQL setup can be re-used for all your future projects. And so on.
Sharing. Docker has access to a public registry on Docker Hub where thousands of people have uploaded useful images: anything from Redis, CouchDB, PostgreSQL to IRC bouncers to Rails app servers to Hadoop to base images for various Linux distros. The registry also includes an official "standard library" of useful containers maintained by the Docker team. The registry itself is open-source, so anyone can deploy their own registry to store and transfer private containers, for internal server deployments for example.
Tool ecosystem. Docker defines an API for automating and customizing the creation and deployment of containers. There are a huge number of tools integrating with Docker to extend its capabilities. PaaS-like deployment (Dokku, Deis, Flynn), multi-node orchestration (Maestro, Salt, Mesos, Openstack Nova), management dashboards (docker-ui, Openstack Horizon, Shipyard), configuration management (Chef, Puppet), continuous integration (Jenkins, Strider, Travis), etc. Docker is rapidly establishing itself as the standard for container-based tooling.
What is different between a Docker container and a VM?
There’s a great StackOverflow answer showing the differences.
Do I lose my data when the container exits?
Not at all! Any data that your application writes to disk gets preserved in its container until you explicitly delete the container. The file system for the container persists even after the container halts.
How far do Docker containers scale?
Some of the largest server farms in the world today are based on containers. Large web deployments like Google and Twitter, and platform providers such as Heroku run on container technology, at a scale of hundreds of thousands or even millions of containers.
How do I connect Docker containers?
Currently the recommended way to connect containers is via the Docker network feature. You can see details of how to work with Docker networks.
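For example, containers on a user-defined bridge network can reach each other by name (the network and container names here are illustrative):

```
$ docker network create app-net
$ docker run -d --name db --network app-net postgres
$ docker run -d --name web --network app-net my-web-app
```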
How do I run more than one process in a Docker container?
This approach is discouraged for most use cases. For maximum efficiency and isolation, each container should address one specific area of concern. However, if you need to run multiple services within a single container, see Run multiple services in a container.
How do I report a security issue with Docker?
You can learn about the project's security policy here and report security issues to this mailbox.
Why do I need to sign my commits to Docker with the DCO?
Read our blog post on the introduction of the DCO.
When building an image, should I prefer system libraries or bundled ones?
This is a summary of a discussion on the docker-dev mailing list.
Virtually all programs depend on third-party libraries. Most frequently, they use dynamic linking and some kind of package dependency, so that when multiple programs need the same library, it is installed only once.
Some programs, however, bundle their third-party libraries, because they rely on very specific versions of those libraries.
When creating a Docker image, is it better to use the bundled libraries, or should you build those programs so that they use the default system libraries instead?
The key point about system libraries is not about saving disk or memory space. It is about security. All major distributions handle security seriously, by having dedicated security teams, following up closely with published vulnerabilities, and disclosing advisories themselves. (Look at the Debian Security Information for an example of those procedures.) Upstream developers, however, do not always implement similar practices.
Before setting up a Docker image to compile a program from source, if you want to use bundled libraries, you should check if the upstream authors provide a convenient way to announce security vulnerabilities, and if they update their bundled libraries in a timely manner. If they don't, you are exposing yourself (and the users of your image) to security vulnerabilities.
Likewise, before using packages built by others, you should check if the channels providing those packages implement similar security best practices. Downloading and installing an "all-in-one" .deb or .rpm sounds great at first, except if you have no way to figure out that it contains a copy of the OpenSSL library vulnerable to the Heartbleed bug.
Why is `DEBIAN_FRONTEND=noninteractive` discouraged in Dockerfiles?
When building Docker images on Debian and Ubuntu you may have seen errors like:
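Representative debconf output (the exact wording may vary):

```
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
```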
These errors don't stop the image from being built but inform you that the installation process tried to open a dialog box, but couldn't. Generally, these errors are safe to ignore.
Some people circumvent these errors by changing the `DEBIAN_FRONTEND` environment variable inside the Dockerfile using:
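```Dockerfile
ENV DEBIAN_FRONTEND noninteractive
```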
This prevents the installer from opening dialog boxes during installation, which stops the errors.
While this may sound like a good idea, it may have side effects. The `DEBIAN_FRONTEND` environment variable is inherited by all images and containers built from your image, effectively changing their behavior. People using those images run into problems when installing software interactively, because installers do not show any dialog boxes.
Because of this, and because setting `DEBIAN_FRONTEND` to `noninteractive` is mainly a 'cosmetic' change, we discourage changing it.
If you really need to change its setting, make sure to change it back to its default value afterwards.
Why do I get `Connection reset by peer` when making a request to a service running in a container?
Typically, this message is returned if the service is already bound to your localhost. As a result, requests coming to the container from outside are dropped. To correct this problem, change the service's configuration on your localhost so that the service accepts requests from all IPs. If you aren't sure how to do this, check the documentation for your OS.
Why do I get `Cannot connect to the Docker daemon. Is the docker daemon running on this host?` when using docker-machine?
This error points out that the docker client cannot connect to the virtual machine. This means that either the virtual machine that works underneath `docker-machine` is not running or that the client doesn't correctly point at it.
To verify that the docker machine is running, you can use the `docker-machine ls` command and start it with `docker-machine start` if needed.
You need to tell Docker to talk to that machine. You can do this with the `docker-machine env` command. For example:
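Assuming a machine named `default` (the name is illustrative):

```
$ eval $(docker-machine env default)
$ docker ps
```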
Where can I find more answers?
You can find more answers on: