When you’re choosing a base image for your Docker image, Alpine Linux is often recommended. Using Alpine, you’re told, will make your images smaller and speed up your builds. And if you’re using Go, that’s reasonable advice.
But if you’re using Python, Alpine Linux will quite often:
- Make your builds much slower.
- Make your images bigger.
- Waste your time.
- On occasion, introduce obscure runtime bugs.
Let’s see why Alpine is recommended, and why you probably shouldn’t use it for your Python application.
Why people recommend Alpine
Let’s say we need to install `gcc` as part of our image build, and we want to see how Alpine Linux compares to Ubuntu 18.04 in terms of build time and image size.
First, I’ll pull both images, and check their size:
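Concretely, something like this; the exact sizes will depend on the current tags:

```shell
$ docker pull ubuntu:18.04
$ docker pull alpine
$ docker images ubuntu:18.04   # the SIZE column is what we're comparing
$ docker images alpine
```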
As you can see, the base image for Alpine is much smaller.
Next, we’ll try installing `gcc` in both of them. First, with Ubuntu:
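A minimal sketch; only the `gcc` install matters here:

```dockerfile
FROM ubuntu:18.04
RUN apt-get update && \
    apt-get install --no-install-recommends -y gcc && \
    rm -rf /var/lib/apt/lists/*
```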
Note: Outside the very specific topic under discussion, the Dockerfiles in this article are not examples of best practices, since the added complexity would obscure the main point of the article.
To ensure you’re writing secure, correct, fast Dockerfiles, consider my Python on Docker Production Handbook, which includes a packaging process and >70 best practices.
We can then build and time that:
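That’s just `docker build` wrapped in `time`; the tag name is arbitrary:

```shell
$ time docker build -t ubuntu-gcc .
```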
Now let’s make the equivalent Alpine `Dockerfile`:
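A sketch (`--no-cache` fetches the package index on the fly instead of storing it in the image):

```dockerfile
FROM alpine
RUN apk add --no-cache gcc
```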
And again, build the image and check its size:
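Again with an arbitrary tag name:

```shell
$ time docker build -t alpine-gcc .
$ docker images alpine-gcc
```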
As promised, Alpine images build faster and are smaller: 15 seconds instead of 30 seconds, and the image is 105MB instead of 150MB. That’s pretty good!
But when we switch to packaging a Python application, things start going wrong.
Let’s build a Python image
We want to package a Python application that uses `pandas` and `matplotlib`. So one option is to use the Debian-based official Python image (which I pulled in advance), with the following `Dockerfile`:
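Something along these lines; in a real project you’d pin versions, but that’s beside the point here:

```dockerfile
FROM python:3.8-slim
RUN pip install matplotlib pandas
```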
And when we build it:
The resulting image is 363MB.
Can we do better with Alpine? Let’s try:
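The naive Alpine version is just a change of base image:

```dockerfile
FROM python:3.8-alpine
RUN pip install matplotlib pandas
```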
And now we build it… and things don’t go nearly as smoothly.
What’s going on?
Standard PyPI wheels don’t work on Alpine
If you look at the Debian-based build above, you’ll see it’s downloading `matplotlib-3.1.2-cp38-cp38-manylinux1_x86_64.whl`. This is a pre-compiled binary wheel. Alpine, in contrast, downloads the source code (`matplotlib-3.1.2.tar.gz`), because standard Linux wheels don’t work on Alpine Linux.
Why? Most Linux distributions use the GNU version (`glibc`) of the standard C library that is required by pretty much every C program, including Python. But Alpine Linux uses `musl`; those binary wheels are compiled against `glibc`, and therefore Alpine disabled Linux wheel support.
Most Python packages these days include binary wheels on PyPI, significantly speeding up install time. But if you’re using Alpine Linux you need to compile all the C code in every Python package that you use.
Which also means you need to figure out every single system library dependency yourself. In this case, to figure out the dependencies I did some research, and ended up with the following updated `Dockerfile`:
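Something along these lines; treat the exact `apk` package list as approximate, since it depends on the versions being compiled:

```dockerfile
FROM python:3.8-alpine
# Compilers and headers needed to build matplotlib and pandas from source (approximate list):
RUN apk add --no-cache gcc build-base freetype-dev libpng-dev openblas-dev
RUN pip install matplotlib pandas
```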
And then we build it, and it takes…
… 25 minutes, 57 seconds! And the resulting image is 851MB.
Here’s a comparison between the two base images:
| Base image | Time to build | Image size | Research required |
|---|---|---|---|
| python:3.8-slim | 30 seconds | 363MB | No |
| python:3.8-alpine | 1557 seconds | 851MB | Yes |
Alpine builds are vastly slower, the image is bigger, and I had to do a bunch of research.
Can’t you work around these issues?
Build time
For faster build times, Alpine Edge, which will eventually become the next stable release, does have `matplotlib` and `pandas`, and installing system packages is quite fast. As of January 2020, however, the current stable release does not include these popular packages.
Even when they are available, however, system packages almost always lag what’s on PyPI, and it’s unlikely that Alpine will ever package everything that’s on PyPI. In practice, most Python teams I know don’t use system packages for Python dependencies; they rely on PyPI or Conda Forge.
Image size
Some readers pointed out that you can remove the originally installed packages, add an option not to cache package downloads, or use a multi-stage build. One reader’s attempt resulted in a 470MB image.
So yes, you can get an image that’s in the ballpark of the slim-based image, but the whole motivation for Alpine Linux is smaller images and faster builds. With enough work you may be able to get a smaller image, but you’re still suffering from a 1500-second build time when you could get a 30-second build time using the `python:3.8-slim` image.
But wait, there’s more!
Alpine Linux can cause unexpected runtime bugs
While in theory the `musl` C library used by Alpine is mostly compatible with the `glibc` used by other Linux distributions, in practice the differences can cause problems. And when problems do occur, they are going to be strange and unexpected.
Some examples:
- Alpine has a smaller default stack size for threads, which can lead to Python crashes.
- One Alpine user discovered that their Python application was much slower because of the way musl allocates memory vs. glibc.
- I once couldn’t do DNS lookups in Alpine images running on minikube (Kubernetes in a VM) when using the WeWork coworking space’s WiFi. The cause was a combination of a bad DNS setup by WeWork, the way Kubernetes and minikube do DNS, and musl’s handling of this edge case vs. what glibc does. musl wasn’t wrong (it matched the RFC), but I had to waste time figuring out the problem and then switching to a glibc-based image.
- Another user discovered issues with time formatting and parsing.
Most or perhaps all of these problems have already been fixed, but no doubt there are more problems to discover. Random breakage of this sort is just one more thing to worry about.
Don’t use Alpine Linux for Python images
Unless you want massively slower build times, larger images, more work, and the potential for obscure bugs, you’ll want to avoid Alpine Linux as a base image.For some recommendations on what you should use, see my article on choosing a good base image.
The Conda packaging tool implements environments, which enable different applications to have different libraries installed. So when you’re building a Docker image for a Conda-based application, you’ll need to activate a Conda environment.
Unfortunately, activating Conda environments is a bit complex, and interacts badly with the way `Dockerfile`s work.
So how do you activate a Conda environment in a `Dockerfile`?
For educational purposes I’m going to start with explaining the problem and showing some solutions that won’t work, but if you want you can just skip straight to the working solution.
The problem with conda activate
Conda environments provide a form of isolation: each environment has its own set of C libraries, Python libraries, binaries, and so on. Conda installs a base environment where Conda itself is installed, so to use a Conda-based application you need to create and then activate a new, application-specific environment.
Specifically, to activate a Conda environment, you usually run `conda activate`. So let’s try that as our first attempt, and see how it fails.
We’ll start with an `environment.yml` file defining the Conda environment:
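For example, a minimal spec; the environment name `myenv` matches the `conda run` example later on, and the packages are just placeholders:

```yaml
name: myenv
channels:
  - conda-forge
dependencies:
  - python=3.8
  - numpy
```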
And a small Python program, `run.py`:
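Also a placeholder, just enough to show whether the environment’s packages are importable:

```python
# run.py -- trivial program that only works if the environment's packages are available
import numpy

print("numpy version:", numpy.__version__)
```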
A first attempt at a `Dockerfile` might look as follows:
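Here’s a sketch, assuming the `continuumio/miniconda3` base image (any image with Conda preinstalled would do) and the `myenv` environment from above:

```dockerfile
FROM continuumio/miniconda3

WORKDIR /app

# Create the environment from the spec file:
COPY environment.yml .
RUN conda env create -f environment.yml

# Naive attempt: activate the environment...
RUN conda activate myenv

# ...and run the program inside it:
COPY run.py .
ENTRYPOINT ["python", "run.py"]
```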
If we build the resulting Docker image, here’s what happens:
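Roughly (tag name arbitrary, failure paraphrased):

```shell
$ docker build -t conda-activate-attempt .
# The build dies at the `RUN conda activate myenv` step: conda reports that the
# shell has not been configured to use 'conda activate', and suggests `conda init`.
```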
Why emulating activation won’t work
Can you avoid using `conda activate`, and just set a few environment variables? Probably not.
Unlike the `activate` script for the Python `virtualenv` tool, which just sets an environment variable or two, Conda activation can also set environment variables provided by installed packages.
That means you can’t just emulate it; you need to use Conda’s own activation infrastructure. So how are we going to do that?
A failed solution, part #1: conda init
Can we make `conda activate` work by doing the requested `conda init`? It’s a start, but it won’t suffice.
Running `conda init bash` will install some startup commands for `bash` that enable `conda activate` to work, but that setup code only runs if you have a login bash shell. When you put `RUN conda activate myenv` in a `Dockerfile`, what’s actually happening is that Docker runs `/bin/sh -c 'conda activate myenv'`. But you can override the default shell with the `SHELL` command.
That plus `conda init bash` gives us the following `Dockerfile`:
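Sketched out, building on the first attempt above:

```dockerfile
FROM continuumio/miniconda3

WORKDIR /app
COPY environment.yml .
RUN conda env create -f environment.yml

# Use a login bash shell for subsequent RUNs, so the startup code installed
# by `conda init bash` actually gets sourced:
SHELL ["/bin/bash", "--login", "-c"]
RUN conda init bash
RUN conda activate myenv

# Sanity check -- spoiler: this runs in a *new* shell where myenv is not active:
RUN python -c "import numpy"

COPY run.py .
ENTRYPOINT ["python", "run.py"]
```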
Will this work? No, it won’t.
The problem is that each `RUN` in a `Dockerfile` is a separate run of `bash`. So when you do:
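That is, the relevant pair of lines from the sketch above:

```dockerfile
RUN conda activate myenv
RUN python -c "import numpy"   # a fresh shell session; myenv is not active here
```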
That just activates it for the first `RUN`; the later `RUN`s are new shell sessions, where no activation has happened.
A failed solution, part #2: Activate automatically
We want every `RUN` command to run inside the activated environment, so we add `conda activate` to the `~/.bashrc` of the current user:
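In the sketch, that means replacing the bare `RUN conda activate myenv` with something like:

```dockerfile
# Append the activation to bash's startup file, so later RUN commands pick it up:
RUN echo "conda activate myenv" >> ~/.bashrc
```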
And now the image builds!
We’re not done yet, though. If we run the image:
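Roughly, with an arbitrary tag name and the failure paraphrased:

```shell
$ docker build -t conda-bashrc-attempt .
$ docker run --rm conda-bashrc-attempt
# run.py fails (for me, a ModuleNotFoundError): the environment's packages aren't
# visible, because the activation in ~/.bashrc never ran.
```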
The problem is that the syntax we used for `ENTRYPOINT` doesn’t actually start a shell session. Now, instead of doing `ENTRYPOINT ["python", "run.py"]`, you can have `ENTRYPOINT` use a shell with this alternative syntax:
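That’s Docker’s shell form of `ENTRYPOINT`; a sketch (the exact command line may differ), relying on the `SHELL` and `~/.bashrc` setup above:

```dockerfile
# Shell form: Docker wraps the command in a shell instead of exec()ing it directly,
# so the activation set up above gets a chance to run first.
ENTRYPOINT python run.py
```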
The problem with this syntax is that it breaks container shutdown, so you probably don’t want to use it.
A working solution
Instead of using `conda activate`, there’s another way to run a command inside an environment: `conda run -n myenv yourcommand` will run `yourcommand` inside the environment. You’ll also want to pass the `--no-capture-output` flag to `conda run` so it streams stdout and stderr (thanks to Joe Selvik for pointing this out). So that suggests the following `Dockerfile`:
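A sketch of the full working version, again assuming the `continuumio/miniconda3` base image and the `myenv` name used above:

```dockerfile
FROM continuumio/miniconda3

WORKDIR /app

# Create the environment:
COPY environment.yml .
RUN conda env create -f environment.yml

COPY run.py .

# `conda run` executes the command inside the named environment;
# --no-capture-output streams stdout/stderr instead of buffering it.
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myenv", "python", "run.py"]
```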
And indeed:
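For example, with an arbitrary tag name:

```shell
$ docker build -t conda-run-example .
$ docker run --rm conda-run-example
# run.py's output shows up -- it's running inside myenv via `conda run`.
```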