Activating environment in dockerfile - related to #81
Opened this issue · 42 comments
We are building a docker image based on the miniconda3:latest image. The Dockerfile is the following:
FROM continuumio/miniconda3:latest
COPY environment.yml /home/files/environment.yml
RUN conda env create -f /home/files/environment.yml
RUN conda activate webapp
Where webapp is the name of the environment. However we get the error message:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. If your shell is Bash or a Bourne variant, enable conda for the current user with
`$ echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc` ...
This seems to be related to #81.
We see, however, in the Dockerfile for miniconda3 that this command is already run.
If we run the container interactively with docker container run -it,
then we can run conda activate webapp without problems.
Are we missing something?
Can you share the contents of your environment.yml file?
yes:
channels:
- defaults
dependencies:
- python=3.6.5
- pip=10.0.1
- pip:
- chardet==3.0.4
- click==6.7
- Cython==0.28.2
- dash==0.21.0
- dash-core-components==0.22.1
- dash-html-components==0.10.0
- dash-renderer==0.12.1
- decorator==4.3.0
- nbformat==4.4.0
- numpy==1.14.2
- pandas==0.22.0
- pandas-datareader==0.6.0
- plotly==2.5.1
- python-dateutil==2.7.2
- pytz==2018.4
- requests==2.18.4
- urllib3==1.22
- Werkzeug==0.14.1
- gunicorn==19.5.0
I see. Your line RUN conda activate webapp is failing because conda activate only gets hooked in as an interactive shell command when you're actually using a shell. The statement, even if executed correctly, would have no effect, as that state wouldn't be carried over to the next RUN instruction.
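To illustrate outside Docker (the variable name is just for illustration, and this assumes MYVAR is unset in the parent shell): each RUN instruction is executed by a fresh non-interactive shell, so state set in one never reaches the next.

```shell
# Simulating two consecutive RUN instructions: each one is a fresh `sh -c`.
sh -c 'MYVAR=hello; export MYVAR; echo "first RUN sees: $MYVAR"'
sh -c 'echo "second RUN sees: [$MYVAR]"'   # prints "second RUN sees: []"
```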
What is your goal with RUN conda activate webapp? Is that the environment you want active when you enter the container as, e.g., /bin/bash?
The desired outcome is to activate the environment, such that we could run our Dash application/Flask application, with our desired version of python and dependencies.
I have the same issue but with a different goal. I just want one environment in my dockerfile; however, I want to enable the environment variables that are set in https://conda.io/docs/user-guide/tasks/build-packages/compiler-tools.html
The only way to do that is using "source activate root", but the subsequent Dockerfile commands don't pick that up. For example, if I do ENV source activate root and then follow it with RUN pip install regex... I don't have the correct gcc environment variables set.
Is there a way to get the Dockerfile commands themselves to see the environment being available, as well as when I run python from the built docker image?
In a lot of ways, this is tantamount to asking: can I activate a conda env at boot and have it available everywhere?
Once conda 4.6 is released, you can make use of conda run in your Dockerfile CMD / ENTRYPOINT.
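For illustration, a sketch of what that could look like for the image from the original question (app.py is a placeholder for whatever the webapp env should run):

```dockerfile
FROM continuumio/miniconda3:latest
COPY environment.yml /home/files/environment.yml
RUN conda env create -f /home/files/environment.yml
# conda run (conda >= 4.6) wraps the command in the env's activation state
ENTRYPOINT ["conda", "run", "-n", "webapp", "python", "app.py"]
```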
It's not just in the entrypoint; it's actually while building the dockerfile itself.
for example, today I have to do
RUN ln -s /opt/conda/bin/x86_64-conda_cos6-linux-gnu-gcc /usr/bin/gcc
before running subsequent pip install commands inside the Dockerfile itself. Because otherwise my pip installs will not pick up the new gcc.
@sandys One possible workaround for your installation problem in the Dockerfile is to run the source activate and pip installs in one single /bin/bash command:
RUN /bin/bash -c ". activate myenv && \
    pip install pandas && \
    pip install ../mylocal_package/"
This at least solved the installation problem for me.
Is that the environment you want active when you enter the container as, e.g., /bin/bash?
@kalefranz yes, this is my issue as well. Is there any solution for this?
@JakartaLaw @kalefranz I am also facing the same issue
Did we find any solution to this issue?
one could try using the SHELL
directive in the Dockerfile, like SHELL ["/bin/bash", "-c"]
.
The problem could be the fact that the default shell in Linux is sh
(ref)
Just an idea, I didn't have the time to test myself.
I had to do stuff like this:
RUN . /opt/conda/etc/profile.d/conda.sh && \
conda activate myenv && \
pip install --user -e .
Then, I also had to define specific scripts that did similar things:
somescript.sh
#!/bin/bash
. /opt/conda/etc/profile.d/conda.sh
conda activate myenv
gunicorn -b 0.0.0.0:8002 --log-config gunicorn_logging.conf -w 2 run:app
Then, in the Dockerfile, I copied these scripts into the image and I use them in my ENTRYPOINT
:
ENTRYPOINT [ "/app/run.sh" ]
I battled this for a long time today and this seems to work.
erewok's solution works for me too, thanks!
The way I do it in my dockerfile is as follows (Source: https://medium.com/@chadlagore/conda-environments-with-docker-82cdc9d25754)
# Optional: set the shell to bash
SHELL ["/bin/bash", "-c"]
# Create your conda env
RUN conda create -n myenv
# Activate myenv and work in this environment
RUN echo "source activate myenv" > ~/.bashrc
ENV PATH /opt/conda/envs/myenv/bin:$PATH
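The final ENV PATH line works because PATH lookup is first-match; a quick sketch of the mechanism using a throwaway directory (the names here are illustrative, not conda's):

```shell
# Build a fake "env bin" containing its own `mytool`, then prepend it to PATH.
mkdir -p /tmp/fakeenv/bin
printf '#!/bin/sh\necho from-fakeenv\n' > /tmp/fakeenv/bin/mytool
chmod +x /tmp/fakeenv/bin/mytool
PATH="/tmp/fakeenv/bin:$PATH" mytool   # prints "from-fakeenv"
```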
@nsarode-joyn this worked for me thanks
@nsarode-joyn Thanks! You saved my day :)
@nsarode-joyn's solution works for me only when I run the docker container with a bash shell (i.e. docker run -it <docker_image> bash)
it still does not work when I want to run the Docker container on a port (I want to run JupyterHub there)
docker run -p 8000:8000 <docker_image> jupyterhub
does not work, acting like I never installed JupyterHub. It works though from bash inside the Docker container.
So to simplify:
How do I activate a conda environment for the duration of a Dockerfile?
i.e. if I say RUN ["python", "train.py"]
how do I make it so that command is run with my environment active?
@rchossein Did you ever solve this problem? I am not able to run it using docker run -p 8000:8000. The image builds correctly, but when I try to run the container, it acts as if my library (starlette, in this case) isn't available. When I activate my conda environment locally on my computer, I don't have any problem running my script. I followed @nsarode-joyn's article on installing using a .yml file.
This is similar to the solutions above, but avoids some of the boilerplate in every RUN command:
ENV BASH_ENV ~/.bashrc
SHELL ["/bin/bash", "-c"]
Then something like this should work as expected:
RUN conda activate my-env && conda info --envs
Or, to set the environment persistently (including for an interactive shell) you could:
RUN echo "conda activate my-env" >> ~/.bashrc
You'll want to be careful when setting BASH_ENV
, especially when using conda-build in a container. Since you'll always source ~/.bashrc
for all (non-interactive) sub-shells, conda-build cannot set up a proper test environment while building a package. It'll always source the base environment activation scripts, which will lead to inconsistent environments.
On a different note, setting an ENTRYPOINT
that sources the conda init commands will work most of the time, but not all. If someone decides to manually override the entry point while starting the container, you're out of luck. I bumped into this issue when trying to set a conda-based Docker interpreter in PyCharm. Because PyCharm uses its own entry point, the one configured in the Dockerfile is useless. As a result, the environment for the Python interpreter was only partially initialised.
IMHO, the only fail-safe way to generate an image is to explicitly write out all the commands generated by
$ conda shell.posix activate <env_name>
and include them at the end of the Dockerfile.
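For reference, the output of that command is essentially a handful of POSIX export statements; roughly like the following (the exact contents vary by conda version, and the paths here are illustrative):

```shell
# Approximately what `conda shell.posix activate my-env` emits (illustrative):
export PATH="/opt/conda/envs/my-env/bin:$PATH"
export CONDA_PREFIX="/opt/conda/envs/my-env"
export CONDA_DEFAULT_ENV="my-env"
export CONDA_PROMPT_MODIFIER="(my-env) "
echo "active env: $CONDA_DEFAULT_ENV"   # prints "active env: my-env"
```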
As @kalefranz mentioned, conda run
is a useful workaround in a Docker context. I stumbled onto this discussion while trying to get conda envs recognized as kernels by jupyterlab. The solution was
RUN conda run -n base conda install --quiet --yes nb_conda_kernels
COPY python37a.yml /home/jovyan/
RUN conda env create -f python37a.yml
RUN conda run -n python37a ipython kernel install --user --name=python37a
Not sure if this is appropriate for adding to this issue but my problem is mirrored exactly in this thread so I thought I'd post here. I'm having the exact same error as described by @JakartaLaw. I'm trying to create a docker image that will result in container with the environment activated on run. Here are the contents of my dockerfile
:
FROM continuumio/miniconda3
ADD environment.yml /tmp/environment.yml
RUN conda env create -f /tmp/environment.yml
RUN echo "conda activate $(head -1 /tmp/environment.yml | cut -d' ' -f2)" >> ~/.bashrc
ENV PATH /opt/conda/envs/$(head -1 /tmp/environment.yml | cut -d' ' -f2)/bin:$PATH
And the yaml file defining the environment:
name: pointcloudz
channels:
- conda-forge
- defaults
dependencies:
- python=3.7
- gdal
- pdal
- entwine
Similar to @nsarode-joyn, I followed the advice in this excellent post. The dockerfile builds without problem, but when I execute
docker run -it pdal_pipeline
I get the following error (inside the container), and the new environment is not active:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
I've gotten to the bottom of the internet in search of an answer, but can't find a solution. I really need the environment to be created, the packages specified in environment.yml to be installed into it, and for it to be active automatically upon running the container. Strangely, the following dockerfile, in which the environment is created directly with conda create rather than from a yaml file, works exactly as expected, but I have not been able to install the desired packages into the newly created environment from the dockerfile itself.
FROM continuumio/miniconda3
RUN conda create -n env python=3.6
RUN echo "source activate env" > ~/.bashrc
ENV PATH /opt/conda/envs/env/bin:$PATH
Any wisdom here would be massively appreciated.
conda run
is not an ideal solution from what I can tell.
Here is a working dockerfile for adding a conda env:
https://github.com/jupyter/docker-stacks/pull/973/files
# Choose your desired base image
FROM jupyter/minimal-notebook:latest
# name your environment and choose python 3.x version
ARG conda_env=python36
ARG py_ver=3.6
# you can add additional libraries you want conda to install by listing them below the first line and ending with "&& \"
RUN conda create --quiet --yes -p $CONDA_DIR/envs/$conda_env python=$py_ver ipython ipykernel && \
conda clean --all -f -y
# alternatively, you can comment out the lines above and uncomment those below
# if you'd prefer to use a YAML file present in the docker build context
# COPY environment.yml /home/$NB_USER/tmp/
# RUN cd /home/$NB_USER/tmp/ && \
# conda env create -p $CONDA_DIR/envs/$conda_env -f environment.yml && \
# conda clean --all -f -y
# create Python 3.x environment and link it to jupyter
RUN $CONDA_DIR/envs/${conda_env}/bin/python -m ipykernel install --user --name=${conda_env} && \
fix-permissions $CONDA_DIR && \
fix-permissions /home/$NB_USER
# any additional pip installs can be added by uncommenting the following line
# RUN $CONDA_DIR/envs/${conda_env}/bin/pip install
# prepend conda environment to path
ENV PATH $CONDA_DIR/envs/${conda_env}/bin:$PATH
# if you want this environment to be the default one, uncomment the following line:
# ENV CONDA_DEFAULT_ENV ${conda_env}
Thank you @mathematicalmichael!
It turns out all I needed to do was add the line:
ENV CONDA_DEFAULT_ENV ${conda_env}
To the bottom of the dockerfile.
For completeness, in case anyone encounters a similar issue, here is the full file:
FROM continuumio/miniconda3
ADD environment.yml /tmp/environment.yml
RUN conda env create -f /tmp/environment.yml
RUN echo "conda activate $(head -1 /tmp/environment.yml | cut -d' ' -f2)" >> ~/.bashrc
ENV PATH /opt/conda/envs/$(head -1 /tmp/environment.yml | cut -d' ' -f2)/bin:$PATH
ENV CONDA_DEFAULT_ENV $(head -1 /tmp/environment.yml | cut -d' ' -f2)
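As an aside, the `head -1 /tmp/environment.yml | cut -d' ' -f2` fragment used above just pulls the env name off the YAML's first line (`name: pointcloudz`); a standalone sketch:

```shell
# Recreate the first lines of the environment.yml and extract the env name.
printf 'name: pointcloudz\nchannels:\n  - conda-forge\n' > /tmp/environment.yml
head -1 /tmp/environment.yml | cut -d' ' -f2   # prints "pointcloudz"
```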
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. If your shell is Bash or a Bourne variant, enable conda for the current user with
`$ echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc` ...
You can do the exact same thing inside docker:
RUN echo "source /opt/conda/etc/profile.d/conda.sh"
RUN echo "conda activate myenv" >> $HOME/.bashrc
CMD ["/bin/bash"]
The first line modifies the path and other environment variables inside the container, which are maintained in the image, so you don't need to put this command in the bashrc. Running bash as the entry point will then load the environment, so that you get a shell inside myenv.
I tried nearly all of them, but I could not manage to activate a Conda environment in a Heroku container. If someone already did, please share it.
Use SHELL ["/bin/bash", "-c"]
in the beginning before you activate the environment.
FROM continuumio/miniconda3:latest
SHELL ["/bin/bash", "-c"]
COPY environment.yml /home/files/environment.yml
RUN conda env create -f /home/files/environment.yml
RUN conda activate webapp
Another thing: creating an environment inside the docker image increases its size; you can just use the base env.
I tried nearly all of them, but I could not manage to activate a Conda environment on Heroku container. If someone already did, please type it.
have you tried the instructions here?:
https://jupyter-docker-stacks.readthedocs.io/en/latest/using/recipes.html#add-a-python-3-x-environment
So to simplify:
How do I activate a conda environment for the duration of a Dockerfile?
i.e. if I say
RUN ["python", "train.py"]
how do I make it so that command is run with my environment active?
Did you find a solution for the issue? None of the solutions given in the thread have any effect if I directly add RUN ["python", "train.py"]
to my dockerfile and run the docker container with docker run --rm -it ...
It always falls back to the base environment of conda (/opt/conda/bin/python).
Is there no other solution than using a dedicated bash script like run.sh
?
@nicornk You might check out this article: Activating a Conda environment in your Dockerfile
tl;dr
# The code to run when container is started:
COPY run.py .
ENTRYPOINT ["conda", "run", "-n", "myenv", "python", "run.py"]
@nicornk You might check out this article: Activating a Conda environment in your Dockerfile
tl;dr
# The code to run when container is started:
COPY run.py .
ENTRYPOINT ["conda", "run", "-n", "myenv", "python", "run.py"]
@sterlinm Thanks for your reply. I saw that article already and tried it out but there seems to be a difference how the output is streamed / flushed to the console.
For example, if we have the following main.py
:
import sys
import time
print("Hello world")
sys.stdout.flush()
for i in range(0, 10):
    print(i)
    sys.stdout.flush()
    time.sleep(1)
print("goodbye :)")
and use the following run.sh:
#!/bin/bash
set -e
. /opt/conda/etc/profile.d/conda.sh
conda activate $(head -1 /home/user/environment.yml | cut -d' ' -f2)
python src/main.py
and the following lines in the dockerfile:
RUN chmod +x /home/user/run.sh
CMD ["/home/user/run.sh"]
the output is streamed to the console line by line, as expected. However, when I instead use the following CMD in the dockerfile:
CMD ["conda", "run", "-n", "utilization-management", "python", "src/main.py"]
the output is not flushed as it is produced.
So, there seems to be a fundamental difference between using conda run
and using a shell script to source the environment.
Any idea?
if I say
RUN ["python", "train.py"]
how do I make it so that command is run with my environment active?
I don't think there is a (simple) way to do that. Conda makes modifications to many environment variables and search paths, and in order to reflect this behaviour inside a container, you either need to make all those changes by hand or let conda do it - which means you have to run a shell and start your program inside it. You can do that by using conda run ...
or by running a shell script.
there seems to be a difference how the output is streamed / flushed to the console.
The observation seems right. In the first example, docker pipes to stdout of the bash script directly. In the second example, the output of the python program is buffered before piping it to stdout. The main difference is, in the second example, no shell is executed.
Either python itself or conda could be the reason for that (in fact, I've seen similar issues with python before). Maybe try running the python program in a container that has all dependencies installed via pip or a system repository?
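One way to probe the Python side in isolation: when stdout is a pipe rather than a tty, Python block-buffers it, and running the interpreter with -u (or PYTHONUNBUFFERED=1) is a common workaround. A minimal sketch of that workaround:

```python
import subprocess
import sys

# Run a child Python with -u so its piped stdout is unbuffered; without -u,
# prints to a pipe are block-buffered and may arrive only when the buffer fills.
code = "for i in range(3):\n    print(i)\n"
result = subprocess.run(
    [sys.executable, "-u", "-c", code],
    capture_output=True,
    text=True,
)
print(result.stdout.split())  # -> ['0', '1', '2']
```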
My assumption is that conda run
does not flush properly to the console.
According to this issue, it's anyway not a good idea to use conda run
at all:
conda/conda#9599
conda run however continues to remain experimental and therefore unfit for production use.
@nicornk I hadn't noticed that issue with how it buffers. I'd seen that conda run is considered experimental, but it's been considered experimental for years it seems. My impression is that all of the strategies for activating conda inside of docker are a bit wonky so you have to pick your poison.
There's an open issue for allowing conda run to avoid buffering stdout, and somebody there came up with a trick that seems to work.
conda run -n py38 bash -c 'python src/main.py > /dev/tty 2>&1'
Now, this doesn't quite get you to having RUN ["python", "train.py"]
work, but I'd imagine you could write a bash script that would take that input and pass it into the conda run command as above. It may be more trouble than it's worth.
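A sketch of such a wrapper (the helper name is hypothetical; to stay runnable anywhere, it only echoes the conda run command it would execute, since conda may not be on PATH):

```shell
# run_in_env: forward an env name plus an arbitrary command to `conda run`.
run_in_env() {
  env_name="$1"
  shift
  # In a real container, drop the leading `echo` to actually execute it.
  echo conda run -n "$env_name" "$@"
}
run_in_env py38 python src/main.py   # prints "conda run -n py38 python src/main.py"
```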
For what it's worth, I tend to use a shell script that sources the conda.sh file and activates the environment.
Here is the code that has worked for me, which logs in as a non-root user, copies some files to the container, uses the base environment, and runs an application.
FROM continuumio/miniconda3
LABEL "maintainer"="YOUR_NAME"
ENV MYUSER nonroot_user_name
RUN useradd -m $MYUSER
USER $MYUSER
WORKDIR /home/$MYUSER
# Copy applications files
COPY ./src ./src
# Switch shell sh (default in Linux) to bash
SHELL ["/bin/bash", "-c"]
# Give bash access to Anaconda
RUN echo "source activate env" >> ~/.bashrc && \
source /home/$MYUSER/.bashrc
# Run application when run the image
# CMD ["python", "src/app.py"]
For those trying to use conda in a non-interactive shell within your container, most (seemingly all) of the instructions in this issue won't help you (although they will help with interactive shell).
See conda/conda#7980 for more information.
In that case, adding this to your bash script makes it work:
eval "$(conda shell.bash hook)"
conda activate my-env
Is there a solution that also reliably works with docker compose?
ARG CUDA_VERSION=10.2
ARG CUDNN_VERSION=7
ARG OS_VERSION=18.04
FROM nvidia/cuda:${CUDA_VERSION}-cudnn${CUDNN_VERSION}-devel-ubuntu${OS_VERSION}
# Set paths for anaconda3 and cuda
ENV ANACONDA_HOME=/opt/anaconda3
ENV CUDA_PATH=/usr/local/cuda
ENV PATH ${CUDA_PATH}/bin:$PATH
ENV LD_LIBRARY_PATH ${CUDA_PATH}/lib64:$LD_LIBRARY_PATH
ENV C_INCLUDE_PATH ${CUDA_PATH}/include
ENV DEBIAN_FRONTEND=noninteractive
RUN rm /etc/apt/sources.list.d/cuda.list
# Set locale
ENV LANG C.UTF-8 LC_ALL=C.UTF-8
# Update package sources
#RUN sed -i s:/archive.ubuntu.com:/mirrors.aliyun.com/ubuntu:g /etc/apt/sources.list
#RUN sed -i s:/archive.ubuntu.com:/mirrors.tuna.tsinghua.edu.cn/ubuntu:g /etc/apt/sources.list
#RUN cat /etc/apt/sources.list
RUN apt-get -y update --fix-missing && \
    apt-get install -y --no-install-recommends \
    g++ \
    wget \
    python-opencv \
    build-essential \
    cmake \
    git \
    curl \
    ca-certificates \
    zip \
    vim \
    unzip && \
    apt-get clean
# Install Anaconda
RUN wget --quiet https://repo.anaconda.com/archive/Anaconda3-5.3.0-Linux-x86_64.sh -O ~/anaconda.sh && \
    /bin/bash ~/anaconda.sh -b -p $ANACONDA_HOME && \
    rm ~/anaconda.sh && \
    ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
    echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
    echo "conda activate base" >> ~/.bashrc
ENV PATH ${ANACONDA_HOME}/bin:$PATH
ENV LD_LIBRARY_PATH ${ANACONDA_HOME}/lib:$LD_LIBRARY_PATH
# Set conda env name
ENV CONDA_ENV_NAME faceformer
SHELL ["/bin/bash", "-c"]
ENV PATH ${ANACONDA_HOME}/envs/$CONDA_ENV_NAME/bin:$PATH
RUN conda create --name $CONDA_ENV_NAME python=3.7 -y
RUN echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
    echo "conda activate faceformer" >> ~/.bashrc && \
    conda activate $CONDA_ENV_NAME && \
    conda install pytorch==1.9.0 torchvision==0.10.0 torchaudio==0.9.0 cudatoolkit=10.2 -c pytorch && \
    pip install -r requirements.txt
The build fails with the following error:
Step 21/21 : RUN echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && echo "conda activate faceformer" >> ~/.bashrc && conda activate $CONDA_ENV_NAME && conda install pytorch==1.9.0 torchvision==0.10.0 torchaudio==0.9.0 cudatoolkit=10.2 -c pytorch && pip install -r requirements.txt
---> Running in 585635c766cf
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
If your shell is Bash or a Bourne variant, enable conda for the current user with
$ echo ". /opt/anaconda3/etc/profile.d/conda.sh" >> ~/.bashrc
or, for all users, enable conda with
$ sudo ln -s /opt/anaconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh
The options above will permanently enable the 'conda' command, but they do NOT
put conda's base (root) environment on PATH. To do so, run
$ conda activate
in your terminal, or to put the base environment on PATH permanently, run
$ echo "conda activate" >> ~/.bashrc
Previous to conda 4.4, the recommended way to activate conda was to modify PATH in
your ~/.bashrc file. You should manually remove the line that looks like
export PATH="/opt/anaconda3/bin:$PATH"
^^^ The above line should NO LONGER be in your ~/.bashrc file! ^^^
The command '/bin/bash -c echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && echo "conda activate faceformer" >> ~/.bashrc && conda activate $CONDA_ENV_NAME && conda install pytorch==1.9.0 torchvision==0.10.0 torchaudio==0.9.0 cudatoolkit=10.2 -c pytorch && pip install -r requirements.txt' returned a non-zero code: 1
The way I do it in my dockerfile is as follows (Source: https://medium.com/@chadlagore/conda-environments-with-docker-82cdc9d25754)
# Optional: set the shell to bash
SHELL ["/bin/bash", "-c"]
# Create your conda env
RUN conda create -n myenv
# Activate myenv and work in this environment
RUN echo "source activate myenv" > ~/.bashrc
ENV PATH /opt/conda/envs/env/bin:$PATH
You saved me!
My Jenkins shell command:
docker exec -i containerxxx bash -c "PATH=/opt/conda/envs/$ENV_NAME/bin:$PATH; python --version; xxxxx"
You'll want to be careful when setting BASH_ENV, especially when using conda-build in a container. Since you'll always source ~/.bashrc for all (non-interactive) sub-shells, conda-build cannot set up a proper test environment while building a package. It'll always source the base environment activation scripts, which will lead to inconsistent environments.
On a different note, setting an ENTRYPOINT that sources the conda init commands will work most of the time, but not all. If someone decides to manually override the entry point while starting the container, you're out of luck. I bumped into this issue when trying to set a conda-based Docker interpreter in PyCharm. Because PyCharm uses its own entry point, the one configured in the Dockerfile is useless. As a result, the environment for the Python interpreter was only partially initialised.
IMHO, the only fail-safe way to generate an image is to explicitly write out all the commands generated by
$ conda shell.posix activate <env_name>
and include them at the end of the Dockerfile.
IT'S TRUE!! I encountered an error installing x11-common due to the "ENV BASH_ENV ~/.bashrc" command when building a docker image.