I read some articles about using a virtual environment in Docker. Their argument is that the purpose of a virtualenv in Docker is to introduce isolation and limit conflicts with system packages, etc.
However, aren’t Docker and Python-based images (e.g., python:*) already doing the same thing?
Can someone eli5 this whole thing?
It’s not necessary but there is no reason not to.
Pros:

Cons: you have to use `venv/bin/python3` instead of just `python3` in the RUN line of your Dockerfile.

It's easy to set the PATH to include the venv in the Dockerfile; that way you never have to activate, either in the RUN line or if you exec into the container. Also this makes all your custom entry points super easy to use. Bonus: it's super easy to use uv to get very fast image builds that way. See this example: https://gist.github.com/dwt/6c38a3462487c0a6f71d93a4127d6c73
Surely if upgrading Python affects your global Python packages, it will also affect your venv packages?
This can also be done without using venvs; you just need to copy the packages to the location where global packages are installed.
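The no-venv variant described here can be sketched as a multi-stage build; the site-packages path below assumes a `python:3.12-slim` base and would need adjusting for other images:

```dockerfile
FROM python:3.12-slim AS build
COPY requirements.txt .
# Install straight into the build stage's global site-packages.
RUN pip install --no-cache-dir -r requirements.txt

FROM python:3.12-slim
# Copy the installed packages (and any console scripts) into the same
# global locations in the final image - no venv involved.
COPY --from=build /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=build /usr/local/bin /usr/local/bin

WORKDIR /app
COPY . .
CMD ["python3", "main.py"]
```

Both stages must use the same base image (and thus the same Python version), otherwise the copied packages may not match the interpreter.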
Upgrading the base image does not imply updating your Python, and even updating your Python does not imply updating your Python packages (except for the standard library, of course).
Sure, but in the case where you upgrade Python and it affects Python packages, it would affect global packages and a venv in the same way.
Sure, if that happens. But it may also not, which is actually usually the case. Sure, it's not 100% safe, but it is safer.
If you're on an Apple Silicon Mac, Docker performance can be atrocious if you are emulating. It can also be inconvenient to work with Docker volumes and networks. Python already has `pyenv` and tools like `poetry` and `rye`. Unless there's a need for Docker, I personally would generally avoid it (though I do almost all my deployments via Docker containers).