Deepo is an open framework for painlessly assembling specialized Docker images for deep learning research. It provides a "Lego set" of dozens of standard components for preparing deep learning tools, along with a framework for composing them into custom Docker images.
At the core of Deepo is a Dockerfile generator that
- lets you customize your deep learning environment with Lego-like modules
  - describe your environment in a single command line
  - Deepo generates Dockerfiles following best practices and handles all the configuration for you
- automatically resolves dependencies
  - Deepo knows which combinations of CUDA, cuDNN, Python, PyTorch, TensorFlow, etc. are compatible
  - picks the right versions on your behalf and determines the correct installation order via topological sorting
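The install-order step can be sketched as a topological sort over a module dependency graph. The module names and edges below are hypothetical illustrations, not Deepo's internal data structures; the sketch uses only the standard library (`graphlib`, Python 3.9+):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency graph: each module maps to the modules
# that must be installed before it.
deps = {
    "cuda": [],
    "cudnn": ["cuda"],
    "python": [],
    "pytorch": ["python", "cudnn"],
    "tensorflow": ["python", "cudnn"],
}

# static_order() yields every module after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Any valid ordering is acceptable; the only guarantee is that, e.g., `cuda` precedes `cudnn`, which precedes `pytorch` and `tensorflow`.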
We also provide a series of pre-built Docker images that
- let you instantly set up common deep learning research environments
- support widely used deep learning frameworks
- support GPU acceleration (CUDA and cuDNN included) and also work in CPU-only mode
- run on Linux (CPU/GPU), Windows (CPU), and macOS (CPU)
GPU version:

Step 1. Install Docker and the NVIDIA Container Toolkit.
Step 2. Pull the all-in-one image from Docker Hub:

```bash
docker pull ufoym/deepo
```

Verify that GPU access works inside a container:

```bash
docker run --gpus all --rm ufoym/deepo nvidia-smi
```

If this does not work, check the issues section of the NVIDIA Container Toolkit repository on GitHub; many solutions are already documented. To launch an interactive shell in a persistent container:

```bash
docker run --gpus all -it ufoym/deepo bash
```

To share data and configuration between the host (your machine or VM) and the container, use the -v option:

```bash
docker run --gpus all -it -v /host/data:/data -v /host/config:/config ufoym/deepo bash
```

This makes /host/data on the host visible as /data inside the container, and /host/config as /config. This isolation helps prevent containerized experiments from accidentally overwriting or reading the wrong data.
Note that some frameworks (e.g., PyTorch) use shared memory for inter-process communication. If you use multiprocessing, the container's default shared memory size may be insufficient. Increase it with --ipc=host or --shm-size:
```bash
docker run --gpus all -it --ipc=host ufoym/deepo bash
```

CPU version:

Step 1. Install Docker.
Step 2. Pull the all-in-one image from Docker Hub:

```bash
docker pull ufoym/deepo:cpu
```

Launch an interactive shell:

```bash
docker run -it ufoym/deepo:cpu bash
```

To share data and configuration between the host (your machine or VM) and the container, use the -v option:

```bash
docker run -it -v /host/data:/data -v /host/config:/config ufoym/deepo:cpu bash
```

This makes /host/data on the host visible as /data inside the container, and /host/config as /config. This isolation helps prevent containerized experiments from accidentally overwriting or reading the wrong data.

Note that some frameworks (e.g., PyTorch) use shared memory for inter-process communication. If you use multiprocessing, the container's default shared memory size may be insufficient. Increase it with --ipc=host or --shm-size:

```bash
docker run -it --ipc=host ufoym/deepo:cpu bash
```

You are now ready to begin your journey.
```console
$ python
>>> import tensorflow
>>> import torch
>>> import keras
>>> import mxnet
>>> import chainer
>>> import paddle
```

```console
$ darknet
usage: darknet <function>
```
The docker pull ufoym/deepo command from Quick Start gives you a standard image containing every available deep learning framework. You can also customize your own environment.
If you prefer a single framework instead of the all-in-one image, simply append a tag with the framework name. For example, to pull TensorFlow only:
```bash
docker pull ufoym/deepo:tensorflow
```

For Jupyter support, pull the all-in-one image and publish a port for Jupyter Lab, for example:

```bash
docker pull ufoym/deepo
docker run --gpus all -it -p 8888:8888 -v /home/u:/root --ipc=host ufoym/deepo jupyter lab --no-browser --ip=0.0.0.0 --allow-root --LabApp.allow_origin='*' --LabApp.root_dir='/root'
```

To build your own customized image, clone the repository and use the generator:

```bash
git clone https://github.com/ufoym/deepo.git
cd deepo/generator
```

For example, to create an image with pytorch and keras:
```bash
python generate.py Dockerfile pytorch keras
```

Or with CUDA 11.3 and cuDNN 8:

```bash
python generate.py Dockerfile pytorch keras --cuda-ver 11.3.1 --cudnn-ver 8
```

This generates a Dockerfile with everything needed to build pytorch and keras. The generator automatically resolves dependencies and topologically sorts them, so you don't need to worry about missing packages or ordering.

You can also specify the Python version:

```bash
python generate.py Dockerfile pytorch keras python==3.8
```

Then build the image:

```bash
docker build -t my/deepo .
```

This may take several minutes, as some libraries are compiled from source.
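Conceptually, a generator like this composes per-module Dockerfile fragments after resolving dependencies. The toy sketch below illustrates that idea; the module names, fragments, and base image are hypothetical and this is not Deepo's actual implementation:

```python
# Hypothetical per-module Dockerfile fragments (illustration only).
FRAGMENTS = {
    "python": "RUN apt-get update && apt-get install -y python3-pip",
    "pytorch": "RUN pip install torch",
    "keras": "RUN pip install tensorflow keras",
}
# Hypothetical dependency edges: module -> modules it needs first.
DEPS = {"python": [], "pytorch": ["python"], "keras": ["python"]}

def generate_dockerfile(modules, base="nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04"):
    """Emit each requested module's fragment after its dependencies, once each."""
    lines, seen = [f"FROM {base}"], set()

    def visit(module):
        if module in seen:
            return
        seen.add(module)
        for dep in DEPS[module]:
            visit(dep)          # dependencies first (depth-first topological order)
        lines.append(FRAGMENTS[module])

    for module in modules:
        visit(module)
    return "\n".join(lines)

print(generate_dockerfile(["pytorch", "keras"]))
```

Note how the shared `python` dependency is emitted exactly once, before either framework, which is the property that makes the real generator's output buildable without missing packages.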
| . | modern-deep-learning | dl-docker | jupyter-deeplearning | Deepo |
|---|---|---|---|---|
| ubuntu | 16.04 | 14.04 | 14.04 | 20.04 |
| cuda | X | 8.0 | 6.5-8.0 | 11.3/None |
| cudnn | X | v5 | v2-5 | v8 |
| onnx | X | X | X | O |
| tensorflow | O | O | O | O |
| pytorch | X | X | X | O |
| keras | O | O | O | O |
| mxnet | X | X | X | O |
| chainer | X | X | X | O |
| darknet | X | X | X | O |
| paddlepaddle | X | X | X | O |
| . | CUDA 11.3 / Python 3.8 | CPU-only / Python 3.8 |
|---|---|---|
| all-in-one | latest all all-py38 py38-cu113 all-py38-cu113 | all-py38-cpu all-cpu py38-cpu cpu |
| TensorFlow | tensorflow-py38-cu113 tensorflow-py38 tensorflow | tensorflow-py38-cpu tensorflow-cpu |
| PyTorch | pytorch-py38-cu113 pytorch-py38 pytorch | pytorch-py38-cpu pytorch-cpu |
| Keras | keras-py38-cu113 keras-py38 keras | keras-py38-cpu keras-cpu |
| MXNet | mxnet-py38-cu113 mxnet-py38 mxnet | mxnet-py38-cpu mxnet-cpu |
| Chainer | chainer-py38-cu113 chainer-py38 chainer | chainer-py38-cpu chainer-cpu |
| Darknet | darknet-cu113 darknet | darknet-cpu |
| PaddlePaddle | paddle-cu113 paddle | paddle-cpu |
| . | CUDA 11.3 / Python 3.6 | CUDA 11.1 / Python 3.6 | CUDA 10.1 / Python 3.6 | CUDA 10.0 / Python 3.6 | CUDA 9.0 / Python 3.6 | CUDA 9.0 / Python 2.7 | CPU-only / Python 3.6 | CPU-only / Python 2.7 |
|---|---|---|---|---|---|---|---|---|
| all-in-one | py36-cu113 all-py36-cu113 | py36-cu111 all-py36-cu111 | py36-cu101 all-py36-cu101 | py36-cu100 all-py36-cu100 | py36-cu90 all-py36-cu90 | all-py27-cu90 all-py27 py27-cu90 | | all-py27-cpu py27-cpu |
| all-in-one with jupyter | | | | | all-jupyter-py36-cu90 | all-py27-jupyter py27-jupyter | | all-py27-jupyter-cpu py27-jupyter-cpu |
| Theano | theano-py36-cu113 | theano-py36-cu111 | theano-py36-cu101 | theano-py36-cu100 | theano-py36-cu90 | theano-py27-cu90 theano-py27 | | theano-py27-cpu |
| TensorFlow | tensorflow-py36-cu113 | tensorflow-py36-cu111 | tensorflow-py36-cu101 | tensorflow-py36-cu100 | tensorflow-py36-cu90 | tensorflow-py27-cu90 tensorflow-py27 | | tensorflow-py27-cpu |
| Sonnet | sonnet-py36-cu113 | sonnet-py36-cu111 | sonnet-py36-cu101 | sonnet-py36-cu100 | sonnet-py36-cu90 | sonnet-py27-cu90 sonnet-py27 | | sonnet-py27-cpu |
| PyTorch | pytorch-py36-cu113 | pytorch-py36-cu111 | pytorch-py36-cu101 | pytorch-py36-cu100 | pytorch-py36-cu90 | pytorch-py27-cu90 pytorch-py27 | | pytorch-py27-cpu |
| Keras | keras-py36-cu113 | keras-py36-cu111 | keras-py36-cu101 | keras-py36-cu100 | keras-py36-cu90 | keras-py27-cu90 keras-py27 | | keras-py27-cpu |
| Lasagne | lasagne-py36-cu113 | lasagne-py36-cu111 | lasagne-py36-cu101 | lasagne-py36-cu100 | lasagne-py36-cu90 | lasagne-py27-cu90 lasagne-py27 | | lasagne-py27-cpu |
| MXNet | mxnet-py36-cu113 | mxnet-py36-cu111 | mxnet-py36-cu101 | mxnet-py36-cu100 | mxnet-py36-cu90 | mxnet-py27-cu90 mxnet-py27 | | mxnet-py27-cpu |
| CNTK | cntk-py36-cu113 | cntk-py36-cu111 | cntk-py36-cu101 | cntk-py36-cu100 | cntk-py36-cu90 | cntk-py27-cu90 cntk-py27 | | cntk-py27-cpu |
| Chainer | chainer-py36-cu113 | chainer-py36-cu111 | chainer-py36-cu101 | chainer-py36-cu100 | chainer-py36-cu90 | chainer-py27-cu90 chainer-py27 | | chainer-py27-cpu |
| Caffe | caffe-py36-cu113 | caffe-py36-cu111 | caffe-py36-cu101 | caffe-py36-cu100 | caffe-py36-cu90 | caffe-py27-cu90 caffe-py27 | | caffe-py27-cpu |
| Caffe2 | | | | | caffe2-py36-cu90 caffe2-py36 caffe2 | caffe2-py27-cu90 caffe2-py27 | caffe2-py36-cpu caffe2-cpu | caffe2-py27-cpu |
| Torch | torch-cu113 | torch-cu111 | torch-cu101 | torch-cu100 | torch-cu90 | torch-cu90 torch | torch-cpu | |
| Darknet | darknet-cu113 | darknet-cu111 | darknet-cu101 | darknet-cu100 | darknet-cu90 | darknet-cu90 darknet | darknet-cpu | |
If Deepo is helpful to you, please cite it in your work:

```bibtex
@misc{ming2017deepo,
  author = {Ming Yang},
  title = {Deepo: Set up a deep learning environment with a single command line.},
  year = {2017},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/ufoym/deepo}}
}
```
We appreciate all contributions. If you are planning to contribute bug fixes, please go ahead and open a pull request directly. If you plan to contribute new features, utility functions, or extensions, please open an issue first to discuss your idea with us.
Deepo is MIT licensed.
