5 Steps to More Interactive Deep Learning
Running Jupyter in Docker on a Deep Learning Virtual Machine (DLVM)
In the following steps I will walk through the process of setting up a Deep Learning Virtual Machine on Azure and running a Jupyter notebook through Nvidia Docker to enable more seamless interaction with custom deep learning environments.
The DLVM contains several tools for AI, including GPU editions of popular deep learning frameworks as well as Microsoft R Server Developer Edition, Anaconda Python, Jupyter notebooks for Python and R, IDEs for Python and R, a SQL database, and many other data science and ML tools.
The DLVM runs on Azure GPU NC-series VM instances. These GPUs use discrete device assignment, resulting in performance close to bare-metal, and are well-suited to deep learning problems.
Why Use Nvidia Docker on the DLVM?
You might be thinking to yourself: if the DLVM base image comes with the most popular deep learning frameworks pre-installed, why bother with containerization tools such as Nvidia-Docker at all?
When attempting to run deep learning tasks, I often find myself facing dependency nightmares.
Deep learning researchers tend to think less about production when they publish code to GitHub. If they can get a package working in their own development environment, they often assume that others will be able to do so as well.
Even with Python package managers such as pip and Anaconda, I often find that when getting a cool new project to run, or a model to train, lower-level dependencies such as my CUDA version get in the way.
For those not in the know, CUDA is a parallel computing platform and application programming interface (API) developed by Nvidia. It allows software developers and engineers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing.
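To see which CUDA version a machine already has, you can query the driver and toolkit directly (these commands assume a GPU machine with the Nvidia driver and CUDA toolkit installed, as the DLVM image provides):

```shell
# Reports the driver version and attached GPUs (recent drivers also show a CUDA version)
nvidia-smi

# Reports the version of the installed CUDA toolkit compiler
nvcc --version
```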
Certain versions of TensorFlow won't work with versions of CUDA above 9.1, while other frameworks such as PyTorch seem to perform better with later CUDA versions.
To get around these issues and to increase the usability of my code I’ve started to use Nvidia-Docker to manage and run all my deep learning projects.
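As a sketch of what this looks like in practice, a project can pin the exact CUDA and framework versions it needs in a Dockerfile, independent of whatever is installed on the host (the image tag and TensorFlow version below are illustrative of a CUDA 9.0 pairing; check Docker Hub and the TensorFlow release notes for current combinations):

```dockerfile
# Pin the CUDA/cuDNN version the project was developed against
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

RUN apt-get update && apt-get install -y python3-pip

# Pin a TensorFlow build that matches the CUDA version above
RUN pip3 install tensorflow-gpu==1.12.0
```

Anyone who builds this image gets the same environment, regardless of which CUDA version their host machine happens to have.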
Using Nvidia-Docker to maintain my projects has the added advantage that it lets me more easily scale projects for production with orchestration services such as Kubernetes or Batch AI. Azure provides a great managed Kubernetes service, which I recommend checking out.
Serving Jupyter Notebooks with Nvidia Docker on an Azure DLVM
In the following steps I will walk you through the process of setting up a DLVM on Azure and running a Jupyter notebook through Nvidia Docker.
Prerequisite: An Azure Subscription with Access to GPUs
Step 1 Create a Linux DLVM
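If you prefer the CLI to the Azure Portal, a VM of this kind can be provisioned with the Azure CLI. The resource names below are placeholders, and the image URN is an assumption — confirm the current Deep Learning VM offer in the Azure Marketplace before using it:

```shell
# Create a resource group to hold the VM (names and region are illustrative)
az group create --name my-dlvm-rg --location eastus

# Create the VM on a GPU-backed NC-series size;
# the --image URN is an assumption -- check the Marketplace listing
az vm create \
  --resource-group my-dlvm-rg \
  --name my-dlvm \
  --image microsoft-ads:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest \
  --size Standard_NC6 \
  --admin-username azureuser \
  --generate-ssh-keys
```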
Step 2 Open Port 8888 on the DLVM
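This can be done from the Portal's networking blade, or in one line with the Azure CLI (resource names match the placeholders used above):

```shell
# Add an inbound network security group rule allowing traffic to Jupyter's port
az vm open-port --resource-group my-dlvm-rg --name my-dlvm --port 8888
```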
Step 3 Connect to the DLVM with the Azure Shell
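From the Azure Cloud Shell (or any terminal with SSH), connect using the admin username and the VM's public IP or DNS name (the values below are placeholders for whatever you configured in Step 1):

```shell
# DNS name shown is illustrative -- use your VM's actual address
ssh azureuser@my-dlvm.eastus.cloudapp.azure.com
```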
Step 4 Run the Docker Container & Link Port 8888 to the VM Host
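As a sketch, assuming you use one of the GPU-enabled TensorFlow images from Docker Hub (any Jupyter-capable GPU image works the same way), the run command maps the container's port 8888 to the host:

```shell
# Start Jupyter in a GPU-enabled container, publishing port 8888 to the host
nvidia-docker run -d -p 8888:8888 \
  tensorflow/tensorflow:latest-gpu \
  jupyter notebook --ip=0.0.0.0 --allow-root --no-browser

# Retrieve the tokenized notebook URL from the most recent container's logs
docker logs $(docker ps -lq)
```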
Step 5 Navigate to the Jupyter Notebook in the Browser
Now that your Jupyter notebook is running, to access it in the browser:
- Copy the link to the local notebook, e.g. http://cd3cdb8ea05f:8888/?token=66dc6919e8762c8136006cffd90b7b16f3fa7fd1fa591637
- Replace the http://cd3cdb8ea05f or http://localhost part of the Jupyter URL with your VM's DNS name
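The substitution in the last bullet can be done by hand, or with a quick one-liner (the hostname token is the example value from above; the DNS name is a placeholder for your VM's):

```shell
# Swap the container hostname for the VM's DNS name (example values throughout)
echo "http://cd3cdb8ea05f:8888/?token=66dc6919e8762c8136006cffd90b7b16f3fa7fd1fa591637" \
  | sed 's#http://[^:/]*#http://my-dlvm.eastus.cloudapp.azure.com#'
# → http://my-dlvm.eastus.cloudapp.azure.com:8888/?token=66dc6919e8762c8136006cffd90b7b16f3fa7fd1fa591637
```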
There you have it! With just five simple steps you can now interact with your custom deep learning environment through Jupyter on Azure.
If this was helpful, be sure to follow me and keep an eye out for more posts in the near future.