Docker on WSL for Windows 10

Configure Docker for Windows

In the general settings, you’ll want to expose the daemon without TLS. This step is necessary so that the daemon listens on a TCP endpoint. If you don’t do this then you won’t be able to connect from WSL.


You may also want to share any drives you plan on having your source code reside on. This step isn’t necessary but I keep my code on a regular HDD, so I shared my “E” drive too.

Can’t use Docker for Windows?

This is only necessary if you are NOT running Docker for Windows!

No problem, just configure your Docker daemon to use -H tcp://0.0.0.0:2375 and --tlsverify=false. Then you can follow along with the rest of this guide exactly.


Here are the Ubuntu 16.04 installation notes, taken from Docker’s documentation:

This will install the edge channel, change ‘edge’ to ‘stable’ if you want. You may also want to update the Docker Compose version based on the latest release.

# Environment variables you need to set so you don't have to edit the script below.
export DOCKER_CHANNEL=edge
export DOCKER_COMPOSE_VERSION=1.18.0  # check the Docker Compose releases page for the latest version

# Update the apt package index.
sudo apt-get update

# Install packages to allow apt to use a repository over HTTPS.
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

# Add Docker's official GPG key.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Verify the fingerprint.
sudo apt-key fingerprint 0EBFCD88

# Pick the release channel.
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   ${DOCKER_CHANNEL}"

# Update the apt package index.
sudo apt-get update

# Install the latest version of Docker CE.
sudo apt-get install -y docker-ce

# Allow your user to access the Docker CLI without needing root.
sudo usermod -aG docker $USER

# Install Docker Compose.
sudo curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose &&
sudo chmod +x /usr/local/bin/docker-compose

At this point you must close your terminal and open a new one so that you can run Docker without sudo. You might as well do it now!

Configure WSL to Connect to Docker for Windows

The next step is to configure WSL so that it knows how to connect to the remote Docker daemon running in Docker for Windows (remember, it’s listening on port 2375).

Open up your ~/.bashrc file and add this line to the bottom:

export DOCKER_HOST=tcp://localhost:2375

Logout of your WSL shell and come back in, or run source ~/.bashrc to reload it now.

If you want to get cute, you could do all of that with this 1 liner:

echo "export DOCKER_HOST=tcp://localhost:2375" >> ~/.bashrc && source ~/.bashrc

You can verify it works by running docker info. You should get back a list of details. If you get a permission denied error, make sure you log out and log back in, because that’s necessary to apply the change that lets non-root users run Docker. That’s what the sudo usermod step in the installation script did.

Ensure Volume Mounts Work

The last thing we need to do is set things up so that volume mounts work. This tripped me up for a while because check this out…

When using WSL, Docker for Windows expects you to supply your volume paths in a format that matches this: /c/Users/bcochran/dev/myapp.

But, WSL doesn’t work like that. Instead, it uses the /mnt/c/Users/bcochran/dev/myapp format.

To get things to work for now, you need to bind a custom mount for any drives that you shared with Docker for Windows.

Bind custom mount points to fix Docker for Windows and WSL differences:
sudo mkdir /c
sudo mount --bind /mnt/c /c

You’ll want to repeat those commands for any drives that you shared, such as d or e.

Verify that it works by running: ls -la /c. You should see the same exact output as running ls -la /mnt/c, because /mnt/c is mounted to /c.

You can use volume mount paths like .:/myapp in your Docker Compose files and everything will work like normal. That’s awesome because that format is what native Linux and MacOS users also use.

It’s worth noting that whenever you run a docker-compose up, you’ll want to make sure you navigate to the /c/Users/bcochran/dev/myapp location first, otherwise your volume won’t work. In other words, never access /mnt/c directly.


Automatically set up the bind mount:


You can do that with this 1 liner: echo "sudo mount --bind /mnt/c /c" >> ~/.bashrc && source ~/.bashrc and make sure to repeat the command for any additional drives you shared with Docker for Windows. By the way, you don’t need to mkdir because we already did it.

Allow your user to bind a mount without a root password:

To do that, run the sudo visudo command.

That should open up nano (a text editor). Go to the bottom of the file and add this line: bcochran ALL=(root) NOPASSWD: /bin/mount, but replace “bcochran” with your username.

That just allows your user to execute the sudo mount command without having to supply a password. You can save the file with CTRL+O, confirm and exit with CTRL+X.


The Jupyter service is now updated to the new JupyterLab environment.

If you haven’t used Jupyter before, please try it out. It’s a powerful notebook environment for interactive programming. It also includes a fully-featured browser-based command-line terminal, which makes a convenient alternative to SSH.


This update includes:

  • The new JupyterLab environment
  • Support for Julia 0.6.2, Python 3.6, Matlab R2017b and R 3.4.2 kernels
  • Better web-based terminal, with full support for qsub/qstat and modules
  • Support for packages installed in your personal conda environments within Python notebooks, enabled with:

from rcs import *



Azure Notebooks


Microsoft Azure Notebooks is a free service that provides Jupyter Notebooks along with supporting packages for R, Python and F#. The great thing about this service is that no downloads or lengthy setups are required. After signing up with a Microsoft ID, you can start working on a notebook within minutes.

What are Jupyter notebooks?

Jupyter notebooks provide a seamless way to combine rich output, code and graphics. They allow you to embed YouTube videos, charts and more into one interactive environment, meaning that your flow of thought is not interrupted; you can keep working and stay focussed. It’s perfect for avoiding those clunky workspaces that all student programmers know too well.

At its core, a Jupyter notebook is simply a JSON document where each segment of the document is stored in a ‘cell’. At its highest level, it can be thought of as a dictionary with a few keys. You can think of it as a mixture of an editor and a command window that can also present your project in its entirety.
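As a concrete sketch of that structure, here is a minimal notebook built as a plain Python dictionary. The top-level keys follow the nbformat 4 schema, but the kernelspec and the sample cells are purely illustrative:

```python
import json

# A minimal Jupyter notebook: a dictionary with a handful of top-level keys,
# whose body is just a list of "cell" dictionaries.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 2,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# My analysis"]},
        {"cell_type": "code", "metadata": {}, "execution_count": 1,
         "source": ["print(1 + 1)"], "outputs": []},
    ],
}

# Serialising this dictionary is all it takes to produce a valid .ipynb body.
ipynb_body = json.dumps(notebook, indent=1)
```

Save ipynb_body to a file with an .ipynb extension and any Jupyter front end can open it.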

Signing up to and using the service

Sign up for a free account at https://notebooks.azure.com.

Once logged in, select Libraries from the upper toolbar; you are then directed to a page where you can create new libraries and view your existing ones. Libraries are a way of grouping and categorising your notebooks, and they can be set as Private or Public. Libraries can be shared with others, and you can also share individual notebooks. This makes the service great for group projects, which often involve sharing and co-authoring code.


Once you are inside a library and have started up a notebook, it will be displayed under the Running tab. Note that simply exiting a notebook does not terminate its kernel. To terminate a kernel, click Shutdown on the notebook you want to close.


Terminating a kernel

Getting started

There are a few features that you need to know to get started with the Jupyter notebooks.

The Cells

Each notebook consists of sections that are called cells.


An example of some cells

A cell can be thought of as a multiline text input field that can perform a number of operations. Cell types are not fixed and you can easily switch between the different types using the drop down menu whilst a cell is selected.

There are a number of handy shortcuts:

Command        Action
Shift+Enter    Run the current cell
Alt+Enter      Run the current cell and insert a new one below
Ctrl+Enter     Run the current cell and enter command mode
Double Click   Edit a cell

Code Cells

Code cells are the default cell type. They can be thought of as nuggets of code that are sent to the kernel when the cell is executed. There is also full syntax highlighting and tab completion within the Jupyter notebooks. The results that are returned are displayed as the cell’s output like so:


Code Cell example

Text is but one of the outputs included in IPython’s rich display capability. Figures generated using the Matplotlib library and HTML tables are also included, to name a few.

Plotting graphs

One of the most useful features of the Jupyter notebooks is that Notebook documents contain both the inputs and outputs of an interactive session. This is useful for making programming tutorials and for use in lectures and/or seminars.

Markdown Cells

Markdown cells output body text, headings and several other text options including LaTeX. Here are a few examples of the different text options:





One advantage of markdown is that you can avoid large chunks of commented out code, and instead simply switch the section between markdown or code using the dropdown menu.

Raw NBConvert Cells

Raw NBConvert cells can hold any additional media that you want to use in your notebook. You can embed photos, videos and more. These cells help enhance your notebook and make it more interesting, as you can replace copious paragraphs of text with images that embed neatly inside. No formatting or copy-pasting is required.

Menubar and Toolbar

The menu bar presents different options that may be used to manipulate the way the notebook functions. You have the option to interrupt, restart or change kernels; there is also the option to upload to/download from the Dropbox service.

The tool bar gives a shortcut method of performing the most-used operations within the notebook, by simply clicking on an icon.


Menubar and Toolbar

Presenting your Notebook

One of the best features of the Jupyter notebooks is the slide mode. It allows you to create impressive group presentations with live code executions and inline formulae. To enter slide mode, click the icon.


Slide Mode

Main Features

· Comments can be clearly expressed using markdown cells instead of being cluttered up in your code. This is great for annotating your work throughout and allows you to provide full, coherent explanations alongside your code.

· There are plenty of tutorials created using the Jupyter notebook available online that allow you to develop new skills, such as Machine Learning and Data Science.

· Several packages come pre-installed within Microsoft Azure Notebooks, which saves you the trouble of locating said packages and installing them yourself. You can also install additional packages using pip install or conda install.

· It’s great for debugging as you are more likely to know exactly where the errors in your code are if you can run it in sections. In collaborative projects you often run the risk of writing conflicting code with your team members. By using the notebook, you can pinpoint your bugs and solve them immediately rather than waiting for a risky compile.

· You can export your notebook in several formats, such as LaTeX, HTML or PDF files. From there, the notebooks can be used in online blogs and you can even embed them into live websites.

· Each notebook is associated with a kernel; you can run multiple kernels simultaneously by starting multiple notebooks.

Meltdown and Spectre

Meltdown and Spectre exploit critical vulnerabilities in modern processors. These hardware vulnerabilities allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown and Spectre to get hold of secrets stored in the memory of other running programs. This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages and even business-critical documents.

Meltdown and Spectre work on personal computers, mobile devices, and in the cloud. Depending on the cloud provider’s infrastructure, it might be possible to steal data from other customers.


Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system.

If your computer has a vulnerable processor and runs an unpatched operating system, it is not safe to work with sensitive information without the chance of leaking the information. This applies both to personal computers as well as cloud infrastructure. Luckily, there are software patches against Meltdown.


Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre.

Spectre is harder to exploit than Meltdown, but it is also harder to mitigate. However, it is possible to prevent specific known exploits based on Spectre through software patches.

Out of Order / Speculative Execution

Modern CPUs perform speculative, out-of-order execution whenever they see a branch (if/switch, etc.). They will typically execute code for multiple branches while the conditional is being evaluated. So

if (a+b*c == d) {
  // first branch
} else {
  // second branch
}
will involve both branches running speculatively while the condition is evaluated. Once the CPU has the answer (say “true”), it scraps the work from the second branch and commits the first branch. The instructions that are executed out-of-order are called “transient instructions” until they are committed.

The Bug

The code in both the branches can do a lot of things. The assumption is that all of these things will be rolled back once a branch is picked. The attack is possible because cache-state is something that does not seem to be rolled back. This is the crux behind both Meltdown and Spectre attacks.
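That crux can be sketched as a toy model in Python (pure bookkeeping, not real CPU behaviour; the register names and addresses are invented for illustration). Architectural state, modelled as a register dictionary, is rolled back after mis-speculation, but the set of cached addresses is not:

```python
# Toy model: registers are checkpointed and restored on rollback,
# but the cache (a set of touched addresses) keeps whatever was accessed.
def misspeculate(registers, cache, memory, secret_addr):
    snapshot = dict(registers)             # checkpoint architectural state
    registers["c"] = memory[secret_addr]   # transient load of the secret...
    cache.add(secret_addr)                 # ...which also fills a cache line
    registers.clear()
    registers.update(snapshot)             # rollback restores registers only
    return registers, cache

memory = {0x1000: 42}                      # pretend address 0x1000 holds a secret
regs, cache = misspeculate({"c": 0}, set(), memory, 0x1000)
# regs look untouched after rollback, yet the cache still "remembers" the access
```

The register ends up clean, but the cache retains a footprint of the transient load — exactly the state the attacks measure.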

Meltdown specifically works because “any-random-memory-access” seems to work while in a transient instruction. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system.

CPU Cache?

Reading data from RAM is slow when you are a CPU. CPU cache times are in the order of 1-10ns, while RAM access takes >100ns. Almost any memory read/write is placed in the cache: The cache is a mirror image of memory activity on the computer.

Cache Timing?

Let us say I have this piece of code:

$secrets = ["secret1", "secret2", "secret3", "secret4", "realSecret"];
$realSecret = $secrets[4];

This loads the real secret in memory. An attacker then does the following:

  1. Clear the CPU cache
  2. Run the above program
  3. Try to access the specific memory address

The above access results in an error, and raises an exception. However, the attacker knows that the secret is in one of the 5 possible locations. Since only one of these is ever read by the actual program, it can repeatedly run the program and time the exception to figure out which one of the locations was being read. The one which is being read is cached, and the exception will be raised much faster as a result.
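That timing step can be sketched deterministically in Python. This is a simulation, not a real hardware measurement: the cache is modelled as a set, and the 1/100 “nanosecond” costs just echo the cache-vs-RAM latencies mentioned above. The candidate addresses are invented for illustration:

```python
# Simulated cache-timing probe: an access is fast if the address is cached,
# slow otherwise (roughly mirroring ~1-10ns cache hits vs >100ns RAM reads).
def access_time(addr, cache):
    return 1 if addr in cache else 100

candidate_addrs = [0x10, 0x20, 0x30, 0x40, 0x50]  # the 5 possible locations

cache = set()               # 1. the attacker clears the CPU cache
cache.add(0x50)             # 2. the victim program reads the real secret,
                            #    pulling its address into the cache
# 3. time every candidate; the cached one stands out as fast
timings = {addr: access_time(addr, cache) for addr in candidate_addrs}
leaked_addr = min(timings, key=timings.get)
```

The attacker never reads the secret directly; the access-time difference alone reveals which location the victim touched.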

Cache Timing attacks are the building blocks of Meltdown, which uses them as a side channel to leak data.

Now that we’ve explained cache-timing attacks (which can tell you “what-memory” is being used by another program), we can get back to Meltdown. Meltdown happens because:

  • CPUs do not rollback CPU-cache after speculative execution, and
  • You can manipulate the cache in those transient instructions to create a “side-channel” and
  • Intel CPUs allow you to read memory from other processes while in a transient instruction.

Meltdown consists of 3 steps:

Step 1. The content of an attacker-chosen memory location, which is inaccessible to the attacker, is loaded into a register.

Step 2. A transient instruction accesses a cache line based on the secret content of the register.

Step 3. The attacker uses Flush+Reload to determine the accessed cache line and hence the secret stored at the chosen memory location.

In code:

c = *kernel_memory_address;
b = probe[c];

There are several caveats:

Exception Suppression

If you try to actually read kernel-space memory directly, your program will crash. Meltdown works around this by making sure that the memory is only read in transient instructions that will be rolled back.

So you wrap the above code with:

if (check_function()) {
  c = *kernel_memory_address;
  b = probe[c];
}

And make sure that check_function always returns false. What happens is that the CPU starts speculatively running the code inside the if-block before it has the result from the check.

Cache Lines

The CPU cache is broken down into several cache lines. Think of them as lookup hashes for your CPU cache. Instead of accessing a single byte (probe[c]), Meltdown multiplies the memory address by 4096 to make sure that the code accesses a specific cache line. So more like:

b = probe[c * 4096];
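To see why such a large stride is needed, here is a toy Python model (a 64-byte cache line is assumed; the numbers are illustrative). With a stride of 1, dozens of byte values fall into the same cache line and the leak is ambiguous; with a 4096-byte stride, every possible byte value touches its own line:

```python
LINE_SIZE = 64  # bytes per cache line (typical for x86)

def candidates_for(secret_byte, stride):
    """Return every byte value whose probe access would touch the same
    cache line as probe[secret_byte * stride]."""
    line = (secret_byte * stride) // LINE_SIZE
    return [b for b in range(256) if (b * stride) // LINE_SIZE == line]

ambiguous = candidates_for(42, 1)     # stride 1: values 0..63 all share line 0
unique = candidates_for(42, 4096)     # stride 4096: only 42 touches its line
```

With the page-sized stride, spotting which cache line became hot identifies the leaked byte exactly.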

If you’re wondering why we do a read instead of just printing c, or copying it somewhere else: CPU designers considered that and roll those instructions back correctly, so writes cannot be used to exfiltrate data from a transient instruction.


Sometimes, the exception is raised before the transient code executes, and the value of c is set to 0 as part of the rollback. This makes the attack unreliable, so the attack ignores zero-value reads and only primes the cache when it reads a non-zero value. Thus the whole code becomes:

if (check_function()) {
retry:
  c = *kernel_memory_address;
  if (c == 0)
    goto retry;
  b = probe[c * 4096];
}

The corresponding assembly code (from the paper) is:

; rcx = kernel address
; rbx = probe array
mov al, byte [rcx] ; try to read rcx
shl rax, 0xc ; multiply the read value with 4096 by shifting left 12(0xc) bits
jz retry ; retry if the above is zero
mov rbx, qword [rbx + rax] ; read specific entry in rbx

The special case where c actually is zero is handled in the cache-timing step, where we notice that no memory address has been cached and decide the value was a zero.

Windows Update


Microsoft have determined that your computer must have antivirus compatible with their new Spectre/Meltdown security patch. If you do not have the correct antivirus installed (and this includes having no antivirus) Windows Update will cease to function as of this month.

That’s right: if you do not have the registry setting below, you will not be getting any Windows updates.

Windows 10, Windows 8.1, Windows Server 2012 R2 and Windows Server 2016

Microsoft recommends all customers protect their devices by running a compatible and supported antivirus program. Customers can take advantage of built-in antivirus protection, Windows Defender Antivirus, for Windows 8.1 and Windows 10 devices or a compatible third-party antivirus application. The antivirus software must set a registry key as described below in order to receive the January 2018 security updates.

Windows 7 SP1 and Windows Server 2008 R2 SP1 Customers

In a default installation of Windows 7 SP1 or Windows Server 2008 R2 SP1, no antivirus application is installed. In these situations, Microsoft recommends installing a compatible and supported antivirus application such as Microsoft Security Essentials or a third-party antivirus application. The antivirus software must set a registry key as described below in order to receive the January 2018 security updates.

Customers without Antivirus

In cases where customers can’t install or run antivirus software, Microsoft recommends manually setting the registry key as described below in order to receive the January 2018 security updates.

Setting the Registry Key

Customers will not receive the January 2018 security updates (or any subsequent security updates) and will not be protected from security vulnerabilities unless their antivirus software vendor sets the following registry key:

Key="HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
Value="cadca5fe-87d3-4b96-b7fb-a231484277cc" Type="REG_DWORD" Data="0x00000000"

Local Mirrors for Ubuntu, CentOS, Raspbian, Debian and PuTTY

We know that our researchers build upon many free software projects, as we do ourselves. And so, like many institutions, we choose to help with the distribution of this software by providing a ‘mirror’ of the software repositories for as many distributions & projects as we possibly can. Such a service is provided free of charge to our students, and the general public too.


You can access files from our new Mirror Service.

Maple Licence has been renewed for another year

The software licence keys for Maple 2015/16/17 have been released. If you use any version of Maple, please contact us for the new keys.


New features in Maple 2017:

Extend Maple’s power with user-created packages
The MapleCloud now gives you instant, seamless access to a rich collection of user packages that extends Maple’s abilities, and even notifies you when updates are available.

Construct even complicated plots easily
The Plot Builder in Maple 2017 has a new design that makes it even easier to create and customize a wide variety of plots, simply and without knowing a single plot command.

Solve more problems
With Maple 2017, you can find exact solutions to more PDEs with boundary conditions, find new limits, solve more integrals, perform new graph theory computations, calculate more group properties, work with new hypergeometric functions, and much more.

Protect your work
Now you can password protect worksheets while still allowing access to the procedures they contain, so you can share your work without sharing your IP.

Expand your worldview
New map visualization tools and a geographical database let you explore and understand world data in a highly visual way.

State your assumptions
You can give Maple even more information about your problem, and Maple will take these assumptions into account in even more computations, eliminating solutions you don’t need and simplifying results appropriately.

Add a new layer of information to your plots
In Maple 2017, you can add dynamic plot annotations that will appear when you hover over specific points or curves, so you can convey even more information in your graphs.

Get a head start on engineering problem solving
The Maple Portal for Engineers, which provides a starting point for common engineering tasks, now covers many more topics, includes more examples, and provides sample applications to help you become productive quickly.

Develop your own algorithms and solutions
From performance improvements in core functions, to a more flexible debugger, to new tools that simplify package creation and distribution, Maple 2017 gives you everything you need to develop even complex algorithms and solutions on your own.

Get insight into your data
Enhanced support for statistics and data analysis includes new and improved visualizations, new data analysis tools, and expanded support for data frames throughout Maple, so you can work with and learn from your data.


Deep Learning

If you are involved in deep learning at Imperial you will know the difficulty in installing and configuring all the drivers, libraries and packages on your computer.

To promote and aid a standard deep learning environment we recommend adopting Docker and the Deepo image.

It contains the most popular deep learning frameworks: theano, tensorflow, sonnet, pytorch, keras, lasagne, mxnet, cntk, chainer, caffe, torch.

Quick Start

Step 1. Install Docker and nvidia-docker.

Step 2. Obtain the Deepo image

Get the image from Docker Hub (recommended)
docker pull ufoym/deepo


Now you can try this command:

nvidia-docker run --rm ufoym/deepo nvidia-smi

This should work and enables Deepo to use the GPU from inside a Docker container. If this does not work, search the issues section on the nvidia-docker GitHub; many solutions are already documented. To get an interactive shell to a container that will not be automatically deleted after you exit, run:

nvidia-docker run -it ufoym/deepo bash

If you want to share your data and configurations between the host (your machine or VM) and the container in which you are using Deepo, use the -v option, e.g.

nvidia-docker run -it -v /host/data:/data -v /host/config:/config ufoym/deepo bash

This will make /host/data from the host visible as /data in the container, and /host/config as /config. Such isolation reduces the chances of your containerized experiments overwriting or using wrong data.

Containers and Machine Learning

Docker Engine Utility for NVIDIA GPUs



Assuming the NVIDIA drivers and Docker® Engine are properly installed (see installation)

Ubuntu distributions

# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi

CentOS distributions

# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp
sudo rpm -i /tmp/nvidia-docker*.rpm && rm /tmp/nvidia-docker*.rpm
sudo systemctl start nvidia-docker

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi

Installing Docker on Ubuntu

To install Docker CE, you need the 64-bit version of one of these Ubuntu versions:

  • Zesty 17.04
  • Xenial 16.04 (LTS)
  • Trusty 14.04 (LTS)


  1. Update the apt package index:
    $ sudo apt-get update
  2. Install packages to allow apt to use a repository over HTTPS:
    $ sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common
  3. Add Docker’s official GPG key:
  3. Add Docker’s official GPG key:
    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

    Verify that you now have the key with the fingerprint 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88, by searching for the last 8 characters of the fingerprint.

    $ sudo apt-key fingerprint 0EBFCD88
    pub   4096R/0EBFCD88 2017-02-22
          Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
    uid                  Docker Release (CE deb) <docker@docker.com>
    sub   4096R/F273FCD8 2017-02-22
  4. Use the following command to set up the stable repository. You always need the stable repository, even if you want to install builds from the edge or test repositories as well. To add the edge or test repository, add the word edge or test (or both) after the word stable in the commands below.

    Note: The lsb_release -cs sub-command below returns the name of your Ubuntu distribution, such as xenial. Sometimes, in a distribution like Linux Mint, you might have to change $(lsb_release -cs) to your parent Ubuntu distribution. For example, if you are using Linux Mint Rafaela, you could use trusty.


    $ sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable"


    1. Update the apt package index.
      $ sudo apt-get update
    2. Install the latest version of Docker CE, or go to the next step to install a specific version. Any existing installation of Docker is replaced.
      $ sudo apt-get install docker-ce

      Configure Docker to start on boot

      Most current Linux distributions (RHEL, CentOS, Fedora, Ubuntu 16.04 and higher) use systemd to manage which services start when the system boots. Ubuntu 14.10 and below use upstart.


      $ sudo systemctl enable docker

      To disable this behavior, use disable instead.

      $ sudo systemctl disable docker

      If you need to add an HTTP Proxy, set a different directory or partition for the Docker runtime files, or make other customizations, see customize your systemd Docker daemon options.


      Docker is automatically configured to start on boot using upstart. To disable this behavior, use the following command:

      $ echo manual | sudo tee /etc/init/docker.override


      If your system uses chkconfig instead (e.g. older CentOS/RHEL), enable Docker at boot with:

      $ sudo chkconfig docker on