

Docker Basics

Refer to the Docker documentation and use docker --help for more details. Here’s a great Docker tutorial to get started.


Docker Image Operations

  • Download:
    docker pull [OPTIONS] NAME[:TAG|@DIGEST]
    
  • Commit Changes:
    docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
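
For example, a typical pull-and-commit workflow might look like this (the image and names below are just placeholders):

docker pull ubuntu:18.04
docker commit my_container myrepo/myimage:v1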
    

Checking Docker Status

  • List Running Containers:
    docker ps
    
  • List Images:
    docker images
    
  • Inspect a Container/Image:
    docker inspect [OPTIONS] NAME|ID [NAME|ID...]
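
For example, to list all containers (including stopped ones) and pull a single field out of docker inspect (my_container is a placeholder name):

docker ps -a
docker inspect --format '{{.NetworkSettings.IPAddress}}' my_container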
    

Other Useful Commands

  • Run a Container:
    docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
    
  • Remove an Image:
    docker rmi [OPTIONS] IMAGE [IMAGE...]
    
  • Remove a Container:
    docker rm [OPTIONS] CONTAINER [CONTAINER...]
    
  • Copy Files:
    docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
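
As a rough sketch of how these fit together (image and container names are placeholders):

docker run -d --name web -p 8080:80 nginx:latest   # start a container in the background
docker cp ./site.conf web:/etc/nginx/conf.d/       # copy a local file into it
docker rm -f web                                    # force-remove the container
docker rmi nginx:latest                             # remove the image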
    

Switching Between Interactive and Daemon Modes

Press <Ctrl> + p followed by <Ctrl> + q to detach from a container running in interactive mode; the container keeps running in the background (daemon mode). To reattach, use:

docker attach [OPTIONS] CONTAINER
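
For example (ubuntu:18.04 and the container name demo are placeholders):

docker run -it --name demo ubuntu:18.04 bash
# press <Ctrl> + p, then <Ctrl> + q to detach; the container keeps running
docker attach demo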

Running Docker Without sudo

To allow running Docker commands without sudo:

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
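
In other sessions you may need to log out and back in for the group change to take effect. A quick sanity check that Docker now works without sudo:

docker run hello-world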

Pushing Images to Docker Hub

docker login
docker tag <image_id> yourhubusername/REPOSITORY_NAME:tag
docker push yourhubusername/REPOSITORY_NAME:tag

Writing a Dockerfile

A basic Dockerfile template (refer to the Dockerfile documentation):

# Base image
FROM ubuntu:18.04
# Copy the build context into the image
COPY . /app
# Document the port the application listens on
EXPOSE 9000
# Build the application
RUN make /app
# Default command when a container starts
CMD python /app/app.py
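
Assuming this Dockerfile sits in the current directory, a typical build-and-run cycle might be (the image name myapp is a placeholder):

docker build -t myapp .
docker run -p 9000:9000 myapp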

Using docker-compose.yml

A docker-compose.yml file example (refer to Compose documentation):

version: "3.8"
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1

See this example for setting up a Django project.
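
To build and start the services defined in docker-compose.yml, something like the following should work (webapp is the service name from the example above):

docker-compose build webapp
docker-compose up -d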


Docker Proxy Configuration

  1. Set Proxy for docker pull:
    Refer to the Docker proxy documentation.

    sudo mkdir -p /etc/systemd/system/docker.service.d
    vim /etc/systemd/system/docker.service.d/http-proxy.conf
    
  2. Add the Following Configuration:
    Replace 127.0.0.1:1080 with your proxy’s address.

    [Service]
    Environment="HTTP_PROXY=socks5://127.0.0.1:1080"
    Environment="HTTPS_PROXY=socks5://127.0.0.1:1080"
    
  3. Apply Changes:
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    
  4. Verify:
    sudo systemctl show --property=Environment docker
    

Using Docker with CUDA

Refer to NVIDIA Docker for details.

Setup NVIDIA Container Toolkit

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
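
A quick way to confirm the toolkit is working is to run nvidia-smi inside a CUDA base image, for example:

docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi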

Pull and Run a CUDA Image

docker pull nvidia/cuda:10.2-base
docker run --gpus all --ipc=host --net host -it --rm \
  -v /etc/localtime:/etc/localtime:ro \
  -v /dev/shm:/dev/shm \
  -v $(pwd):/workspace \
  --user $(id -u):$(id -g) \
  nvidia/cuda:10.2-runtime-ubuntu18.04

Create a Dockerfile for CUDA

ARG DOCKER_BASE_IMAGE=nvidia/cuda:10.2-base
FROM $DOCKER_BASE_IMAGE

# Remove the stale NVIDIA apt sources shipped with the base image
# (works around expired repository keys), then install sudo
RUN rm /etc/apt/sources.list.d/cuda.list && \
    rm /etc/apt/sources.list.d/nvidia-ml.list && \
    apt-get update && apt-get install -y sudo

# Run your own setup script (packages, tools, etc.)
COPY pre-install.sh .
RUN ./pre-install.sh

# Create a non-root user; override these at build time to match your host user
ARG UID=1000
ARG GID=1000
ARG USER=docker
ARG PW=docker

RUN useradd -m ${USER} --uid=${UID} -s /bin/bash && \
    echo "${USER}:${PW}" | chpasswd && \
    adduser ${USER} sudo

# Drop privileges and start in the user's home directory
USER ${USER}
WORKDIR /home/${USER}
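
To build this image with your host UID/GID baked in (the tag cuda-dev is a placeholder), something like:

docker build \
  --build-arg UID=$(id -u) \
  --build-arg GID=$(id -g) \
  --build-arg USER=$USER \
  -t cuda-dev .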

Container Using Host Proxy

1. Configure Proxy

Add the following to ~/.docker/config.json:

{
  "proxies": {
    "default": {
      "httpProxy": "http://127.0.0.1:8118",
      "httpsProxy": "http://127.0.0.1:8118",
      "noProxy": "localhost"
    }
  }
}

Alternatively, set the proxy in the Dockerfile (e.g., via ENV) or pass it at build time; building with host networking lets the build reach a proxy listening on the host:

docker build --network host ...
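
For example, combining host networking with proxy build arguments (the proxy address and image name are placeholders):

docker build --network host \
  --build-arg http_proxy=http://127.0.0.1:8118 \
  --build-arg https_proxy=http://127.0.0.1:8118 \
  -t myimage .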

Accessing Containers via SSH

SSH from the Host Machine

  1. Ensure SSH is installed and running in the container.
  2. Find the container’s IP address:

    docker inspect <container_id> | grep "IPAddress"
    
  3. SSH to the container:

    ssh user@<container_ip_address>
    

Direct SSH to Containers on Remote Machines

Map the container’s SSH port to the host:

docker run -d -p 52022:22 --name container1 <image_with_sshd>
docker run -d -p 53022:22 --name container2 <image_with_sshd>

SSH to the container using the host’s IP and mapped port:

ssh -p 52022 user@<host_ip>

Accessing Files Inside Containers

  1. Map Directories: Use volume mapping during docker run (see the sketch after this list).
  2. Set Up a Web Server: Run a basic HTTP server in the container:

    python3 -m http.server
    
  3. Use WebDAV: Set up WebDAV for collaborative access.
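
A minimal sketch of options 1 and 2 (paths, ports, and the image are placeholders):

# 1. Map a host directory into the container at run time
docker run -it -v /path/on/host:/data ubuntu:18.04 bash

# 2. Inside the container, serve a directory over HTTP (port 8000 by default)
python3 -m http.server 8000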

WebDAV Example

  1. Install WebDAV:

    pip install wsgidav cheroot
    
  2. Create a wsgidav.yaml configuration file (a minimal example appears after this list).

  3. Run WebDAV:

    wsgidav --config=wsgidav.yaml --host=0.0.0.0 --port=8000 --root ./share
    
  4. Set up an SSH tunnel:

    ssh -f -N -L 9980:0.0.0.0:8000 -p 12345 user@<jumper_ip>
    
  5. Access the container’s files via WebDAV (dav://localhost:9980/).
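
A minimal wsgidav.yaml might look roughly like this (anonymous, read-write access to ./share; treat it as a starting point and consult the WsgiDAV documentation for the full schema):

host: 0.0.0.0
port: 8000
provider_mapping:
  "/": "./share"
http_authenticator:
  accept_basic: true
  accept_digest: true
simple_dc:
  user_mapping:
    "*": true   # allow anonymous access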

Enjoy seamless file management directly from your file explorer!

