Working with Docker
This article gives you a quick start on how to use Docker. Docker is a bit like virtualization: instead of virtualizing an entire operating system and running it on a hypervisor on top of the host, Docker uses primitives in the Linux kernel to run one or more processes isolated from the host while still using the host OS. Much of the technology found in Docker has been around for years, but Docker’s advent presented developers and administrators with a unified, clear set of tools for using it.
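As a quick illustration of that point (a minimal sketch using the tiny busybox image): a container is just an isolated process using the host's kernel, so asking for the kernel version from inside the container reports the host's kernel.
$ docker run --rm busybox uname -r   # prints the host's kernel version; --rm removes the container when it exits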
Useful Docker commands
Version
This gives the Docker client and server versions running on your machine.
$ docker version
Client:
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:48:57 2018
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:16:44 2018
OS/Arch: linux/amd64
Experimental: false
Listing all the images on your machine
An image can be thought of as everything needed to run an application or service. For example, one of the images listed below is Redis, the popular key-value store. With Docker, one can simply run the Redis image to get a default Redis container on the host with very little overhead. When you are done, the container can be thrown away without leaving any trace on the host, keeping the host clean of spurious files.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest 4ab4c602aa5e 3 months ago 1.84kB
redis latest 1babb1dde7e1 2 months ago 94.9MB
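If an image is not present locally, docker run will pull it automatically, but you can also pull it explicitly ahead of time:
$ docker pull redis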
Creating and running containers
$ docker create <image-name-or-image-id>   # creates a container; it can be looked up with docker ps -a
$ docker start -a <id-from-create-command>   # -a attaches the container's output (if any) to your console
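Since docker create prints the new container's ID on stdout, one way to chain the two steps is with command substitution (a minimal sketch using the hello-world image):
$ docker start -a $(docker create hello-world)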
Container isolation
$ docker run -it busybox sh   # run this in a first terminal window
$ docker run -it busybox sh   # run this in a second terminal window
The commands above start two different containers, even though they look identical. Each one gets its own isolated filesystem and process space, and that is how Docker maintains container isolation.
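You can see the isolation directly (a small sketch using the two busybox shells started above): a file created in the first container does not exist in the second.
/ # touch /only-in-container-1        # terminal 1: create a file in the first container
/ # ls /only-in-container-1           # terminal 2: look for the same file in the second container
ls: /only-in-container-1: No such file or directory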
Listing all running containers
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e42647bf2a4e redis "docker-entrypoint.s…" 7 weeks ago Up 4 seconds 0.0.0.0:6379->6379/tcp myredis
Listing all containers (including stopped ones)
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e42647bf2a4e redis "docker-entrypoint.s…" 7 weeks ago Up 2 minutes 0.0.0.0:6379->6379/tcp myredis
Listing the IDs of all containers (including stopped ones)
$ docker ps -aq
91ac19ce0181
fde317327ed7
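If you want only the stopped containers rather than all of them, filter by status:
$ docker ps -aq --filter "status=exited"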
Stopping/Killing a container
$ docker stop <container-id>   # shuts down gracefully (sends SIGTERM, then SIGKILL after a grace period)
$ docker kill <container-id>   # stops immediately (sends SIGKILL)
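docker stop waits up to 10 seconds by default before giving up and killing the process; the -t flag changes that grace period:
$ docker stop -t 30 <container-id>   # wait up to 30 seconds before killing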
Printing a stopped container's logs
$ docker logs <container-id-from-create>
e.g.
$ docker logs 91ac19ce0181
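For a running container you can stream the logs instead of printing them once:
$ docker logs -f <container-id>   # -f follows the output, like tail -f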
Removing an unused container
Syntax: docker rm <container-id>
$ docker rm 91ac19ce0181
Removing all stopped containers
$ docker rm $(docker ps -aq)
Removing all images
$ docker rmi $(docker images -aq)
Delete all containers and images
docker system prune removes stopped containers, unused networks, dangling images, and the build cache. Running it in conjunction with removing one or more images, even all of them, is a good way to prevent space-constrained systems from running out of disk space.
$ docker system prune
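By default prune only removes dangling images; adding -a also removes any image not used by an existing container (Docker asks for confirmation first):
$ docker system prune -a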
Creating and running Docker containers from a Docker image
$ docker run <image-id-or-image-name>
e.g.
$ docker run hello-world
Overriding a default startup command for an image
$ docker run <image> <command>
$ docker run busybox echo hi there
$ docker run busybox ls
Multi-command containers
$ docker run --name myredis redis   # runs Redis in the foreground; open another terminal for the exec commands below
$ docker exec -it <container-id-or-name> <another-command>   # exec runs a command interactively (-it) inside an already running container
e.g.
$ docker exec -it myredis redis-cli
or
$ docker exec -it myredis bash   # gives you a terminal inside the container to run commands directly
or
$ docker exec -it <container-id-or-name> sh   # sh is a basic command processor (shell) available in most images
Starting a container with a shell
$ docker run -it <image-name> sh   # or bash, if the image includes it
Working with Dockerfile
In this section we will cover how to create your own custom Docker image using a Dockerfile. The steps involved are:
1. Create a Dockerfile
2. Pass it to the Docker client
3. The client sends it to the Docker server
4. The final output is a usable Docker image
A Dockerfile is a file in the root directory of your project; the name is literally “Dockerfile”, with no extension. A Dockerfile template contains the following:
# Specify a base image
# Run commands to install additional dependencies
# Specify a startup command for when a container is created
e.g. we will create a Docker image from a Dockerfile. Your Dockerfile should look like this:
# choose the base image
FROM alpine
# download and install a dependency (or execute other commands)
RUN apk add --update redis
# tell the image what to do when it starts as a container
CMD ["redis-server"]
Then, on your terminal, run the following command inside the directory where your Dockerfile is present.
# searches for a Dockerfile in the current directory (the build context) and builds an image from it
$ docker build .
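Near the end of the build output Docker prints the ID of the new image (the exact wording varies by Docker version); you can run that ID directly:
$ docker run <image-id-from-build-output>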
If you want to build the image from a file named something other than Dockerfile, use the -f flag (the trailing . is still the build context):
$ docker build -f <file-name> .
e.g.
$ docker build -f Dockerfile.dev .
Dockerfile instruction to make a specific folder the working directory:
# /app becomes the working directory inside the image; it is created if it does not already exist
WORKDIR /app
or
WORKDIR ./gm
Dockerfile instruction to copy your local files into the container:
COPY <local-path> <inside-container-path>
# copy all the content of the build context into the /app folder in the image
COPY . /app
or
# the first ./ is the build context (the directory where your Dockerfile lives); the second ./ is the current WORKDIR inside the image
COPY ./ ./
Running a container and routing requests from a port on the local machine to a specific port inside the container
$ docker run -p <host-port>:<container-port> <image-name>
e.g.
$ docker run -p 6379:6379 redis
Exposing a specific port of the container to the outside world from the Dockerfile
EXPOSE <port-number>
e.g.
EXPOSE 5000
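Note that EXPOSE on its own does not publish the port; it only documents which port the container listens on. You still publish with -p, or let Docker map every exposed port to a random host port with -P:
$ docker run -P <image-name>   # publishes all EXPOSEd ports to random host ports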
Sample Dockerfile with all the instructions
FROM node:alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm","start"]
Multi-stage build process
A multi-stage build helps you create lightweight, efficient, customized images.
e.g. create a Docker image that uses a web server to serve the built static files in production containers:
# temporary build stage named "builder"
FROM node:alpine as builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
# the build output is written to /app/build, the usual convention for node build tooling
RUN npm run build
# the final image starts from nginx
FROM nginx
# copy the build output from the builder stage into nginx's default content directory
COPY --from=builder /app/build /usr/share/nginx/html
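A usage sketch for the multi-stage Dockerfile above (myapp-prod is just a placeholder tag; nginx listens on port 80 inside the container):
$ docker build -t myapp-prod .
$ docker run -p 8080:80 myapp-prod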
Tagging a Docker image
Tagging is a good way to version your images.
$ docker build -t [<docker-id>/]<project-name>:<version-or-tag> .
e.g.
$ docker build -t redis:latest .
$ docker build -t redis:1.0 .
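To publish a tagged image to Docker Hub you prefix it with your Docker ID and push it (a sketch; run docker login first, and replace <docker-id> with your own Hub username):
$ docker build -t <docker-id>/redis:1.0 .
$ docker push <docker-id>/redis:1.0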
Docker volumes
When you want changes made to the local source code to be reflected inside the Docker container, you map a local directory path to a directory inside the container:
$ docker run -p 5000:5000 -v <local-path>:<container-path> <image-id>
e.g.
$ docker run -p 5000:5000 -v $(pwd):/app 91ac19ce0181
$ docker run -p 5000:5000 -v /app/node_modules -v $(pwd):/app 91ac19ce0181
Note: -v /app/node_modules (with no colon) tells Docker not to map that folder from the host; the container keeps its own node_modules instead of having it hidden by the bind mount of the current directory.
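Besides bind mounts like the ones above, Docker also supports named volumes that it manages itself; a small sketch using the official redis image, which keeps its data under /data:
$ docker volume create redis-data
$ docker run -p 6379:6379 -v redis-data:/data redis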
Now you have most of the tools you will need to be competent with Docker. Once you get a feel for running software in containers, you will likely want to learn how to run many of them at once; some technologies that build on Docker for that purpose are listed in the next steps below.
Next steps
- Docker Compose, used in conjunction with Docker Machine
- Kubernetes
- Docker Swarm