Dockerfile and Compose
A Dockerfile is a file which contains the previously mentioned steps for creating a specific Docker image. A Dockerfile resembles a shell script, but it is not one: it is written in a separate instruction language unique to Docker. By convention the file is named Dockerfile, and Docker looks for that name by default, but it is not wrong to name it differently; in that case we only have to pass the file name explicitly when building, because Dockerfile is the name Docker assumes.
The first instruction we have to write is the FROM instruction. It can be found in every Dockerfile and it is obligatory. It specifies which Docker image will be used as the base for building the new image.
The ENV instruction is used to set environment variables. It is very important for containers, because it is the main way to pass key-value data to a container being created or to one that is already running. One of the reasons environment variables are preferred is that they are supported on every system. The order of the instructions in a Dockerfile matters: Docker executes them top-down, starting with the first instruction and moving on to the next. In the background, Docker runs the corresponding shell commands inside the container.
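As a minimal sketch of the two instructions mentioned so far (the image tag and the variable names here are hypothetical, not from the article):

```dockerfile
# Base image for the build (hypothetical tag)
FROM debian:bookworm-slim

# Key-value environment variables, available at build time
# and inside every container started from this image
ENV APP_ENV=production \
    APP_PORT=8080
```

Because Docker processes the file top-down, FROM has to come before any instruction that relies on the base image.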
The RUN instruction runs shell commands that we need executed inside the container. This may include updating the operating system's package manager, installing applications or unpacking files. If we take a look at the Dockerfile of the nginx:mainline image, we can see that it starts in the following way:
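The snippet itself did not survive in this copy of the article; the nginx:mainline Dockerfile begins with a FROM line along these lines (the exact Debian release tag varies between nginx versions):

```dockerfile
FROM debian:bookworm-slim
```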
which means that this container runs on the Debian distribution of Linux, and the RUN instruction has access to all the shell commands supported in Debian. You need to be careful when writing RUN instructions, because every separate instruction creates a new layer in the Docker image. Sometimes we want several commands to produce only one layer, so that together they count as a single change. In that case we use two ampersand signs (&&) to chain the commands so they are called one after another. For instance:
RUN apt-get update && apt-get install -y curl
This creates only one layer, although we have actually run two commands. This technique saves build time, but also disk space, because only one layer is created.
The EXPOSE instruction is used to open a port from the container. In the nginx example, we see the following:
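The line in question is missing from this copy of the article; in the official nginx Dockerfile it is:

```dockerfile
EXPOSE 80
```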
As we have already mentioned, when we run a container none of its ports are opened, except those which the user publishes on the command line with the -p option. EXPOSE only opens the specified port within Docker's virtual network.
Also, one of the most important instructions is CMD. The CMD command is executed once we start the container, or when we restart it. Naturally, these are only some of the possible instructions, but they are the most frequently used ones. In order to run a Dockerfile and build an image from it, we use the build command:
docker image build -f dockerfile .
This command will find the file on the path assigned with the -f option. Docker goes through all the instructions from the Dockerfile, executes them and saves each resulting layer in the local cache under a unique identifier, so that the next time we run this Dockerfile, Docker does not have to execute the instructions it already has in the local cache, which speeds up the build. If we change one instruction in the Dockerfile, Docker recognises that only that instruction has changed. This means that all the instructions before the changed one are resolved very quickly by reading from the local cache, while the changed instruction, as well as all the instructions after it, are run again. While processing the Dockerfile, Docker prints for each instruction whether it was loaded from the local cache or run from scratch. It is therefore recommended to place things that rarely change at the beginning of the Dockerfile, and things that change more frequently later on, to speed up building the images and running the containers.
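As an illustration of that ordering rule (the file names and packages here are hypothetical), dependencies that rarely change go first, and frequently edited application code goes last:

```dockerfile
FROM debian:bookworm-slim

# Rarely changes: this layer is served from the cache
# on most rebuilds
RUN apt-get update && apt-get install -y curl

# Changes often: only the layers from here down are
# rebuilt when the application code changes
COPY app/ /opt/app/
CMD ["/opt/app/start.sh"]
```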
Compose is a tool for defining and starting multiple containers at once. With Compose, you use a YAML file for configuration, and with one command you can create and run all the services listed in that configuration.
When it comes to containers, they mostly represent a single process, and it rarely happens that one container is enough to run all the necessary applications and services. We will probably have one container serving as a proxy, another for the frontend application, then one for the backend, one for the database, etc. All of these containers need to be interconnected so that they can communicate with each other. With Compose we use a docker-compose.yml file in which we list all the necessary containers together with their configuration, so that we do not have to remember the docker run options for each container, and we run them all with a single command from the terminal. Using Compose consists of three steps:
- Define the application's environment with a Dockerfile, so that it can be reproduced anywhere.
- Define the services that make up the application in the docker-compose.yml file.
- Run everything with the docker-compose up command.
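A sketch of such a multi-service setup (the service names and application images here are illustrative, not from the article):

```yaml
services:
  proxy:
    image: nginx:mainline
    ports:
      - "80:80"
  backend:
    image: my-backend:latest   # hypothetical application image
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```

All three services end up on the same Compose-created network, so they can reach each other by service name.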
There are several versions of this file format, that is, of the functionality it can support. In the docker-compose.yml file you can list everything the environment needs: the initial Docker image, networks, containers, volumes, ports and shell commands. The name docker-compose.yml is the default, but it is not obligatory; the user can use any other name and pass that file name with the -f option. An example of a docker-compose file for running nginx:
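The example file is missing from this copy of the article; a minimal sketch with a single service named proxy (matching the container names discussed below) could look like this:

```yaml
version: "3"
services:
  proxy:
    image: nginx:mainline
    ports:
      - "80:80"
```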
In case we name the file nginx-proxy-docker.yml, we run it with the command:
docker-compose -f nginx-proxy-docker.yml up
By running this command, Docker will also create a network for us, although it is not listed in the file, and it will name all the containers. It names the network after the directory the file is in. In my case the directory is compose-nginx, so it creates the compose-nginx_default network and puts the prefix compose-nginx_ on the containers, as well as a suffix representing the ordinal number of the container instance. In our case this is compose-nginx_proxy_1. To undo all the actions that were executed, you can call one of the following two commands:
- docker-compose stop
- docker-compose down
The difference between stop and down is that the stop command only stops all the containers, while down stops the containers and also deletes all the objects that the up command created.
Compose can build a Docker image at runtime. When up is run, it first searches Docker's local cache for the images it needs, and for services that include a build option it makes a new build. If we want to build the images again, we can use one of the following commands:
- docker-compose build - only builds the images
- docker-compose up - builds the images if necessary and runs the containers
- docker-compose up --build - forces a rebuild of the images and runs the containers
- docker-compose up --no-build - skips the build step and runs the containers, but if the images have not already been built, the execution will fail
In the docker-compose file we can also reference Dockerfile files in which all the steps for creating the images are defined. For example, if we have some special commands to execute for the nginx container and we have written them in a file named nginx.dockerfile, we can reference it in the docker-compose file in the following way:
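The referenced snippet is missing from this copy of the article; a sketch of such a service definition (the context path is illustrative) might be:

```yaml
services:
  proxy:
    build:
      context: .
      dockerfile: nginx.dockerfile
    ports:
      - "80:80"
```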
Unlike the previous example, where no Dockerfile was listed and Docker downloaded the image from a Docker repository, in this case Docker finds nginx.dockerfile on the user's computer, builds the image specified in that file and runs a container from it.
Link to the example on GitHub: Spring Boot + MySQL in a compose file.
In case you missed Decket's latest blog post on Docker networking, you can check it out here. :)