
to get the output of the passed tests. This can be done by writing in the Dockerfile, in the CMD instruction: npm test, which will run the tests.

      Image storage:

      * public and private Docker Hub (http://hub.docker.com)

      * for private and secret projects, you can create your own image repository; the image that provides it is called registry
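
      As a minimal sketch, such a private registry can be started from the official registry image (the port, container name, and image names here are illustrative):

      # start a local registry on port 5000
      docker run -d -p 5000:5000 --name registry registry:2
      # tag a local image for that registry and push it
      docker tag myimage localhost:5000/myimage
      docker push localhost:5000/myimage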

      Docker for building apps and one-off jobs

      Unlike virtual machines, whose launch involves significant human and computational costs, Docker is often used to perform one-off actions, when software needs to be launched once and it is desirable not to spend effort on installing it and then removing it. To do this, a container is launched that is mounted to the folder with our application, performs the required actions on it and, after they are completed, is deleted. An example is a JavaScript project for which you need to run a build and tests. At the same time, the project itself does not use NodeJS, but contains only the builder configs, for example for WebPack, and the written tests. To do this, we start the build container in interactive mode, in which you can control the build process if necessary, and after the build is completed the container will stop and delete itself; for example, you can run something like this at the root of the application: docker run -it --rm -v $(pwd):/app node-build. Tests can be run in a similar way. As a result, the application is built and tested on a test server, but the software that is not required for its operation on the production server is not brought along and does not consume resources there, and the result can be transferred to the production server, for example, in a container. In order not to write documentation on starting the build and testing, you can put down two corresponding configs, docker-compose-build.yml and docker-compose-test.yml, and call docker-compose -f ./docker-compose-build.yml up.
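
      As a sketch, the same one-off build and test can be done with the official node image instead of a custom node-build image (the npm script names are assumptions about the project):

      # build the project mounted from the current directory, then remove the container
      docker run -it --rm -v $(pwd):/app -w /app node npm run build
      # run the tests the same way
      docker run -it --rm -v $(pwd):/app -w /app node npm test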

      Management and access

      We manage containers using the docker command. Now, let's say there is a need to manage them remotely. Using VNC, SSH, or something similar just to run the docker command will probably be too time-consuming if the task gets complicated. That's right, first you will need to figure out what Docker is, because the docker command and the Docker program are not the same thing; rather, the docker command is a console client for managing the client-server application Docker Engine. The command interacts with the Docker Engine server through the Docker REST API, which is intended for remote interaction with the server. But in this case you need to take care of authorization and SSL encryption of the traffic. This is ensured by the creation of keys, but in general, if the task is centralized management, differentiation of rights, and security, it is better to look towards products that provide this out of the box and use Docker as a container runtime, not as a management system.
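
      For reference, a TLS-protected daemon is started with flags like these (a sketch; the certificate files must be generated beforehand, and the file names here are illustrative):

      dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2376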

      By default, for security reasons, the client communicates with the server over a Unix socket (the special file /var/run/docker.sock), not over a network socket. To work through the Unix socket, you can tell the curl program to use it: curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json, but this feature has been supported only since curl 7.40, which is not available on CentOS. To solve this problem, and to communicate between remote servers, we use a network socket. To activate it, stop the server with systemctl stop docker and start it with the settings dockerd -H tcp://0.0.0.0:9002 & (listen to everyone on port 9002, which is not permissible for production). After that, the plain docker ps command will not work, but docker -H 5.23.52.111:9002 ps or docker -H tcp://geocode1.essch.ru:9002 ps will. By default, Docker uses port 2375 for HTTP and 2376 for HTTPS. In order not to change the code everywhere and not to specify the socket every time, we will write it in an environment variable:

      export DOCKER_HOST=tcp://5.23.52.111:9002
      docker ps
      docker info
      unset DOCKER_HOST

      It is important to use export to make the variable available to all child processes. Also, there must be no spaces around =. Now we can access the dockerd server from anywhere on the same network. To manage remote and local Docker Engines (Docker on hosts), the Docker Machine program has been developed. Interaction with it is carried out using the docker-machine command. First, let's install it:

      base=https://github.com/docker/machine/releases/download/v0.14.0 &&
      curl -L $base/docker-machine-$(uname -s)-$(uname -m) > /usr/local/bin/docker-machine &&
      chmod +x /usr/local/bin/docker-machine
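
      Once installed, a typical workflow is to create a machine and point the local client at it (a sketch; the virtualbox driver and the machine name dev are assumptions):

      # create a new Docker host in a local VirtualBox VM
      docker-machine create --driver virtualbox dev
      # point the docker client at that machine via environment variables
      eval $(docker-machine env dev)
      docker ps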

      Group of related applications

      We already have several different applications, let's say NGINX, MySQL, and our own application. We have isolated them in different containers, so now they do not conflict with each other, and for NGINX and MySQL we did not waste time and effort on making our own configuration, but simply downloaded them: docker run mysql, docker run nginx, and for the application docker build -t myapp .; docker run -p 80:80 myapp. As you can see, it would seem that everything is very simple, but there are two points: control and interconnection.
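
      Spelled out in full, these launches might look like this (a sketch; the password, host ports, and names are illustrative):

      # the official mysql image requires a root password to start
      docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
      docker run -d --name nginx -p 8080:80 nginx
      # build our own application image and run it
      docker build -t myapp . && docker run -d --name myapp -p 80:80 myapp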

      To demonstrate control, we will take the container of our application and implement two operations: start and re-creation. For a manual start, when we know that the container has already been created and is just stopped, it is enough to execute docker start myapp, but for automatic mode this is not enough, and we need to write a script that takes into account whether the container already exists and whether there is an image for it:

      if docker ps -a | grep -q myapp
      then
          docker start myapp
      else
          if ! docker images | grep -q myimage
          then
              docker build -t myimage .
          fi
          docker run -d --name myapp -p 80:80 myimage
      fi

      … And to re-create it, you first need to delete the container, if it exists:

      if docker ps -a | grep -q myapp
      then
          docker rm -f myapp
      fi
      if ! docker images | grep -q myimage
      then
          docker build -t myimage .
      fi
      docker run -d --name myapp -p 80:80 myimage
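
      To give a sense of the scale mentioned below, a real-world docker run invocation with limits, mounts, and links can easily grow to something like this (a sketch; all values are illustrative):

      docker run -d --name myapp \
          -p 80:80 -p 443:443 \
          -v $(pwd)/config:/etc/myapp \
          -v $(pwd)/logs:/var/log/myapp \
          --memory 512m --cpus 1.5 \
          --link mysql:mysql \
          --restart unless-stopped \
          myimage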

      … It is clear that you also need to move the general parameters, the image name and the container name, out into variables, to check that the Dockerfile is there and is valid, and only after that delete the container, and much more. To understand the real scale, without going into the interaction of containers, the cloning (scaling) of these groups and the like, I will just mention that the docker run command can exceed one to two dozen lines: for example, a dozen forwarded ports, mounted folders, memory and processor limits, connections with other containers, and a few more specific parameters. Yes, this is not good, and it is difficult to split such a setup into many containers, due to the lack of a map of container interactions. But the question arises: isn't this a lot of work just to give the user the opportunity to start or rebuild the container? Often the answer of the system administrator boils down to giving access only to a select few. But even here there is a solution: docker-compose, a tool for working with groups of containers:

      # docker-compose.yml
      version: '3'
      services:
        myapp:
          container_name: myapp
          image: myimage
          ports:
            - 80:80
          build: .

      … To start, run docker-compose up -d, and to re-create, docker-compose down; docker-compose up -d. Moreover, when the configuration changes and a complete re-creation is not needed, the container will simply be updated.

      Now that we have simplified the management of a single container, let's work with a group. Here, for us, only the config itself will change:

      # docker-compose.yml
      version: '3'
      services:
        mysql:
          image: mysql
        nginx:
          image: nginx
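
      The whole group is then managed with the same commands (assuming the config is saved as docker-compose.yml in the current directory):

      # start, inspect, and stop the whole group of containers
      docker-compose up -d
      docker-compose ps
      docker-compose down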
