
    repository:
    keywords:
    author: ESSch
    license: (ISC)
    About to write to /home/vagrant/nodejs/package.json:

    {
      "name": "nodejs",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
      },
      "author": "ESSch",
      "license": "ISC"
    }

    Is this ok? (yes) yes

First, let's create a web server. I'll use the Express library to create it:

    vagrant@ubuntu:~/nodejs$ npm install Express --save

      npm WARN deprecated [email protected]: Package unsupported. Please use the express package (all lowercase) instead.

    nodejs@1.0.0 /home/vagrant/nodejs

      └── [email protected]

    npm WARN nodejs@1.0.0 No description
    npm WARN nodejs@1.0.0 No repository field.

    vagrant@ubuntu:~/nodejs$ cat << EOF > index.js
    const express = require('express');
    const app = express();
    app.get('/healt', function (req, res) {
        res.send({status: "Healt"});
    });
    app.listen(9999, () => {
        console.log({status: "start"});
    });
    EOF

    vagrant@ubuntu:~/nodejs$ node index.js &
    [1] 18963
    vagrant@ubuntu:~/nodejs$ {status: 'start'}
    vagrant@ubuntu:~/nodejs$ curl localhost:9999/healt
    {"status": "Healt"}

Our server is ready to work with Prometheus; now we need to configure Prometheus to collect data from it.
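As a sketch of the Prometheus side, a minimal scrape config might look like the following. The job name and scrape interval here are illustrative assumptions; note also that for Prometheus to parse the response, the application would need to expose metrics in the Prometheus text format (for example via the prom-client library) rather than plain JSON:

```yaml
# prometheus.yml -- illustrative fragment; job name and interval are assumptions
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'nodejs'                    # hypothetical job name
    static_configs:
      - targets: ['localhost:9999']       # the Express server started above
```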

The Prometheus scaling problem arises when the data no longer fits on a single server: one server cannot keep up with writing the data, or the performance of processing it on one server is no longer sufficient. Thanos solves this problem without requiring a federation setup: it provides the user with an interface and API that it fans out to the Prometheus instances. The user gets a web interface similar to that of Prometheus. Thanos itself interacts with agents that are installed on the instances as sidecars, much as Istio does. Both Thanos and its agents are available as containers and as a Helm chart. For example, an agent can be brought up as a container attached to Prometheus, and Prometheus is then updated with a config and reloaded.

    docker run --rm quay.io/thanos/thanos:v0.7.0 --help

    docker run -d --net=host --rm \
        -v $(pwd)/prometheus0_eu1.yml:/etc/prometheus/prometheus.yml \
        --name prometheus-0-sidecar-eu1 \
        -u root \
        quay.io/thanos/thanos:v0.7.0 \
        sidecar \
        --http-address 0.0.0.0:19090 \
        --grpc-address 0.0.0.0:19190 \
        --reloader.config-file /etc/prometheus/prometheus.yml \
        --prometheus.url http://127.0.0.1:9090
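The sidecars are then aggregated by the Thanos Query component, which exposes a Prometheus-like UI across all instances. A minimal sketch, assuming the sidecar above is reachable on its gRPC port 19190 (the query HTTP port 19192 and the container name are illustrative choices):

```shell
# Run the Thanos Query component, pointing it at the sidecar's gRPC endpoint.
# Port 19192 for the query UI and the container name are illustrative assumptions.
docker run -d --net=host --rm \
    --name prometheus-0-query \
    quay.io/thanos/thanos:v0.7.0 \
    query \
    --http-address 0.0.0.0:19192 \
    --store 127.0.0.1:19190
```

Additional `--store` flags can be passed to aggregate further sidecars into the same query endpoint.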

Notifications are an important part of monitoring. A notification consists of a firing trigger and a provider. A trigger is a condition, written in PromQL as a rule in Prometheus. When a trigger fires (its metric condition is met), Prometheus signals the provider to send a notification. The standard provider is Alertmanager, which can send messages to various receivers such as email and Slack.

For example, the metric "up", which takes the values 0 or 1, can be used to send a message if the server has been down for more than 1 minute. For this, a rule is written:

    groups:
    - name: example
      rules:
      - alert: InstanceDown
        expr: up == 0
        for: 1m
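Before loading it into Prometheus, the rule file can be validated from the console with promtool (the file name alert.rules.yml is an illustrative assumption):

```shell
# Check the syntax of the alerting rule file before reloading Prometheus
# (the file name is an illustrative assumption).
promtool check rules alert.rules.yml
```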

When the metric equals 0 for more than 1 minute, the trigger fires and Prometheus sends a request to Alertmanager. Alertmanager determines what to do with the event. We can specify that when an InstanceDown event is received, a message should be sent by email. To do this, Alertmanager is configured accordingly:

    global:
      smtp_smarthost: 'localhost:25'
      smtp_from: '[email protected]'
    route:
      receiver: example-email
    receivers:
    - name: example-email
      email_configs:
      - to: '[email protected]'

Alertmanager itself will use the mail transport installed on this machine, so one must be installed for it to work. Let's take the Simple Mail Transfer Protocol (SMTP), for example. To test it, we'll install a console mail server alongside Alertmanager: sendmail.
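Once sendmail is installed, delivery can be checked from the console before wiring it up to Alertmanager. A minimal sketch (the recipient address "vagrant@localhost" is an illustrative assumption):

```shell
# Install a console mail server and send a test message to a local user
# (the recipient address is an illustrative assumption).
sudo apt-get install -y sendmail
printf 'Subject: test\n\ntest message\n' | sendmail vagrant@localhost
```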

      Fast and clear analysis of system logs

The OpenSource full-text search engine Lucene is used for quick search in logs. Two lower-level products are built on top of it: Solr and Elasticsearch, which are quite similar in capabilities but differ in usability and license. Many popular bundles are built on them, for example distributions shipped with Elasticsearch: ELK (Elasticsearch (Apache Lucene), Logstash, Kibana), EFK (Elasticsearch, Fluentd, Kibana), and products such as GrayLog2. Both GrayLog2 and the bundles (ELK/EFK) are actively used because they require little configuration outside of test benches; for example, EFK can be deployed into a Kubernetes cluster with practically one command:

    helm install efk-stack stable/elastic-stack --set logstash.enabled=false --set fluentd.enabled=true --set fluentd-elastics

An alternative that has not yet received much attention are systems built on the previously considered Prometheus, for example PLG (Promtail (agent), Loki (a log store inspired by Prometheus), Grafana).
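As a sketch, the PLG stack can be brought up in a similarly compact way with the Grafana Helm repository and the loki-stack chart (the release name "plg-stack" is an illustrative assumption):

```shell
# Add the Grafana chart repository and install the Loki stack with
# Promtail and Grafana enabled (release name is an illustrative assumption).
helm repo add grafana https://grafana.github.io/helm-charts
helm install plg-stack grafana/loki-stack \
    --set promtail.enabled=true \
    --set grafana.enabled=true
```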

Comparison of Elasticsearch and Solr (the systems are comparable):

Elastic:

** Commercial with open source and the ability to commit (via approval);

** Supports more complex queries, more analytics, out-of-the-box support for distributed queries, a more complete RESTful JSON-based API, chaining, machine learning, SQL (paid);

*** Full-text search;

*** Real-time index;

*** Monitoring (paid);

*** Monitoring via Elastic FQ;

*** Machine learning (paid);

*** Simple indexing;

*** More data types and structures;

** Lucene engine;

** Parent-child (JOIN);

** Natively scalable;

** Documentation since 2010;

Solr:

** OpenSource;

** High speed with JOIN;

*** Full-text search;

*** Real-time index;

*** Monitoring in the admin panel;

*** Machine learning through modules;

*** Input data: Word, PDF and others;

*** Requires a schema for indexing;

