
Let's delete everything (it took me 10 minutes 23 seconds):

      esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform destroy -var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlin1ViR9+Mo

      Destroy complete! Resources: 7 destroyed.

      Establishing the CI/CD process

      Amazon provides (aws.amazon.com/ru/devops/) a wide range of DevOps tools designed for cloud infrastructure:

      * AWS CodePipeline – a service that lets you build, in a visual editor, a chain of stages from a set of services that the code must pass through before it reaches production, for example build and test stages.

      * AWS CodeBuild – a service that provides an auto-scaling build queue. This can be needed for compiled programming languages, where adding features or making changes requires a lengthy recompilation of the entire application; a single build server would become a bottleneck when rolling out changes.

      * AWS CodeDeploy – automates deployment and rollback in case of errors.

      * AWS CodeStar – a service that combines the main features of the previous services.

      Setting up remote control

      Artifact server

      aws s3 ls s3://name_backet
      aws s3 sync s3://name_backet name_folder --exclude *.tmp # files from the bucket will be downloaded to the folder, for example, a website
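Since the book manages infrastructure with Terraform, the bucket used above could itself be declared as a resource. A minimal sketch (the resource label "artifacts" is hypothetical; the bucket name is the placeholder from the listing above):

```hcl
# Hypothetical sketch: declare the artifact bucket in Terraform
resource "aws_s3_bucket" "artifacts" {
  bucket = "name_backet" # placeholder name from the listing above
  acl    = "private"     # keep build artifacts non-public
}
```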

      Now, we need to download the AWS plugin:

      esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform init | grep success

      Terraform has been successfully initialized!

      Now we need to get access to AWS, for that we click on the name of your user in the header of the WEB interface, in addition to My account , the My Security Credentials item will appear , by selecting which, we go to Access Key -> Create New Access Key . Let's create EKS (Elastic Kuberntes Service):

      esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform apply -var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlinAlR9+MoU1ViY7"
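The token and key passed with -var above have to be declared as input variables and wired into the provider. A minimal sketch, using the variable names from the command above (the region value is an assumption):

```hcl
# Variables matching the -var flags above; values are supplied on the command line
variable "token" {} # AWS access key id
variable "key" {}   # AWS secret access key

provider "aws" {
  access_key = "${var.token}"
  secret_key = "${var.key}"
  region     = "us-east-1" # assumed region
}
```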

      Delete everything:

      $ ../terraform destroy

      Creating a cluster in GCP

      node pool – a group of nodes in the cluster that share the same configuration

      resource "google_container_cluster" "primary" {

      name = "tf"

      location = "us-central1"

      }
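Since the text mentions node pools, one can also be managed as a separate resource attached to the cluster above. A sketch with assumed values (pool name and node count are hypothetical):

```hcl
# Hypothetical sketch: a separately managed node pool for the cluster above
resource "google_container_node_pool" "primary_nodes" {
  name       = "tf-pool" # assumed name
  location   = "us-central1"
  cluster    = "${google_container_cluster.primary.name}"
  node_count = 3         # assumed size
}
```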

      $ cat main.tf # configuration state

      terraform {

      required_version = "> 0.10.0"

      }

      terraform {

      backend "s3" {

      bucket = "foo-terraform"

      key = "bucket/terraform.tfstate"

      region = "us-east-1"

      encrypt = "true"

      }

      }
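When several people share this remote state, the s3 backend can additionally lock the state through a DynamoDB table so that concurrent runs do not corrupt it. A sketch of the same block with locking added (the table name is an assumption):

```hcl
terraform {
  backend "s3" {
    bucket         = "foo-terraform"
    key            = "bucket/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = "true"
    dynamodb_table = "terraform-lock" # assumed table used for state locking
  }
}
```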

      $ cat cloud.tf # cloud configuration

      provider "hcloud" {

      token = "${var.hcloud_token}"

      }

      $ cat variables.tf # variables and getting tokens

      variable "hcloud_token" {}

      $ cat instances.tf # create resources

      resource "hcloud_server" "server" {....
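The resource body is truncated above; a fuller sketch of what it typically contains, with hypothetical values:

```hcl
# Hypothetical completion of the truncated resource above
resource "hcloud_server" "server" {
  name        = "node1"        # assumed server name
  image       = "ubuntu-18.04" # assumed OS image
  server_type = "cx11"         # assumed (smallest) instance type
}
```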

      $ terraform import aws_acm_certificate.cert arn:aws:acm:eu-central-1:123456789012:certificate/7e7a28d2-163f-4b8f-b9cd-822f96c08d6a

      $ terraform init # Initialize configs

      $ terraform plan # Check actions

      $ terraform apply # Running actions

      Debugging:

      essh@kubernetes-master:~/graylog$ sudo docker run --name graylog --link graylog_mongo:mongo --link graylog_elasticsearch:elasticsearch \

      -p 9000:9000 -p 12201:12201 -p 1514:1514 \

      -e GRAYLOG_HTTP_EXTERNAL_URI="http://127.0.0.1:9000/" \

      -d graylog/graylog:3.0

      0f21f39192440d9a8be96890f624c1d409883f2e350ead58a5c4ce0e91e54c9d

      docker: Error response from daemon: driver failed programming external connectivity on endpoint graylog (714a6083b878e2737bd4d4577d1157504e261c03cb503b6394cb844466fb4781): Bind for 0.0.0.0:9000 failed: port is already allocated.

      essh@kubernetes-master:~/graylog$ sudo netstat -nlp | grep 9000

      tcp6 0 0 :::9000 :::* LISTEN 2505/docker-proxy

      essh@kubernetes-master:~/graylog$ docker rm graylog

      graylog

      essh@kubernetes-master:~/graylog$ sudo docker run --name graylog --link graylog_mongo:mongo --link graylog_elasticsearch:elasticsearch \

      -p 9001:9000 -p 12201:12201 -p 1514:1514 \

      -e GRAYLOG_HTTP_EXTERNAL_URI="http://127.0.0.1:9001/" \

      -d graylog/graylog:3.0

      e5aefd6d630a935887f494550513d46e54947f897e4a64b0703d8f7094562875

      https://blog.maddevs.io/terrafom-hetzner-a2f22534514b

      For example, let's create one instance:

      $ cat aws / provider.tf

      provider "aws" {

      region = "us-west-1"

      }

      resource "aws_instance" "my_ec2" {

      ami = "${data.aws_ami.ubuntu.id}"

      instance_type = "t2.micro"

      }
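The ami attribute above references a data source that is not shown in the listing. A typical definition looks like this (the filter pattern and the owner id, which is Canonical's AWS account, are assumptions; the data source label "ubuntu" matches the reference above):

```hcl
# Assumed definition of the data source referenced by the ami attribute above
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account id

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }
}
```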

      $ cd aws

      $ aws configure

      $ terraform init

      $ terraform apply -auto-approve

      $ cd ..

      provider "aws" {

      region = "us-west-1"

      }

      resource "aws_sqs_queue" "terraform_queue" {

      name = "terraform-queue"

      delay_seconds = 90

      max_message_size = 2048

      message_retention_seconds = 86400

      receive_wait_time_seconds = 10

      }
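To see where to send messages after apply, the queue URL can be exposed as an output. A sketch (the output name is hypothetical):

```hcl
# Sketch: expose the queue URL after terraform apply
output "queue_url" {
  value = "${aws_sqs_queue.terraform_queue.id}" # the id of an aws_sqs_queue is its URL
}
```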

      data "aws_route53_zone" "vuejs_phalcon" {

      name = "test.com."

      private_zone = true

      }

      resource "aws_route53_record" "www" {

      zone_id = "${data.aws_route53_zone.vuejs_phalcon.zone_id}"

      name = "www.${data.aws_route53_zone.vuejs_phalcon.name}"

      type = "A"

      ttl = "300"

      records = ["10.0.0.1"]

      }

      resource "aws_elasticsearch_domain" "example" {

      domain_name = "example"

      elasticsearch_version = "1.5"

      cluster_config {

      instance_type = "r4.large.elasticsearch"

      }

      snapshot_options {

      automated_snapshot_start_hour = 23

      }

      }

      resource "aws_eks_cluster" "eks_vuejs_phalcon" {

      name
