
Terraform on AWS

To get a few questions out of the way:

What is Terraform?

Terraform is an open-source infrastructure as code software tool created by HashiCorp. Users define and provision data center infrastructure using a declarative configuration language known as HashiCorp Configuration Language, or optionally JSON. Terraform manages external resources with “providers”.

Wikipedia – https://en.wikipedia.org/wiki/Terraform_(software)

So why?

It is great to have all your infrastructure defined as code. I’ve worked with AWS SAM and CloudFormation but was never happy, as they only allow me to define AWS infrastructure.

By using Terraform I can also deploy to other platforms like Google Cloud Platform or Azure.

Central state – why that?

This boils down to two options:

  1. If you are the only dev working on the infrastructure, use Terraform's local state. But keep in mind that nobody else can update the infrastructure then: Terraform needs to know the current state of the system, and importing existing infrastructure is a manual process (which would need to be repeated every time someone else updates the structure).
  2. With a central state, multiple people can update your infrastructure. By defining IAM roles for AWS you can also restrict what they are allowed to update. Terraform will always load the latest state before computing a change set to execute.

We will store the central state in S3 and provide locking with DynamoDB. Locking is required because otherwise two people could start updates in parallel, causing an inconsistent state.

If you want to read more on Terraform State handling take a look at Yevgeniy Brikman‘s article on Medium: https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa

Step 1 – required tools

AWS CLI

Follow the instructions on https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html

Once finished, the output of aws --version should be something like

aws-cli/1.16.309 Python/3.7.3 Darwin/19.2.0 botocore/1.13.45

Use the environment variables to configure access to AWS. If you are using profiles, call export AWS_PROFILE=profilename to select one.
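
If you are working with plain credentials instead of a profile, a minimal setup could look like this (placeholder values, of course):

export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_DEFAULT_REGION=us-east-1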

Terraform CLI

Follow the instructions on https://learn.hashicorp.com/terraform/getting-started/install

Once finished, the output of terraform --version should be something like Terraform v0.12.23

Step 2 – IAM Policy for the State

Create a new IAM policy called terraform-execution to give users access to deploy changes. They will need the permissions for the Terraform state handling as well as for the resources they are supposed to create!

Do not create the S3 bucket / DynamoDB table manually!

Replace the MY_* placeholders in the policy below:

MY_S3_STATE_BUCKET -> choose an available bucket name
MY_AWS_ACCOUNT_ID -> replace with your account id
MY_DYNAMO_TABLE_NAME -> the name of the locking table

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:UpdateTimeToLive",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem",
        "s3:ListBucket",
        "dynamodb:Query",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteTable",
        "dynamodb:CreateTable",
        "s3:PutObject",
        "s3:GetObject",
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "s3:GetObjectVersion",
        "dynamodb:UpdateTable"
      ],
      "Resource": [
        "arn:aws:s3:::MY_S3_STATE_BUCKET",
        "arn:aws:s3:::MY_S3_STATE_BUCKET/*",
        "arn:aws:dynamodb:us-east-1:MY_AWS_ACCOUNT_ID:table/MY_DYNAMO_TABLE_NAME",
        "arn:aws:dynamodb:us-east-1:MY_AWS_ACCOUNT_ID:table:table/MY_DYNAMO_TABLE_NAME"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "dynamodb:ListTables",
        "s3:HeadBucket"
      ],
      "Resource": "*"
    }
  ]
}
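
If you prefer the CLI over the console, the policy can be created like this (a sketch – it assumes you saved the JSON above as terraform-execution.json, and MY_IAM_USER is a placeholder for the user that will run Terraform):

aws iam create-policy \
  --policy-name terraform-execution \
  --policy-document file://terraform-execution.json

aws iam attach-user-policy \
  --user-name MY_IAM_USER \
  --policy-arn arn:aws:iam::MY_AWS_ACCOUNT_ID:policy/terraform-execution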

Step 3 – create the State resources

Create a file main.tf with the following content:

provider "aws" {
  version = "~> 2"
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "MY_S3_STATE_BUCKET"
  acl    = "private"

  # Enable versioning so we can see the full revision history of our state files
  versioning {
    enabled = true
  }

  # Enable server-side encryption by default
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  tags = {
    Terraform = "true"
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "MY_DYNAMO_TABLE_NAME"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Terraform = "true"
  }
}

Now install all dependencies for Terraform (like the AWS provider) by executing terraform init

To create the S3 bucket and the DynamoDB table, first plan the changes with terraform plan

and then execute them with terraform apply

It is always a good idea to first call plan, as this will give you an overview of the changes Terraform is about to execute. With apply the changes are being executed (the delta to the current state could have changed if multiple people are working on it or you changed something in the account itself!)
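
One way to guard against exactly that: save the plan to a file and apply that exact plan (tfplan is just an arbitrary file name):

terraform plan -out=tfplan
terraform apply tfplan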


Step 4 – enable Central State

At the end of the main.tf file, add the following lines:

terraform {
  backend "s3" {
    bucket         = "MY_S3_STATE_BUCKET"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "MY_DYNAMO_TABLE_NAME"
    encrypt        = true
  }
}

With this in place, run terraform init again. Terraform will detect the newly configured backend and ask whether your current local state should be copied to the central state – confirm, and the migration is done!
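
The prompt will look roughly like this (the exact wording depends on your Terraform version):

terraform init

Initializing the backend...
Do you want to copy existing state to the new backend?
  ...
  Enter a value: yes

Successfully configured the backend "s3"!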

Step 5 – collaborate

Now commit your main.tf file to source control and start collaborating with other devs on your infrastructure!

This also makes it a lot easier to reuse infrastructure across projects!


Heroku – PHP Worker (Symfony3)

Many are using Heroku, and I guess with a Procfile similar to our initial one:

web: vendor/bin/heroku-php-nginx -C nginx_app.conf web/
worker: bin/console queue:worker

This seems right, but something was missing:

  • custom ini was not applied
  • the worker could crash and would not respawn at all (this was bad!)

To get our ini to work we passed it directly to the php command; next was a simple bash loop to always have a worker running:

worker: while true; do php --php-ini web/.user.ini bin/console queue:worker; sleep 1; done

There must be a better way

It’s almost painful to look at that worker configuration… There must be a way to respawn properly. Sadly I did not find any bundles / finished solutions.

Note: I did not add the php --php-ini web/.user.ini to all examples (but it should be there!)

The Idea

A command should behave kind of like a thread pool and restart its children once they quit. In the end I want to call it like this:

worker: bin/console thread:pool --threads=4 queue:worker
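
With the note about the custom ini from above applied, the full Procfile entry would be:

worker: php --php-ini web/.user.ini bin/console thread:pool --threads=4 queue:worker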

The Solution

To make our process handling easier we use symfony/process. Then we just create a new console command that starts several processes and keeps track of them.

composer require symfony/process

And a very basic Pool:

$threads = 4;
$command = 'bin/console something';
$pool    = [];

while (true) {
  for ($i = 0; $i < $threads; $i++) {
    // skip slots that still hold a running process
    if (isset($pool[$i])
        && $pool[$i] instanceof \Symfony\Component\Process\Process
        && $pool[$i]->isRunning()
    ) {
      continue;
    }

    // here we (re)start the process for this slot
    $pool[$i] = new \Symfony\Component\Process\Process($command);
    $pool[$i]->start();
  }

  usleep(500000); // this is important, you want to sleep some time
}

This makes it possible to have a single command trigger the number of workers we need.

You can see the complete command here: gist.github.com/wodka/23475d36cf13e956b8db7578bf6251ed


Logging

This is something I missed initially. Heroku collects logs that are written to stdout – therefore we have to get the output of the child processes written to our main process.

Thankfully Process::start takes a callable with two arguments, $type and $buffer.
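
Forwarding the output straight through could look like this (a sketch – plug the callback into the start() call of the pool above):

$pool[$i]->start(function ($type, $buffer) {
    // forward child output to the parent process so Heroku picks it up
    if (\Symfony\Component\Process\Process::ERR === $type) {
        fwrite(STDERR, $buffer);
    } else {
        fwrite(STDOUT, $buffer);
    }
});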

 


Unit tests for SF2 with IntelliJ / PHPStorm and Vagrant

You need a working Symfony installation running through Vagrant.

Prerequisites

1) Install Remote Interpreter


2) Add a Remote PHP Interpreter.

Use vagrant ssh-config to get the required information.
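
The output will look roughly like this – HostName, User, Port and IdentityFile are the values you need for the interpreter settings:

Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /path/to/project/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL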


3) Setup Path Mapping (Deployment Server)


 

4) Setup PHPUnit


Create a file app/phpunit.php and use it as a custom loader for PHPUnit:

<?php

if (!defined('PHPUNIT_COMPOSER_INSTALL')) {
    define('PHPUNIT_COMPOSER_INSTALL', __DIR__ . '/autoload.php');
}

require_once __DIR__ . '/autoload.php';

This allows you to use @runInSeparateProcess until https://youtrack.jetbrains.com/issue/WI-29458 is fixed.

 

 

Now everything should be set up and you can run the tests in Vagrant 🙂


Bitbucket Build Status from Codeship

Maybe you have heard about the new build status in Bitbucket – it’s awesome and shows you how well your commit / pull request is doing.

There is still a lot to be improved, especially for pull requests, compared to GitHub – but they will get there!


Integrate Codeship

Create a PHP file anywhere (it must be reachable from the web) with the following content: snippet link

The script accepts the JSON data from Codeship and pushes the build status (in progress | failed | success) to Bitbucket. Also fill in your credentials from Bitbucket!
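
The linked snippet is the real deal; as a rough sketch of the idea (the Codeship payload fields build.status, build.commit_id and build.build_url as well as the status mapping are assumptions based on the classic webhook format – the endpoint is the Bitbucket 2.0 build status API):

<?php

// assumed shape of the Codeship webhook: { "build": { "status": ..., "commit_id": ..., "build_url": ... } }
$build = json_decode(file_get_contents('php://input'), true)['build'];

// map the assumed Codeship states to the three Bitbucket build states
$map   = ['testing' => 'INPROGRESS', 'success' => 'SUCCESSFUL', 'error' => 'FAILED'];
$state = isset($map[$build['status']]) ? $map[$build['status']] : 'INPROGRESS';

// fill in your Bitbucket repository here
$endpoint = sprintf(
    'https://api.bitbucket.org/2.0/repositories/%s/commit/%s/statuses/build',
    'owner/repository',
    $build['commit_id']
);

// fill in your Bitbucket credentials and POST the status
$ch = curl_init($endpoint);
curl_setopt_array($ch, [
    CURLOPT_USERPWD        => 'BITBUCKET_USER:BITBUCKET_PASSWORD',
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_POSTFIELDS     => json_encode([
        'key'   => 'codeship',
        'state' => $state,
        'url'   => $build['build_url'],
    ]),
    CURLOPT_RETURNTRANSFER => true,
]);
curl_exec($ch);
curl_close($ch);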

 

that’s it 🙂 enjoy!


from Ubuntu to OSX

Over the years I moved from Windows to Linux and now to OSX. Why, you might ask?

Well, this is simple: I no longer have the time to configure the system to work the way I want it to. It is just supposed to do as much as possible without my intervention.

Ubuntu came pretty close to that, but sadly my old Vaio was breaking down.

 

I’ve been using OSX for a month now and I’m more than just happy! There are things missing, like the “select and paste with middle mouse button” that I’m used to from Ubuntu and the option to move windows with both the mouse and the keyboard, but it is close enough.