Logging Docker in to the GCR registry

February 2, 2017

I close my eyes, only for a moment
And the moment’s gone
All my dreams pass before my eyes, a curiosity
(Kansas – Dust in the wind)

The Docker daemon can access images from multiple registries. Some registries are read-only (pull-only) for unauthenticated users, while private registries usually require authentication even for pulling images. Registry and authentication configuration is stored in ~/.docker/config.json .

When you want to log in to a Docker registry, you can simply run:

docker login <registry_uri>

The command will create an entry in ~/.docker/config.json that looks like this:

"registry.example.com": {
    "auth": "Tag3iekueep9raN9lae6Ahv9Maeyo1ee",
    "email": "jsosic@gmail.com"
}
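As a side note, that auth value is nothing magical: it is just base64 of <username>:<password>. Assuming a reasonably recent config.json (where the entries live under the auths key) and jq installed, you can decode it like this:

jq -r '.auths["registry.example.com"].auth' ~/.docker/config.json | base64 -d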

But if you want to use GCR (Google Container Registry), there is a problem: no password is provided.

Google suggests using gcloud for accessing all of their services. To log in with your user credentials, simply run the following command:

docker login -e jsosic@gmail.com \
   -u oauth2accesstoken \
   -p "$(gcloud auth print-access-token)" \
   https://gcr.io
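Keep in mind that the access token printed by gcloud is short-lived (about an hour), so anything non-interactive has to refresh the login before pulling. A minimal sketch of such a wrapper, reusing the commands above (the script name and image argument are just examples):

#!/bin/bash
# gcr-pull.sh <image:tag> - refresh GCR credentials, then pull
set -e
docker login -e jsosic@gmail.com \
   -u oauth2accesstoken \
   -p "$(gcloud auth print-access-token)" \
   https://gcr.io
docker pull "gcr.io/compute-engine-669/$1"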

If you are setting up a service account, you need to use its private key, which Google distributes in JSON format:

docker login \
   -e service_user@compute-engine-669.iam.gserviceaccount.com \
   -u _json_key \
   -p "$(cat GCE_project-name_serviceaccount_<someid>.json)" \
   https://gcr.io

Now you can pull existing images from your private registry:

docker pull gcr.io/compute-engine-669/redis:latest

Isn’t it cool?

Categories: Cloud, Docker, GCE, Linux

Deleting images from docker registry

January 23, 2017

Something has got to give
These things I just don’t want you to see
(Dark Tranquillity – Format C for Cortex)

The Docker registry is not really a user-friendly piece of software: there is no fancy GUI and, more importantly, the documentation is sparse.

So, if you run it for a long time, chances are you'll end up with a bunch of tagged images wasting space in your registry. How do you delete old images? There is a way, but it's not really a nice one.

Each image name consists of a repository name and an image tag. This combination points to the hashes of the image layers. So, to delete an image, we need to find the correct repository name and image tag, and then look up the corresponding manifest digest.

First, prepare some environment variables which we're going to use later:

export REG_PASS='<user>:<password>'
export HDR='Accept: application/vnd.docker.distribution.manifest.v2+json'
export REG_URI='https://myregistry.example.com:5000/v2'
alias curl_reg='curl --user "$REG_PASS" -H "$HDR"'

Now let's list all available repositories:

curl_reg "${REG_URI}/_catalog"

The following command will show us the list of all tags available within a specific repository:

curl_reg "${REG_URI}/<repository>/tags/list"

Now, let's say we want to delete the image with tag 5. We need the digest of that tag's manifest, which the registry returns in the Docker-Content-Digest response header (this requires the v2 Accept header, which our alias already sends):

curl_reg -sI "${REG_URI}/<repository>/manifests/5" \
            | grep -i 'docker-content-digest'

Once we have the digest, it's easy peasy (note that the registry has to run with deletion enabled, i.e. storage.delete.enabled: true in its configuration, otherwise the DELETE request will be refused):

curl_reg -X DELETE "${REG_URI}/<repository>/manifests/<sha256_digest>"

And that’s it.
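If you have a lot of stale tags, a small wrapper that strings these calls together saves some typing. A minimal sketch, assuming the same REG_PASS, HDR and REG_URI variables as above are exported, and taking a hypothetical repository/tag pair as arguments:

#!/bin/bash
# delete_tag.sh <repository> <tag>
set -euo pipefail

REPO="$1"
TAG="$2"

# look up the manifest digest of the tag
DIGEST=$(curl -sI --user "$REG_PASS" -H "$HDR" \
    "${REG_URI}/${REPO}/manifests/${TAG}" \
    | awk 'tolower($1) == "docker-content-digest:" { print $2 }' | tr -d '\r')

# delete the manifest by digest
curl -s --user "$REG_PASS" -H "$HDR" \
    -X DELETE "${REG_URI}/${REPO}/manifests/${DIGEST}"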

Note: this won't free up the disk space on its own. Disk space is reclaimed by the Docker registry garbage collector; depending on your deployment it may already be scheduled to run periodically, but you can also run it manually by connecting to the host running the registry and executing the following command:

docker exec -it <registry_container_id> \
       bin/registry garbage-collect /etc/docker/registry/config.yml

And that’s it!

Note that this procedure should work for v2 images; if you still have v1 images floating around your registry, you can use the delete-docker-registry-image tool to delete them.

Categories: Docker, Linux

Helping Zabbix housekeeper to catch up

December 2, 2016

A howling wind of nightmares
Howling through barren streets
Frozen in time
The city woke up – paralyzed
(At the Gates – At War with Reality)

Zabbix is an old-generation monitoring system that stores its time-series data in a regular SQL database backend. This design decision has many drawbacks, performance issues and storage size being the two most noticeable ones.

To tackle these problems, Zabbix divides performance metric data into two segments: history and trends. History stores all values exactly as they are received. After the expiration time, which can be set per item, Zabbix averages the values on an hourly basis and discards the history data. The hourly averaged data is called trends.

The backend process responsible for deleting old data is called the housekeeper. Unfortunately, the housekeeper is not perfect: it runs every HousekeepingFrequency hours and deletes at most MaxHousekeeperDelete entries in each pass.

What happens if a housekeeper pass takes longer than the interval between two runs? The next run is skipped, because two housekeeper processes can't run simultaneously. It's like a government budget: you run a deficit every year and the cruft (the external debt) just piles up.

This is especially noticeable if you run the Zabbix datastore on a relatively slow block device…

There is an easy way to clean out the Zabbix datastore, but it requires a short Zabbix downtime. The procedure goes like this:

  • stop the Zabbix server
  • backup, backup, backup !!!
  • copy the schema of the history table(s) to history_copy
  • copy data newer than the last X days from history to history_copy
  • drop the history table
  • rename history_copy to history
  • re-create indexes on the history table (if you’re running PgSQL)
  • start the Zabbix server

As ever: if you're doing something twice, it has to be automated the third time.

So I created a script that automates this job for Zabbix servers running the PostgreSQL backend. The script is pretty simple, so I'll paste it here:

#!/bin/bash

TABLES="history history_log history_old history_str history_text history_uint"
DBNAME=zabbix
HISTORY_DAYS=20

EPOCH_TIME=$(date --date "${HISTORY_DAYS} days ago" +'%s')

for table in $TABLES; do
    # remove old schema dumps
    rm -f ${table}.pre-data.sql ${table}.post-data.sql

    # dump schema
    pg_dump --section=pre-data  -t $table $DBNAME > ${table}.pre-data.sql
    pg_dump --section=post-data -t $table $DBNAME > ${table}.post-data.sql

    # create new table
    sed -r "s/\b${table}\b/${table}_copy/" ${table}.pre-data.sql \
    | psql $DBNAME -f -

    # copy data
    echo "INSERT INTO ${table}_copy \
          SELECT * FROM ${table} WHERE clock > ${EPOCH_TIME};" \
    | psql zabbix -f -

    # swap tables (the original data is kept in ${table}_old just in case)
    echo "ALTER TABLE ${table} RENAME TO ${table}_old;"  | psql $DBNAME -f -
    echo "ALTER TABLE ${table}_copy RENAME TO ${table};" | psql $DBNAME -f -

    # drop the old indexes (they kept their names on ${table}_old) and re-create them on the new table
    sed 's/CREATE/DROP/' ${table}.post-data.sql \
    | sed 's/ ON.*/;/' \
    | psql $DBNAME -f -
    cat ${table}.post-data.sql | psql $DBNAME -f -
done

Note: the script works only with the PostgreSQL backend, and it leaves the original data in the *_old tables; drop those once you've verified everything, to actually reclaim the space.
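The usage would look roughly like this (assuming the script is saved as zabbix_history_trim.sh on the database host and run as a user that can connect to the zabbix database, e.g. postgres):

systemctl stop zabbix-server
sudo -u postgres ./zabbix_history_trim.sh
systemctl start zabbix-server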

Categories: Uncategorized

Rescuing GCE compute instance

September 17, 2016

I’m digging deep inside my soul
To bring myself out of this God-damned hole
I rid the demons from my heart
And found the truth was with me from the start
(Halford – Resurrection)

If you're still managing servers as pets and not as cattle, the inevitable happens: the filesystem breaks, sshd gets killed, wrong iptables rules get applied, a wrong mount option halts the boot process, and you're in deep. Your important server is inaccessible.
If your server is running on in-house managed VMware or XenServer, the management console will help you rescue it. If it's running on bare metal, you can rely on iDRAC or something similar. But if you're running in the cloud, you're pretty much screwed.

If you're running on GCE, there are a couple of options at your disposal in this time of dismay. First, there is a beta Virtual Serial Port option you can connect to, to see where exactly your instance halted and what messages were printed.

To enable the Virtual Serial Port, you need to have gcloud (the command-line tool) installed and authenticated to your project. So, the first thing to do is list the available instances:

% gcloud compute instances list
NAME   ZONE        MACHINE_TYPE   INTERNAL_IP  EXTERNAL_IP      STATUS
test   us-east1-b  g1-small       10.240.0.5   104.155.112.80   RUNNING

Now, to be able to connect to the Virtual Serial Console, you need to set up SSH keys properly. Generate a key and add it to your project metadata by running:

% ssh-keygen -f ~/.ssh/google_compute_engine
% gcloud compute project-info add-metadata \
   --metadata-from-file sshKeys=~/.ssh/google_compute_engine.pub

If you already have a key, you will need to set up both ~/.ssh/google_compute_engine and ~/.ssh/google_compute_engine.pub to match the key from the project.

After the keys are set, you can finally connect:

% gcloud beta compute connect-to-serial-port gceuser@test

You should probably get a standard TTY login prompt.

If the attempt to fix the problem through the GCE Virtual Serial Console didn't succeed, but you think the boot disk can be saved by attaching it to another instance, you will need to:

  • disable “auto delete boot disk”
  • destroy instance 😦
  • attach boot disk as additional disk to another VM
  • mount it, fix whatever is broken, umount it
  • detach disk from instance
  • create new instance, and choose this disk as boot disk

Using gcloud, it would look something like this:

% gcloud compute instances \
  set-disk-auto-delete test --no-auto-delete --device-name test
% gcloud compute instances delete test
% gcloud compute instances attach-disk <rescue-instance> --disk test
... ssh to <rescue-instance>, mount the disk, fix what's broken, umount it ...
% gcloud compute instances detach-disk <rescue-instance> --disk test
% gcloud compute instances create test --disk name=test,boot=yes


Rspec tests on Logstash v1.5 and newer

December 13, 2015

Heimdall gazes east
A sail has caught his eye
He lifts his hand and sounds the horn
The undead army has arrived
(Amon Amarth – As Loke Falls)

Logstash 1.4 had an avalanche of bugs. I really wasn't satisfied with the behavior of the agent: it was constantly crashing, missing log lines, etc. Hence, I was waiting for a new version, hoping it would fix things. While testing the 1.5.0 release candidates I noticed one thing that got me angry: logstash-test / rspec was removed from the production RPM. There is no way to run rspec tests with the release version any more. Writing Logstash configs and patterns without unit testing is difficult, and it usually ends with the sysadmin introducing regressions and lots of _grokparsefailure entries. So, not having unit tests is simply not acceptable. In my previous article I explained in detail how to write tests. But how do you actually run tests with Logstash 1.5, Logstash 2.0 or any version newer than 1.4?

The solution is to build rspec yourself… First you'll need some prerequisites; this is an example for CentOS 7:

# yum install java-1.8.0-openjdk-devel rubygem-rake

After setting up all the requirements, you can grab Logstash from GitHub, switch to the version branch and prepare rspec:

$ git clone https://github.com/elastic/logstash
$ cd logstash
$ git checkout 2.1
$ rake bootstrap
$ rake test:install-core

After the build is over, you have your own, freshly baked rspec. Unfortunately, you'll also need to modify your tests 😦 There are two changes needed for tests to work with the new rspec:

  • replace require ‘test_utils’ with require ‘spec_helper’
  • remove extend LogStash::RSpec

This is an example of the new test format:

require 'spec_helper'
require 'logstash/filters/grok'

describe LogStash::Filters::Grok do
  describe "my-super-logstash-test" do
    config <<-CONFIG
.... paste logstash config here ...
    CONFIG

    sample 'This is sample log entry' do
      insist { subject['message'] } == 'This is sample log entry'
    end
  end
end

And that's it, you can run a test with a simple bin/rspec /path/to/my/test.rb.

Designing SMTP honeypot – mailcatcher

November 28, 2015

There is no way
I have no voice
I have no say
I have no choice
I feel no pain or sympathy
It’s just cold blood that runs through me

(Judas Priest – Cold Blooded)

There are two use cases for an SMTP honeypot: developing applications that interact with users via e-mail, and developing anti-spam protection. In both cases, a robust testing infrastructure is a must. I came across one such use case, so I had to think outside the box and set up a catch-all SMTP server. I named it 'mailcatcher'.

Mailcatcher behaves like an open relay (meaning it accepts mail from anyone on the internet), but instead of relaying messages it just stores them locally as HTML documents. Mails are never forwarded to the actual recipients. Mailcatcher will not only accept mail from all sources to all destinations, it will also accept any login credentials (any username/password combination). Mails are stored in the web root directory by default, but if login credentials were presented to SMTP, the username is used as the destination directory.

The main building blocks of mailcatcher are Postfix, Dovecot and a couple of custom scripts.

First, the client program connects to Postfix on 25/TCP. Postfix then waits for the client introduction (HELO/EHLO). If the client requests authentication, the following Postfix parameters forward the auth info to Dovecot for checking:

# Postfix auth through Dovecot
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_authenticated_header=yes

If authentication is successful (which, by our Dovecot design, it always is, for any username/password pair), or if authentication wasn't requested in the first place, Postfix continues with the SMTP session.

Postfix then receives the From field containing <sender>@<sender_domain> and the To field containing <destination_user>@<destination_domain>. It first checks whether this particular Postfix instance is the final destination for mail going to destination_domain. We trick Postfix into thinking it is the final destination for any domain with this config instruction and the corresponding file:

mydestination = pcre:/etc/postfix/mydestination

And contents of PCRE file:

# cat /etc/postfix/mydestination
/^.*/    OK
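You can sanity-check that the PCRE table really matches everything with postmap: whatever domain you query should come back as OK:

# postmap -q "whatever.example.com" pcre:/etc/postfix/mydestination
OK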

Postfix is usually set up to accept only certain domains, but in this case it accepts every destination domain. So a developer can even send an email to @pornhub.com!

Since mailcatcher doesn't have mailboxes for all possible destination users, we instruct it to catch all emails and deliver them to the account mailcatcher. So, for example, mail destined for <john@doe.com> is rerouted internally to <mailcatcher@doe.com>:

luser_relay = mailcatcher

To be able to store all those emails, the server has to have a local alias "mailcatcher", present in /etc/aliases:

# grep mailcatcher /etc/aliases
mailcatcher:    "|/usr/local/bin/decode.pl"
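One thing that's easy to forget: Postfix reads the compiled aliases database, so after editing /etc/aliases the map has to be rebuilt:

# newaliases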

So, when a mail is delivered to mailcatcher, the aliases entry actually pipes the contents of the message to the decode.pl script. The script runs as the nginx user, which is set by another main.cf directive:

default_privs = nginx

Once Postfix finishes processing the mail, it pipes it (through aliases) to the decode.pl script. Here's an example script:

#!/usr/bin/perl -w

chdir "/tmp";

# Always be safe
use strict;
use warnings;

# Timestamps
use Time::HiRes qw(time);
use POSIX qw(strftime);

# Use the module
use MIME::Parser;
use MIME::Base64;
use MIME::Lite;

my $directory = "/var/www/mailcatcher/";
undef $/;
my $mail_body = <STDIN>;
$/="\n";

my $parser  = MIME::Parser->new;
my $entity  = $parser->parse_data($mail_body);
my $header  = $entity->head;
my $from    = $header->get_all("From");
my $to      = $header->get_all("To");
my $date    = $header->get("Date");
my $subject = $header->get("Subject");
my $sender  = $header->get("Received");

# if there was an authenticated sender, use that username as a subdirectory ('s' flag treats the string as a single line)
if ( $sender =~ /Authenticated sender:/s ) {
  $sender =~ s/.*\(Authenticated sender: ([a-zA-Z0-9._%+-]+)\).*/$1/gs;
  $directory .= $sender;
  $directory .= '/';
} 

unless ( -d $directory ) { mkdir $directory; }

# strip the headers and decode the body if it is base64-encoded
if ( $mail_body =~ /Content-Transfer-Encoding: base64/ ) {
  $mail_body =~ s/(.+\n)+\n//;
  $mail_body = decode_base64($mail_body);
}

# generate filename for storing mail body
my $t = time;
my $filename = strftime "%s_%Y-%m-%d_%H-%M-%S", localtime $t;
$filename .= sprintf ".%09d", ($t-int($t))*1000000000;
$filename .= '.html';

# finally write our email
open(my $fh, '>', "${directory}${filename}") or die "Could not open file '${directory}${filename}' $!";
    # write to file
    print $fh "From: $from <br/>";
    print $fh "To: $to <br/>";
    print $fh "Date: $date <br/>";
    print $fh "Subject: $subject <br/><br/><br/>";
    print $fh $mail_body;
close $fh;
# EOF

The script parses the mail with the Perl MIME modules, fetches the relevant information, decodes the mail and stores it in the destination directory.

If authentication was requested during HELO/EHLO, Postfix connects to Dovecot to check the credentials. This is the relevant part of the Dovecot auth configuration:

userdb {
   driver = checkpassword
   args = /usr/local/bin/fakesasl.sh
}
passdb {
   driver = checkpassword
   args = /usr/local/bin/fakesasl.sh
}

Another script is necessary, fakesasl.sh:

#!/bin/bash
# Example Dovecot checkpassword script that may be used as both passdb or userdb.
# FakeAuth: will allow any user/pass combination.
# Implementation guidelines at http://wiki2.dovecot.org/AuthDatabase/CheckPassword
# The first and only argument is path to checkpassword-reply binary.
# It should be executed at the end if authentication succeeds.
CHECKPASSWORD_REPLY_BINARY="$1"

# Messages to stderr will end up in mail log (prefixed with "dovecot: auth: Error:")
LOG=/dev/stderr

# User and password will be supplied on file descriptor 3.
INPUT_FD=3

# Error return codes.
ERR_PERMFAIL=1
ERR_NOUSER=3
ERR_TEMPFAIL=111

# Make testing this script easy. To check it just run:
#   printf '%s\x0%s\x0' <user> <password> | ./checkpassword.sh test; echo "$?"
if [ "$CHECKPASSWORD_REPLY_BINARY" = "test" ]; then
CHECKPASSWORD_REPLY_BINARY=/bin/true
INPUT_FD=0
fi

# Read input data. Password may be empty if not available (i.e. if doing credentials lookup).
read -d $'\x0' -r -u $INPUT_FD USER
read -d $'\x0' -r -u $INPUT_FD PASS

# Both mailbox and domain directories should be in lowercase on file system.
# So let's convert login user name to lowercase and tell Dovecot 'user' and 'home' (which overrides
# 'mail_home' global parameter) values should be updated.
# Of course, conversion to lowercase may be done in Dovecot configuration as well.
export USER="`echo \"$USER\" | tr 'A-Z' 'a-z'`"
mail_name="`echo \"$USER\" | awk -F '@' '{ print $1 }'`"
domain_name="`echo \"$USER\" | awk -F '@' '{ print $2 }'`"
export HOME="/var/qmail/mailnames/$domain_name/$mail_name/"

# Dovecot calls the script with AUTHORIZED=1 environment set when performing a userdb lookup.
# The script must acknowledge this by changing the environment to AUTHORIZED=2,
# otherwise the lookup fails.
[ "$AUTHORIZED" != 1 ] || export AUTHORIZED=2

# Always return OK ;)
exec $CHECKPASSWORD_REPLY_BINARY
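As the comment in the script itself suggests, the fake auth can be tested standalone; any username/password pair should exit with 0 (the address and password below are arbitrary):

# printf '%s\x0%s\x0' 'someone@example.com' 'whatever' \
    | /usr/local/bin/fakesasl.sh test; echo $?
0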

And that’s it!

PS. To be able to run this on RedHat derivatives you would have to turn SELinux to permissive mode; instead, I created two SELinux modules to cover me on this one.

First:

# Allows postfix (running under type context postfix_local_t)
# to write to web directories (type httpd_sys_content_t).
module mailcatcher 1.2;

require {
	type httpd_sys_content_t;
	type postfix_local_t;
	class dir { create write search getattr add_name };
	class file { write ioctl create open getattr };
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir { create write search getattr add_name };
allow postfix_local_t httpd_sys_content_t:file { write ioctl create open getattr };

Second module:

# Allows dovecot (running under type context dovecot_auth_t)
# to read and exec fakeauth script (type shell_exec_t).
module fakeauth 1.1;

require {
	type dovecot_auth_t;
	type shell_exec_t;
	class file { read open execute };
}

#============= dovecot_auth_t ==============
allow dovecot_auth_t shell_exec_t:file { read open execute };
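Both modules above are in .te source form, so they have to be compiled and loaded (the same three steps apply to the fakeauth module, just with a different name):

# checkmodule -M -m -o mailcatcher.mod mailcatcher.te
# semodule_package -o mailcatcher.pp -m mailcatcher.mod
# semodule -i mailcatcher.pp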

PS. Don’t forget to create destination directories for your mails/files:

# mkdir /var/www/mailcatcher
# chown root:mailcatcher /var/www/mailcatcher
# chmod 0775 /var/www/mailcatcher

and to install nginx with the autoindex module enabled, so the HTML files are visible over HTTP/HTTPS 😉
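To smoke-test the whole chain end to end, any scriptable SMTP client will do; swaks, for instance, can exercise both the unauthenticated and the authenticated path (the host name, addresses and credentials below are made up):

$ swaks --server mailcatcher.example.com --from test@example.org \
        --to john@doe.com --body 'plain catch-all test'
$ swaks --server mailcatcher.example.com --from test@example.org \
        --to john@doe.com --auth LOGIN \
        --auth-user someuser@example.org --auth-password anything \
        --body 'authenticated test'

The first mail should end up as an HTML file directly under /var/www/mailcatcher/, the second one under /var/www/mailcatcher/someuser@example.org/.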

RabbitMQ users, queues, exchanges & bindings import/export

June 19, 2015

All I see it’s dead world
And I know that’s our fault
Living Absent minded
(Archeon – Dead World)

If you want to deploy a test RabbitMQ, migrate from one node/cluster to another, or just back up your Rabbit metadata, there is a simple way to do it through the RabbitMQ management API.

The API is available at http://rabbitmq:15672/api/, along with most of the documentation. To back up the metadata, simply run:

$ curl -i -u <username>:<password> http://rabbitmq:15672/api/definitions

The output is in JSON format, and you can save it to a file by redirecting the curl output (drop the -i flag when doing so, otherwise the HTTP response headers end up in the file).
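For example, assuming you want the definitions in /tmp/rabbit_defs, the path the restore command below reads from:

$ curl -u <username>:<password> \
  http://rabbitmq:15672/api/definitions > /tmp/rabbit_defs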
Later, if you decide to restore the saved file to another (or the same) Rabbit instance, it's a single command again:

$ curl -i -u <username>:<password>   \
  -H "content-type:application/json" \
  -X POST                            \
  --data @/tmp/rabbit_defs           \
  http://rabbitmq-new:15672/api/definitions

And that’s it!
