Archive for the ‘Linux’ Category

Logging Docker in to the GCR registry

February 2, 2017

I close my eyes, only for a moment
And the moment’s gone
All my dreams pass before my eyes, a curiosity
(Kansas – Dust in the wind)

The Docker daemon can access images from multiple registries. Some registries are readable by unauthenticated users, while private registries usually require authentication even for pulling images. Registry configuration and credentials are stored in ~/.docker/config.json .

When you want to log in to a Docker registry, you can simply run:

docker login <registry_uri>

The command will create an entry in ~/.docker/config.json that looks like this:

"registry.example.com": {
    "auth": "Tag3iekueep9raN9lae6Ahv9Maeyo1ee",
    "email": "jsosic@gmail.com"
}

But if you want to use GCR (Google Container Registry), there is a catch: you are not given a password.

Google suggests using gcloud for accessing all of their services. To log in with your user credentials, simply run the following command:

docker login -e jsosic@gmail.com \
   -u oauth2accesstoken \
   -p "$(gcloud auth print-access-token)" \
   https://gcr.io

If you are setting up a service account, you need to use its private key, which Google distributes in JSON format:

docker login \
   -e service_user@compute-engine-669.iam.gserviceaccount.com \
   -u _json_key \
   -p "$(cat GCE_project-name_serviceaccount_<someid>.json)" \
   https://gcr.io

Now you can pull existing images from your private registry:

docker pull gcr.io/compute-engine-669/redis:latest
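
Once logged in, pushing works the same way; for example, to tag a local image for GCR and push it (reusing the image and project names from above):

docker tag redis:latest gcr.io/compute-engine-669/redis:latest
docker push gcr.io/compute-engine-669/redis:latest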

Isn’t it cool?

Categories: Cloud, Docker, GCE, Linux

Deleting images from docker registry

January 23, 2017

Something has got to give
These things I just don’t want you to see
(Dark Tranquillity – Format C for Cortex)

Docker Registry is not really a user-friendly piece of software. There is no fancy GUI and, more importantly, documentation is sparse.

So, if you run it for a long time, chances are you'll end up with a bunch of tagged images wasting space in your registry. How do you delete old images? There is a way, but it's not really a nice one.

Each image name consists of a repository name and an image tag. This combination points to the hashes of the image layers. So, to delete an image, we need to find the correct repository name and image tag, and then look up the corresponding hashes.

First, prepare some env variables which we’re gonna use later:

export REG_PASS='<user>:<password>'
export HDR='Accept: application/vnd.docker.distribution.manifest.v2+json'
export REG_URI='https://myregistry.example.com:5000/v2'
alias curl_reg='curl --user "$REG_PASS" -H "$HDR"'

Now let's list all the available repositories:

curl_reg "${REG_URI}/_catalog"
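
The response is a small JSON document; with a couple of repositories it could look like this (repository names hypothetical):

{"repositories":["myapp","redis"]}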

The following command will show us the list of all tags available within a specific repository:

curl_reg "${REG_URI}/<repository>/tags/list"
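
Again a small JSON document, for example (tag names hypothetical):

{"name":"<repository>","tags":["4","5","latest"]}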

Now, let's say we want to delete the image with tag 5. We need the manifest digest of that specific tag. Note that the delete endpoint expects the manifest digest, which the registry returns in the Docker-Content-Digest response header when a manifest is requested with the v2 Accept header (the .config.digest field from the manifest body is the digest of the image's config blob, which is not the same thing):

curl_reg -sI "${REG_URI}/<repository>/manifests/5" \
            | grep -i docker-content-digest

Once we have the digest (including the sha256: prefix), it's easy peasy:

curl_reg -X DELETE "${REG_URI}/<repository>/manifests/<digest>"
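
Note: a stock registry rejects DELETE requests unless deletion is enabled in its config.yml (or via the REGISTRY_STORAGE_DELETE_ENABLED=true environment variable); the relevant snippet looks like this:

storage:
  delete:
    enabled: true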

And that’s it.

Note: this won't free the disk space by itself. The registry's garbage collector is what actually frees the space, and a stock registry does not run it on its own, so you need to trigger it (or schedule it) yourself. You can run it manually by connecting to the host running the registry and executing the following command:

docker exec -it <registry_container_id> \
       bin/registry garbage-collect /etc/docker/registry/config.yml
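
If you'd rather schedule the garbage collection, a root crontab entry along these lines would do (this assumes the registry container is named "registry"):

0 3 * * * docker exec $(docker ps -qf name=registry) bin/registry garbage-collect /etc/docker/registry/config.yml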

And that’s it!

Note that this procedure works for v2 images; if you still have v1 images floating around your registry, you can use the delete-docker-registry-image script to delete them.

Categories: Docker, Linux

Rescuing GCE compute instance

September 17, 2016

I’m digging deep inside my soul
To bring myself out of this God-damned hole
I rid the demons from my heart
And found the truth was with me from the start
(Halford – Resurrection)

If you're still managing servers as pets and not as cattle, the inevitable happens: the filesystem breaks, sshd gets killed, wrong iptables rules get applied, a bad mount option halts the boot process, and you're in deep. Your important server is inaccessible.
If your server is running on in-house managed VMware or XenServer, the management console will help you rescue it. If it's running on bare metal, you can rely on iDRAC or something similar. But if you're running in the cloud, you're pretty much screwed.

If you're running GCE, there are a couple of options at your disposal in the time of dismay. First, there is a beta Virtual Serial Port option that you can connect to, to see where the hell your instance halted and what messages were printed.

To enable the Virtual Serial Port, you need to have gcloud (the command line tool) installed and authenticated against your project. So, the first thing is to list the available instances:

% gcloud compute instances list
NAME   ZONE        MACHINE_TYPE   INTERNAL_IP  EXTERNAL_IP      STATUS
test   us-east1-b  g1-small       10.240.0.5   104.155.112.80   RUNNING

Now, to be able to connect to the Virtual Serial Console, you need to set up SSH keys properly. Generate a key and add it to your project metadata by running:

% ssh-keygen -f ~/.ssh/google_compute_engine
% gcloud compute project-info add-metadata \
   --metadata-from-file sshKeys=$HOME/.ssh/google_compute_engine.pub

If you already have a key, you will need to set up both ~/.ssh/google_compute_engine and ~/.ssh/google_compute_engine.pub to match the key from the project. Note that GCE expects each line of the sshKeys metadata in the <username>:<public_key> format.

After the keys are set, you can finally connect:

% gcloud beta compute connect-to-serial-port gceuser@test

You should probably get a standard TTY login prompt.

If an attempt to fix the problem through the GCE Virtual Serial Console didn't succeed, but you think the boot disk can be saved by attaching it to another instance, you will need to:

  • disable “auto delete boot disk”
  • destroy instance 😦
  • attach boot disk as additional disk to another VM
  • mount it, fix whatever is broken, umount it
  • detach disk from instance
  • create new instance, and choose this disk as boot disk

Using gcloud, the first two steps would look something like this:

% gcloud compute instances \
  set-disk-auto-delete test --no-auto-delete --device-name test
% gcloud compute instances delete test
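
The remaining steps, done from a second (rescue) instance, would look roughly like this; the rescue instance name, device path and zone are assumptions:

% gcloud compute instances attach-disk rescue-vm --disk test --zone us-east1-b
rescue-vm# mount /dev/sdb1 /mnt     # fix whatever is broken under /mnt, then:
rescue-vm# umount /mnt
% gcloud compute instances detach-disk rescue-vm --disk test --zone us-east1-b
% gcloud compute instances create test --disk name=test,boot=yes --zone us-east1-b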

RSpec tests on Logstash v1.5 and newer

December 13, 2015

Heimdall gazes east
A sail has caught his eye
He lifts his hand and sounds the horn
The undead army has arrived
(Amon Amarth – As Loke Falls)

Logstash 1.4 had an avalanche of bugs. I really wasn't satisfied with the behavior of the agent: it was constantly crashing, missing log lines, etc. Hence, I was waiting for a new version, hoping it would fix things. Testing the 1.5.0 release candidates, I noticed one thing that got me angry: logstash-test / rspec was removed from the production RPM. There is no way to run rspec tests with the release version any more. Writing Logstash configs and patterns without unit testing is difficult, and usually ends with the sysadmin introducing regressions and a log full of _grokparsefailure entries. So, not having unit tests is simply not acceptable. In my previous article I explained in detail how to write tests. But how do you actually run tests with Logstash 1.5, Logstash 2.0 or any version newer than 1.4?

The solution is to build rspec yourself. First you'll need some prerequisites; this is an example for CentOS 7:

# yum install java-1.8.0-openjdk-devel rubygem-rake

After setting up all the requirements, you can grab Logstash from GitHub, switch to a version branch and prepare rspec:

$ git clone https://github.com/elastic/logstash
$ cd logstash
$ git checkout 2.1
$ rake bootstrap
$ rake test:install-core

After the build is over, you have your own freshly baked rspec. Unfortunately, you'll also need to modify your tests 😦 There are two changes needed for tests to work with the new rspec:

  • replace require ‘test_utils’ with require ‘spec_helper’
  • remove extend LogStash::RSpec

This is an example of a new-style test:

require 'spec_helper'
require 'logstash/filters/grok'

describe LogStash::Filters::Grok do
  describe "my-super-logstash-test" do
    config <<-CONFIG
.... paste logstash config here ...
    CONFIG

    sample 'This is sample log entry' do
      insist { subject['message'] } == 'This is sample log entry'
    end
  end
end

And that's it: you can run a test with a simple bin/rspec /path/to/my/test.rb.

Designing SMTP honeypot – mailcatcher

November 28, 2015

There is no way
I have no voice
I have no say
I have no choice
I feel no pain or sympathy
It’s just cold blood that runs through me

(Judas Priest – Cold Blooded)

There are two use cases for having an SMTP honeypot: you develop applications that interact with users via e-mail, or you develop anti-spam protection. In both cases, a robust testing infrastructure is a must. I came across one distinct use case, so I had to think outside the box and set up a catch-all SMTP server. I named it 'mailcatcher'.

Mailcatcher behaves like an open relay (meaning it accepts mail from anyone on the internet), but instead of relaying messages, it just stores them locally as HTML documents. Mails are never forwarded to the actual recipients. Mailcatcher will not only accept mail from all sources to all destinations, but will also accept all login credentials (any username/password combination). Mails are stored in the web root directory by default, but if login credentials are presented to SMTP, the username is used as the destination directory.

The main building blocks of mailcatcher are Postfix, Dovecot and a couple of custom scripts.

First, the client program connects to Postfix on 25/TCP. Postfix then waits for the client introduction (HELO/EHLO). If the client requests authentication, the following Postfix parameters forward the auth info to Dovecot for verification:

# Postfix auth through Dovecot
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_authenticated_header=yes

If authentication is successful (which, by our Dovecot design, it always is, for any username/password pair), or if authentication wasn't requested in the first place, Postfix continues with the SMTP protocol.
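
You can exercise this behavior from the outside with any SMTP client, for example with swaks (the hostname and addresses here are hypothetical):

swaks --server mailcatcher.example.com --port 25 \
      --auth LOGIN --auth-user anyone@example.com --auth-password whatever \
      --from test@example.com --to john@doe.com \
      --body 'hello mailcatcher'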

Postfix then receives the From field containing <sender>@<sender_domain> and the To field containing <destination_user>@<destination_domain>. It first checks whether this specific Postfix instance is the final destination for mail going to destination_domain. We trick Postfix into thinking it is the final destination for any domain, with this configuration directive and the corresponding file:

mydestination = pcre:/etc/postfix/mydestination

And contents of PCRE file:

# cat /etc/postfix/mydestination
/^.*/    OK

Postfix is usually set up to accept only certain domains, but in this case it accepts every destination domain. So a developer can even send an email to @pornhub.com!

Since mailcatcher doesn't have mailboxes for all the possible destination users, we instruct it to catch all emails and save them to the account mailcatcher. So, for example, mail destined for <john@doe.com> will be rerouted internally to <mailcatcher@doe.com>:

luser_relay = mailcatcher

To be able to store all those emails, the server has to have the local alias "mailcatcher", which is present in /etc/aliases:

# grep mailcatcher /etc/aliases
mailcatcher:    "|/usr/local/bin/decode.pl"

So, when a mail is forwarded to mailcatcher, the aliases entry actually pipes the contents of the message to the decode.pl script. The script runs as the nginx user, which is set by another main.cf directive:

default_privs = nginx

Once Postfix finishes processing the mail, it pipes it (through aliases) to the decode.pl script. Here's an example script:

#!/usr/bin/perl -w

chdir "/tmp";

# Always be safe
use strict;
use warnings;

# Timestamps
use Time::HiRes qw(time);
use POSIX qw(strftime);

# Use the module
use MIME::Parser;
use MIME::Base64;
use MIME::Lite;

my $directory = "/var/www/mailcatcher/";
undef $/;
my $mail_body = <STDIN>;  # slurp the whole message from stdin
$/="\n";

my $parser  = MIME::Parser->new;
my $entity  = $parser->parse_data($mail_body);
my $header  = $entity->head;
my $from    = $header->get_all("From");
my $to      = $header->get_all("To");
my $date    = $header->get("Date");
my $subject = $header->get("Subject");
my $sender  = $header->get("Received");

# if the sender authenticated, store the mail in a per-user subdirectory ('s' flag treats the string as a single line)
if ( $sender =~ /Authenticated sender:/s ) {
  $sender =~ s/.*\(Authenticated sender: ([a-zA-Z0-9._%+-]+)\).*/$1/gs;
  $directory .= $sender;
  $directory .= '/';
} 

unless ( -d $directory ) { mkdir $directory; }

# remove header from email
if ( $mail_body =~ /Content-Transfer-Encoding: base64/ ) {
  $mail_body =~ s/(.+\n)+\n//;
  $mail_body = decode_base64($mail_body);
}

# generate filename for storing mail body
my $t = time;
my $filename = strftime "%s_%Y-%m-%d_%H-%M-%S", localtime $t;
$filename .= sprintf ".%09d", ($t-int($t))*1000000000;
$filename .= '.html';

# finally write our email
open(my $fh, '>', "${directory}${filename}") or die "Could not open file '${directory}${filename}' $!";
    # write to file
    print $fh "From: $from <br/>";
    print $fh "To: $to <br/>";
    print $fh "Date: $date <br/>";
    print $fh "Subject: $subject <br/><br/><br/>";
    print $fh $mail_body;
close $fh;
# EOF

Script parses the mail with MIME perl modules, fetches relevant information, decodes the mail and stores it in destination directory.
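
To debug the script outside of Postfix, you can feed it a raw message on stdin and check the output directory (the sample file name is hypothetical):

# /usr/local/bin/decode.pl < sample-mail.eml
# ls /var/www/mailcatcher/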

If authentication was requested during the EHLO exchange, Postfix connects to Dovecot to check the credentials. This is the relevant part of Dovecot's auth configuration:

userdb {
   driver = checkpassword
   args = /usr/local/bin/fakesasl.sh
}
passdb {
   driver = checkpassword
   args = /usr/local/bin/fakesasl.sh
}

Another script is necessary, fakesasl.sh:

#!/bin/bash
# Example Dovecot checkpassword script that may be used as both passdb or userdb.
# FakeAuth: will allow any user/pass combination.
# Implementation guidelines at http://wiki2.dovecot.org/AuthDatabase/CheckPassword
# The first and only argument is the path to the checkpassword-reply binary.
# It should be executed at the end if authentication succeeds.
CHECKPASSWORD_REPLY_BINARY="$1"

# Messages to stderr will end up in the mail log (prefixed with "dovecot: auth: Error:")
LOG=/dev/stderr

# User and password will be supplied on file descriptor 3.
INPUT_FD=3

# Error return codes.
ERR_PERMFAIL=1
ERR_NOUSER=3
ERR_TEMPFAIL=111

# Make testing this script easy. To check it just run:
#   printf '%s\x0%s\x0' <user> <password> | ./checkpassword.sh test; echo "$?"
if [ "$CHECKPASSWORD_REPLY_BINARY" = "test" ]; then
    CHECKPASSWORD_REPLY_BINARY=/bin/true
    INPUT_FD=0
fi

# Read input data. Password may be empty if not available (i.e. if doing credentials lookup).
read -d $'\x0' -r -u $INPUT_FD USER
read -d $'\x0' -r -u $INPUT_FD PASS

# Both mailbox and domain directories should be in lowercase on file system.
# So let's convert login user name to lowercase and tell Dovecot 'user' and 'home' (which overrides
# 'mail_home' global parameter) values should be updated.
# Of course, conversion to lowercase may be done in Dovecot configuration as well.
export USER="`echo \"$USER\" | tr 'A-Z' 'a-z'`"
mail_name="`echo \"$USER\" | awk -F '@' '{ print $1 }'`"
domain_name="`echo \"$USER\" | awk -F '@' '{ print $2 }'`"
export HOME="/var/qmail/mailnames/$domain_name/$mail_name/"

# Dovecot calls the script with AUTHORIZED=1 environment set when performing a userdb lookup.
# The script must acknowledge this by changing the environment to AUTHORIZED=2,
# otherwise the lookup fails.
[ "$AUTHORIZED" != 1 ] || export AUTHORIZED=2

# Always return OK ;)
exec $CHECKPASSWORD_REPLY_BINARY

And that’s it!

PS. To be able to run this on RedHat derivatives, you'd either have to switch SELinux to permissive mode or load custom policy; I created two SELinux modules to cover me on this one.

First:

# Allows postfix (running under type context postfix_local_t)
# to write to web directories (type httpd_sys_content_t).
module mailcatcher 1.2;

require {
	type httpd_sys_content_t;
	type postfix_local_t;
	class dir { create write search getattr add_name };
	class file { write ioctl create open getattr };
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir { create write search getattr add_name };
allow postfix_local_t httpd_sys_content_t:file { write ioctl create open getattr };

Second module:

# Allows dovecot (running under type context dovecot_auth_t)
# to read and exec fakeauth script (type shell_exec_t).
module fakeauth 1.1;

require {
	type dovecot_auth_t;
	type shell_exec_t;
	class file { read open execute };
}

#============= dovecot_auth_t ==============
allow dovecot_auth_t shell_exec_t:file { read open execute };

PS. Don't forget to create the destination directories for your mails/files:

# mkdir /var/www/mailcatcher
# chown root:mailcatcher /var/www/mailcatcher
# chmod 0775 /var/www/mailcatcher

and to install nginx with the autoindex module enabled, so the HTML files are visible over HTTP/HTTPS 😉
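
A minimal nginx server block for this could look like the following; the hostname is a placeholder:

server {
    listen 80;
    server_name mailcatcher.example.com;
    root /var/www/mailcatcher;
    autoindex on;   # directory listing instead of a missing index.html
}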

Deploying custom SELinux modules

March 11, 2015

Asgård’s always been my home
But I’m of different blood
I will overthrow the throne
Deceiver! Deceiver of the gods!
(Amon Amarth – Deceiver of the Gods)

If you decide to run an OS with SELinux enabled, sooner or later you'll bump into a roadblock. So, the question is how to write your own SELinux rules and deploy them. For managing the rules, I'm using the Puppet module spiette/selinux.

Let's review a practical example from one of my hosts. Let's say you want to pipe all your emails to a script which will save them to a certain directory as HTML files. /etc/aliases has a line like this one:

mailcatcher:    "|/usr/local/bin/decode.pl"

After sending an email to the mailcatcher user, this is what we can see in the maillog:

Mar  3 09:59:58 mailcatcher postfix/local[16030]: 589207055A: 
   to=<mailcatcher@devel.localdomain>, orig_to=<jsosic@example.com>,
   relay=local, delay=0.1, delays=0/0/0/0.1, dsn=5.3.0,
   status=bounced
   (Command died with status 13: "/usr/local/bin/decode.pl".
   Command output: Could not open file
   '/var/www/mailcatcher/1425598_2015-03-03_09-59-58.461450099.html'
   Permission denied at /usr/local/bin/decode.pl line 50. )

After checking the directory permissions, it's obvious there's something else blocking access, and if you're running a RedHat derivative (CentOS/Scientific/Fedora), that something is usually SELinux. To confirm it, take a look at /var/log/audit/audit.log:

type=SYSCALL msg=audit(1425644753.803:20374): arch=c000003e
   syscall=83 success=no exit=-13 a0=16c6580 a1=1ff a2=0
   a3=786966682d726865 items=0 ppid=26763 pid=26764 auid=1505 uid=498
   gid=498 euid=498 suid=498 fsuid=498 egid=498 sgid=498 fsgid=498
   tty=(none) ses=1355 comm="decode.pl" exe="/usr/bin/perl"
   subj=unconfined_u:system_r:postfix_local_t:s0 key=(null)
type=AVC msg=audit(1425644759.713:20375): avc:  denied  { create }
  for  pid=26777
  comm="decode.pl" name="example-name"
  scontext=unconfined_u:system_r:postfix_local_t:s0
  tcontext=unconfined_u:object_r:httpd_sys_content_t:s0 tclass=dir

Now we have the reason why our script can't write the file. The next step is to pipe this log to audit2allow.
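
In practice, the pipe looks something like this (stock audit log path; the grep pattern is just an assumption to narrow the input to the relevant denials):

# grep 'comm="decode.pl"' /var/log/audit/audit.log | audit2allow -m mailcatcher

audit2allow -m mailcatcher will then generate the following module source: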

module mailcatcher 1.0;

require {
    type httpd_sys_content_t;
    type postfix_local_t;
    class dir create;
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir create;

The next step is to compile this module, load it into SELinux, and check whether our script now works as designed.

To compile it, save it in a file called mailcatcher.te, and run the following commands:

# checkmodule -M -m -o mailcatcher.mod mailcatcher.te
# semodule_package -o mailcatcher.pp -m mailcatcher.mod
# semodule -r mailcatcher
# semodule -i mailcatcher.pp
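
You can verify that the module actually got loaded with a quick listing:

# semodule -l | grep mailcatcher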

After running the last command, you can recheck whether your script works. If it hits permission problems again, just repeat the process until you create a module that works. In my case, the final module looks like this:

# Allows postfix (running under type context postfix_local_t)
# to write to web directories (type httpd_sys_content_t).
module mailcatcher 1.1;

require {
    type httpd_sys_content_t;
    type postfix_local_t;
    class dir { write search getattr add_name };
    class file { write ioctl create open getattr };
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir { write search getattr add_name };
allow postfix_local_t httpd_sys_content_t:file { write ioctl create open getattr };

Now, to automate this via Puppet, save the file under yourmodule/files/selinux/mailcatcher.te. You can later use it in (any) manifest with the following code snippet:

include ::selinux
::selinux::module { 'mailcatcher':
  ensure => present,
  source => 'puppet:///modules/yourmodule/selinux/',
}

Puppet will transfer the .te file to the destination host, compile it and load it for you.

Categories: Linux, RedHat, Security

htop atop tmux on XenServer 6.x

December 13, 2014

Save me from myself if you ever really cared
Save me from myself, tell me you’re not scared
(Damage Plan – Save me)

If you want to run diagnostic commands on XenServer, you're pretty limited in the available options. Since XenServer is based on the RHEL 5.x series, with mostly 32-bit libraries installed, we can use packages built for the RHEL/CentOS 5.x i386 arch. The three tools I use most often for an initial screening of a server are htop, atop and tmux. Doing something like 'xe vm-export' to back up VMs to an external USB disk is like reading a book in a dark room: there's no progress bar, no info, nothing. Calculating the transfer speed looks something along these lines:

# ls -al pbx.xva; sleep 60; ls -al pbx.xva 
-rw------- 1 root root 13799469056 Dec 13 23:00 pbx.xva 
-rw------- 1 root root 14982000640 Dec 13 23:01 pbx.xva

And after that: (14982000640 – 13799469056 ) / 60 / 1024 / 1024 = 18.79 MB/s. 🙂
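
If watch is available in dom0 (it usually is, as part of procps), the same trick with less typing:

# watch -n 60 -d 'ls -al pbx.xva'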

Attaching/detaching or sharing sessions is a wet dream… the only way to do it is to run tmux on the machine from which you SSH into the XenServer.

So, after I got really annoyed, I tried to install tmux. Initial tries with the 64-bit package for CentOS 6 complained about missing x86_64 libraries, so I switched to 32-bit packages. That didn't work either, this time complaining about a too-new version of rpmlib 🙂 So the solution was obvious: use 32-bit EPEL 5 packages! These are the packages I use:

# rpm -qa | egrep '(top|tmux)'
tmux-1.4-3.el5.1
atop-1.27-2.el5
htop-0.8.3-1.el5

Now we’re talking business!

%d bloggers like this: