Archive for the ‘Security’ Category

Designing an SMTP honeypot – mailcatcher

November 28, 2015

There is no way
I have no voice
I have no say
I have no choice
I feel no pain or sympathy
It’s just cold blood that runs through me

(Judas Priest – Cold Blooded)

There are two use cases for having an SMTP honeypot: developing applications that interact with users via e-mail, or developing anti-spam protection. In both cases, a robust testing infrastructure is a must. I came across one distinct use case, so I had to think outside the box and set up a catch-all SMTP server. I named it ‘mailcatcher’.

Mailcatcher behaves like an open relay (meaning: it allows anyone on the internet to send emails through it) – but instead of relaying messages, it just stores them locally as HTML documents. Mails are never forwarded to the actual recipients. Mailcatcher will not only accept mail from all sources to all destinations, it will also accept all login credentials (any username/password combination). Mails are stored in the web root directory by default, but if login credentials are presented during the SMTP session, the username is used as a destination subdirectory.

The main building blocks of mailcatcher are Postfix, Dovecot and a couple of custom scripts.

First, the client program connects to Postfix on 25/TCP. Postfix then waits for the client introduction (HELO/EHLO). If the client requests authentication, the following Postfix parameters forward the credentials to Dovecot for checking:

# Postfix auth through Dovecot
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth

If authentication is successful – which, by our Dovecot design, it always is, for any username/password pair – or if authentication wasn’t requested in the first place, Postfix continues with the SMTP protocol.
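A typical session against the honeypot then looks roughly like this (a sketch; hostnames and addresses are made up, and the base64 blob encodes \0foo\0bar for AUTH PLAIN):

```
220 mailcatcher ESMTP Postfix
EHLO client.example
250-mailcatcher
250-AUTH PLAIN LOGIN
250 8BITMIME
AUTH PLAIN AGZvbwBiYXI=
235 2.7.0 Authentication successful
MAIL FROM:<anyone@example.org>
250 2.1.0 Ok
RCPT TO:<whoever@any-domain.example>
250 2.1.5 Ok
```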

Postfix then receives the From field containing <sender>@<sender_domain> and the To field containing <destination_user>@<destination_domain>. It first checks whether this Postfix instance is the final destination for mail addressed to “destination_domain”. We trick Postfix into thinking it is the final destination for any domain, with this configuration directive and the corresponding file:

mydestination = pcre:/etc/postfix/mydestination

And contents of PCRE file:

# cat /etc/postfix/mydestination
/^.*/    OK

Postfix is usually set up to accept only certain domains, but in this case it accepts every destination domain. So a developer can send an email to any address at all!

Since mailcatcher doesn’t have mailboxes for all possible destination users, we instruct it to catch all emails and deliver them to the account mailcatcher. So mail destined for any recipient at all will be rerouted internally to the local mailcatcher account:

luser_relay = mailcatcher
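Putting the routing rules together, the delivery decision can be sketched as a small Python model (illustrative only; the function and variable names are mine, not part of the actual setup):

```python
WEBROOT = "/var/www/mailcatcher/"

def deliver(rcpt_to, auth_user=None):
    """Model mailcatcher's delivery decision: every recipient collapses
    to the local 'mailcatcher' alias (luser_relay), and an authenticated
    username selects a subdirectory under the web root."""
    local_user = "mailcatcher"  # luser_relay target, regardless of rcpt_to
    directory = WEBROOT + (auth_user + "/" if auth_user else "")
    return local_user, directory

print(deliver("anyone@any-domain.example"))
# -> ('mailcatcher', '/var/www/mailcatcher/')
print(deliver("x@y.example", auth_user="bob"))
# -> ('mailcatcher', '/var/www/mailcatcher/bob/')
```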

To be able to store all those emails, the server has to have a local alias “mailcatcher”, present in /etc/aliases:

# grep mailcatcher /etc/aliases
mailcatcher:    "|/usr/local/bin/"

So, when a mail is delivered to mailcatcher, the alias entry actually pipes the contents of the message to the script. The script runs as the nginx user, which is set by another directive:

default_privs = nginx

Once Postfix finishes processing the mail, it pipes it (through aliases) to the script. Here’s an example script:

#!/usr/bin/perl -w

# Always be safe
use strict;
use warnings;

# Timestamps
use Time::HiRes qw(time);
use POSIX qw(strftime);

# MIME handling
use MIME::Parser;
use MIME::Base64;
use MIME::Lite;

chdir "/tmp";

my $directory = "/var/www/mailcatcher/";
undef $/;    # slurp mode - read the whole mail at once
my $mail_body = <STDIN>;

my $parser  = MIME::Parser->new;
my $entity  = $parser->parse_data($mail_body);
my $header  = $entity->head;
my $from    = $header->get_all("From");
my $to      = $header->get_all("To");
my $date    = $header->get("Date");
my $subject = $header->get("Subject");
my $sender  = $header->get("Received");

# if the mail was sent with SMTP auth, extract the username and store the
# mail in a per-user subdirectory ('s' flag treats the string as a single line)
if ( $sender =~ /Authenticated sender:/s ) {
  $sender =~ s/.*\(Authenticated sender: ([a-zA-Z0-9._%+-]+)\).*/$1/gs;
  $directory .= $sender;
  $directory .= '/';
}

# create the destination directory if it doesn't exist yet
unless ( -d $directory ) { mkdir $directory; }

# strip the header from a base64-encoded mail and decode the body
if ( $mail_body =~ /Content-Transfer-Encoding: base64/ ) {
  $mail_body =~ s/(.+\n)+\n//;
  $mail_body = decode_base64($mail_body);
}

# generate a unique filename for storing the mail body
my $t = time;
my $filename = strftime "%s_%Y-%m-%d_%H-%M-%S", localtime $t;
$filename .= sprintf ".%09d", ($t - int($t)) * 1000000000;
$filename .= '.html';

# finally write our email
open(my $fh, '>', "${directory}${filename}")
  or die "Could not open file '${directory}${filename}' $!";
print $fh "From: $from <br/>";
print $fh "To: $to <br/>";
print $fh "Date: $date <br/>";
print $fh "Subject: $subject <br/><br/><br/>";
print $fh $mail_body;
close $fh;

The script parses the mail with the MIME Perl modules, fetches the relevant header fields, decodes the body and stores it in the destination directory.
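For clarity, the filename scheme the script uses (epoch seconds, a readable timestamp, and a zero-padded sub-second fraction) can be sketched in Python (the function name is illustrative):

```python
import time
from datetime import datetime

def mail_filename(t=None):
    """Mirror the Perl naming scheme: epoch seconds, a human-readable
    timestamp, and the nanosecond remainder, zero-padded to 9 digits."""
    t = time.time() if t is None else t
    stamp = datetime.fromtimestamp(t).strftime("%Y-%m-%d_%H-%M-%S")
    frac = int((t - int(t)) * 1_000_000_000)  # sub-second part as nanoseconds
    return f"{int(t)}_{stamp}.{frac:09d}.html"

print(mail_filename())  # e.g. 1448661600_2015-11-27_23-00-00.500000000.html
```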

If EHLO was used and authentication was selected, Postfix connects to Dovecot to check the credentials. This is the relevant part of the Dovecot auth configuration:

userdb {
   driver = checkpassword
   args = /usr/local/bin/
}
passdb {
   driver = checkpassword
   args = /usr/local/bin/
}

Another script is necessary – a fake Dovecot checkpassword handler:

#!/bin/bash
# Example Dovecot checkpassword script that may be used as both passdb or userdb.
# FakeAuth - will allow any user/pass combination.
# Implementation guidelines at

# The first and only argument is the path to the checkpassword-reply binary.
# It should be executed at the end if authentication succeeds.
CHECKPASSWORD_REPLY_BINARY="$1"

# Messages to stderr will end up in the mail log (prefixed with "dovecot: auth: Error:")

# User and password will be supplied on file descriptor 3.
INPUT_FD=3

# Error return codes.
ERR_TEMPFAIL=111

# Make testing this script easy. To check it just run:
#   printf '%s\x0%s\x0' <user> <password> | ./ test; echo "$?"
if [ "$CHECKPASSWORD_REPLY_BINARY" = "test" ]; then
  INPUT_FD=0
  CHECKPASSWORD_REPLY_BINARY="/bin/true"
fi

# Read input data. Password may be empty if not available (i.e. if doing credentials lookup).
read -d $'\x0' -r -u $INPUT_FD USER
read -d $'\x0' -r -u $INPUT_FD PASS

# Both mailbox and domain directories should be in lowercase on the file system.
# So let's convert the login user name to lowercase and tell Dovecot the 'user' and 'home'
# (which overrides the 'mail_home' global parameter) values should be updated.
# Of course, conversion to lowercase may be done in the Dovecot configuration as well.
export USER="`echo \"$USER\" | tr 'A-Z' 'a-z'`"
mail_name="`echo \"$USER\" | awk -F '@' '{ print $1 }'`"
domain_name="`echo \"$USER\" | awk -F '@' '{ print $2 }'`"
export HOME="/var/qmail/mailnames/$domain_name/$mail_name/"

# Dovecot calls the script with AUTHORIZED=1 environment set when performing a userdb lookup.
# The script must acknowledge this by changing the environment to AUTHORIZED=2,
# otherwise the lookup fails.
[ "$AUTHORIZED" != 1 ] || export AUTHORIZED=2

# Always return OK ;)
exec "$CHECKPASSWORD_REPLY_BINARY" || exit $ERR_TEMPFAIL
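The NUL-separated input the script reads from file descriptor 3 can be illustrated with a short Python sketch (the function name and sample credentials are made up):

```python
def parse_checkpassword_input(blob):
    """Split the NUL-separated 'user\\0pass\\0' blob that Dovecot
    writes to file descriptor 3 (modelled here as a bytes value)."""
    user, password, _rest = blob.split(b"\x00", 2)
    return user.decode(), password.decode()

user, password = parse_checkpassword_input(b"Alice@Example.COM\x00hunter2\x00")
print(user.lower())  # the script lowercases the login name: alice@example.com
```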

And that’s it!

PS. To be able to run this on RedHat derivatives, either turn SELinux to permissive mode or load custom policy. I created two SELinux modules to cover me on this one.


# Allows postfix (running under type context postfix_local_t)
# to write to web directories (type httpd_sys_content_t).
module mailcatcher 1.2;

require {
	type httpd_sys_content_t;
	type postfix_local_t;
	class dir { create write search getattr add_name };
	class file { write ioctl create open getattr };
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir { create write search getattr add_name };
allow postfix_local_t httpd_sys_content_t:file { write ioctl create open getattr };

Second module:

# Allows dovecot (running under type context dovecot_auth_t)
# to read and exec fakeauth script (type shell_exec_t).
module fakeauth 1.1;

require {
	type dovecot_auth_t;
	type shell_exec_t;
	class file { read open execute };
}

#============= dovecot_auth_t ==============
allow dovecot_auth_t shell_exec_t:file { read open execute };

PPS. Don’t forget to create the destination directories for your mails/files:

# mkdir /var/www/mailcatcher
# chown root:mailcatcher /var/www/mailcatcher
# chmod 0775 /var/www/mailcatcher

and to install nginx with the autoindex module – so the HTML files are browsable over HTTP/HTTPS 😉


Deploying custom SELinux modules

March 11, 2015

Asgård’s always been my home
But I’m of different blood
I will overthrow the throne
Deceiver! Deceiver of the gods!
(Amon Amarth – Deceiver of the Gods)

If you decide to run an OS with SELinux enabled, sooner or later you’ll bump into a roadblock. The question is how to write your own SELinux rules and deploy them. For managing the rules, I’m using the Puppet module spiette/selinux.

Let’s review a practical example from one of my hosts. Let’s say you want to pipe all your emails to a script which will save them to a certain directory as HTML files. /etc/aliases has a line like this one:

mailcatcher:    "|/usr/local/bin/"

After sending an email to the mailcatcher user, this is what we can see in the maillog:

Mar  3 09:59:58 mailcatcher postfix/local[16030]: 589207055A: 
   to=<mailcatcher@devel.localdomain>, orig_to=<>,
   relay=local, delay=0.1, delays=0/0/0/0.1, dsn=5.3.0,
   (Command died with status 13: "/usr/local/bin/".
   Command output: Could not open file
   Permission denied at /usr/local/bin/ line 50. )

After checking the directory permissions, it’s obvious there’s something else blocking access, and if you’re running a RedHat derivative (CentOS/Scientific/Fedora) that something is usually SELinux. To confirm, take a look at /var/log/audit/audit.log:

type=SYSCALL msg=audit(1425644753.803:20374): arch=c000003e
   syscall=83 success=no exit=-13 a0=16c6580 a1=1ff a2=0
   a3=786966682d726865 items=0 ppid=26763 pid=26764 auid=1505 uid=498
   gid=498 euid=498 suid=498 fsuid=498 egid=498 sgid=498 fsgid=498
   tty=(none) ses=1355 comm="" exe="/usr/bin/perl"
   subj=unconfined_u:system_r:postfix_local_t:s0 key=(null)
type=AVC msg=audit(1425644759.713:20375): avc:  denied  { create }
  for  pid=26777
  comm="" name="example-name"
  tcontext=unconfined_u:object_r:httpd_sys_content_t:s0 tclass=dir

Now we have the reason why our script can’t write the file. The next step is to pipe this log to audit2allow -m mailcatcher, which generates the following output:

module mailcatcher 1.0;

require {
    type httpd_sys_content_t;
    type postfix_local_t;
    class dir create;
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir create;

The next step is to compile this module, load it into SELinux, and check whether our script now works as designed.

To compile it, save it in a file called mailcatcher.te, and run the following commands:

# checkmodule -M -m -o mailcatcher.mod mailcatcher.te
# semodule_package -o mailcatcher.pp -m mailcatcher.mod
# semodule -r mailcatcher
# semodule -i mailcatcher.pp

(The remove step takes the module name, not the .pp file, and is only needed when replacing an already-loaded version of the module.)

After running the last command, recheck whether your script works. If it hits permission problems again, just repeat the process until you have a module that covers everything. In my case, the final module looks like this:

# Allows postfix (running under type context postfix_local_t)
# to write to web directories (type httpd_sys_content_t).
module mailcatcher 1.1;

require {
    type httpd_sys_content_t;
    type postfix_local_t;
    class dir { write search getattr add_name };
    class file { write ioctl create open getattr };
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir { write search getattr add_name };
allow postfix_local_t httpd_sys_content_t:file { write ioctl create open getattr };

Now, to automate this via Puppet, save the file under yourmodule/files/selinux/mailcatcher.te. You can then use it in (any) manifest with the following code snippet:

include ::selinux
::selinux::module { 'mailcatcher':
  ensure => present,
  source => 'puppet:///modules/yourmodule/selinux/',
}
Puppet will transfer .te file to destination host, compile it and load it for you.


rsyslog filtering iptables from messages

March 25, 2013

Where did we come from?
Why are we here?
Where do we go when we die?

(Dream Theater – Spirit carries on)

I face the problem of iptables DoS-ing my log files every now and then. When I can, I use the ULOG target and leave iptables logging to ulogd. But sometimes ulogd is not an option – for example on shared OpenVZ hosting. So today I decided to harness the power of rsyslogd.

Rsyslogd is the new standard logging daemon on major Linux distributions, replacing the old sys(k)logd. It has a powerful regex and scripting engine built in, which can be used for many cool things.

So, to solve the problem of iptables logs, let’s first mark them somehow, so that we can recognize them later. These are example rules that generate the logs:

-A INPUT   -j LOG --log-level info --log-prefix "iptables INPUT   DROP: "
-A FORWARD -j LOG --log-level info --log-prefix "iptables FORWARD DROP: "

Of course, this is written in ‘iptables-save/restore’ format.

It’s obvious we can recognize log entry by the word ‘iptables’.

Now, lets add the following to rsyslog.conf:

:msg, startswith, "iptables"				/var/log/iptables
& ~

Note that these two lines have to be before the ‘/var/log/messages’ entry to take effect.

The first line directs rsyslog to send all messages that start with “iptables” to /var/log/iptables, and the second line discards those messages. So, that magical discard is what cleans out iptables noise from all subsequent logging files in rsyslog.conf.
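For what it’s worth, on newer rsyslog versions (7 and later) the same filter can be written in RainerScript; a sketch of the equivalent:

```
if $msg startswith 'iptables' then {
    action(type="omfile" file="/var/log/iptables")
    stop
}
```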

Save rsyslog.conf and restart the daemon, and that’s it!

Finding UID that is generating traffic

January 8, 2013

And though our hearts are broken
We have to wipe the tears away
In vain they did not suffer
Ten Thousand Strong will seize the day

(Iced Earth – Ten Thousand Strong)

Did you ever have one of those days when you notice strange traffic in your firewall logs and don’t know who is responsible for it? Is your machine compromised, or is it legitimate traffic? Or maybe your server ends up on SPAM blacklists every now and then, although mail.log is as clean as your car? Well, the first step in this case is to find out which UID is responsible for the suspicious traffic.

Iptables on Linux offers the owner match, which works on the OUTPUT chain only and matches characteristics of the packet creator. Of course, this works only for locally-generated packets. In this example we’ll try to match the UID of the user that’s sending strange traffic. First of all, let’s enumerate the UIDs of all running processes:

# ps -ef n | grep -v UID | sed 's/^\s*//' | cut -d' ' -f1 | sort | uniq

The next step is to generate iptables rules in the OUTPUT chain to log outgoing connections. Let’s suppose we want to focus on packets going to destination port SMTP (TCP/25), because we suspect someone is sending mail directly, bypassing the local MTA. We can achieve this by running:

# for i in \
`ps -ef n | grep -v UID | sed 's/^\s*//' | cut -d' ' -f1 | sort | uniq`; \
do \
  iptables -A OUTPUT \
    -m owner --uid-owner $i \
    -p tcp --dport 25 \
    -j ULOG --ulog-prefix "GENERATED BY UID $i: "; \
done
With iptables populated, we can relax, lie back in a comfortable chair and tail the log:

# tail -f /var/log/ulog/ulog/syslogemu | grep "GENERATED BY UID"
Dec  6 17:07:10 hostname GENERATED BY UID 502:  IN= OUT=eth0 MAC=
  SRC=local_ip DST= LEN=60 TOS=00 PREC=0x00 TTL=64 ID=1285
  DF PROTO=TCP SPT=54558 DPT=25 SEQ=1228047343 ACK=0 WINDOW=5840 SYN URGP=0 
Dec  6 17:07:10 hostname GENERATED BY UID 502:  IN= OUT=eth0 MAC=
  SRC=local_ip DST= LEN=60 TOS=00 PREC=0x00 TTL=64 ID=21290
  DF PROTO=TCP SPT=52895 DPT=25 SEQ=2552747462 ACK=0 WINDOW=5840 SYN URGP=0  
Dec  6 17:07:10 hostname GENERATED BY UID 502:  IN= OUT=eth0 MAC=
  SRC=local_ip DST= LEN=60 TOS=00 PREC=0x00 TTL=64 ID=46380 CE
  DF PROTO=TCP SPT=46744 DPT=25 SEQ=314520542 ACK=0 WINDOW=5840 SYN URGP=0 
Dec  6 17:07:10 hostname GENERATED BY UID 502:  IN= OUT=eth0 MAC=
  SRC=local_ip DST= LEN=52 TOS=00 PREC=0x00 TTL=64 ID=46381 CE
  DF PROTO=TCP SPT=46744 DPT=25 SEQ=314520543 ACK=814882206 WINDOW=46 ACK URGP=0 
Dec  6 17:07:10 hostname GENERATED BY UID 502:  IN= OUT=eth0 MAC=
  SRC=local_ip DST= LEN=60 TOS=00 PREC=0x00 TTL=64 ID=57227 CE
  DF PROTO=TCP SPT=54942 DPT=25 SEQ=2517129359 ACK=0 WINDOW=5840 SYN URGP=0 
Dec  6 17:07:10 hostname GENERATED BY UID 502:  IN= OUT=eth0 MAC=
  SRC=local_ip DST= LEN=52 TOS=00 PREC=0x00 TTL=64 ID=46382 CE
  DF PROTO=TCP SPT=46744 DPT=25 SEQ=314520543 ACK=814882251 WINDOW=46 ACK URGP=0 
Dec  6 17:07:10 hostname GENERATED BY UID 502:  IN= OUT=eth0 MAC=
  SRC=local_ip DST= LEN=58 TOS=00 PREC=0x00 TTL=64 ID=46383 CE
  DF PROTO=TCP SPT=46744 DPT=25 SEQ=314520543 ACK=814882251 WINDOW=46 ACK PSH URGP=0  
Dec  6 17:07:10 hostname GENERATED BY UID 502:  IN= OUT=eth0 MAC=
  SRC=local_ip DST= LEN=60 TOS=00 PREC=0x00 TTL=64 ID=54361 CE
  DF PROTO=TCP SPT=44414 DPT=25 SEQ=3003601745 ACK=0 WINDOW=5840 SYN URGP=0

OK, so we’ve found out the culprit!

Note that we only monitor UIDs that have running processes. Whether to log all existing UIDs on the local system is out of the scope of this article, and depends on each particular case.
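If you did want to cover every UID known to the system, not just those with running processes, the password database can be enumerated instead (a quick Python sketch; on a real host you would feed these UIDs into the same iptables loop):

```python
import pwd

# Every UID in the system password database, including accounts
# with no running processes (unlike the ps-based enumeration above).
uids = sorted({entry.pw_uid for entry in pwd.getpwall()})
print(uids)
```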

Hope you guys enjoyed it and see you guys next time (by my favorite e-sports commentator – Husky) 😉


Cleaning OSSEC MySQL database

November 21, 2012

If I had wings, would I be forgiving?
If I had horns
Would there be flames to shy my smile?

(Dark Tranquillity – Punish my heaven)

OSSEC is a good intrusion detection tool that can help pinpoint not only security breaches but general software anomalies too. It can be installed as a standalone or as a server-client tool. The server can write data about alerts to a database (usually MySQL). But on a large network one has to be really careful about using a database as an alert datastore. Things can get out of control quite quickly.
Here is what the current setup looks like on one of the installations that I help manage:

# /var/ossec/bin/agent_control -l | grep -c "^\s*ID"
149

So, this means that this server controls 149 agents. The average on this installation is about 300 alerts per day, mainly due to the lack of spare time available for optimizing custom rules. Nevertheless, all of the alerts go to the MySQL database too. So, by doing simple arithmetic, we should be at around 55’000 database entries? Well, not quite, because OSSEC also logs events for which no alert will be sent… So the number is huge – take a look:

mysql> select count(*) from alert;
+-----------+
| count(*)  |
+-----------+
| 123008768 |
+-----------+

So this means that about 675’000 rows are inserted into MySQL per day! And all of that goes into MyISAM tables, per the default OSSEC schema! The database is around 35GB, so it was time to do something about it. We decided to shrink the database to 32 days and purge the rest. Now a little shell magic comes into play. I created a script that is called by a daily cron job. This is what it looks like:

#!/bin/bash

DELETE2TIME=`/bin/date -d "32 days ago" "+%Y-%m-%d %H:%M:%S"`

# extract a configuration value from ossec.conf
getvaluefromossec() {
   /bin/grep "<$1>" /var/ossec/etc/ossec.conf | /bin/sed "s|^.*<$1>\(.*\)</$1>|\1|g"
}

MYSQLHOST=`getvaluefromossec "hostname"`
MYSQLDB=`getvaluefromossec "database"`
MYSQLUSER=`getvaluefromossec "username"`
MYSQLPASS=`getvaluefromossec "password"`

echo "
SET @delete2time=\"$DELETE2TIME\";
DELETE FROM alert WHERE timestamp < UNIX_TIMESTAMP(@delete2time);
DELETE FROM data WHERE timestamp < @delete2time;
" | mysql -h "$MYSQLHOST" -u "$MYSQLUSER" -p"$MYSQLPASS" "$MYSQLDB"

You can simply put this script in /etc/cron.daily and it will run every day, purging one day of old data at a time. You can modify the number in the date command invocation depending on how far back you want to keep your data. In this example, data is kept for 32 days.

Note: the first run may take a long time if you have a large database and want to purge a lot of rows 😉 In my case, deleting a month of history entries (22.5 million rows in two tables) took 45 minutes!
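A quick sanity check of the arithmetic, using the figures quoted above (rough numbers; the data table grows alongside the alert table):

```python
rows_total   = 123_008_768   # current size of the alert table
rows_per_day = 675_000       # observed insert rate
retention    = 32            # days of history we decided to keep

days_of_history = rows_total / rows_per_day
rows_kept = rows_per_day * retention

print(round(days_of_history))  # ~182 days of alerts currently stored
print(rows_kept)               # 21600000 rows retained at a 32-day window
```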

Note2: I almost forgot… You should run OPTIMIZE TABLE after big deletes to free the disk space occupied by MyISAM tables. This can be done in the same cron.daily script, by adding these two lines:

echo "OPTIMIZE TABLE alert;" | mysql -h "$MYSQLHOST" -u "$MYSQLUSER" -p"$MYSQLPASS" "$MYSQLDB"
echo "OPTIMIZE TABLE data;" | mysql -h "$MYSQLHOST" -u "$MYSQLUSER" -p"$MYSQLPASS" "$MYSQLDB"

although I would strongly recommend organizing these kinds of tasks in cooperation with a DBA.


Ulogd 2.x on CentOS 6

November 8, 2012

Windcolour – second sight
A touch of silence and the violence of dark
Illusion span – the aroma of time
Shadowlife and the scent of nothingness

(Dark Tranquillity – Insanity’s Crescendo)

Ulogd is a small daemon capable of logging iptables output from ULOG (or other targets) to various backends. One can log into MySQL, PostgreSQL, SQLite, or a plain old textual log file. I used ulogd massively on CentOS 5 servers, so I really missed a CentOS 6 version. I noticed ulogd 2.0.0-beta4 being available in Fedora 17, so the opportunity came to backport it. RedHat Enterprise Linux 6 is based on Fedora 12, and luckily things haven’t gone out of reach quite yet, so backporting from the latest Fedora to RHEL/CentOS 6 is still quite easy.

Binary and source packages are available in the Srce RPM repository for Enterprise Linux. You can add SRCE to your yum repository list simply by running the following set of commands:

# /usr/bin/wget
# /bin/rpm --import RPM-GPG-KEY-SRCE
# /bin/rm -f RPM-GPG-KEY-SRCE
# /bin/rpm -Uvh

After this, things are quite easy – just use yum and install the software:

# yum install ulogd


Snort: too many open files

October 19, 2012

Creation of insane rule
All we hear:
Desperate cry
(Sepultura – Desperate Cry)

I really hate those unproductive hours (hopefully not days) when one needs to debug some strange problem whose solution won’t be reusable. Hm? Deja-vu? 🙂 Well, it hit me again. And this time it hit hard.

I was trying to write some manifests to control our local Snort installation through Puppet. We use VRT and emerging rules, fetched via pulledpork. So puppetizing Snort should be a breeze. And it was… Everything went extremely well: I wrote two classes, snort and snort::pulledpork (along with a standard params class). Data is stored in Hiera; /etc/sysconfig/snort, /etc/snort/snort.conf and all of the pulledpork configs are dynamically generated from that data. The world looked really nice. And I was a happy devop 😉

But the problems started later – when I actually tried to start the snort service. The service was just failing miserably, without any significant output. I tried ‘bash -x’ on the init script and running the command manually, but I was getting nowhere. Then I turned to syslog, where I saw a bunch of Snort startup messages and then, all of a sudden:

rsyslogd-2177: imuxsock begins to drop messages from pid 17207 due to rate-limiting

Well, temporary fix for that issue was:

# echo '$SystemLogRateLimitInterval 0' > /etc/rsyslog.d/test.conf

And of course, you need to restart rsyslogd after this one… Pretty strange that the default syslog in CentOS 6 is so touchy about being filled up too fast… I liked the old syslogd behaviour more…

Anyway, back to the main issue. After “fixing” rsyslogd I finally had something to work with:

FATAL ERROR: /etc/snort/rules/VRT-app-detect.rules(0)
 Unable to open rules file "/etc/snort/rules/VRT-app-detect.rules":
 Too many open files.#012

Now we’re getting there! It’s a piece of cake to solve:

# echo "ulimit -n 10240" >> /etc/sysconfig/snortd

Although it didn’t work… So off I went on a lonely path of useless debugging. Why doesn’t this work? Maybe it’s something with the system? Raising fs.file-max to absurd levels didn’t help… Maybe it has to do with the account snort runs as? Trying to utilize limits.conf didn’t work either. Now I was baffled. One thing I did notice was that after raising the ulimit on open files, snort took a lot longer to start – or should I say, to fail… Then I decided to use strace. The file descriptor number in the “read” system calls just kept rising and rising until it hit the maximum. The weird thing was that it always broke on the exact same file… That dragged me away from the real problem. Since nothing had helped so far, I decided to dismantle the snort configuration and rules. After zeroing out one config file, snort started! Now we’re talking. I uncommented it again line by line. After 3/4 of the lines, another error… And now I finally saw the culprit!!!

include $RULE_PATH/VRT.conf

Utter facepalm… I’ll leave you to guess the name of the file that contained that line…
