Archive

Archive for the ‘RedHat’ Category

Designing SMTP honeypot – mailcatcher

November 28, 2015

There is no way
I have no voice
I have no say
I have no choice
I feel no pain or sympathy
It’s just cold blood that runs through me

(Judas Priest – Cold Blooded)

There are two use cases for having an SMTP honeypot: you develop applications that interact with users via e-mail, or you develop anti-spam protection. In both cases, a robust testing infrastructure is a must. I came across one distinct use case, so I had to think outside the box and set up a catch-all SMTP server. I named it ‘mailcatcher’.

Mailcatcher behaves like an open relay (meaning it allows anyone on the internet to send emails through it) – but instead of relaying messages, it just stores them locally as HTML documents. Mails are not forwarded to the actual recipients. Mailcatcher not only accepts mail from all sources to all destinations, it also accepts all login credentials (any username/password combination). Mails are stored in the web root directory by default, but if login credentials are presented during the SMTP session, the username is used as the destination subdirectory.

The main building blocks of mailcatcher are Postfix, Dovecot and a couple of custom scripts.

First, the client program connects to Postfix on 25/TCP. Postfix then waits for the client introduction (HELO/EHLO). If the client requests authentication, the following Postfix parameters forward the credentials to Dovecot for verification:

# Postfix auth through Dovecot
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_authenticated_header=yes

If authentication is successful – which, by design of our Dovecot setup, it always is, for any username/password pair – or if authentication wasn’t requested in the first place, Postfix continues with the SMTP protocol.

Postfix then receives the From field containing <sender>@<sender_domain> and the To field containing <destination_user>@<destination_domain>. It first checks whether this specific Postfix instance is the final destination for mail going to “destination_domain”. We trick Postfix into thinking it is the final destination for any domain with this config directive and the corresponding file:

mydestination = pcre:/etc/postfix/mydestination

And contents of PCRE file:

# cat /etc/postfix/mydestination
/^.*/    OK

Postfix is usually set up to accept only certain domains, but in this case it accepts every destination domain. So a developer can even send an email to @pornhub.com!
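You can sanity-check the catch-all matching without sending any mail – postmap can query the PCRE map directly (the domain below is arbitrary):

# postmap -q "whatever.example.com" pcre:/etc/postfix/mydestination
OK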

Since mailcatcher doesn’t have mailboxes for all possible destination users, we instruct it to catch all emails and deliver them to the local account mailcatcher. So, for example, mail destined for <john@doe.com> will be rerouted internally to <mailcatcher@doe.com>:

luser_relay = mailcatcher

To be able to store all those emails, the server has to have a local alias “mailcatcher” present in /etc/aliases:

# grep mailcatcher /etc/aliases
mailcatcher:    "|/usr/local/bin/decode.pl"

So, when a mail is forwarded to mailcatcher, the aliases entry actually pipes the contents of the message to the decode.pl script. The script runs as the nginx user, which is set up by another main.cf directive:

default_privs = nginx
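After touching /etc/aliases and main.cf, don’t forget to rebuild the alias database and reload Postfix – the usual drill:

# newaliases
# postfix reload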

Once Postfix finishes processing the mail, it pipes it (through aliases) to the decode.pl script. Here’s an example script:

#!/usr/bin/perl -w

chdir "/tmp";

# Always be safe
use strict;
use warnings;

# Timestamps
use Time::HiRes qw(time);
use POSIX qw(strftime);

# Use the module
use MIME::Parser;
use MIME::Base64;
use MIME::Lite;

my $directory = "/var/www/mailcatcher/";
undef $/;                   # slurp mode: read the whole message at once
my $mail_body = <STDIN>;    # mail body piped in by Postfix via the alias
$/ = "\n";

my $parser  = MIME::Parser->new;
my $entity  = $parser->parse_data($mail_body);
my $header  = $entity->head;
my $from    = $header->get_all("From");
my $to      = $header->get_all("To");
my $date    = $header->get("Date");
my $subject = $header->get("Subject");
my $sender  = $header->get("Received");

# if the client authenticated, extract the username from the Received header
# ('s' flag treats the multi-line header as a single string) and use it as subdirectory
if ( $sender =~ /Authenticated sender:/s ) {
  $sender =~ s/.*\(Authenticated sender: ([a-zA-Z0-9._%+-]+)\).*/$1/gs;
  $directory .= $sender;
  $directory .= '/';
}

unless ( -d $directory ) { mkdir $directory; }

# if the body is base64 encoded, strip the header block and decode the rest
if ( $mail_body =~ /Content-Transfer-Encoding: base64/ ) {
  $mail_body =~ s/(.+\n)+\n//;            # drop everything up to the first empty line (the headers)
  $mail_body = decode_base64($mail_body);
}

# generate filename for storing mail body
my $t = time;
my $filename = strftime "%s_%Y-%m-%d_%H-%M-%S", localtime $t;
$filename .= sprintf ".%09d", ($t-int($t))*1000000000;
$filename .= '.html';

# finally write our email
open(my $fh, '>', "${directory}${filename}") or die "Could not open file '${directory}${filename}' $!";
    # write to file
    print $fh "From: $from <br/>";
    print $fh "To: $to <br/>";
    print $fh "Date: $date <br/>";
    print $fh "Subject: $subject <br/><br/><br/>";
    print $fh $mail_body;
close $fh;
# EOF

Script parses the mail with MIME perl modules, fetches relevant information, decodes the mail and stores it in destination directory.
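To test the script in isolation, you can feed it a raw message on stdin as the nginx user (sample.eml here is just any mail saved to a file – a hypothetical example):

# sudo -u nginx /usr/local/bin/decode.pl < sample.eml
# ls /var/www/mailcatcher/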

If EHLO was used and authentication is selected, Postfix connects to Dovecot to check the credentials. This is the relevant part of the Dovecot auth configuration:

userdb {
   driver = checkpassword
   args = /usr/local/bin/fakesasl.sh
}
passdb {
   driver = checkpassword
   args = /usr/local/bin/fakesasl.sh
}
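For completeness: the smtpd_sasl_path = private/auth setting on the Postfix side expects Dovecot to expose its auth socket inside the Postfix queue directory. On Dovecot 2.x that’s typically done with a snippet like this (adjust ownership and paths to your install):

service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
}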

Another script is necessary, fakesasl.sh:

#!/bin/bash
# Example Dovecot checkpassword script that may be used as both passdb or userdb.
# FakeAuth: will allow any user/pass combination.
# Implementation guidelines at http://wiki2.dovecot.org/AuthDatabase/CheckPassword
# The first and only argument is the path to the checkpassword-reply binary.
# It should be executed at the end if authentication succeeds.
CHECKPASSWORD_REPLY_BINARY="$1"

# Messages to stderr will end up in the mail log (prefixed with "dovecot: auth: Error:")
LOG=/dev/stderr

# User and password will be supplied on file descriptor 3.
INPUT_FD=3

# Error return codes.
ERR_PERMFAIL=1
ERR_NOUSER=3
ERR_TEMPFAIL=111

# Make testing this script easy. To check it just run:
#   printf '%s\x0%s\x0' <user> <password> | ./checkpassword.sh test; echo "$?"
if [ "$CHECKPASSWORD_REPLY_BINARY" = "test" ]; then
CHECKPASSWORD_REPLY_BINARY=/bin/true
INPUT_FD=0
fi

# Read input data. Password may be empty if not available (i.e. if doing credentials lookup).
read -d $'\x0' -r -u $INPUT_FD USER
read -d $'\x0' -r -u $INPUT_FD PASS

# Both mailbox and domain directories should be in lowercase on file system.
# So let's convert login user name to lowercase and tell Dovecot 'user' and 'home' (which overrides
# 'mail_home' global parameter) values should be updated.
# Of course, conversion to lowercase may be done in Dovecot configuration as well.
export USER="`echo \"$USER\" | tr 'A-Z' 'a-z'`"
mail_name="`echo \"$USER\" | awk -F '@' '{ print $1 }'`"
domain_name="`echo \"$USER\" | awk -F '@' '{ print $2 }'`"
export HOME="/var/qmail/mailnames/$domain_name/$mail_name/"

# Dovecot calls the script with AUTHORIZED=1 environment set when performing a userdb lookup.
# The script must acknowledge this by changing the environment to AUTHORIZED=2,
# otherwise the lookup fails.
[ "$AUTHORIZED" != 1 ] || export AUTHORIZED=2

# Always return OK ;)
exec $CHECKPASSWORD_REPLY_BINARY

And that’s it!
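If you want to sanity-check the whole chain end to end, something like swaks does the job nicely – any credentials should be accepted (hostname, addresses and credentials below are made up):

# swaks --server mailcatcher.example.com --from test@example.com \
    --to john@doe.com --auth LOGIN --auth-user bob --auth-password whatever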

PS. To be able to run this on RedHat derivatives you’ll also have to deal with SELinux – either switch it to permissive mode, or load the two SELinux modules I created to cover this setup.

First:

# Allows postfix (running under type context postfix_local_t)
# to write to web directories (type httpd_sys_content_t).
module mailcatcher 1.2;

require {
	type httpd_sys_content_t;
	type postfix_local_t;
	class dir { create write search getattr add_name };
	class file { write ioctl create open getattr };
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir { create write search getattr add_name };
allow postfix_local_t httpd_sys_content_t:file { write ioctl create open getattr };

Second module:

# Allows dovecot (running under type context dovecot_auth_t)
# to read and exec fakeauth script (type shell_exec_t).
module fakeauth 1.1;

require {
	type dovecot_auth_t;
	type shell_exec_t;
	class file { read open execute };
}

#============= dovecot_auth_t ==============
allow dovecot_auth_t shell_exec_t:file { read open execute };
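Both modules are compiled and loaded the same way as described in the ‘Deploying custom SELinux modules’ post below, roughly:

# checkmodule -M -m -o mailcatcher.mod mailcatcher.te
# semodule_package -o mailcatcher.pp -m mailcatcher.mod
# semodule -i mailcatcher.pp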

PS. Don’t forget to create destination directories for your mails/files:

# mkdir /var/www/mailcatcher
# chown root:mailcatcher /var/www/mailcatcher
# chmod 0775 /var/www/mailcatcher

and to install nginx with the autoindex module enabled – so the HTML files are browsable over http/https 😉
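A minimal nginx server block for that could look something like this (server_name is hypothetical):

server {
    listen 80;
    server_name mailcatcher.example.com;
    root /var/www/mailcatcher;

    location / {
        autoindex on;
    }
}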

Deploying custom SELinux modules

March 11, 2015

Asgård’s always been my home
But I’m of different blood
I will overthrow the throne
Deceiver! Deceiver of the gods!
(Amon Amarth – Deceiver of the Gods)

If you decide to run an OS with SELinux enabled, sooner or later you’ll bump into a roadblock. So, the question is how to write your own SELinux rules and deploy them. For managing the rules, I’m using the Puppet module spiette/selinux.

Let’s review a practical example from one of my hosts. Let’s say you want to pipe all your emails to a script which will save them to a certain directory as HTML files. /etc/aliases has a line like this one:

mailcatcher:    "|/usr/local/bin/decode.pl"

After sending email to mailcatcher user, this is what we can see in maillog:

Mar  3 09:59:58 mailcatcher postfix/local[16030]: 589207055A: 
   to=<mailcatcher@devel.localdomain>, orig_to=<jsosic@example.com>,
   relay=local, delay=0.1, delays=0/0/0/0.1, dsn=5.3.0,
   status=bounced
   (Command died with status 13: "/usr/local/bin/decode.pl".
   Command output: Could not open file
   '/var/www/mailcatcher/1425598_2015-03-03_09-59-58.461450099.html'
   Permission denied at /usr/local/bin/decode.pl line 50. )

After checking the directory permissions, it’s obvious something else is blocking access, and if you’re running a RedHat derivative (CentOS/Scientific/Fedora) that something is usually SELinux. To confirm it, take a look at /var/log/audit/audit.log:

type=SYSCALL msg=audit(1425644753.803:20374): arch=c000003e
   syscall=83 success=no exit=-13 a0=16c6580 a1=1ff a2=0
   a3=786966682d726865 items=0 ppid=26763 pid=26764 auid=1505 uid=498
   gid=498 euid=498 suid=498 fsuid=498 egid=498 sgid=498 fsgid=498
   tty=(none) ses=1355 comm="decode.pl" exe="/usr/bin/perl"
   subj=unconfined_u:system_r:postfix_local_t:s0 key=(null)
type=AVC msg=audit(1425644759.713:20375): avc:  denied  { create }
  for  pid=26777
  comm="decode.pl" name="example-name"
  scontext=unconfined_u:system_r:postfix_local_t:s0
  tcontext=unconfined_u:object_r:httpd_sys_content_t:s0 tclass=dir

Now we have the reason why our script can’t write the file. The next step is to pipe this log to audit2allow -m mailcatcher, which will generate the following output:

module mailcatcher 1.0;

require {
    type httpd_sys_content_t;
    type postfix_local_t;
    class dir create;
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir create;
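For reference, the pipeline itself looks roughly like this (filter the log however matches your denials):

# grep decode.pl /var/log/audit/audit.log | audit2allow -m mailcatcher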

The next step is to compile this module, load it into SELinux, and check whether our script now works as designed.

To compile it, save it in a file called mailcatcher.te, and run the following commands:

# checkmodule -M -m -o mailcatcher.mod mailcatcher.te
# semodule_package -o mailcatcher.pp -m mailcatcher.mod
# semodule -r mailcatcher
# semodule -i mailcatcher.pp
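You can quickly verify that the module actually got loaded:

# semodule -l | grep mailcatcher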

After loading the module, you can recheck whether your script works. If it hits permission problems again, just repeat the process until you create a module that works. In my case, the final module looks like this:

# Allows postfix (running under type context postfix_local_t)
# to write to web directories (type httpd_sys_content_t).
module mailcatcher 1.1;

require {
    type httpd_sys_content_t;
    type postfix_local_t;
    class dir { write search getattr add_name };
    class file { write ioctl create open getattr };
}

#============= postfix_local_t ==============
allow postfix_local_t httpd_sys_content_t:dir { write search getattr add_name };
allow postfix_local_t httpd_sys_content_t:file { write ioctl create open getattr };

Now, to automate this via Puppet, save the file as yourmodule/files/selinux/mailcatcher.te. You can then use it in any manifest with the following code snippet:

include ::selinux
::selinux::module { 'mailcatcher':
  ensure => present,
  source => 'puppet:///modules/yourmodule/selinux/',
}

Puppet will transfer the .te file to the destination host, compile it and load it for you.

Categories: Linux, RedHat, Security

htop atop tmux on XenServer 6.x

December 13, 2014

Save me from myself if you ever really cared
Save me from myself, tell me you’re not scared
(Damage Plan – Save me)

If you want to run diagnostic commands on XenServer, you’re pretty much limited in your options. Since XenServer is based on the RHEL 5.x series, with mostly 32bit libraries installed, we can use packages for the RHEL/CentOS 5.x i386 arch. The three tools I use most often for initial screening of servers are htop, atop and tmux. Doing something like ‘xe vm-export‘ and backing up VMs to an external USB drive can be like reading a book in a dark room. There’s no progress bar, there’s no info – nothing. Calculating the speed looks something like this:

# ls -al pbx.xva; sleep 60; ls -al pbx.xva 
-rw------- 1 root root 13799469056 Dec 13 23:00 pbx.xva 
-rw------- 1 root root 14982000640 Dec 13 23:01 pbx.xva

And after that: (14982000640 – 13799469056 ) / 60 / 1024 / 1024 = 18.79 MB/s. 🙂

Attaching/detaching or sharing sessions is a wet dream… the only way to do it is to run tmux on the machine from which you SSH into the XenServer.

So, after I got really annoyed, I tried to install tmux. Initial tries with the 64bit package for CentOS 6 complained about missing x86_64 libraries, so I switched to 32bit packages. That didn’t work either, complaining about a too-new version of rpmlib 🙂 So the solution was obvious – use 32bit EPEL 5 packages! These are the packages I use:

# rpm -qa | egrep '(top|tmux)'
tmux-1.4-3.el5.1
atop-1.27-2.el5
htop-0.8.3-1.el5
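Installing them boils down to grabbing the i386 RPMs from the EPEL 5 repository and feeding them straight to rpm – something along these lines (filenames match the versions above, the download location may vary):

# rpm -ivh tmux-1.4-3.el5.1.i386.rpm atop-1.27-2.el5.i386.rpm htop-0.8.3-1.el5.i386.rpm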

Now we’re talking business!

Git over HTTP on CentOS 6

May 23, 2014

Total war is here
Face it without fear
Age of sword, age of spear
Fight for honor, glory, death in fire!
(Amon Amarth – Death in Fire)

There are already a bunch of posts about setting up Git over HTTP(S), but this one is specifically targeted at setting it up under CentOS as cleanly as possible. There were a bunch of errors I hit along the way, so I will try to explain the process step by step.

First, you have to install Apache and Git:

# yum -y install httpd git
# /etc/init.d/httpd start

Now, let’s create a directory for git repositories and create our first repo:

# mkdir /var/www/gitrepos
# cd /var/www/gitrepos
# mkdir repo01 && cd repo01
# git --bare init
# git update-server-info
# cd /var/www
# chown -R apache: gitrepos

We are using ‘git --bare’ so that the online repository doesn’t contain a working tree, only git metadata. That enables users to push directly to the online repository; otherwise they wouldn’t be able to push their changes. This was the first mistake I made: I created the repo with ‘git init’ and was not able to push later. After the repo is set up and chowned, let’s set up Apache. This is my vhost configuration:

#
# vhost for git repositories (http)
#
<VirtualHost *:80>
    ServerName     git
    DocumentRoot    /var/www/gitrepos

    <Location />
        DAV on

        # general auth settings
        AuthName "Git login:"
        AuthType Basic

        # file authentication
        AuthUserFile  /var/www/htpasswd
        AuthGroupFile /var/www/htgroup

        <LimitExcept PROPFIND>
            Require valid-user
        </LimitExcept> 
    </Location>

    <Location /repo01>
        <LimitExcept PROPFIND>
            Require group adminlinux
        </LimitExcept> 
    </Location>

    LogLevel warn
    ErrorLog  /var/log/httpd/git_error.log
    CustomLog /var/log/httpd/git_access.log combined
</VirtualHost>

If you wonder why the PROPFIND method is treated differently from all the other HTTP/DAV methods – it’s because the webserver runs PROPFIND without user authentication, so if it’s not excluded from the limit it will get rejected, and you will see a message similar to this one when trying to push from the client:

error: Cannot access URL https://git/puppet-adriatic/, return code 22
fatal: git-http-push failed

We can fill up the htpasswd file with – taddaaa – the htpasswd command (note that -c creates the file, so use it only for the first user) 🙂

# htpasswd -c /var/www/htpasswd user1
# htpasswd /var/www/htpasswd user2
# htpasswd /var/www/htpasswd user3

And htgroup with:

# echo "adminlinux: user1 user2" >> /var/www/htgroup

Now, on the client side, do a:

% git clone http://user1@git/repo01

And that’s it! After your first change/commit, be careful when you push those changes for the first time. This is the command I used for the first push:

% git push --set-upstream origin master

You may also encounter a 22/502 error on a MOVE command, like:

MOVE 12486a9c101c613c075d59b5cf61329f96f9ae12 failed, aborting (22/502)
MOVE 0c306c54862ae8c21226281e6e4f47c8339ed132 failed, aborting (22/502)
MOVE ce4c4fc9d1e4daf3a59516829a0e1bd6c66d4066 failed, aborting (22/502

This happened to me because I used http-to-https forwarding in Apache, and I had http specified in my .git/config on the client machine. After changing the remote URL to https, the MOVE command did its magic. It seems this error is the result of the server name/location being different in the client repo and on the server side.

Note: I recommend using SSL and not plaintext HTTP, even with self-signed certificates. In that scenario you’ll probably want to use the environment variable GIT_SSL_NO_VERIFY=true.
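For example, for a one-off clone against a self-signed certificate (URL adapted from the earlier clone example), either of these works:

% GIT_SSL_NO_VERIFY=true git clone https://user1@git/repo01
% git config http.sslVerify false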

Note2: CentOS ships with an old version of git (1.7.x), so I recommend either using git from the IUS repo (git2x packages) or backporting git from newer Fedora releases.

Categories: Development, Linux, RedHat

Finding dependencies for PHP applications

April 27, 2014

Blame and lies, contradictions arise
Blame and lies, contradictions arise
Nobody will change my way
Life betrays, but I keep on going
(Sepultura – Inner Self)

I’m a big fan of packaging systems and I pretty much hate deploying unpackaged software, so I tend to build native OS packages as often as I can. Packaging PHP applications (like WordPress, for example) is a pretty straightforward job. The only problem one may encounter is listing all the software dependencies correctly. Here comes pci to the rescue!

So what is ‘pci‘ – it’s part of the phpcompatinfo PHP package. The purpose of the package, and of the ‘pci’ executable, is to find out the PHP version and the extensions required for a piece of PHP code to run. Yeah, it’s that simple! So, let’s try it, on CentOS 6 + EPEL of course:

# yum -y install php-pear-PHP-CompatInfo
# mkdir /tmp/pcitest
# echo "<?php phpinfo(); ?>" > /tmp/pcitest/phpinfo.php
# pci -d /tmp/pcitest
+-----------------------------+---------+---+------------+--------------------+
| Files                       | Version | C | Extensions | Constants/Tokens   |
+-----------------------------+---------+---+------------+--------------------+
| /tmp/pcitest/*              | 4.0.0   | 0 |            |                    |
+-----------------------------+---------+---+------------+--------------------+

As you can see, phpinfo() requires at least version 4.0.0 of PHP and doesn’t require any extensions. Let’s now run it on a real project, like TeamPass:

# pci -d teampass > pci.log
# cat pci.log | cut -d'|' -f 5 | grep -v ^+ | sort | uniq | \
    sed 's/^ /Requires: php-/g'
Requires: php-           
Requires: php-bcmath     
Requires: php-ctype      
Requires: php-date       
Requires: php-Extensions 
Requires: php-filter     
Requires: php-gd         
Requires: php-hash       
Requires: php-json       
Requires: php-mbstring   
Requires: php-mysql      
Requires: php-mysqli     
Requires: php-openssl    
Requires: php-pcre       
Requires: php-pgsql      
Requires: php-session    
Requires: php-SimpleXML  
Requires: php-SPL        
Requires: php-xml

With two simple commands we get output that can easily be copy/pasted into an RPM spec file – roughly as in the excerpt below. Enjoy building packages! INNER SELF!
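For illustration, after dropping the obvious non-packages from that list (the empty entry and the ‘Extensions’ column header), the lines land in the spec file along these lines (a hypothetical excerpt):

# hypothetical excerpt from teampass.spec
Name:           teampass
Summary:        Collaborative password manager
Requires:       php-bcmath
Requires:       php-gd
Requires:       php-mbstring
Requires:       php-mysqli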

Is Not A Child Of A Service

September 8, 2013

The mirror in your eyes (the mirror in your eyes)
Is telling me what I’ve become alive and that is why
There’s pain inside your eyes (the mirror never lies)
The mirror in your eyes
(Rage – The Mirror in Your Eyes)

Developing resource agents for RedHat Cluster Suite is always a fun ride. Resource agents are pretty simple, and I’m really amused by the fact that RedHat distributes only a handful of finished resource agents… So, I’m thinking about developing a whole set of them and contributing them back to RedHat (or Fedora) so that they can get included in the official distro.

Today I decided to make an agent for rsyslog. We are building a highly available rsyslog instance, so why duplicate the init script when I can spare some time and develop something useful to the general public. (Well, not so general after all – but at least the RHCS/Pacemaker-involved public.)

So, after polishing the shell script, I encountered the first obstacle:

Sep  8 03:45:54 testhost rgmanager[22364]: [rsyslog]
   Service rsyslog:testresource Is Not A Child Of A Service
Sep  8 03:46:11 testhost rgmanager[23126]: [rsyslog]
   Service rsyslog:testresource Is Not A Child Of A Service

So, let the fun begin. After almost two hours of diffing the script against other resource agents like apache, named and samba, running all the RHCS debug/verbose commands I know, meddling with cluster.conf options and the positioning of the <rsyslog> resource, and of course googling for a solution – eureka!!!! I had a similar issue with the pgsql-9.1 resource agent a few months ago… A-HAAA, so I took a peek at the cluster.rng file and immediately saw where the problem was… I did not have a service_name parameter in my rsyslog metadata! I added the following block:

<parameter name="service_name" inherit="service%name">
    <longdesc lang="en">
        Inherit the service name.  We need to know
        the service name in order to determine file
        systems and IPs for this smb service.
    </longdesc>
    <shortdesc lang="en">
        Inherit the service name.
    </shortdesc>
    <content type="string"/>
</parameter>

and problem solved! So, for the benefit of (probably non-existent) resource agent developers: I hope you find this solution while googling around 🙂 If you do, please post a comment so I know this particular post really helped someone 🙂 I would be glad…

DRAC6 fencing through IPMI

July 17, 2013

So understand
Don’t waste your time always searching for those wasted years,
Face up… make your stand,
And realise you’re living in the golden years.
(Iron Maiden – Wasted Years)

RedHat Cluster Suite uses fencing as a safety measure for the split brain scenario. Split brain is a situation where a cluster splits into two or more partitions. Each partition thinks it is the only active one and tries to start services. Now, the most benign thing that can happen in this scenario is an IP conflict. Another, much worse situation is two nodes trying to write to the same (shared) filesystem. Data corruption is obviously much worse than an IP conflict 😉

To circumvent these situations, some kind of node eviction has to occur. RedHat Cluster Suite leans on its fencing mechanism. Fencing usually does one of the following:

  • node reboot (via mgmt console)
  • node power off (via PDU)
  • SCSI reservation

The most commonly used method is a reboot via IPMI or APC PDUs. In my case, I usually use IPMI because APC PDUs are quite expensive and rarely available. The RPM package ‘fence-agents’ offers many fencing mechanisms. The old Dell DRAC5 has its own fencing agent, but the newer DRAC6, which is available on machines like the R610/R620 or R710/R720, isn’t covered by the fence agents. But there is a way to fence through DRAC6 – via IPMI.

To enable IPMI fencing, a few things have to be set in the DRAC web GUI.

First, IPMI has to be enabled. Choose “iDRAC settings” from the menu on the left side of the screen, and then “Network/Security => Network” in the top menu. At the bottom of that page you can find the IPMI settings section. Set it up as shown in the image and apply the settings.

DRAC6 IPMI settings

Now all you have to do is create a user with IPMI privileges. You can use ‘root’, but I strongly advise against it. This user/password combo is stored in plain text in cluster.conf, so if one of your cluster nodes is compromised, the attacker will find the DRAC passwords for all the cluster members.

So, to create the user, choose the “Users” subtab under the “Network/Security” tab. Pick one of the free slots, and after the wizard starts choose “Configure User”. In the user configuration window, I recommend the following settings:

  • Enable User: ON
  • User name: fencer
  • Maximum LAN User Privilege Granted: Administrator
  • Maximum Serial Port User Privilege Granted: None
  • Leave all the iDRAC user privileges turned off

After you apply the settings, it’s time to test whether they work. Log in to some RHEL/CentOS machine and install the fence agents:

# yum -y install fence-agents

Now, try running the fence_ipmilan agent:

# fence_ipmilan -P -a <drac_IP> -l fencer -p <password> -o status -v
Getting status of IPMI:<drac_IP_address>...Spawning:
  '/usr/bin/ipmitool -I lanplus -H '<drac_IP_address>'
   -U 'fencer' -P '[set]' -v chassis power status'...
Chassis power = On
Done

If you get output like this, and not “Unknown”, you’re all set! You can also test a hard reboot, if you wish:

# fence_ipmilan -P -a <drac_IP> -l fencer -p <password> -o reboot -v
Rebooting machine @ IPMI:<drac_IP>...
Spawning: 'ipmitool .... -v chassis power status'...
Spawning: 'ipmitool .... -v chassis power off'...
Spawning: 'ipmitool .... -v chassis power status'...
Spawning: 'ipmitool .... -v chassis power off'...
Spawning: 'ipmitool .... -v chassis power status'...
Spawning: 'ipmitool .... -v chassis power off'...
Spawning: 'ipmitool .... -v chassis power status'...
Spawning: 'ipmitool .... -v chassis power off'...
Spawning: 'ipmitool .... -v chassis power status'...
Spawning: 'ipmitool .... -v chassis power status'...
Spawning: 'ipmitool .... -v chassis power on'...
Spawning: 'ipmitool .... -v chassis power status'...
Done

Congratulations, your server has just been rebooted! 🙂
Now, to use fence_ipmilan with lanplus in your cluster.conf, set up your fence device along these lines:

<fencedevice agent="fence_ipmilan" name="drac_fqdn" ipaddr="drac_IP" 
login="fencer" passwd="pass" lanplus="1"/>

And that’s it.

Categories: Linux, RedHat