
Archive for February, 2014

Persistent iptables on Debian/Ubuntu

February 21, 2014

Travel in Stygian
The damned scream my name
Travel in Stygian
I can’t repent it’s too late
(Iced Earth – Travel in Stygian)

RHEL has a standardized way of manipulating the Linux firewall: the file ‘/etc/sysconfig/iptables’ stores the firewall rules in iptables-save format. Since I sometimes have to use Debian-based distributions, I often wondered why there isn’t a similar (standardized) way of handling iptables there. Some suggest using UFW or Shorewall, but those are just wrappers around iptables made to make the user’s life easier. If you are an iptables expert, using wrappers only draws attention to the wrong things and brings in another layer of unnecessary complexity. So after a little research I found a better way to handle iptables on Debian distros. It’s called “iptables-persistent”. All a user has to do is:

# apt-get install iptables-persistent

So, how does it work? It installs an init script called “iptables-persistent” and stores rules in iptables-save format in the following two files:

  • /etc/iptables/rules.v4
  • /etc/iptables/rules.v6

The behaviour is similar to that of the RHEL ‘iptables’ script: you can write your own rules into those files, or you can run iptables-save and copy its output there. I think this is a good step towards standardization, and it’s too bad iptables-persistent is not part of the minimal Debian installation. Starting and stopping the firewall is also pretty simple via the SysV-style init script:

# /etc/init.d/iptables-persistent start
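For example, after tweaking the live ruleset, the current rules can be persisted into the two files mentioned above with plain iptables-save/ip6tables-save (a minimal sketch):

# iptables-save > /etc/iptables/rules.v4
# ip6tables-save > /etc/iptables/rules.v6

On the next start of the init script, those saved rules get loaded back in.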

I really like this way of managing iptables.

Migrating Zabbix from MySQL to PostgreSQL

February 18, 2014

Three evil giants
of the south
are constantly on the attack
With lies and fire
from their mouths
but we always send them back
(Amon Amarth – Guardians of Asgaard)

Sooner or later someone will come across a similar problem: migrating a MySQL-based application to PgSQL. The reasons vary. I decided to migrate Zabbix to another backend because it was the only app using MySQL while all the others were using PostgreSQL, so there was really no point in keeping another database around, with its own replication, backups, etc… Someone else will probably have different reasons.

So, how did I do it? First of all, it’s important to understand the database schema and data types. Since Zabbix supports both backends, migration obviously shouldn’t be that hard. The first thing I did was upgrade to the latest version of Zabbix (from 2.0 to 2.2.1). I also upgraded the database to Percona 5.6. The PostgreSQL version of choice was the latest from the 9.3 series. So this is my setup:

  • MySQL Percona 5.6
  • Zabbix 2.2.1
  • PostgreSQL 9.3

After upgrading to the latest versions of Zabbix and MySQL, it’s good to make sure the Zabbix housekeeper has done its work and the database is as compact as it should be. To be sure of this, take a look at the zabbix-server logs and search for the term ‘housekeep’.
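Something along these lines should do (a sketch; the log location depends on your distribution’s packaging):

# grep -i housekeep /var/log/zabbix/zabbix_server.log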

As for the migration itself, I decided to dump each table both as SQL and in CSV format. This is a simple script that does it:

# the CSV directory must be writable by mysqld, since 'mysqldump -T'
# makes the server itself write out the data files
mkdir -p /var/tmp/zabbix/CSV; chown -R mysql: /var/tmp/zabbix
# derive table names by stripping extensions from InnoDB's per-table files
for i in `find /var/lib/mysql/zabbix/ -type f | \
 sed 's/\....$//' | sort | uniq | sed 's#/var/lib/mysql/zabbix/##'`; do
   # SQL dump in PostgreSQL-compatible mode, data only
   mysqldump --compact -c -e -n -t --compatible=postgresql zabbix $i > $i.sql ;
   # CSV dump with semicolon as the column delimiter
   mysqldump -t -T/var/tmp/zabbix/CSV zabbix $i --fields-terminated-by=';' ;
done

So, for every table in the zabbix database in MySQL, I create both an SQL dump (in PostgreSQL-compatible mode) and a CSV dump with a semicolon as the column delimiter. Why the semicolon? Because Zabbix tends to have a lot of commas in its data, so this eases the insertion of data and reduces the number of escape characters needed, which can pose problems.
Note: As you can see, I use InnoDB’s ‘file-per-table’ mode, which is why I can get the list of tables just by searching through /var/lib/mysql. If you don’t use that setting, you should modify the command to use a “SHOW TABLES” query instead, as sketched below.
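A sketch of the same loop driven by “SHOW TABLES” (assuming the same passwordless local access the original script uses):

for i in `mysql -N -B -e 'SHOW TABLES' zabbix`; do
   mysqldump --compact -c -e -n -t --compatible=postgresql zabbix $i > $i.sql ;
   mysqldump -t -T/var/tmp/zabbix/CSV zabbix $i --fields-terminated-by=';' ;
done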

The next step is to modify schema.sql from the Zabbix server installation and remove some of the constraints. I removed all “ALTER TABLE“, “CREATE INDEX” and “CREATE UNIQUE INDEX” queries from the schema file, and imported the modified schema into the zabbix database in PostgreSQL:

$ psql zabbix < schema_modified.sql
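The filtering itself can be scripted; a rough sketch that assumes every statement to be dropped starts at the beginning of its own line (multi-line statements are safer to remove by hand):

$ grep -vE '^(ALTER TABLE|CREATE INDEX|CREATE UNIQUE INDEX)' \
  schema.sql > schema_modified.sql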

Those queries will be needed later, after all data is successfully imported.

Now that the zabbix database has its base structure, we can start importing data. I used the “COPY” command, which imports from CSV. The import is atomic, so if an error occurs, the target table stays empty.

$ psql zabbix
zabbix# COPY alerts FROM '/var/tmp/zabbix/CSV/alerts.txt' DELIMITER ';' CSV;
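Doing this table by table gets tedious, so a small loop helps (a sketch; it relies on the fact that mysqldump -T names each data file after its table):

for f in /var/tmp/zabbix/CSV/*.txt; do
    t=`basename $f .txt`
    echo "COPY ${t} FROM '${f}' DELIMITER ';' CSV;" | psql zabbix ;
done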

If COPY exits with the number of inserted rows, the table was imported successfully. If, on the other hand, COPY spits out some kind of error or warning, the table probably wasn’t imported correctly. So I recorded all the failed tables and decided to import those via the standard SQL dumps. This is the list of tables that failed to import from CSV:

items
acknowledges
auditlog
auditlog_details
alerts
actions
httptest
opmessage
triggers

A typical error I saw was:

ERROR:  missing data for column "agent"

There were other errors as well, which I didn’t even want to try to fix. So my next move was to use the SQL dumps for those tables. But I had to modify them a little bit, by adding the following header:

SET standard_conforming_strings = 'off';
SET backslash_quote = 'on';
SET escape_string_warning = 'off';
BEGIN;

and of course a footer with “COMMIT;” 😉 These settings are compatibility flags used for importing MySQL dumps.
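Wrapping and importing the patched dumps can be scripted too; a sketch, assuming the header above is saved as header.sql and the failed tables’ dumps sit in the current directory:

for t in items acknowledges auditlog auditlog_details alerts \
  actions httptest opmessage triggers; do
    ( cat header.sql ; cat ${t}.sql ; echo 'COMMIT;' ) | psql zabbix ;
done

After all the data was imported, all I needed to do was add back the indexes and table alterations omitted from schema.sql earlier, and fix the table ownership: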

# make the zabbix role the owner of every table in the public schema
for i in `echo '\dt' | psql zabbix | grep 'public |' | \
awk -F\| '{ print $2 }' `; do
    echo "ALTER TABLE public.${i} OWNER TO zabbix;" | psql zabbix;
done

And that’s it, bye bye MySQL 🙂


Check if JCE is installed

February 1, 2014

Asgård’s always been my home
But I’m of different blood
I will overthrow the throne
Deceiver! Deceiver of the gods!
(Amon Amarth – Deceiver of the Gods)

JCE stands for Java Cryptography Extension. The Unlimited Strength Jurisdiction Policy Files are not distributed with standard Java distributions due to export-law restrictions, but some Java-powered software needs them to function properly. Often you think JCE is installed, yet the software still refuses to run. This is one of the possible errors:

org.apache.xml.security.encryption.XMLEncryptionException: 
Illegal key size or default parameters

So, how do you check whether unlimited JCE is really installed? After consulting Google and Stack Overflow answers, I wrote a small Java program that checks it quite accurately. This is the source code:

import java.security.NoSuchAlgorithmException;
import javax.crypto.Cipher;

public class UnlimitedSupportJCE
{
  public static void main(final String[] args)
  {
    int strength = 0;
    try {
      // returns Integer.MAX_VALUE when the unlimited policy files are installed
      strength = Cipher.getMaxAllowedKeyLength("AES");
    } catch (NoSuchAlgorithmException e) {
      System.out.println("isUnlimitedSupported=FALSE");
      return;
    }
    if ( strength > 128 ) {
      System.out.printf("isUnlimitedSupported=TRUE, strength: %d%n", strength);
    } else {
      System.out.printf("isUnlimitedSupported=FALSE, strength: %d%n", strength);
    }
  }
}

Save this code in a file named UnlimitedSupportJCE.java. You will also need a JDK installed (for the javac compiler). Now you can compile the code:

$ javac UnlimitedSupportJCE.java

This creates a file called UnlimitedSupportJCE.class, which can now be run by the JVM:

$ java UnlimitedSupportJCE
isUnlimitedSupported=TRUE, strength: 2147483647

We can see that on this particular JVM, unlimited JCE is installed correctly. How about after removing the policy JARs?

$ java UnlimitedSupportJCE
isUnlimitedSupported=FALSE, strength: 128

We can see that the maximum cipher strength is dramatically lower with the vanilla JCE policy files distributed with standard JREs/JDKs.
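As a quick alternative that skips the compile step, JDKs that ship the jrunscript tool can run the same check as a one-liner (a sketch, assuming jrunscript is on the PATH):

$ jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
2147483647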
