Archive for the ‘Virtualization’ Category

Rescuing GCE compute instance

September 17, 2016

I’m digging deep inside my soul
To bring myself out of this God-damned hole
I rid the demons from my heart
And found the truth was with me from the start
(Halford – Resurrection)

If you’re still managing servers as pets and not as cattle – the inevitable happens: the filesystem breaks, sshd gets killed, wrong iptables settings get applied, a wrong mount option halts the boot process – and you’re in deep. Your important server is inaccessible.
If your server is running on in-house managed VMware or XenServer, the management console (vSphere Client or XenCenter) will help you rescue it. If it’s running on bare metal, you can rely on iDRAC or something similar. But if you’re running in the cloud – you’re pretty much screwed.

If you’re running GCE, there are a couple of options at your disposal at the time of dismay. First, there is a beta Virtual Serial Port option that you can connect to, to see where the hell your instance halted and what messages were printed.

To enable the Virtual Serial Port, you need to have gcloud (the command-line tool) installed and authenticated to your project. So, the first thing is to list the available instances:

% gcloud compute instances list
test   us-east1-b  g1-small   RUNNING

Now, to be able to connect to the Virtual Serial Console, you need to set up SSH keys properly. A key can be generated and pushed to the project metadata by running:

% ssh-keygen -f ~/.ssh/google_compute_engine
% gcloud compute project-info add-metadata \
    --metadata-from-file sshKeys=<ssh_keys_file>

If you already have a key, you will need to set up both ~/.ssh/google_compute_engine and ~/.ssh/google_compute_engine.pub to match the key from the project.
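The serial console itself also has to be switched on per instance. A hedged sketch (using the instance name test from the listing above; serial-port-enable is the metadata key GCE checks):

```shell
# enable interactive serial console access on the instance
gcloud compute instances add-metadata test \
    --metadata serial-port-enable=TRUE
```
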

After the keys are set, you can finally connect:

% gcloud beta compute connect-to-serial-port gceuser@test

You should probably get a standard TTY login prompt.

If an attempt to fix the problem through the GCE Virtual Serial Console didn’t succeed, but you think the boot disk can be salvaged by attaching it to another instance, you will need to:

  • disable “auto delete boot disk”
  • destroy instance 😦
  • attach boot disk as additional disk to another VM
  • mount it, fix whatever is broken, umount it
  • detach disk from instance
  • create new instance, and choose this disk as boot disk

Using gcloud, it would look something like this (assuming a scratch instance named rescue is used for the repair):

% gcloud compute instances \
  set-disk-auto-delete test --no-auto-delete --device-name test
% gcloud compute instances delete test
% gcloud compute instances attach-disk rescue --disk test
  ... ssh to rescue, mount the disk, fix what is broken, umount ...
% gcloud compute instances detach-disk rescue --disk test
% gcloud compute instances create test --disk name=test,boot=yes


htop, atop and tmux on XenServer 6.x

December 13, 2014

Save me from myself if you ever really cared
Save me from myself, tell me you’re not scared
(Damage Plan – Save me)

If you want to run diagnostic commands on XenServer, your options are pretty limited. Since XenServer is based on the RHEL 5.x series, with mostly 32-bit libraries installed, we can use packages for the RHEL/CentOS 5.x i386 arch. The three tools I use most often for an initial screening of a server are htop, atop and tmux. Doing something like ‘xe vm-export‘ and backing up VMs to an external USB drive can be like reading a book in a dark room: there’s no progress bar, there’s no info – nothing. Calculating the speed looks something along these lines:

# ls -al pbx.xva; sleep 60; ls -al pbx.xva 
-rw------- 1 root root 13799469056 Dec 13 23:00 pbx.xva 
-rw------- 1 root root 14982000640 Dec 13 23:01 pbx.xva

And after that: (14982000640 – 13799469056 ) / 60 / 1024 / 1024 = 18.79 MB/s. 🙂
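That back-of-the-envelope division can be wrapped in a small shell function (a sketch; the function name is mine, not a XenServer tool):

```shell
# rough write throughput of a growing file: size delta over an interval, in MB/s
rate() {
    s1=$(stat -c %s "$1")
    sleep "$2"
    s2=$(stat -c %s "$1")
    echo $(( (s2 - s1) / $2 / 1024 / 1024 ))
}
```

`rate pbx.xva 60` reproduces the measurement above (integer division, so it prints 18 rather than 18.79).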

Attaching/detaching or sharing sessions is a wet dream… the only way to do it is to run tmux on the machine from which you SSH into the XenServer.

So, after I got really annoyed, I tried to install tmux. Initial tries with the 64-bit package for CentOS 6 complained about missing x86_64 libraries, so I switched to 32-bit packages. That didn’t work either, complaining about a too-new version of rpmlib 🙂 So the solution was obvious – use 32-bit EPEL packages! These are the packages that I use:

# rpm -qa | egrep '(top|tmux)'

Now we’re talking business!

XenServer PV guests and Cobbler

May 23, 2013

Far, far beyond the island
We dwelt in shades of twilight
Through dread and weary days
Through grief and endless pain

(Blind Guardian – Mirror, mirror)

After automating the deployment of machines, one starts to really hate manual installs 🙂 They are long, repetitive tasks that bore one to hell… One thing we hadn’t automated yet was the installation of Citrix XenServer paravirtualized guests. After searching through the Citrix forums, we found a neat solution.

The first step is to configure the new system in Cobbler. I’m using my own cobbler-puppet module for managing Cobbler, so I’ll post the Puppet code instead of the classic Cobbler CLI for adding a new system:

   cobblersystem { '':
     ensure     => present,
     profile    => 'CentOS-6.4-x86_64-xen',
     interfaces => {
       'eth0' => {
         ip_address  => '',
         netmask     => '',
         mac_address => 'EE:B8:80:CB:B3:04',
         static      => true,
         management  => true,
       },
     },
     kernel_options => {
       clocksource => 'acpi_pm',
       text        => '~',
       kssendmac   => '~',
     },
     gateway    => '',
     hostname   => '',
   }

As you can notice, we added the “clocksource=acpi_pm” kernel parameter, so that at installation time the kernel uses acpi_pm as the source for its clock. This parameter is needed for the VM to boot properly in Rescue Mode. So, what does the kickstart file look like? The important parts are about partitioning:

# System bootloader configuration
bootloader --location=mbr --driveorder=xvda --append="console=hvc0"
# Clear the Master Boot Record
# Partition clearing information
clearpart --drives=xvda --all
# Disk partitioning information
part /boot --ondisk=xvda --asprimary --fstype="ext2" --size=100
part swap  --ondisk=xvda --asprimary --fstype="swap" --size=4096
part /     --ondisk=xvda --asprimary --fstype="ext3" --size=20480
part /data --ondisk=xvda --asprimary --fstype="ext4" --size=4096 --grow

Notice that we replaced the classic ‘sda’ with ‘xvda’, because that is the block device name in XenServer guests, and added the “console=hvc0” parameter to the kernel cmdline in the “bootloader” section.

After we set up Cobbler, we can advance to creating a new virtual machine in XenServer. In this example I chose the CentOS 6.0 template. The most important thing is not to boot the VM after creation:

(XenServer PV VM creation wizard)

I will emphasize it once more: it’s important to turn off the “Start the new VM automatically” option. Also, to ensure booting from PXE, you must leave the virtual DVD drive empty. Now, there is one more issue I’ve stumbled upon. For some reason, if the first boot of the VM is done in Rescue Mode, the vbd-param “bootable” stays false. So, the neat trick I use to overcome this issue is to boot the VM in normal mode, power it off afterwards, and then boot it with an empty DVD drive into Rescue Mode, which will use PXE… And that’s it! Cobbler will install the VM, and after the installation is complete, the VM will reboot into PV mode.

If the VM refuses to boot after the PXE/kickstart installation is done, you have to use a little XenServer magic to set the bootable flag to true:

# xe vm-disk-list uuid=<our_new_shiny_vm_uuid>
# xe vbd-param-set uuid=<vbd_uuid_from_previous_command> bootable=true
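If you do this often, the two xe calls can be chained into a helper (a sketch; the function name is mine, and --minimal makes xe print comma-separated uuids, of which we take the first):

```shell
# set bootable=true on the first VBD of the given VM uuid
fix_bootable() {
    vbd=$(xe vbd-list vm-uuid="$1" params=uuid --minimal | cut -d, -f1)
    xe vbd-param-set uuid="$vbd" bootable=true
}
```
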

Although this is not a fully automated solution that can spawn VMs on its own, it surely helps to advance toward a fully managed environment.

phpVirtualBox in PHP vs RPM battle

February 13, 2013

Is there no standard anymore?
What it takes, who I am, where I’ve been
You can’t be something you’re not
Be yourself, by yourself
Stay away from me
A lesson learned in life
Known from the dawn of time

(Pantera – Walk)

phpVirtualBox is a great piece of software that enables you to access, control and administer remote VirtualBox instances in a headless environment. So far, I was deploying it simply by downloading the zip file, unzipping it in /var/www/html and setting the appropriate apache configuration. But, as part of my struggle to move my infrastructure (including my own workstation, desktop, dev servers, etc) to Puppet, I decided it was time to give yet another day of my life to the open source community. Although I didn’t plan on it taking that long to package phpVirtualBox properly, in the end it did.
There were some packages already available out there, but they were bad… So I decided to take a peek at how EPEL packages PHP applications – like MediaWiki. From the EPEL site, I downloaded the mediawiki119 source RPM, unpacked it and took a peek.

The idea was simple: all the static content (PHP code, images, JS…) goes into /usr/share/, while all the dynamic content (user data, config.php…) goes into /var/www/.

This allows us to use the same code for multiple instances of the application. So I tried to follow that path. But soon enough, things started to break apart. No matter what I did, the app always wanted to read /usr/share/phpvirtualbox-4.2/config.php …

After dissecting the code, I found out why. The application was using PHP’s dirname() function to find the parent directory of the script, and from that it was supposed to derive the relative location of config.php. The offending line was:


After talking to a few of my dear colleagues who are PHP programmers, I found out that dirname() doesn’t respect symlinks, in the sense that it will always return the location of the symlink target. So I had to find another approach. The readlink() function couldn’t help either – because the scripts weren’t links, but their parent directories were :-/ Ah crap. After two hours of fiddling with PHP (sorry – I’m a newbie at this language), I figured it out and changed the line accordingly. This is what it looks like now:


As a true open source warrior, I’ve paid my dues by contributing the SPEC file, the finished src.rpm & rpm, and posting the patch upstream.

Hope you enjoy it!

Accessing host filesystem from VirtualBox guest

August 13, 2012

I came into this world
A screaming infant
(Iced Earth – Life and death)

VirtualBox is a great piece of software for desktop and workstation virtualization. I use it for both testing and production environments – where it is applicable in the latter.

One good use case for VirtualBox is if you are a web/app developer for Linux but want to keep Windows as your primary OS. You can install the Linux distribution of your choice in a VirtualBox VM and reap the benefits of both Windows as the primary OS (gaaameeees) and Linux as a server platform.

So why am I blogging about something that seems to work? The devil is in the details… “seems” is the crucial word.

There are, AFAIK, 3 ways of sharing files between a Windows host and a Linux guest:

  • keep files in guest (Linux) and share with host through Samba
  • keep files on host (Win) and use VirtualBox Shared Folders
  • keep files on host (Win) and mount CIFS from within guest

For someone like me, an experienced Linux sysadmin, the first option is the easiest. But it has its drawbacks too. For example – I don’t wanna keep files in the VM, in case of VM corruption, reinstallation, or whatever. I want them on the native Windows disk. So the first option is clearly out.

Shared Folders are a really great idea. You set a folder in the VM’s settings, start it up, install the VirtualBox Guest Additions (which bring the mount.vboxsf binary and the vboxsf kernel module), mount it in the guest and voila! But there’s a catch – performance. Shared Folders are really, really slow… which in turn kills the idea of programming and testing code without losing hair over the slowness.
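For completeness, mounting a Shared Folder from inside the guest looks roughly like this (the share name repo is an assumption matching the test directories used later; requires Guest Additions and root):

```shell
# mount a VirtualBox Shared Folder named "repo" into the guest
mkdir -p /var/www/repo
mount -t vboxsf repo /var/www/repo
```
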

Analyzing the performance, it’s obvious that the kernel spends a really large amount of time in IO – so the vboxsf emulation behaves something like ZFSonLinux or FUSE… And we don’t like that, do we 😦

It seems that the only viable solution is sharing the folder through Windows and mounting it via CIFS…
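A CIFS mount from inside the guest looks roughly like this (the host IP is VirtualBox’s default host-only address; share name and username are assumptions for illustration):

```shell
# mount the Windows share via CIFS (requires the cifs kernel module and root)
mkdir -p /var/www/repo3
mount -t cifs //192.168.56.1/repo3 /var/www/repo3 \
    -o username=winuser,iocharset=utf8
```
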

I did some tests to show the performance difference. There are three directories I run my tests on:

  • /var/www/repo            ( vboxsf )
  • /var/www/repo2          ( local disk )
  • /var/www/repo3          ( windows export )

I’m doing 2 tests – find and du -sh – on a 500 MB svn working copy. I also drop all caches before every test. Here are the results for the vboxsf mount:

# time find /var/www/repo
real 2m8.864s
user 0m0.200s
sys 0m14.638s

# time du -sh /var/www/repo
592M    /var/www/repo
real 0m24.951s
user 0m0.340s
sys 0m4.859s

Next, the folder mounted via CIFS (windows share/export):

# time find /var/www/repo3
real    0m23.844s
user    0m0.280s
sys     0m1.590s

# time du -sh /var/www/repo3
653M    /var/www/repo3
real    0m3.903s
user    0m0.060s
sys     0m0.330s

Notice the difference in size, not to mention the tenfold increase in speed… And finally, as a reference, tests of files stored locally on an ext3 partition in the guest:

# time find /var/www/repo2
real	0m29.775s
user	0m0.180s
sys	0m2.500s

# time du -sh /var/www/repo2
719M	/var/www/repo2
real	0m15.032s
user	0m0.530s
sys	0m0.640s

Notice once more the increase in size. Clearly, the CIFS-mounted Windows export wins… The only drawback – you cannot create symlinks on it.
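The cache-dropping between runs can be scripted as a small helper (a sketch; the function name is mine, and dropping caches needs root, so it is skipped silently otherwise):

```shell
# time `find` over each given tree with a cold page cache
cold_find() {
    for d in "$@"; do
        sync
        # drop page cache, dentries and inodes (root only)
        [ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
        time find "$d" > /dev/null
    done
}
```

Run as `cold_find /var/www/repo /var/www/repo2 /var/www/repo3` to reproduce the measurements above.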
