Tuesday 31 January 2012

Configuring ESXi VDR FLR on SuSE Linux SLES 11 x86_64


I've written the following post because it took me a while to figure out how to get file-based restore for a SLES Linux guest working from within a VMDK on ESXi.

By default there is support for Debian and RedHat guests, and there is also a helpful article on the VMware forums that details an implementation on 32-bit OpenSuSE.

This is where my first problem arose, as the VDR FLR programs require 32-bit libraries in order to run. The way I approached this was to use a 32-bit guest VM as a donor for the 32-bit linker programs, which don't seem to get included when you install the 32-bit runtime environment on SLES 11 x86_64. All of the documentation on the OpenSuSE sites seems to point to declaring runtime variable settings for the linker and compiler by passing "-m32" as an argument. Whilst this 'works', it fails to actually build the source objects that you require.

So I created a 32-bit guest and, after a bit of debugging, zipped up the /usr/i586-suse-linux directory, copied it over to the 64-bit guest that I wanted VDR FLR running on, and unzipped it there. This gives us a 32-bit version of the linker program 'ld'.
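For reference, the copy looked roughly like this; a sketch only, assuming SSH access between the guests and using sles64guest as a stand-in hostname for the 64-bit target:

# On the 32-bit donor guest:
cd /usr
tar czf /tmp/i586-suse-linux.tar.gz i586-suse-linux
scp /tmp/i586-suse-linux.tar.gz root@sles64guest:/usr/

# On the 64-bit guest:
cd /usr
tar xzf i586-suse-linux.tar.gz
ls /usr/i586-suse-linux/bin/ld   # the 32-bit linker we will point the FUSE build at later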

I also found that running on any kernel earlier than 2.6.27.21 failed to create the FUSE directories and files correctly under /tmp. So I ran a kernel update by grabbing these files from Novell's SLES site:

For the following to run successfully you will need to update module-init-tools first:



# Freshen module-init-tools using these files:
module-init-tools-3.12-29.1.x86_64.rpm
module-init-tools-debuginfo-3.4-70.6.1.x86_64.rpm
module-init-tools-debugsource-3.4-70.6.1.x86_64.rpm

rpm -Fvh module-init-tools*.rpm
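Before moving on you can confirm that the freshen took effect:

rpm -q module-init-tools
# should now report the 3.12-29.1 version installed above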




Next, do the kernel update. Create a working directory and move the following files into it:

mkdir /usr/local/src/kernelmods

ext4dev-kmp-default-0_2.6.27.21_0.1-7.1.2.x86_64.rpm
ext4dev-kmp-xen-0_2.6.27.21_0.1-7.1.2.x86_64.rpm
kernel-default-2.6.27.21-0.1.2.x86_64.rpm
kernel-default-base-2.6.27.21-0.1.2.x86_64.rpm
kernel-source-2.6.27.21-0.1.1.x86_64.rpm
kernel-syms-2.6.27.21-0.1.2.x86_64.rpm
kernel-xen-2.6.27.21-0.1.2.x86_64.rpm
kernel-xen-base-2.6.27.21-0.1.2.x86_64.rpm

Then run the update:

cd /usr/local/src/kernelmods
rpm -Fvh *.rpm


Use YaST to make sure that you have installed the 32-bit runtime environment. Note that some of the steps we do after this are to get around a problem I found with the 64-bit linker not seeming to accept "-m32".

Once this has finished, it's best to do a reboot, just to make sure you are running everything that you should be.
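After the reboot, a quick check that the new kernel is actually the one in use (the version string simply reflects the packages installed above):

uname -r
# expect something like 2.6.27.21-0.1-default (or the -xen flavour if you boot the Xen kernel)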

Download VMware-vix-disklib from the VMware site. I used this version: VMware-vix-disklib-1.2.0-230216.i386.tar. Copy it to /usr/local/src, unpack it, and install it by executing ./vmware-install.pl.
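That step in command form; a sketch in which the unpacked directory name (vmware-vix-disklib-distrib) is an assumption based on how VMware installer tarballs are usually laid out:

cd /usr/local/src
tar xf VMware-vix-disklib-1.2.0-230216.i386.tar
cd vmware-vix-disklib-distrib   # directory name is an assumption
./vmware-install.pl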

Next follow the VDR instructions to get hold of the FLR program: VMwareRestoreClient.tgz. Copy this file to /usr/local/src on the 64-bit guest, and unpack it.
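Unpacking the restore client is the usual tar step; a sketch, assuming the archive extracts into the VMwareRestoreClient directory referenced further down:

cd /usr/local/src
tar xzf VMwareRestoreClient.tgz
ls VMwareRestoreClient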

Next grab a source copy of FUSE from the FUSE site - I used 2.7.3. First unpack it under /usr/local/src; a quick sketch of that is below, followed by the configure line that worked for me.
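A minimal unpacking sketch; the exact tarball name is an assumption based on the standard 2.7.3 release archive:

cd /usr/local/src
tar xzf fuse-2.7.3.tar.gz   # filename is an assumption
cd fuse-2.7.3

With the source unpacked, the configure invocation I used was: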

./configure '--prefix=/usr/local/mattsfuse'  '--build=i386' 'CC=gcc -m32' 'LD=/usr/i586-suse-linux/bin/ld' 'AS=gcc -c -m32' 'LDFLAGS=-L/usr/local/mattsfuse/lib' '--enable-threads=posix' '--infodir=/usr/share/info' '--mandir=/usr/share/man' '--libdir=/usr/local/mattsfuse/lib' '--libexecdir=/usr/local/mattsfuse/lib' '--enable-languages=c,c++,objc,fortran,obj-c++,java,ada' '--enable-checking=release' '--with-gxx-include-dir=/usr/include/c++/4.1.2' '--enable-ssp' '--disable-libssp' '--disable-libgcj' '--with-slibdir=/usr/local/mattsfuse/lib' '--with-system-zlib' '--enable-__cxa_atexit' '--enable-libstdcxx-allocator=new' '--program-suffix=' '--enable-version-specific-runtime-libs' '--without-system-libunwind' '--with-cpu=generic' '--host=i586-suse-linux' 'build_alias=i386' 'host_alias=i586-suse-linux' --cache-file=/dev/null --srcdir=.

As you can see in the configure line, I specified an absolute path to the 32-bit linker (ld) with 'LD=/usr/i586-suse-linux/bin/ld', used --build=i386, and manually set some other 32-bit flags to instruct the compiler on what to do.

Once the configure has run, issue a 'make' and 'make install' if there are no problems shown in the 'make'.

You now have a 32-bit source build of FUSE running on 64-bit SLES!
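To double-check that the result really is 32-bit, you can inspect the installed library (the path follows the --prefix used in the configure line above):

file -L /usr/local/mattsfuse/lib/libfuse.so.2
# -L follows the symlink; expect something like: ELF 32-bit LSB shared object, Intel 80386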

Almost there. All we need to do now is use 'ldd' to look at the VDR libraries we need to run and see which libs it thinks are missing.

cd /usr/local/src/VMwareRestoreClient

ldd libvixMntapi.so
you should see something like this:
linux-gate.so.1 =>  (0xffffe000)
libdl.so.2 => /lib/libdl.so.2 (0xb7d72000)
libpthread.so.0 => /lib/libpthread.so.0 (0xb7d5c000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0xb7d29000)
libz.so.1 => /lib/libz.so.1 (0xb7d17000)
libvixDiskLib.so.1 => not found
libfuse.so.2 => not found
libc.so.6 => /lib/libc.so.6 (0xb7bea000)
/lib/ld-linux.so.2 (0x80000000)

The items showing 'not found' are the ones we need to move around.

cp -a /usr/local/mattsfuse/lib/libfuse.* /usr/lib
find / -name 'libvixDiskLib.so.1' -print
# cd into the directory that find reports (the vix-disklib install location), then:
cp -a libvixDiskLib.so* /usr/lib
ldconfig
ldd libvixMntapi.so


This should now show the locations of the previously missing libraries:



linux-gate.so.1 =>  (0xffffe000)
libdl.so.2 => /lib/libdl.so.2 (0xb7e4a000)
libpthread.so.0 => /lib/libpthread.so.0 (0xb7e34000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0xb7e01000)
libz.so.1 => /usr/lib/libz.so.1 (0xb7def000)
libvixDiskLib.so.1 => /usr/lib/libvixDiskLib.so.1 (0xb7c98000)
libfuse.so.2 => /usr/local/lib/libfuse.so.2 (0xb7c7f000)
libc.so.6 => /lib/libc.so.6 (0xb7b53000)
/lib/ld-linux.so.2 (0x80000000)
librt.so.1 => /lib/librt.so.1 (0xb7b4a000)


Next you should be able to run "VdrFileRestore -a <IP-address-of-VDR-appliance>" on the 64-bit guest, as per VMware's instructions.

Follow the on-screen instructions to select the backup day that you want to mount the filesystem for. You will then need to SSH onto the 64-bit guest. If you run 'df' you will see a /tmp/xxxxxx entry in the list of mounted filesystems. Do not try to use this as a file path to grab files from; instead use the suggested /root/HOSTNAME-DAY mount point.

For a test I moved /etc/hosts to /etc/hosts.myold, then copied /root/HOSTNAME-DAY/etc/hosts back to /etc/hosts and checked that I could read it OK.
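In command form, that test was roughly:

mv /etc/hosts /etc/hosts.myold
cp /root/HOSTNAME-DAY/etc/hosts /etc/hosts
cat /etc/hosts   # confirm the restored copy reads back correctly

(HOSTNAME-DAY stands for whatever mount point name VdrFileRestore suggested for your guest and chosen backup day.)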

I hope someone finds this useful. VDR is an amazing backup tool that comes free with the Enterprise licence. You can either do a complete host restore, or use FLR as described above to restore single files from inside the machine image.


(c) Matt Palmer 29 Jan 2012

Thursday 13 October 2011

How-to Virtualize a HP BL460c running SLES Linux 10 SP2

To begin with, a bit of background on the environment may be helpful…
The need to virtualize my HP C7000 blade environment came from a requirement to consolidate our comms room estate, retire legacy hardware, and achieve as good an occupancy on the remaining hardware as possible. The eventual plan for the left-over kit could be anything from a test rig running Eucalyptus to just a VMware ESXi environment running many virtual machines – for now we are keeping it simple with a basic ESXi environment.
Most of my existing hardware is running on G1 or G2 blade kit, and I wanted to be able to just lift out the existing servers and place them in their new environment with as little disruption (or developer time rewriting legacy code, etc.) as possible, whilst I gave some thought to how I would rearrange my estate for maximum efficiency once all of the services running in it had been virtualized and made effectively hardware-independent (within reason).

Here are the steps I went through. I've also listed a couple of gotchas that I wasted a bit of time on, but which I'm glad I've noted down, so I won't be wasting time on them again!
I wanted to virtualize a system that was running on an HP BL460c (using its local storage, not SAN or storage blades) and make it run under ESXi. I thought this would be a simple case of connecting the ESXi cold clone CD to the blade and doing a few mouse clicks. This was how I proceeded, but I couldn't initially figure out why the blade was unable to see my ESXi server, even though all the correct routing between networks existed. Then I remembered that I was running with 2 x Gb2EC network switches in the back of that c7000 chassis, and that I had had to use VLAN tagging on all of the ports. This worked fine when the original blade OS was 'up', but the cold clone mini-OS has no knowledge of the VLAN tags, so it failed to work.
(If someone has done a cold clone in an environment where they have needed to tag the packets that are being sent from the cold clone mini-OS then I would love to have some feedback on how you did it.)
In the end I moved the blade from its original chassis and placed it in a c7000 enclosure with the VLAN-tagging disabled, and this worked great.
So I used the blade 'SUV' cable to connect a CD drive, keyboard and VGA screen to the blade, booted from the VMware ESXi cold clone CD, and went through the steps of identifying the ESXi system that I wanted to receive the image the cold clone CD produced from the blade. I had a bit of an issue with the fact that parts of the configuration process for the cold clone environment seemed to require a mouse to click 'Next', as the tab key only worked intermittently (this could be a hardware/keyboard issue on my side), but for reference it is fine to disconnect the keyboard from the SUV cable and connect a mouse (and vice versa) as many times as necessary throughout the installation. Another approach that would probably work is to connect the cold clone media using HP Virtual Media, but again I went for what was the most straightforward approach at the time.
Once the cloning process was complete I had the virtual version of the blade available on my ESXi host, but at this point it would still not boot successfully, as it was expecting to see the Smart Array adapter in the blade and so tried to look for boot and root on /dev/cciss/c0d0pXX.
So from this point forward, the files that I needed to edit on the virtual machine image were /etc/fstab, /boot/grub/device.map and /boot/grub/menu.lst. You need to go through these and replace any reference to /dev/cciss/... with the corresponding /dev/sdaX device and so on. As an example, here are some of my changes, which I applied by booting a live CD and mounting each partition:
[/boot/grub/device.map]
(hd0) /dev/cciss/c0d0  --> changes to -->  (hd0) /dev/sda (note that there is no partition number specified)
[/boot/grub/menu.lst]
root (hd0,0)
kernel /vmlinuz-version root=/dev/cciss/c0d0p3 resume=/dev/cciss/c0d0p2
initrd /initrd-version
The above three lines changed to:
root(hd0,0)
kernel /vmlinuz-version root=/dev/sda3 resume=/dev/sda2
initrd /initrd-version
[/etc/fstab]
/dev/cciss/c0d0p3 /      --> changes to -->  /dev/sda3 /
/dev/cciss/c0d0p1 /boot  --> changes to -->  /dev/sda1 /boot
/dev/cciss/c0d0p2 swap   --> changes to -->  /dev/sda2 swap
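If you prefer to script the edits rather than making them by hand, something like the following would do it; a rough sketch, assuming the cloned VM's root partition is mounted at /mnt and its boot partition at /mnt/boot from the live CD:

# Back up and rewrite the partition references (c0d0pN -> sdaN):
sed -i.bak 's|/dev/cciss/c0d0p|/dev/sda|g' /mnt/etc/fstab /mnt/boot/grub/menu.lst
# device.map has no partition number, so handle it separately (c0d0 -> sda):
sed -i.bak 's|/dev/cciss/c0d0|/dev/sda|g' /mnt/boot/grub/device.map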
Next, I grabbed the SLES install CD/DVD and booted as if I were going to do an installation. I proceeded through the normal install steps up to where you are asked whether you are doing a new install, an update, or 'other options'. From 'other options' you can run the System Repair Tool, which analyses the installed system and advises you of any missing kernel modules, or ones that are now defunct (amongst other things). My CD advised me to disable debugfs and usbfs. I did not select 'verify packages', only 'check partitions', 'fstab entries' and the final step of rewriting the boot loader if needed.
Once the newly imaged server had booted I needed to delete the old network interfaces: I deleted all the entries in /etc/udev/rules.d/30-persistent-net-names and did a reboot, which automatically entered the MAC address details for the new VMware ethernet adapter, and then re-added the network adapter in YaST.
After that I did a reboot, ejected the install CD, installed VMware Tools on the guest, and I had my newly virtualized system operational again!

Matt Palmer 30-Aug-2011