Mar 28, 2011

How to Migrate a QEMU-KVM Image to a Physical Machine (PC)

Virtualization and cloud services are great!  Everyone wants to move services to web space, but little is written about migrating back to physical hardware, even though virtualized environments are ideal for labs (at least from a cost and organization standpoint).

The open-source QEMU-KVM stack provides an excellent set of tools through libvirtd/virt-manager.  I use virtualization for test lab environments and for building virtual upgrades for my customers.  The main benefit is that I can test 90% of their Asterisk PBX setup without touching the production system.

In the past, I used several boxes with removable drive bays.  There are many problems with this setup, and I'm sure many of my readers will agree with the conclusion: test labs can get messy and can hinder work.  Other engineers frequently need the lab equipment, or hard drives (with important experiments on them) go missing.

Sometimes test servers get cannibalized for parts when there is an emergency.  Then consider the cost of the hardware and electricity.  Luckily, one Intel i7 with 8GB of RAM can easily manage three or four Linux guests.  The conclusion I came to is that virtualization is the most effective way to get around these problems.

My hardware specs:

Fedora 14
Kernel 2.6.35.11-83.fc14.i686.PAE
8 Gigabytes DDR3
Intel(R) Core(TM) i7 CPU  860  @ 2.80GHz

What you need:

– QEMU-KVM image
– Linux Boot CD or USB stick
– DVD, USB, BD-R, or external drive of some kind

This tutorial is to help system administrators migrate server images (created from virt-manager) from the virtual realm to a PC for production use.  I'm assuming that the reader has basic knowledge of Linux, QEMU, KVM, and virt-manager.  Before you begin, verify that the image contains the necessary drivers for any proprietary hardware in the target machine.
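One more thing worth checking up front: the dd-based copy below works on raw disk images.  If virt-manager created your image as qcow2, convert a copy to raw first.  A quick check and conversion (the file names here are just examples):

[root@localhost]# qemu-img info /var/lib/libvirt/images/my-libvirt.img
[root@localhost]# qemu-img convert -O raw /var/lib/libvirt/images/my-libvirt.img /var/lib/libvirt/images/my-libvirt-raw.img

If 'qemu-img info' reports the file format as raw, you can skip the conversion.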

Goals of this Post:
– Export a QEMU-KVM image to a physical machine
– Resize the VM image to fill the physical hard drive

Begin by copying the virtual machine image to some kind of media.  My image is 6GB, so I'm copying it to an 8GB USB thumb drive.  I'm using a separate 2GB USB thumb drive to boot a live Fedora image.  I used Fedora Live USB Creator and recommend it for its ease of use.
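If you would rather build the boot stick from a terminal instead of the GUI tool, the livecd-tools package ships a script for this.  A rough sketch, assuming the stick shows up as /dev/sdc1 and using an example ISO name (double-check the device with 'fdisk -l' first):

[root@localhost]# yum install livecd-tools
[root@localhost]# livecd-iso-to-disk --reset-mbr Fedora-14-i686-Live-Desktop.iso /dev/sdc1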

Copy the VM's disk image from '/var/lib/libvirt/images/' to some kind of removable media.  In my case I copied the 6GB image to a spare USB thumb drive.  If your image is too large for the FAT file system, consider formatting the drive as ext4.  The images I work with are too big for a DVD-R, but yours might be fine.  Blu-ray BD-R is a great option too.
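As a rough sketch of that copy step (the device name /dev/sdc1 and the label USBDRIVE are just examples; confirm the device with 'fdisk -l' before formatting anything):

[root@localhost]# mkfs.ext4 -L USBDRIVE /dev/sdc1
[root@localhost]# mkdir -p /media/USBDRIVE && mount /dev/sdc1 /media/USBDRIVE
[root@localhost]# cp /var/lib/libvirt/images/my-libvirt.img /media/USBDRIVE/
[root@localhost]# umount /media/USBDRIVE

The mkfs step is only needed if the stick has to be reformatted as ext4; skip it if FAT is big enough for your image.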

The last piece of equipment we need is a server for our build.

Boot up the Fedora live USB distro.  Note: if you are using a RAID controller, create the array before booting; once booted, stop the auto-detected array and re-detect it (steps below).  You will also need to edit grub.conf to properly boot RAID volumes and other kinds of special devices.

Connect the media containing your libvirt image file.  Use the dd command to copy that image to your hard drive.  The dd command is a 'must have' utility for Linux admins; it is often used to create a Linux boot disk from a floppy image.  We're going to take a virtual machine image and do the same thing to an unmounted hard disk.

Standard hard disks,

1) image the disk

[root@localhost]# dd if=/media/USBDRIVE/my-libvirt.img of=/dev/sda

Note that the libvirt image is a whole-disk image with its own partition table, so it is written to the whole device (/dev/sda), not to a single partition such as /dev/sda1.
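Once dd finishes, it does not hurt to flush the write cache and confirm the partition table made it across before rebooting.  A quick sanity check, assuming the target disk is /dev/sda:

[root@localhost]# sync
[root@localhost]# partprobe /dev/sda
[root@localhost]# fdisk -l /dev/sda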

RAID Arrays,

1) shut down the 'auto detected' array from the live distro

[root@localhost grub]# mdadm -Ss

2) re-detect the RAID array you created from the Intel BIOS screen

[root@localhost grub]# dmraid -ay

You may get an error message, but at this point you should find a newly created RAID device located in '/dev/mapper/isw_*'.  Perform the same dd step as above to image the new RAID device.

3) copy the image to the RAID array

[root@localhost]# dd if=/media/USBDRIVE/my-libvirt.img of=/dev/mapper/isw_YOUR_VOLUME0

You can check to see if your device is active with the following,

[root@localhost grub]# dmraid -ay
RAID set "isw_ciahbbfedg_Volume0" already active
The dynamic shared library "libdmraid-events-isw.so" could not be loaded:
/lib/libdmraid-events-isw.so: undefined symbol: pthread_mutex_trylock
RAID set "isw_ciahbbfedg_Volume0p1" already active
RAID set "isw_ciahbbfedg_Volume0p1" was not activated
RAID set "isw_ciahbbfedg_Volume0p2" already active
RAID set "isw_ciahbbfedg_Volume0p2" was not activated
RAID set "isw_ciahbbfedg_Volume0p3" already active
RAID set "isw_ciahbbfedg_Volume0p3" was not activated

[root@localhost grub]# dmraid -r
/dev/sdb: isw, "isw_ciahbbfedg", GROUP, ok, 976773165 sectors, data@ 0
/dev/sda: isw, "isw_ciahbbfedg", GROUP, ok, 976773165 sectors, data@ 0

4) edit grub.conf

The last step for RAID arrays is to mount the '/boot' partition from our newly imaged drives and edit the grub.conf file.  We want to remove the kernel options that normally skip the scanning of RAID devices.  As you recall, my original virtual machine was not a RAID device, so I must remove those options for the physical server to boot correctly.

Create a new folder on the liveuser desktop and mount the RAID device to that folder.  The 'p1' suffix stands for partition 1, which is the partition that contains '/boot' (in most cases).

example,

[root@localhost ~]# mount -t ext4 /dev/mapper/isw_YOURDEVICE_Volume0p1 /home/liveuser/mount/

Open 'grub/grub.conf' on the mounted partition (it lives at '/boot/grub/grub.conf' on the running system) and remove 'rd_NO_MD' & 'rd_NO_DM' from the kernel line.
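For reference, the kernel line you are editing will look something like the one below; the volume group name and exact option list are just an example from a Fedora 14 install, and only rd_NO_MD and rd_NO_DM need to go:

kernel /vmlinuz-2.6.35.11-83.fc14.i686.PAE ro root=/dev/mapper/vg_server-lv_root rd_LVM_LV=vg_server/lv_root rd_NO_LUKS rd_NO_MD rd_NO_DM rhgb quiet

If you prefer to make the change non-interactively, a sed one-liner against the mounted copy (path as in the mount example above) should do it:

[root@localhost ~]# sed -i 's/ rd_NO_MD//g; s/ rd_NO_DM//g' /home/liveuser/mount/grub/grub.conf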

In the dd command, 'if' is the input file and 'of' is the output file.  The copy will take a while depending on how big the image is and the media you are using.  Also, if you are using a different RAID controller or special device drivers, there will be many more steps involved at this stage.  I cannot account for every type of installation, so please post if you run into trouble.

When it's complete, reboot the system.  Once rebooted, open the GNOME Disk Utility (Applications -> System Tools -> Disk Utility) and create a partition in the unused space on the new hard drive; we will then extend the logical volume to consume the entire drive.
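If you would rather create that partition from the command line, parted can do the same thing.  A rough sketch, assuming the disk is /dev/sda, the imaged data ends around the 6GB mark, and the new partition comes out as number 3 (adjust the start offset and partition number to whatever 'print free' reports):

[root@localhost]# parted /dev/sda print free
[root@localhost]# parted -s /dev/sda mkpart primary 6GB 100%
[root@localhost]# parted -s /dev/sda set 3 lvm on
[root@localhost]# partprobe /dev/sda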

Now we can use 'system-config-lvm' to add this new partition to '/'.  Initialize your device from the 'Uninitialized Entries' section.  In my case I had to add a mapped RAID device that was not listed.

Find the root 'logical' (not physical) volume in your setup and click 'Edit Properties'.  Click 'Use remaining', followed by the 'OK' button.  A dialog box will appear; select 'Add' to join the Logical Volume Group.
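The same result can be had from a terminal if system-config-lvm is not available.  A minimal sketch, assuming the new partition is /dev/sda3, the volume group is vg_server, and the root logical volume is lv_root (substitute the names that 'pvs', 'vgs', and 'lvs' report on your system):

[root@localhost]# pvcreate /dev/sda3
[root@localhost]# vgextend vg_server /dev/sda3
[root@localhost]# lvextend -l +100%FREE /dev/vg_server/lv_root
[root@localhost]# resize2fs /dev/vg_server/lv_root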

Our root volume is now the full size of our new server's drive.

Good luck, and thank you for reading savelono.com.

Written by mattb in: Linux




1 Comment

  • And if you have no LVM?

    Comment | February 8, 2015
