With ARM entering the server space, virtualization is a key technology in this segment. It is not a tool solely for servers and the data center: it is also used in embedded segments such as automotive, and it is starting to appear in mobile.
This is not a new technology; IBM pioneered it in the 1960s, and there are many different hypervisors implementing different methods of virtualization. In the open source realm there are two major hypervisors: KVM and Xen. Both interact directly with the Linux kernel; however, KVM is solely in the Linux domain, whereas Xen works with Linux, *BSD, and other UNIX variants.
In the past it was generally accepted that there are two types of hypervisor: Type 1 (also known as bare metal or native), where the hypervisor runs directly on the host hardware, controls all aspects of it, and manages the guest operating systems; and Type 2 (also known as hosted), where the hypervisor runs within a normal operating system. Under this classification Xen falls into the Type 1 camp and KVM fell into the Type 2 camp; however, modern implementations of both hypervisors have blurred the lines of distinction.
This time round I’ll be taking a look at the Xen Hypervisor, which is now one of the Linux Foundation’s collaborative projects.
The Xen hypervisor runs directly on the hardware and is responsible for handling CPU, memory, and interrupts. It is the first program to run after the bootloader exits. Virtual machines then run on top of Xen. A running instance of a virtual machine in Xen is called a DomU or guest. The controller for the guest VMs is a special host VM called Dom0, which contains the drivers for all the devices in the system. Dom0 also contains a control stack to manage virtual machine creation, destruction, and configuration.
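To give a flavour of what that control stack looks like in practice, in this article it is driven through the xl command line tool, for example (all of these commands appear later in the walkthrough):

xl list             # list running domains (Dom0 and any guests)
xl create domU.cfg  # create a new DomU from a config file
xl console guest    # attach to a guest's console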
The latest version of Xen is 4.4.0, which was released in March and has support for both ARMv7 and ARMv8. For this exercise I’ll be looking at using Xen on ARMv8 with the Foundation Model.
Please consult the Xen Wiki for more information on using Xen with Virtualization Extensions and using Xen with models. For discussion, review, information, and help, there are mailing lists.
You can use whichever Linux distribution you prefer, so long as you have a suitable cross-compilation environment set up. I’m using openSUSE 13.1 with the Linaro Cross Toolchain for AArch64.
Typographic conventions:

host$ = run as a regular user on the host machine
host# = run as the root user on the host machine (use sudo if you prefer)
chroot> = run as the root user in the chroot environment
model> = run as the root user in a running Foundation Model
The first steps are to build Xen and a Linux kernel for use in both the Dom0 and DomU machines. We then use boot-wrapper to package Xen and the Dom0 Linux kernel, along with a Device Tree, into a single image for the model.
If using Linaro's toolchain, ensure the toolchain's bin directory is in your $PATH.
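If you want to confirm the cross-compiler is reachable before starting (a quick sanity check, not part of the original steps):

host$ aarch64-linux-gnu-gcc --version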
host$ git clone git://xenbits.xen.org/xen.git xen
host$ cd xen
host$ git checkout RELEASE-4.4.0
There is a small build bug, due to the use of older autotools, which will be fixed in the 4.4.1 release. Rather than wait for the next release, we'll just backport the fix now.
host$ git cherry-pick 0c68ddf3085b90d72b7d3b6affd1fe8fa16eb6be
There is also a small bug in GCC with PSR_MODE; see bug LP#1169164. Download the attached PSR_MODE_workaround.patch and apply it:
host$ patch -i PSR_MODE_workaround.patch -p1
host$ make dist-xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- CONFIG_EARLY_PRINT=fastmodel
host$ cd ..
host$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
host$ cd linux
host$ git checkout v3.13

Create a new kernel config:

host$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
host$ sed -e 's/.*CONFIG_XEN is not set/CONFIG_XEN=y/g' -i .config
host$ sed -e 's/.*CONFIG_BLK_DEV_LOOP is not set/CONFIG_BLK_DEV_LOOP=y/g' -i .config
host$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- oldconfig

Make sure to answer Y to all Xen config options. For reference, I have attached a kernel.config which has all the required options enabled.

host$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image
host$ cd ..
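Before moving on, it is worth checking that the kernel image was actually produced (a simple sanity check):

host$ ls -lh linux/arch/arm64/boot/Image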
In a browser go to Arm Developer's Fixed Virtual Platforms page.
Scroll to the bottom and select “Download Now”
This should provide FM000-KT-00035-r0p8-52rel06.tgz.

Extract the tarball:
host$ tar xaf FM000-KT-00035-r0p8-52rel06.tgz
It is common to run the models without real firmware. In this case a boot-wrapper is needed to provide a suitable boot-time environment for Xen, allowing it to boot into Non-Secure HYP mode, provide boot modules, and so on:
host$ git clone -b xen-arm64 git://xenbits.xen.org/people/ianc/boot-wrapper-aarch64.git
host$ cd boot-wrapper-aarch64
host$ ln -s ../xen/xen/xen Xen
host$ ln -s ../linux/arch/arm64/boot/Image Image
Use the attached foundation-v8.dts to build the device tree blob:

host$ dtc -O dtb -o fdt.dtb foundation-v8.dts
host$ make CROSS_COMPILE=aarch64-linux-gnu- FDT_SRC=foundation-v8.dts IMAGE=xen-system.axf
host$ cd ..

Run the model to make sure the kernel functions; it will panic, as we haven't set up the rootfs yet:

host$ ./Foundation_v8pkg/models/Linux64_GCC-4.1/Foundation_v8 \
  --image boot-wrapper-aarch64/xen-system.axf
Next we create a suitable chroot build environment using the AArch64 port of openSUSE. We will use the qemu-user-static support for AArch64 to run the chroot on the (x86) host.
First we build the qemu binary, then we construct the chroot, and finally we build the Xen tools in the chroot environment.
host$ git clone https://github.com/openSUSE/qemu.git qemu-aarch64
host$ cd qemu-aarch64
host$ git checkout aarch64-work

Install some build dependencies:

host# zypper in glib2-devel-static glibc-devel-static libattr-devel-static libpixman-1-0-devel ncurses-devel pcre-devel-static zlib-devel-static

host$ ./configure --enable-linux-user --target-list=arm64-linux-user --disable-werror --static
host$ make -j4
host$ ldd ./arm64-linux-user/qemu-arm64
not a dynamic executable
This last step verifies that the resulting binary is indeed a static binary; we will copy it into the chroot later on.
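If you prefer, file gives the same assurance (it should report the binary as statically linked):

host$ file ./arm64-linux-user/qemu-arm64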
We now need to enlighten binfmt_misc about AArch64 binaries.
On openSUSE:
host# cp scripts/qemu-binfmt-conf.sh /usr/sbin/
host# chmod +x /usr/sbin/qemu-binfmt-conf.sh
host# qemu-binfmt-conf.sh

On Debian:

host# update-binfmts --install aarch64 /usr/bin/qemu-aarch64-static \
  --magic '\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7' \
  --mask '\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff'

host$ cd ..
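To confirm the handler was registered, you can list the binfmt_misc entries (assuming binfmt_misc is mounted, as it is by default on most distributions; the entry name differs between the two methods above):

host$ ls /proc/sys/fs/binfmt_misc/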
host$ wget http://download.opensuse.org/ports/aarch64/distribution/13.1/appliances/openSUSE-13.1-ARM-JeOS.aarch64-rootfs.aarch64-1.12.1-Build32.2.tbz
Note: the file name may change due to continuous image building; if the above does not work, check the download directory for the latest version of the tarball.
host$ mkdir aarch64-chroot
host# tar -C aarch64-chroot -xaf openSUSE-13.1-ARM-JeOS.aarch64-rootfs.aarch64-1.12.1-Build32.2.tbz

Install the qemu binary into the chroot environment:

host# cp qemu-aarch64/arm64-linux-user/qemu-arm64 aarch64-chroot/usr/bin/qemu-aarch64-static
host# cp /etc/resolv.conf aarch64-chroot/etc/resolv.conf
Copy the Xen sources into the chroot:

host# cp -r xen aarch64-chroot/root/xen

Chroot into the AArch64 environment:

host# chroot aarch64-chroot /bin/sh

We now need to install some build dependencies:

chroot> zypper install gcc make patterns-openSUSE-devel_basis git vim libyajl-devel python-devel wget libfdt1-devel libopenssl-devel
If prompted to trust the repository key, you can choose whether to trust it permanently or just this time (personally, I chose to always trust it).
chroot> cd /root/xen
chroot> ./configure
chroot> make dist-tools
chroot> exit
The Xen tools are now in aarch64-chroot/root/xen/dist/install.
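If you want to see what was produced (the exact layout depends on the configure defaults):

host$ ls aarch64-chroot/root/xen/dist/install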
We will create an ext3-formatted filesystem image. We will also use a simplified init script to avoid long waits while running the model.
host$ wget http://download.opensuse.org/ports/aarch64/distribution/13.1/appliances/openSUSE-13.1-ARM-JeOS.aarch64-rootfs.aarch64-1.12.1-Build32.2.tbz
This is the same rootfs tarball as used for the chroot; you can re-use the previously downloaded tarball if you wish.
host$ dd if=/dev/zero bs=1M count=1024 of=rootfs.img
host$ /sbin/mkfs.ext3 rootfs.img
Say yes, we know it’s not a block device
host# mount -o loop rootfs.img /mnt
host# tar -C /mnt -xaf openSUSE-13.1-ARM-JeOS.aarch64-rootfs.aarch64-1.12.1-Build32.2.tbz
Install the Xen tools that we built earlier
host# rsync -aH aarch64-chroot/root/xen/dist/install/ /mnt/
host# cat > /mnt/root/init.sh <<EOF
#!/bin/sh
set -x
mount -o remount,rw /
mount -t proc none /proc
mount -t sysfs none /sys
mount -t tmpfs none /run
mkdir /run/lock
mount -t devtmpfs dev /dev
/sbin/udevd --daemon
udevadm trigger --action=add
mkdir /dev/pts
mount -t devpts none /dev/pts
mknod -m 640 /dev/xconsole p
chown root:adm /dev/xconsole
/sbin/klogd -c 1 -x
/usr/sbin/syslogd
cd /root
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
exec /bin/bash
EOF
host# chmod +x /mnt/root/init.sh
Get the missing runtime dependencies for Xen:
host$ wget http://download.opensuse.org/ports/aarch64/distribution/13.1/repo/oss/suse/aarch64/libyajl2-2.0.1-14.1.2.aarch64.rpm
host$ wget http://download.opensuse.org/ports/aarch64/distribution/13.1/repo/oss/suse/aarch64/libfdt1-1.4.0-2.1.3.aarch64.rpm
host# cp libyajl2-2.0.1-14.1.2.aarch64.rpm libfdt1-1.4.0-2.1.3.aarch64.rpm /mnt/root/
host# umount /mnt
host$ ./Foundation_v8pkg/models/Linux64_GCC-4.1/Foundation_v8 \
  --image boot-wrapper-aarch64/xen-system.axf \
  --block-device rootfs.img \
  --network=nat
Silence some of the harmless warnings:
model> mkdir /lib/modules/$(uname -r)
model> depmod -a

Install the runtime dependencies:

model> rpm -ivh libfdt1-1.4.0-2.1.3.aarch64.rpm libyajl2-2.0.1-14.1.2.aarch64.rpm
model> ldconfig
Start the Xen daemons; you can ignore the harmless message about i386 qemu if it appears.
model> /etc/init.d/xencommons start
If /etc/init.d/xencommons fails with a missing-file error, re-run ldconfig.
Confirm that Dom0 is up:
model> xl list
Name ID Mem VCPUs State Time(s)
Domain-0 0 512 2 r----- 13.9
Congratulations, you now have a working Xen toolstack. You can shut down the model for now.
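One way to do this (assuming the usual sysvinit tools are present in the JeOS image; with our minimal init script a clean shutdown isn't needed anyway) is to force a halt from within the model:

model> halt -f

and then terminate the Foundation_v8 process on the host (Ctrl-C) if it does not exit by itself.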
For the guest rootfs we will use a smaller OpenEmbedded-based Linaro image rather than a full openSUSE image, purely for space reasons.
host$ wget http://releases.linaro.org/latest/openembedded/aarch64/linaro-image-minimal-genericarmv8-20140223-649.rootfs.tar.gz
host$ dd if=/dev/zero bs=1M count=128 of=domU.img
host$ /sbin/mkfs.ext3 domU.img

Again say yes, we know it's not a block device.

host# mount -o loop domU.img /mnt
host# tar -C /mnt -xaf linaro-image-minimal-genericarmv8-20140223-649.rootfs.tar.gz
host# umount /mnt

Make the DomU rootfs and kernel available to the Dom0:

host# mount -o loop rootfs.img /mnt
host# cp domU.img /mnt/root/domU.img
host# cp linux/arch/arm64/boot/Image /mnt/root/Image
Create the config for the guest:

host# cat > /mnt/root/domU.cfg <<EOF
kernel = "/root/Image"
name = "guest"
memory = 512
vcpus = 1
extra = "console=hvc0 root=/dev/xvda ro"
disk = [ 'phy:/dev/loop0,xvda,w' ]
EOF
host# umount /mnt
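Note that the disk line hands the Dom0 loop device /dev/loop0 to the guest as its virtual disk xvda; this is why we attach domU.img with losetup before creating the guest, and why the kernel command line uses root=/dev/xvda.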
Start the model again:
host$ ./Foundation_v8pkg/models/Linux64_GCC-4.1/Foundation_v8 \
  --image boot-wrapper-aarch64/xen-system.axf \
  --block-device rootfs.img \
  --network=nat

model> losetup /dev/loop0 domU.img
model> /etc/init.d/xencommons start

Create the DomU using the config:

model> xl create domU.cfg
View the guest's info on the Xen console:

[Screenshot of the Dom0 host]
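For a non-interactive check, xl list in Dom0 should now show both domains (output illustrative; times will vary):

model> xl list
Name          ID   Mem VCPUs State   Time(s)
Domain-0       0   512     2 r-----      XX
guest          1   512     1 -b----       X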
Connect to the guest's console:
model> xl console guest
[Screenshot of the DomU guest]
Now all that’s left is to have a lot of fun!