The idea was discussed here before, so I'll keep this short.
We've set up a repository for people running Fedora 11 who would like
to test the rawhide/F12 virt packages. To use it, do e.g.
$> cat > /etc/yum.repos.d/fedora-virt-preview.repo << EOF
[virt-preview]
name=Virtualization Rawhide for Fedora 11
baseurl=http://markmc.fedorapeople.org/virt-preview/f11/\$basearch/
enabled=1
gpgcheck=0
EOF
$> yum update
At the moment, it contains the F-12 versions of libvirt and qemu, but
as F-12 development continues, it will contain more. I'll send periodic
mails to the list detailing the latest updates.
Also, this is very much a work-in-progress. The TODO list includes:
- get createrepo installed on fedorapeople.org
- include debuginfo packages in the repo (need more quota)
- find a better location than markmc.fedorapeople.org
- enable package builders to upload directly to the repo
- automatically do preview build/upload for devel/ builds
- allow building against preview packages; e.g. for language bindings
Comments most welcome. Help with the TODO list is even more welcome :-)
 - https://www.redhat.com/archives/fedora-virt/2009-April/msg00154.html
I have built a new set of kernel packages based on fedora rawhide
kernels and the xen/dom0/hackery branch of Jeremy's git repository
( http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=summary ).
This batch (kernel-2.6.29-0.114.2.6.rc6.fc11) is available via the koji
build system at
These are really for development and debugging purposes only, as I am
still having problems getting them to boot, but others have reported more
success at getting kernels based on this git repository working, so you
might be lucky.
Note: to install these packages on Fedora 10 you will need rpm-4.6.0-1.fc10
installed (currently in updates-testing, but it should be available in
updates soon), because recent Fedora 11 builds switched to SHA-256 file
digest hashing.
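Concretely, that means pulling the newer rpm from updates-testing before installing the kernel packages, along these lines:

```shell
# grab the newer rpm from updates-testing, then verify the version
yum --enablerepo=updates-testing update rpm
rpm -q rpm        # should report 4.6.0-1.fc10 or later
```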
Glauber and I are going to start using git to manage the patches for
the qemu package. The idea is that we'll use git's magical powers to
make it easier for us to fix problems in patches, cherry-pick patches
from upstream, re-base our patches to a newer upstream, etc.
Nothing is changing, really - patches should still go upstream first
and we'll still include patches as individual files in the source RPM -
this is purely about making managing those patches easier.
Here's the tree:
It might be useful to others who want to help with back-porting fixes.
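As a sketch of the kind of workflow this enables (the branch and tag names below are made up for illustration, not the real tree's):

```shell
set -e
# Set up a toy repo standing in for the qemu package tree: downstream
# patches are kept as commits on top of an upstream release tag.
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q qemu
cd qemu
git config user.email packager@example.com
git config user.name "Packager"
echo upstream > file.c
git add file.c
git commit -qm "upstream base"
git tag v0.10.5                  # pretend upstream release tag
echo fix >> file.c
git commit -qam "qemu: fix foo"  # a downstream patch carried in the RPM
# Regenerate the individual patch files shipped in the source RPM:
git format-patch -o patches/ v0.10.5..HEAD >/dev/null
ls patches/                      # 0001-qemu-fix-foo.patch
```

Re-basing to a newer upstream is then just `git rebase <new-tag>` followed by re-running format-patch.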
I've spent a lot of time trying to get my custom Xen pv_ops dom0 kernel working with
virt-install and/or virt-manager on Fedora 10, and now it seems I've got things working.
If you want to play with this you need:
1) A new enough pv_ops dom0 kernel (2.6.29-rc8 or newer), so that it has /sys/hypervisor support included
- Compile with CONFIG_HIGHPTE=n, since it still seems to be broken
2) libvirt 0.6.1 and related packages from Fedora 10 updates-testing
In addition to those I'm using Xen 3.3.1-9 packages from rawhide/F11 rebuilt for F10.
With the older Fedora 10 libvirt packages, libvirtd was crashing often for me, and
I had some other issues, e.g. the virt-install console window stalling instead of opening.
Today I was able to run the following on Fedora 10 32bit PAE pv_ops dom0:
- CentOS 5.3 32bit PAE PV domU
- Fedora 10 32bit PAE PV domU
- Use virt-install to install a Fedora 10 32bit PAE PV domU (using a custom
kickstart to force PAE kernel installation, to avoid the anaconda bug which
installs the wrong non-PAE kernel by default).
Fedora 11 (rawhide) installation most probably works too.
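For reference, the virt-install invocation was along these lines (the paths and URLs here are placeholders, not the ones I actually used):

```shell
virt-install --paravirt --name f10-pv \
    --ram 1024 --file /dev/vg0/f10-pv \
    --location http://example.com/fedora/10/i386/os/ \
    --extra-args "ks=http://example.com/f10-pae.ks"
```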
I'm using LVM volumes for domU disks (tap:aio is not yet supported by pv_ops).
Network seems to work after running "ifdown eth0 && ifup eth0" on the guest -
I don't know why that's needed. That's something to figure out later :)
Graphical domU console works with virt-viewer and virt-install during installation.
- virt-manager complains about the default network (virbr0) being inactive and
asks if I want to start it. If I click Yes, I get the error:
"libvirtError: cannot create bridge 'virbr0': File exists"
virbr0 works just fine with virt-install, so I don't know what the problem
is - I'll have to look into that later.
Thanks to everyone involved for helping me with this!
[CC'd to fedora-virt]
On Wed, Jun 17, 2009 at 05:25:26PM -0400, Christopher Johnston wrote:
> I had a question about febootstrap and a specific
> use case of mine at my company where I am implementing a stateless solution
> for our grid. I wanted to be able to take the initramfs image that is spun
> up out of febootstrap and put a custom linuxrc in there which will
> essentially take the contents of the initramfs and copy them directly into a
> tmpfs filesystem thats mounted as /sysroot then do a pivot_root into it and
> then do a full execution of init.
I'm a bit confused why you'd want to do this, but maybe I'm missing
something. Why copy the initramfs image into a tmpfs? The initramfs
is already loaded into kernel memory at boot time and so it has all
the same properties / benefits of tmpfs.
> I have been using nash currently to do
> this and I have ran into a few issue when the system boots up once init
> forks off where it segfaults. Below is my custom linuxrc:
I would tend to avoid using nash. If it's possible to add bash to the
image, just use bash.
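For instance, a minimal bash /init along these lines (a sketch only - it assumes bash, mount and mknod are present in the image) does the early setup without nash, and without any copy step, since the unpacked initramfs is already in RAM:

```shell
#!/bin/bash
# Minimal initramfs /init sketch: mount the pseudo-filesystems, create
# a couple of device nodes, then hand control to the real init.
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -o mode=0755 -t tmpfs none /dev
mknod /dev/console c 5 1
mknod /dev/null c 1 3
mknod /dev/zero c 1 5
exec /sbin/init
```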
You might find it helpful to look at what we do in libguestfs, here:
> mount -t proc /proc /proc
> echo Mounting proc filesystem
> echo Mounting sysfs filesystem
> mount -t sysfs /sys /sys
> echo Creating /dev
> mount -o mode=0755 -t tmpfs /dev /dev
> mkdir /dev/pts
> mount -t devpts -o gid=5,mode=620 /dev/pts /dev/pts
> mkdir /dev/shm
> mkdir /dev/mapper
> echo Creating initial device nodes
> mknod /dev/null c 1 3
> mknod /dev/zero c 1 5
> mknod /dev/systty c 4 0
> mknod /dev/tty c 5 0
> mknod /dev/console c 5 1
> mknod /dev/ptmx c 5 2
> mknod /dev/fb c 29 0
> mknod /dev/tty0 c 4 0
> mknod /dev/tty1 c 4 1
> mknod /dev/tty12 c 4 12
> mknod /dev/ttyS0 c 4 64
> mknod /dev/ttyS1 c 4 65
> mknod /dev/ttyS2 c 4 66
> mknod /dev/ttyS3 c 4 67
> /lib/udev/console_init tty0
> echo Creating tmpfs filesystem
> mkdir -p /sysroot
> mkrootdev -t tmpfs -o defaults,ro /dev/root
> mount -o mode=0755 -t tmpfs /dev/root /sysroot
> mkdir -p /sysroot/proc
> mkdir -p /sysroot/sys
> mkdir -p /sysroot/.oldroot
> echo Copying rootfs->tmpfs
> cp -a bin /sysroot
> cp -a dev /sysroot
> cp -a etc /sysroot
> cp -a home /sysroot
> cp -a lib /sysroot
> cp -a lib64 /sysroot
> cp -a mnt /sysroot
> cp -a sbin /sysroot
> cp -a tmp /sysroot
> cp -a usr /sysroot
> cp -a var /sysroot
> cp -a root /sysroot
> echo Setting up the new root tmpfs filesystem
> echo Switching from rootfs to tmpfs
Richard Jones, Emerging Technologies, Red Hat http://et.redhat.com/~rjones
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine. Supports Linux and Windows.
I'm having a problem with a virtual machine created on an F-11 host
that I didn't have with an F-10 host, and I'm wondering if I've
misconfigured something or missed a trick somewhere. I'm doing some
custom Linux kernel work for my employer, and I'm using a small
collection of virtual machines to test it. In the short time I've had
F-11 on my host machine, I've seen that virtual disk performance is
noticeably worse than it was with F-10. My virtual machines regularly
spew soft lockup messages to the system log, always with backtraces
that show the process is being starved of disk I/O. The tests put a
lot of stress on the disk, because this custom kernel work is going to
help some overloaded servers deal with the load.
It's actually working pretty well on the real machines with real
disks, but with the upgrade to F-11, my test VMs have become almost
useless for testing further development work. Before I take the
drastic step of rolling back to F-10, what factors affect virtual
disk performance that I should look at?
I created both the F-10 and F-11 machines with virt-manager, if that matters.
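In case it helps: the disk definition virt-manager generated looks roughly like this (an illustrative snippet, not my exact XML) - I understand the cache attribute on the <driver> element is one of the knobs that affects disk behaviour:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```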
I've recently installed F-11 and am having a problem with networking
between the host and guest with the default network configuration -
specifically that the iptables rules for virbr0 are not being inserted
by libvirt as they used to be under F-10.
I am using the default configuration of the firewall as shipped with
F-11. The guest instance is a Windows XP image created under F-10; I
simply recreated the config files by "creating" a new guest under
virt-manager and pointing it at the disk image file. The guest boots
up fine, but has no networking. The output of iptables -L doesn't contain
any reference to virbr0 or vnet0 (the latter is automatically created
when starting the guest OS) - I have confirmed virbr0 and vnet0 are
present using ifconfig. In case it's relevant, this machine is using
NetworkManager and has a single wired ethernet adapter configured with
a static IP.
Any suggestions on how I can debug further?
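For reference, these are the sorts of commands (run as root) I can use to poke at it - suggestions on what to look for in their output are welcome:

```shell
iptables -L -n -v        # filter rules; FORWARD entries for virbr0 normally appear here
iptables -t nat -L -n    # libvirt's MASQUERADE rule for the default network lives here
virsh net-list --all     # check whether the 'default' network is marked active
service libvirtd restart # restarting libvirtd should re-create the network rules
```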
The following set of patches changes libguestfs to use virtio block
devices by default.
Device names change from /dev/sd* to /dev/vd*, but device name
translation should hide this change in most cases.
The change isn't totally straightforward. The CHS geometry for virtio
disks differs from that of IDE disks (even where the disks have
identical sizes). This caused sfdisk runs with explicit cylinder
numbers to fail, so I had to change all of these. This is the
reasoning behind patch 2 and most of patch 3.
Patch 1: Just changes the way some messages are displayed from the
Patch 2: Changes the statvfs test so it doesn't get affected by the CHS change.
Patch 3: (a) We change the list-devices and list-partitions functions
so that they see and return /dev/vd* devices. (b) The udev fix
previously posted to this list. (c) Change the partition sizes
because of the CHS change - see above. (d) Add ,if=virtio to -drive
parameters so that block devices are exported as virtio disks.
Patch 4: The generated code, just shown for completeness.
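The ,if=virtio change in patch 3(d) amounts to something like the following (illustrative command lines, not the exact ones libguestfs builds):

```shell
# the same image exported two ways; the guest-visible device name changes
qemu-kvm -drive file=guest.img             # emulated IDE: guest sees /dev/hd* or /dev/sd*
qemu-kvm -drive file=guest.img,if=virtio   # virtio: guest sees /dev/vda
```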
The tests all pass except one. A single test fails because pvremove
from the previous test causes sfdisk to fail (apparently the previous
pvremove is still happening, so sfdisk refuses to partition the device
because it is in use). I believe this is an underlying failure
revealed by this change to virtio, not caused by the change itself.
There doesn't appear to be any noticeable increase in speed when
running the tests, but I didn't measure anything. The main reason to
use virtio is that it's inherently simpler than emulating IDE or SCSI.
Richard Jones, Emerging Technologies, Red Hat http://et.redhat.com/~rjones
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://et.redhat.com/~rjones/libguestfs/
See what it can do: http://et.redhat.com/~rjones/libguestfs/recipes.html
I am planning on running several virtual machines on a single host. I
will have two or three Linux based virtual machines and one or two
Windoze. I plan on using a F11 host system.
I need most of these to run automatically on boot-up of the host
system. It would be really nice if I could use something like the
Ctrl-Alt-Fn keys to access and switch between virtual machines.
This needs to be stable. The machines that these virtual machines are
intended to replace often run for hundreds of days between reboots.
My gut feel is that the virt-manager suite might be the way to go,
editing the appropriate XML files as required. I also see there
is a qemu launcher, and it seems to work okay. I suspect there are
others as well.
What tends to be the consensus here on the various virtual machine
managers? Are there white papers somewhere that could give some guidance?
I am a newcomer to Fedora (linux) virtualization. However, I have been a long
time user of VMware products running first on Red Hat Linux and then on Fedora.
When I recently acquired a CPU with hardware virtualization support (AMD
Phenom II 940), I decided to "change my problem set" and give Fedora
Virtualization a try ... specifically qemu/kvm/etc.
I use virtuals for three purposes: 1) To test software which might destroy a
system or testing which requires a lot of re-booting. 2) For development
environment to build rpm packages for various systems, i386/x86_64, etc. 3)
To run windows.
OK, install the virtualization packages and then install a simple Fedora 11
guest ... naturally it worked fine.
Now let's get down to real business, since I would be installing/running a
number of virtual systems. The first thing I found was that configuration files,
disk images, etc. were scattered but mostly under /etc/libvirt and
/var/lib/libvirt. I looked for a runtime parameter which specified where
things were to be stored but did not find anything. Furthermore, all of this
stuff was in the root ("/") partition, which will make it a pain to bring
across when I upgrade to the next release.
I have three suggestions (I will put these in bugzilla as soon as someone says
what package they should be filed against):
1. Put all files (disk images, configuration, etc.) under a single directory
(easier to manage).
2. Provide a virtualization configuration parameter for setting the top
directory to be used to store virtualization files (make it easier to get
things out of root).
3. Do not require DVD/CD ISO images to be in the image directory and do not
screw with SELinux settings on ISO files.
For those unfamiliar with it, this is more or less how the VMware tools set things up.
So much for wishful thinking; how do I make things easier (and get the files
out of root)?
OK, what I have come up with is to create a separate partition for all of the
files and then "bind mount" it to /var/lib/libvirt and /etc/libvirt.
Does this make sense? Is this going to work? Any other suggestions?
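Concretely, I was thinking of something like this in /etc/fstab (device and mount-point names are made up):

```
/dev/vg0/virt   /virt              ext3   defaults   0 2
/virt/lib       /var/lib/libvirt   none   bind       0 0
/virt/etc       /etc/libvirt       none   bind       0 0
```

I assume I would also need to fix up the SELinux labels on the new location (e.g. restorecon -R against the bind-mounted directories) so libvirt and qemu can still access everything.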