I have Fedora 19 x86_64 installed, up to date with all updates. I have
downloaded both RHEL 6.4 x86_64 and RHEL 5.9 x86_64 directly from Red Hat.
RHEL 6.4 installs fine. After install, it boots into the normal graphical environment.
RHEL 5.9 does not. The first hint of trouble comes during install, when
the installer reports that the X server failed to start, so the full
graphical install does not load; a more limited graphical install
proceeds instead. When the install finishes, and after reboot, the guest
comes up in runlevel 3. When I attempt an init 5, X fails to start and
reports an error 11, which I believe indicates a segmentation fault.
Previously, the RHEL 5 guest install worked under Fedora 16, 17, and 18 virt.
Has anyone seen the above, and do you know how to resolve it on Fedora 19?
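To narrow this down, it may help to pull the actual failure out of the guest's X log; "(EE)" lines mark errors. A minimal sketch (the log content below is invented for illustration; on the real guest you would grep /var/log/Xorg.0.log directly):

```shell
# On the RHEL 5 guest, X logs its errors to /var/log/Xorg.0.log,
# prefixing error lines with "(EE)". For example:
#   grep '(EE)' /var/log/Xorg.0.log
# Demonstrated below on a fabricated log excerpt:
cat > /tmp/Xorg.sample.log <<'EOF'
(II) LoadModule: "cirrus"
(EE) Backtrace: caught signal 11 (Segmentation fault)
EOF
grep '(EE)' /tmp/Xorg.sample.log
```

The "(EE)" line (driver name, module, or backtrace) is usually what identifies the crashing component.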
When I use a disk-type storage pool backed by a multipathed device, I get the following failure when I try to create a volume (or when I try to start the pool with existing partitions on the backend):
$ virsh vol-create-as poolname vol1 10G
error: Failed to create vol vol1
error: cannot stat file '/dev/mapper/mpathap1': No such file or directory
The partition itself is created successfully, and its device special file is /dev/mapper/mpatha1 (note: no "p").
Here is the pool definition:
I also tried passing /dev/mapper as the target path but got the same failure. Interestingly, everything works fine when I take multipath/device-mapper out of the picture. This is happening on F19 with the latest set of packages.
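For reference, a disk-type pool over a multipath device is normally defined along these lines (the pool name and paths here are illustrative, not the poster's actual XML):

```xml
<pool type='disk'>
  <name>poolname</name>
  <source>
    <device path='/dev/mapper/mpatha'/>
    <format type='dos'/>
  </source>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>
```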
I enabled debug logging to collect more information. Here are some debug messages that seemed interesting (let me know if you need the full log and I can send it as well).
2013-07-18 22:43:44.264+0000: 11672: debug : virStorageBackendDiskPartBoundries:560 : find free area: allocation 12884901888, cyl size 8225280
2013-07-18 22:43:44.264+0000: 11672: debug : virStorageBackendDiskPartBoundries:613 : aligned alloc 12884901888
2013-07-18 22:43:44.264+0000: 11672: debug : virStorageBackendDiskPartBoundries:629 : final aligned start 17408, end 12884919295
2013-07-18 22:43:44.264+0000: 11672: debug : virCommandRunAsync:2243 : About to run /usr/sbin/parted /dev/mapper/mpatha mkpart --script primary 17408B 12884919295B
2013-07-18 22:43:44.264+0000: 11672: debug : virFileClose:72 : Closed fd 23
2013-07-18 22:43:44.264+0000: 11672: debug : virFileClose:72 : Closed fd 25
2013-07-18 22:43:44.264+0000: 11672: debug : virFileClose:72 : Closed fd 27
2013-07-18 22:43:44.265+0000: 11672: debug : virCommandRunAsync:2248 : Command result 0, with PID 11774
... <<<following set of debug messages repeats a few times>>>
2013-07-18 22:43:44.295+0000: 11669: debug : udevEventHandleCallback:1513 : udev action: 'add'
2013-07-18 22:43:44.295+0000: 11669: debug : udevGetDeviceProperty:121 : udev reports device 'dm-1' does not have property 'DRIVER'
2013-07-18 22:43:44.295+0000: 11669: debug : udevGetDeviceType:1139 : Found device type 'disk' for device 'dm-1'
2013-07-18 22:43:44.295+0000: 11669: debug : udevGetDeviceProperty:121 : udev reports device 'dm-1' does not have property 'ID_BUS'
2013-07-18 22:43:44.295+0000: 11669: debug : udevGetDeviceProperty:121 : udev reports device 'dm-1' does not have property 'ID_SERIAL'
2013-07-18 22:43:44.295+0000: 11669: debug : udevGetDeviceSysfsAttr:210 : udev reports device 'dm-1' does not have sysfs attr 'device/vendor'
2013-07-18 22:43:44.295+0000: 11669: debug : udevGetDeviceSysfsAttr:210 : udev reports device 'dm-1' does not have sysfs attr 'device/model'
2013-07-18 22:43:44.295+0000: 11669: debug : udevGetDeviceProperty:121 : udev reports device 'dm-1' does not have property 'ID_TYPE'
2013-07-18 22:43:44.295+0000: 11669: debug : udevGetDeviceProperty:121 : udev reports device 'dm-1' does not have property 'ID_DRIVE_FLOPPY'
2013-07-18 22:43:44.295+0000: 11669: debug : udevGetDeviceProperty:121 : udev reports device 'dm-1' does not have property 'ID_DRIVE_FLASH_SD'
2013-07-18 22:43:44.295+0000: 11669: debug : udevKludgeStorageType:995 : Could not find definitive storage type for device with sysfs path '/sys/devices/virtual/block/dm-1', trying to guess it
2013-07-18 22:43:44.295+0000: 11669: debug : udevKludgeStorageType:1007 : Could not determine storage type for device with sysfs path '/sys/devices/virtual/block/dm-1'
2013-07-18 22:43:44.295+0000: 11669: debug : udevProcessStorage:1124 : Storage ret=-1
2013-07-18 22:43:44.295+0000: 11669: debug : udevAddOneDevice:1382 : Discarding device -1 0x7f3f187f7f60 /sys/devices/virtual/block/dm-1
2013-07-18 22:43:54.288+0000: 11672: error : virStorageBackendVolOpenCheckMode:1047 : cannot stat file '/dev/mapper/mpathap1': No such file or directory
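The error suggests a partition-naming mismatch: device-mapper/kpartx insert a "p" separator before the partition number only when the parent map name ends in a digit, while the failing stat assumes the "p" unconditionally. A small sketch of that naming rule (my own helper for illustration, not libvirt code):

```shell
# dm_part_name: compute the expected device-mapper partition node name.
# kpartx inserts "p" before the partition number only when the parent
# map name ends in a digit (to avoid ambiguous names like "mpath11").
dm_part_name() {
  base=$1; num=$2
  case $base in
    *[0-9]) echo "${base}p${num}" ;;
    *)      echo "${base}${num}" ;;
  esac
}

dm_part_name mpatha 1   # prints "mpatha1"  (what udev actually created)
dm_part_name mpath0 1   # prints "mpath0p1" (where the "p" does appear)
```

That would explain why libvirt stats /dev/mapper/mpathap1 while udev created /dev/mapper/mpatha1.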
We apologize if you receive multiple copies of this CfP.
CALL FOR PAPERS
8th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '13)
as part of SC13, Denver, Colorado | sponsored by ACM SIGHPC
Date: November 22, 2013
Workshop URL: http://vhpc.org
Paper Submission Deadline: September 23, 2013
Virtualization has become a common abstraction layer in modern data
centers, enabling resource owners to manage complex infrastructures
independently of their applications. At the same time, virtualization is
becoming a driving technology for a broad range of industry-grade IT
services. The cloud concept includes the notion of a separation between
resource owners and users, adding services such as hosted application
frameworks and queueing. Utilizing the same infrastructure, clouds
carry significant potential for use in CPU-intensive or data-intensive
computing. The ability of clouds to provide for requests and releases
of vast computing resources dynamically, and close to the marginal
cost of providing the services, is unprecedented in the history of
scientific and commercial computing.
This workshop aims to bring together industrial providers with the
application community in order to foster discussion, collaboration
and mutual exchange of knowledge and experience.
The workshop will be one day in length, composed of 20-minute paper
presentations, each followed by a 10-minute discussion section. Lightning
talks are limited to 5 minutes. Presentations may be accompanied by
Topics of interest include, but are not limited to:
- Management, deployment and monitoring of VM-based environments
- VM-cloud performance monitoring
- VM cloud topology management and optimization
- Operating systems virtualization support and optimization
- VM-based cloud performance modelling
- Network virtualization for VM-environments
- Data virtualization
- Evolved grid architectures, including those based on network virtualization
- Workload characterization for VM-based environments
- Optimized communication libraries/protocols in the cloud
- System and process/bytecode VM convergence
- Cloud frameworks and APIs
- GPU Virtualization architectures and APIs
- Checkpointing/migration of large compute jobs
- Instrumentation interfaces and languages
- VMM performance (auto-)tuning on various load types
- Cloud reliability, fault-tolerance, and security
- Heterogeneous virtualized environments
- Paravirtualized I/O
- Services in cloud HPC
- Research and education use cases
- Virtualization in cloud, cluster and grid environments
- Cross-layer VM optimizations
- Cloud HPC use cases including optimizations
- Energy-aware virtualization
- Performance and cost modelling
- QoS and service levels
- Languages for describing highly-distributed compute jobs
- VM cloud and cluster distribution algorithms, load balancing
- Hypervisor extensions and tools for cluster and grid computing
- Virtual machine monitor platforms
- Cluster provisioning in the cloud
Paper registration: rolling
August 9, 2013 - Deadline for lightning talk abstracts
September 2, 2013 - Lightning talk notification
September 23, 2013 - Full paper submission
October 21, 2013 - Acceptance notification
November 8, 2013 - Camera-ready version due
November 22, 2013 - Workshop date
Michael Alexander (chair), TU Wien, Austria
Gianluigi Zanetti (co-chair), CRS4, Italy
Anastassios Nanos (co-chair), NTUA, Greece
Costas Bekas, IBM, Switzerland
Jakob Blomer, CERN
Giovanni Busonera, CRS4, Italy
Roberto Canonico, University of Napoli Federico II, Italy
Simon Crosby, Bromium, USA
Tommaso Cucinotta, Alcatel-Lucent Bell Labs, Ireland
Casimer DeCusatis, IBM, USA
William Gardner, University of Guelph, USA
Marcus Hardt, Forschungszentrum Karlsruhe, Germany
Sverre Jarp, CERN, Switzerland
Xuxian Jiang, NC State, USA
Krishna Kant, George Mason University, USA
Romeo Kinzler, IBM, Switzerland
Nectarios Koziris, National Technical University of Athens, Greece
Simone Leo, CRS4, Italy
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Dimitrios Nikolopoulos, Queen's University Belfast, UK
Josh Simons, VMware, USA
Borja Sotomayor, University of Chicago, USA
Yoshio Turner, HP Labs, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Chao-Tung Yang, Tunghai University, Taiwan
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include an abstract, keywords, and the e-mail address of the
corresponding author, and must not exceed 8 pages, including tables
and figures, at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the ACM International Conference
Proceedings Series. The format must be according to the ACM SIG style.
Initial submissions are in PDF; authors of accepted papers will be
requested to provide source files.
Abstract Submission Link:
Lightning talks are a non-paper track, synoptic in nature, and strictly
limited to 5 minutes. They can be used to gain early feedback on ongoing
research, for demonstrations, or to present research results, early research
ideas, and perspectives and positions of interest to the community.
The workshop will be held as part of SC’13, Denver, Colorado.
SC 2013: http://sc13.supercomputing.org/
I have now updated Xen on Fedora 20 to version 4.3.0. I have built it with
the XSM/FLASK security subsystem, which becomes available if you add a
xenpolicy file as a module in the GRUB configuration. Note that the
xen-hypervisor package on i686 is currently empty because the 32-bit
hypervisor was dropped in this version, but a 32-bit dom0 should work on a
64-bit hypervisor if your hardware supports it.
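For anyone trying this, the resulting GRUB entry looks roughly like the following sketch (kernel versions, paths, and the xenpolicy file name are illustrative and depend on your build):

```
menuentry 'Fedora, with Xen 4.3.0' {
        multiboot  /xen.gz dom0_mem=2048M
        module     /vmlinuz-3.11.1-200.fc20.x86_64 root=/dev/sda2 ro
        module     /initramfs-3.11.1-200.fc20.x86_64.img
        module     /xenpolicy.24
}
```

The key point is the extra `module` line carrying the XSM/FLASK policy file after the kernel and initramfs.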
I have a Fedora 18 server hosting several Fedora 18 guests with a
bridge. The server, guests of the server, and peers of the server can
all freely interact. I have YUM repositories available via http, and
home and shared directories available via NFS from the server. DNS,
authentication, and policies for autofs and sudo are available from IPA
on one of the server guests. Life is good.
[root@host ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1
# Configured manually
[root@host ~]# cat /etc/sysconfig/network-scripts/ifcfg-br1
# Configured manually
[root@host ~]# virsh net-dumpxml Subnet1
<bridge name='br1' />
Now Fedora 19 comes along, and I need to test new versions of the server
guests in isolation from the existing server guests and server peers;
i.e., isolated from everything except the http and nfs services of the
server. I welcome your suggestions.
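One building block for that kind of isolation is a libvirt network with no <forward> element, which gives the test guests their own bridge with no path to the physical LAN (the name and addresses below are made up for illustration):

```xml
<network>
  <name>TestNet1</name>
  <bridge name='virbr-test'/>
  <ip address='192.168.200.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.200.100' end='192.168.200.200'/>
    </dhcp>
  </ip>
</network>
```

Guests attached to this network can reach each other and the host, but not the existing bridge or the outside peers.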
Back with one more (probably the last for the moment) problem, with sound this time.
When I connect to a Windows 7 guest (ich6 audio device) with virt-manager or
spicec from GNOME, sound works like a charm.
However, for the intended use, my standard boot sequence is booting Fedora
into multi-user.target (runlevel 3) and running spicec with xinit.
In this situation, I naturally get a connection refused from pulseaudio,
since it isn't running. If I start pulseaudio with start-pulseaudio-x11 as
the user running spicec, I don't get any errors, but no sound either.
I know this has more to do with pulseaudio than virtualization; however,
since sound has long been an issue with Spice on previous versions, maybe
someone can help me with this?
Thanks in advance !
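A setup like the one described usually boils down to an ~/.xinitrc along these lines (the host name, port, and client options are assumptions for illustration, not taken from the original post):

```shell
#!/bin/sh
# Start the per-user PulseAudio daemon for this X session first,
# then hand the session over to the Spice client.
start-pulseaudio-x11
exec spicec -h vmhost -p 5900
```

The ordering matters: the client should only start after pulseaudio has registered with the X session.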
I installed Fedora 19 and moved the VMs I was previously running on Fedora
17 to it. The VNC display is now corrupted: the desktop background is
black and many widgets are missing (the guest is Windows XP), making it
pretty unusable. Basically, it looks like it is failing to paint large
blocks of the UI, though pieces of it sometimes reappear when clicking on
menus, etc. Is this a known issue with Fedora 19 virt-viewer and VNC
displays?
I tried getting Windows XP to install the Spice driver, but had problems
way back when. I suppose I could try again if VNC is no
I'm having another problem with my KVM setup, using Fedora 18. For
debugging purposes, I added two serial ports to my virtual machines, with
targets like:
<target port="1"/> (and port="2" for the second)
On the Linux guest, everything works fine. On the Windows 7 guest, I only
get one of the two ports. The one that shows up works well, but I don't
understand why the second one doesn't appear. libvirt reports the
creation of both char devices in the logs, without any error.
I couldn't find anything about this on Google. Any clues? Thanks!
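For comparison, a two-port serial setup in the domain XML is typically written with target ports numbered from 0; a sketch, not the poster's actual configuration:

```xml
<serial type='pty'>
  <target port='0'/>
</serial>
<serial type='pty'>
  <target port='1'/>
</serial>
```

If the guest's ports are numbered 1 and 2 with no port 0, that numbering itself may be worth checking against how the guest enumerates COM ports.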
I'm currently setting up a virtualization system using Fedora 18, KVM, and
Open vSwitch. I've created my OVS bridge and successfully integrated it
into libvirt. However, if this bridge doesn't have an assigned IP address
(and it won't in production mode), I can't start any VM, with Spice
complaining with the following error:
((null):6425): Spice-Warning **:
reds.c:2977:reds_init_socket: getaddrinfo(127.0.0.1,5900): Address family
for hostname not supported
qemu-kvm: failed to initialize spice server
I don't understand where this comes from, since the loopback interface is
up. Can anyone help me with this? Thanks!
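One knob worth checking is where Spice is told to listen; the graphics element in the domain XML can pin the listen address explicitly (port and address values below are illustrative):

```xml
<graphics type='spice' port='5900' autoport='no'>
  <listen type='address' address='127.0.0.1'/>
</graphics>
```

If the address there resolves to a family the host cannot bind, qemu-kvm fails with a getaddrinfo error like the one above.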
I have clock synchronization problems with a Fedora 18/KVM host and
Windows server guests. Several different 64-bit Windows servers all have
some time drift.
Some of them slowly lose time until the difference reaches 60 seconds, and
then the clock is synchronized from the Internet. But 60 seconds is a long
time; Kerberos authentication sometimes fails due to time skew.
Some machines run OK for a day or two, but then the clock stops. After
some hours, the clock is synchronized again and continues to work properly.
Curiously, while time is stopped, the server works properly (except for
Kerberos authentication, which is very time-sensitive).
For example, I am attaching two graphs from two Windows servers. They show
the time difference measured by the check_mk Nagios plugin; 10k means
10,000 seconds.
How should I set the clock/time/NTP on these guests?
I have absolutely no problems with Linux guests; they work perfectly.
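A commonly suggested starting point for Windows guests under KVM is to pin the guest clock and timer policies in the domain XML. This is a sketch of that configuration, not a guaranteed fix for the drift described:

```xml
<clock offset='localtime'>
  <!-- Let the RTC catch up after missed ticks instead of losing them -->
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
  <!-- Windows tends to behave better without the emulated HPET -->
  <timer name='hpet' present='no'/>
</clock>
```

Combined with a single in-guest time source (either the Windows time service or NTP, not both), this usually reduces the kind of drift and stalls shown in the graphs.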