Whether I use sparse=true or sparse=false, I end up with a qcow2 file whose apparent (raw) size is 100 GB. However, if I look at the disk size in virt-manager, the file is 20 GB.
I am having problems mounting the / partition during the install.
I am using this virt-install command to create and install a RHEL 6 VM on F20 (updated OS as of today):
virt-install \
--connect qemu:///system \
--name Stage \
--memory 2048 \
--os-variant rhel6 \
--os-type linux \
--disk path=/var/lib/libvirt/images/stage.qcow2,cache=none,bus=virtio,size=100,sparse=true,format=qcow2 \
--network bridge=bridge0 \
--virt-type qemu \
--graphics vnc \
--extra-args "ks=file:/rhel6-ks.ks" \
The /var/lib/libvirt/images directory is a GlusterFS filesystem shared between two machines (kvm1 and kvm2); images are supposed to reside on this mount.
and this kickstart file (rhel6-ks.ks):
#RHEL 6 KS file
network --onboot=yes --device=eth0 --bootproto=dhcp --hostname=v-dst-stgjah-01.nexus.xxxxxxx.com
authconfig --enableshadow --passalgo=sha512
timezone --utc Europe/London
bootloader --location=mbr --driveorder=vda --append="rhgb quiet"
clearpart --all --drives=vda
part /boot --fstype=ext4 --size=1024
part pv.008002 --size=94371840
volgroup vg_rhel6 pv.008002
logvol / --fstype=ext4 --name=lv_root --vgname=vg_rhel6 --size=15000
logvol swap --name=lv_swap --vgname=vg_rhel6 --size=4096
logvol /repos --fstype=ext4 --name=lv_repos --vgname=vg_rhel6 --grow --size=100
The error message I get from anaconda: "An error occurred mounting device /dev/mapper/vg_rhel6-lv_root at / mount failed: (9, None)."
Manually creating the disk with the truncate command gives the same problem.
I have created other RHEL 6 VMs on 10 GB disks and the OS installs without any problems.
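For what it's worth, the 100 GB vs. 20 GB discrepancy is normal for sparse files: ls reports the apparent size, while du (and virt-manager) report the blocks actually allocated. A quick way to see this, scaled down to 1 GB and using a made-up temp path:

```shell
# Create a sparse file and compare apparent vs. allocated size.
tmp=$(mktemp -d)
truncate -s 1G "$tmp/stage.img"         # same trick as creating the disk by hand
ls -lh "$tmp/stage.img"                 # apparent size: 1.0G
du -h "$tmp/stage.img"                  # allocated size: 0 (no blocks written yet)
du -h --apparent-size "$tmp/stage.img"  # 1.0G again
rm -r "$tmp"
```

Whether holes are preserved depends on the filesystem underneath, so it may be worth checking that the GlusterFS mount actually keeps the image sparse.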
I just realized that when one reverts to a previous snapshot ("start
a snapshot"), the properties of the virtual machine don't revert
to how they looked when the snapshot was taken. For example:
- Take snapshot X
- Add some hardware (a new disk)
- Revert to snapshot X
At this point I would expect the recently added disk to be removed from
the VM definition (and for the qcow2 file to be deleted from disk as
well) since those weren't there when the snapshot was taken.
I think that's the correct way it should behave, and therefore, when you
revert to a snapshot, instead of saying:
"All disk changes since the last snapshot was created will be discarded."
... it should simply say:
"All changes since the last snapshot was created will be discarded."
What do you guys think?
On Thu, Mar 27, 2014 at 10:26:42PM +0000, Richard W.M. Jones wrote:
> I'm pleased to announce libguestfs 1.26, a library and set of tools
> for accessing and modifying virtual machine disk images. This release
> took more than 6 months of work by a considerable number of people,
> and has many new features (see release notes below).
> You can get libguestfs 1.26 here:
> Main website: http://libguestfs.org/
> Source: http://libguestfs.org/download/1.26-stable/
> You will also need the latest supermin from here:
> Fedora 20/21: http://koji.fedoraproject.org/koji/packageinfo?packageID=8391
> It will appear as an update for F20 in about a week.
> Fedora 20 users can test and give feedback here:
> Debian/experimental coming soon, see:
> The Fedora and Debian packages have split dependencies so you can
> download just the features you need.
> From http://libguestfs.org/guestfs-release-notes.1.html :
> RELEASE NOTES FOR LIBGUESTFS 1.26
> New features
> virt-customize(1) is a new tool for customizing virtual machine disk
> images. It lets you install packages, edit configuration files, run
> scripts, set passwords and so on. virt-builder(1) and virt-sysprep(1)
> use virt-customize, and command line options across all these tools are
> now identical.
> virt-diff(1) is a new tool for showing the differences between the
> filesystems of two virtual machines. It is mainly useful when showing
> what files have been changed between snapshots.
> virt-builder(1) has been greatly enhanced. There are many more ways to
> customize the virtual machine. It can pull templates from multiple
> repositories. A parallelized internal xzcat implementation speeds up
> template decompression. Virt-builder uses an optimizing planner to
> choose the fastest way to build the VM. It is now easier to use
> virt-builder from other programs. Internationalization support has been
> added to metadata. More efficient SELinux relabelling of files. Can
> build guests for multiple architectures. Error messages have been
> improved. (Pino Toscano)
> virt-sparsify(1) has a new --in-place option. This sparsifies an image
> in place (without copying it) and is also much faster. (Lots of help
> provided by Paolo Bonzini)
> virt-sysprep(1) can delete and scrub files under user control. You can
> lock user accounts or set random passwords on accounts. Can remove more
> log files. Can unsubscribe a guest from Red Hat Subscription Manager.
> New flexible way to enable and disable operations. (Wanlong Gao, Pino
> Toscano)
> virt-win-reg(1) allows you to use URIs to specify remote disk images.
> virt-format(1) can now pass the extra space that it recovers back to
> the host.
> guestfish(1) has additional environment variables to give fine control
> over the ><fs> prompt. Guestfish reads its (rarely used) configuration
> file in a different order now so that local settings override global
> settings. (Pino Toscano)
> virt-make-fs(1) was rewritten in C, but is unchanged in terms of
> functionality and command line usage.
> Language bindings
> The OCaml bindings have a new Guestfs.Errno module, used to check the
> error number returned by Guestfs.last_errno.
> PHP tests now work. (Pino Toscano)
> Inspection can recognize Debian live images.
> ARMv7 (32 bit) now supports KVM acceleration.
> Aarch64 (ARM 64 bit) is supported, but the appliance part does not work
> yet.
> PPC64 support has been fixed and enhanced.
> Denial of service when inspecting disk images with corrupt btrfs
> It was possible to crash libguestfs (and programs that use libguestfs
> as a library) by presenting a disk image containing a corrupt btrfs
> filesystem. This was caused by a NULL pointer dereference causing a
> denial of service, and is not thought to be exploitable any further.
> See commit d70ceb4cbea165c960710576efac5a5716055486 for the fix. This
> fix is included in libguestfs stable branches ≥ 1.26.0, ≥ 1.24.6 and
> ≥ 1.22.8, and also in RHEL ≥ 7.0. Earlier versions of libguestfs are
> not vulnerable.
> Better generation of random root passwords and random seeds
> When generating random root passwords and random seeds, two bugs were
> fixed which are possibly security related. Firstly we no longer read
> excessive bytes from /dev/urandom (most of which were just thrown
> away). Secondly we changed the code to avoid modulo bias. These
> issues were not thought to be exploitable. (Both changes suggested by
> Edwin Török)
> GUID parameters are now validated when they are passed to API calls,
> whereas previously you could have passed any string. (Pino Toscano)
> New APIs
> guestfs_add_drive_opts: new discard parameter
> The new discard parameter allows fine-grained control over
> discard/trim support for a particular disk. This allows the host file
> to become more sparse (or thin-provisioned) when you delete files or
> issue the guestfs_fstrim API call.
> guestfs_add_domain: new parameters: cachemode, discard
> These parameters are passed through when adding the domain's disks.
> Discard all blocks on a guestfs device. Combined with the discard
> parameter above, this makes the host file sparse.
> Test if discarded blocks read back as zeroes.
> For each struct returned through the API, libguestfs now generates
> guestfs_compare_* and guestfs_copy_* functions to allow you to
> compare and copy structs.
> Copy attributes (like permissions, xattrs, ownership) from one file
> to another. (Pino Toscano)
> A flexible API for creating empty disk images from scratch. This
> avoids the need to call out to external programs like qemu-img(1).
> Per-backend settings (can also be set via the environment variable
> LIBGUESTFS_BACKEND_SETTINGS). The main use for this is forcing TCG
> mode in the qemu-based backends, for example:
> export LIBGUESTFS_BACKEND=direct
> export LIBGUESTFS_BACKEND_SETTINGS=force_tcg
> Get the label or name of a partition (for GPT disk images).
> Build changes
> The following extra packages are required to build libguestfs 1.26:
> supermin ≥ 5
> Supermin version 5 is required to build this version of libguestfs.
> flex, bison
> Virt-builder now uses a real parser to parse its metadata file, so
> these tools are required.
> This is now a required build dependency, where previously it was (in
> theory) optional.
> PO message extraction rewritten to be more robust. (Pino Toscano)
> podwrapper gives an error if the --insert or --verbatim argument
> pattern is not found.
> Libguestfs now passes the qemu -enable-fips option to enable FIPS, if
> qemu supports it.
> ./configure --without-qemu can be used if you don't want to specify a
> default hypervisor.
> Copy-on-write [COW] overlays, used for example for read-only drives,
> are now created through an internal backend API (.create_cow_overlay).
> Libvirt backend uses some funky C macros to generate XML. These are
> simpler and safer.
> The ChangeLog file format has changed. It is now just the same as git
> log, instead of using a custom format.
> Appliance start-up has changed:
> * The libguestfs appliance now initializes LVM the same way as it is
> done on physical machines.
> * The libguestfs appliance does not write an empty string to
> /proc/sys/kernel/hotplug when starting up.
> Note that you must configure your kernel to have
> CONFIG_UEVENT_HELPER_PATH="" otherwise you will get strange LVM
> errors (this applies as much to any Linux machine, not just
> libguestfs). (Peter Rajnoha)
> Libguestfs can now be built on arches that have ocamlc(1) but not
> ocamlopt(1). (Hilko Bengen, Olaf Hering)
> You cannot use ./configure --disable-daemon --enable-appliance. It made
> no sense anyway. Now it is expressly forbidden by the configure script.
> The packagelist file uses m4 for macro expansion instead of cpp.
> Bugs fixed
> java bindings inspect_list_applications2 throws
> [RFE] enable subscription manager clean or unregister operation to
> virt-resize does not preserve GPT partition names
> mount-local should give a clearer error if root is not mounted
> virt-sparsify overwrites block devices if used as output files
> libguestfs: error: invalid backend: appliance
> guestfs_pvs prints "unknown device" if a physical volume is missing
> Recommended default clock/timer settings
> ruby-libguestfs throws "expecting 0 or 1 arguments" on
> Cannot inspect cirros 0.3.1 disk image fully
> LIBVIRT_DEFAULT_URI=qemu:///system breaks libguestfs
> virt-builder network (eg. --install) doesn't work if resolv.conf sets
> nameserver 127.0.0.1
> When SSSD is installed, libvirt configuration requires
> authentication, but not clear to user
> virt-make-fs fails making fat/vfat whole disk: Device partition
> expected, not making filesystem on entire device '/dev/sda' (use -I
> to override)
> virt-sysprep to delete more logfiles
> RFE: libguestfs inspection does not recognize Free4NAS live CD
> RFE: virt-sysprep/virt-builder should have an option to lock a user
> libguestfs fails examining libvirt guest with ceph drives: rbd: image
> name must begin with a '/'
> virt-builder fails if $HOME/.cache doesn't exist
> libguestfs: do not use versioned jar file
> All libguestfs LVM operations fail on Debian/Ubuntu
> Need update help output of part-set-gpt-type
> virt-sysprep does not correctly set the hostname on Debian/Ubuntu
> guestfish prints literal "\n" in error messages
> guestmount: "touch" command fails: touch: setting times of
> `timestamp': Invalid argument
> [RFE] function to get partition name
> list-devices returns devices of different types out of order
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
I am planning to update the xen package in Rawhide to version 4.4.0 in the
next few days. The xend functionality including the xm command is now an
optional extra which I was intending to leave out. Is that likely to cause
problems with libvirt? Also, this version makes it easier to use upstream
qemu, but Fedora qemu currently builds without xen support. How easy would
it be to change this (after xen has been updated)?
I am experimenting with GlusterFS between two VM hosts on F20. The idea is that the VM images are stored on the Gluster filesystem with no VM disk cache.
Could this be extended so the VMs can mount a Gluster brick directly? I'm not sure; maybe someone can confirm whether this would work?
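As a data point, guests can mount a Gluster volume the same way any other GlusterFS client does, provided they have glusterfs-fuse installed and network reachability to the bricks. A sketch of a guest-side /etc/fstab entry, assuming a volume named gv0 served by kvm1 (names made up here):

```
# /etc/fstab (guest): mount the Gluster volume at boot, after networking is up
kvm1:/gv0  /mnt/gluster  glusterfs  defaults,_netdev  0 0
```

Newer qemu can also talk to Gluster volumes directly via libgfapi, which skips the FUSE mount entirely, but that is a host-side feature rather than a guest mount.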
Looking for some recommendations as to when directory passthrough
started working or will become available.
I would like to read-only mount a directory from the host (inside the
guest). Technically looking to do this in RHEL 7, but just curious as to
the proper virsh define xml to make it happen.
Any pointers to current documentation/methods?
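For QEMU/KVM the usual mechanism is 9p filesystem passthrough via a <filesystem> element in the domain XML. A sketch follows; the host path and target tag are made-up examples, and I can't say exactly which RHEL release first supported it, so verify on your RHEL 7 host:

```xml
<!-- inside the <devices> section of the domain XML -->
<filesystem type='mount' accessmode='mapped'>
  <source dir='/srv/export'/>   <!-- host directory to share (example path) -->
  <target dir='hostshare'/>     <!-- mount tag the guest will see -->
  <readonly/>                   <!-- honoring this depends on the qemu version -->
</filesystem>
```

Inside the guest: mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt. This needs 9p virtio support in both qemu and the guest kernel.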
I've got one virtual machine that provides some services used
by another virtual machine. What's the simplest way to notice
when one of the machines goes down and automagically tell
the other to shutdown?
Can I get notifications of some kind from libvirt or
udev (maybe the vnet interface going away) or is it
simpler to just poll every so often to see which ones are still running?
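Polling is certainly the simplest thing to get working. A rough sketch of a poll-and-shutdown loop follows; the domain names "provider" and "consumer" are made up, and the virsh call is stubbed out so the sketch runs anywhere. On a real host, check_provider would just do STATE=$(virsh domstate provider):

```shell
#!/bin/sh
# Stub: pretend the provider VM stops running after the third poll.
# On a real host, replace the body with:  STATE=$(virsh domstate provider)
check_provider() {
    POLLS=$((POLLS + 1))
    if [ "$POLLS" -lt 3 ]; then STATE=running; else STATE="shut off"; fi
}

POLLS=0
check_provider
while [ "$STATE" = "running" ]; do
    : # sleep 30                       # poll interval on a real host
    check_provider
done
# Provider is down; on a real host: virsh shutdown consumer
echo "provider is $STATE, shutting down consumer"
```

An event-driven alternative exists via the libvirt API (domain lifecycle events), but a polling loop like this, run from cron or a service, is hard to beat for simplicity.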
I came up with a nifty way to do this using VLANs, in
my router, but my new router doesn't support VLANs,
so I keep thinking I really ought to be able to do this
with iptables, but nothing I try seems to work.
Here's my old technique:
Now I need to figure out some way to make everything
run on the host without any help from the router.
Am I going to have to run a 2nd virtual machine just
to serve as a "router" for the isolated machine
and block all local LAN traffic inside the 2nd VM?
(I'm pretty sure I could get that to work, but it
seems like a much bigger hammer than I ought to need. :)
We apologize if you receive multiple copies of this CfP.
CALL FOR PAPERS
9th Workshop on Virtualization in High-Performance Cloud Computing (VHPC),
held in conjunction with Euro-Par 2014, August 25-29, Porto, Portugal
Date: August 26, 2014
Workshop URL: http://vhpc.org
Paper Submission Deadline: May 30, 2014
Virtualization technologies constitute a key enabling factor for flexible
management in modern data centers, and particularly in cloud environments.
Cloud providers need to dynamically manage complex infrastructures in a
seamless fashion for varying workloads and hosted applications, with
customers deploying software and users submitting highly dynamic and
heterogeneous workloads. Thanks to virtualization, we have the ability to
manage vast computing and networking resources dynamically and close to the
marginal cost of providing the services, which is unprecedented in the
history of scientific and commercial computing.
Various virtualization technologies contribute to the overall picture in
different ways: machine virtualization, with its capability to enable
consolidation of under-utilized servers with heterogeneous software and
operating systems,
and its capability to live-migrate a fully operating virtual machine (VM)
with a very
short downtime, enables novel and dynamic ways to manage physical servers;
OS-level virtualization, with its capability to isolate multiple user-space
environments and to allow for their co-existence within the same OS kernel,
promises to provide many of the advantages of machine virtualization with
high levels of responsiveness and performance; I/O virtualization allows
physical
NICs/HBAs to take traffic from multiple VMs; network virtualization, with
its capability to create logical network overlays that are independent of
the
underlying physical topology and IP addressing, provides the fundamental
ground on top of which evolved network services can be realized with an
unprecedented level of dynamicity and flexibility; the increasingly adopted
paradigm of Software-Defined Networking (SDN) promises to extend this
flexibility to the control and data planes of network paths. These
technologies have to be inter-mixed and integrated in an intelligent way,
to support
workloads that are increasingly demanding in terms of absolute performance,
responsiveness and interactivity, and have to respect well-specified
Service Level Agreements (SLAs), as needed for industrial-grade provided
services.
Indeed, among emerging and increasingly interesting application domains
for virtualization, we can find big-data application workloads in cloud
infrastructures, interactive and real-time multimedia services in the cloud,
including real-time big-data streaming platforms such as those used in
real-time analytics, which nowadays support a plethora of application
domains. Distributed
cloud infrastructures promise to offer unprecedented responsiveness levels
to hosted applications, but that is only possible if the underlying
technologies can overcome most of the latency impairments typical of current
virtualized infrastructures (e.g., far worse tail latency). What is more,
in communications, Network Function Virtualization (NFV) is becoming a key
technology enabling a shift from supplying hardware-based network functions,
to providing them in a software-based and elastic way. In conjunction with
(public and private) cloud technologies, NFV may be used for constructing
a foundation for cost-effective network functions that can easily and
rapidly adapt to demand, while keeping their major carrier-grade
characteristics in terms of QoS and reliability.
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC)
aims to bring together researchers and industrial practitioners facing the
challenges posed by virtualization in order to foster discussion,
collaboration, exchange of knowledge and experience, enabling research to
ultimately provide novel
solutions for virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20-minute paper
presentations, each followed by a 10-minute discussion section, and
lightning talks of limited length. Presentations may be accompanied by
interactive demonstrations.
Topics of interest include, but are not limited to:
- Management, deployment and monitoring of virtualized environments
- Language-process virtual machines
- Performance monitoring for virtualized/cloud workloads
- Virtual machine monitor platforms
- Topology management and optimization for distributed virtualized
environments
- Paravirtualized I/O
- Improving I/O and network virtualization, including use of RDMA
- Improving performance in VM access to GPUs, GPU clusters, GP-GPUs
- HPC storage virtualization
- Virtualized systems for big-data and analytics workloads
- Optimizations and enhancements to OS virtualization support
- Improving OS-level virtualization and its integration within cloud
environments
- Performance modelling for virtualized/cloud applications
- Heterogeneous virtualized environments
- Network virtualization
- Software defined networking
- Network function virtualization
- Hypervisor and network virtualization QoS and SLAs
- Evolved European grid architectures, including such based on network
virtualization
- Workload characterization for VM-based environments
- Optimized communication libraries/protocols in the cloud
- System and process/bytecode VM convergence
- Cloud frameworks and APIs
- Checkpointing/migration of VM-based large compute jobs
- Job scheduling/control/policy with VMs
- Instrumentation interfaces and languages
- VMM performance (auto-)tuning on various load types
- Cloud reliability, fault-tolerance, and security
- Research, industrial and educational use cases
- Virtualization in cloud, cluster and grid environments
- Cross-layer VM optimizations
- Cloud HPC use cases including optimizations
- Services in cloud HPC
- Hypervisor extensions and tools for cluster and grid computing
- Cluster provisioning in the cloud
- Performance and cost modelling
- Languages for describing highly-distributed compute jobs
- VM cloud and cluster distribution algorithms, load balancing
- Energy-aware virtualization
Rolling paper registration
May 30, 2014 - Full paper submission
July 4, 2014 - Acceptance notification
October 3, 2014 - Camera-ready version due
August 26, 2014 - Workshop Date
Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Tommaso Cucinotta (co-chair), Bell Labs, Dublin, Ireland
Costas Bekas, IBM
Jakob Blomer, CERN
Roberto Canonico, University of Napoli Federico II, Italy
Paolo Costa, MS Research Cambridge, England
Jorge Ejarque Artigas, Barcelona Supercomputing Center, Spain
William Gardner, University of Guelph, Canada
Balazs Gerofi, University of Tokyo, Japan
Krishna Kant, Temple University, USA
Romeo Kinzler, IBM
Nectarios Koziris, National Technical University of Athens, Greece
Giuseppe Lettieri, University of Pisa, Italy
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Christine Morin, INRIA, France
Dimitrios Nikolopoulos, Queen's University Belfast, UK
Herbert Poetzl, VServer, Austria
Luigi Rizzo, University of Pisa, Italy
Josh Simons, VMWare, USA
Borja Sotomayor, University of Chicago, USA
Vangelis Tasoulas, Simula Research Lab, Norway
Yoshio Turner, HP Labs, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Chao-Tung Yang, Tunghai University, Taiwan
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, key words, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the Springer LNCS series - the
format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
EasyChair Abstract Submission Link:
The workshop is one day in length and will be held in conjunction with
Euro-Par 2014, 25-29 August, Porto, Portugal.