> From: Philip Rhoades
> I can ssh from/to the host/guest OK but how do I set up a route (or
> whatever is necessary) so that another machine:
> eth0: 192.168.0.12
> can ssh to the guest? "ssh 192.168.122.68" gives "no route to host". I
> have looked at http://docs.fedoraproject.org/virtualization-guide/f12/en-US/html/
> but the problem does not seem to be covered there.
Alexander is correct in saying that bridging would allow you to do that.
There are two networking setups discussed in the guide.
The first is NAT (network address translation), in which the guests are
given "private" IP addresses and any outbound traffic appears to come
from the host machine's IP address. This is the same as the setup on your
ADSL router, where the internal network machines get addresses of
192.168.x.x but the internet sees your requests as coming from the IP
address of your router.
There should be lots of documentation in Linux firewalling guides under
their sections on NAT (possibly called IP masquerading in some). Have a look
at these for information on port forwarding to expose services running
inside the virtual machine (such as ssh).
The other option is bridging. This shares the physical network interface
of the host with the guest. In this case the VM acts as though it's a
machine plugged into the same subnet as the host, its services are
accessible like those of the host, and it's as vulnerable to attack as the
host is.
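On Fedora/RHEL, a minimal bridge setup might look like the following (a
sketch; device names are illustrative, and the guest must then be attached
to br0 in virt-manager):

```
# /etc/sysconfig/network-scripts/ifcfg-br0  (hypothetical example)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
```

Restart networking after creating the files so eth0 is enslaved to br0.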
I'm in this situation and would like to be certain before doing anything destructive...
Thanks in advance for your help, and sorry if this is off topic.
RHEL 6.1 host and a VM with one disk that is LVM based.
- shut down the guest
- activate the LV exclusively (the host is actually a two-node RHEL 6.1 cluster):
on one node:
# lvchange -an /dev/VG_VIRT01/zensrv_002
on the other one:
# lvchange -aey /dev/VG_VIRT01/zensrv_002
- create snapshot
# lvcreate --size 5G --snapshot --name zensrv_002_snap /dev/VG_VIRT01/zensrv_002
Logical volume "zensrv_002_snap" created
- power on the VM and go ahead with some intrusive tests.
Note that I kept the original LV as the disk for the VM.
- At the end of my work I have changed about 1.7 GB; the situation is:
# lvs /dev/VG_VIRT01/zensrv_002*
LV VG Attr LSize Origin Snap% Move Log
zensrv_002 VG_VIRT01 owi-ao 10.00g
zensrv_002_snap VG_VIRT01 swi-a- 5.00g zensrv_002 34.77
- Now I'm happy with the changes...
Is it OK to simply remove the snapshot?
- What if I'm not confident with the changes and would like to revert?
# lvconvert --merge /dev/VG_VIRT01/zensrv_002_snap
Or did I get it totally wrong, and should I have worked with the snapshot
and not with the original LV?
I keep reading docs, both RHEL and general, but I'm somewhat confused...
Probably someone on this list has already gone down this path...
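For what it's worth, the two outcomes described above can be sketched as
follows (a sketch, not definitive advice; it assumes the VM is shut down
first, and the two commands are mutually exclusive):

```
# Keep the changes: the origin LV already holds the new data
# (the VM wrote to the origin), so just drop the snapshot.
lvremove /dev/VG_VIRT01/zensrv_002_snap

# ...or revert: merge the snapshot's preserved blocks back into
# the origin, restoring the state at snapshot creation time.
lvconvert --merge /dev/VG_VIRT01/zensrv_002_snap
```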
BTW: are there any plans to integrate some kind of snapshotting feature for
LVM-based disks into virt-manager?
I have a Dell R815 with plenty of memory where I'm testing nested kvm,
as the processor is
AMD Opteron(tm) Processor 6174
and nested kvm is enabled by default.
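A quick sanity check for this (a sketch; on a machine without an AMD KVM
host the parameter file does not exist, so it falls back to a message):

```shell
# See whether the kvm_amd module reports nested SVM as enabled.
nested_file=/sys/module/kvm_amd/parameters/nested
if [ -r "$nested_file" ]; then
    nested=$(cat "$nested_file")   # typically "1" when nested virt is on
else
    nested="kvm_amd not loaded"
fi
echo "nested: $nested"
```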
The OS is Fedora 15 x86_64 and I have installed a fully updated CentOS
5.6 guest.
To be able to use full virtualization inside guests of the CentOS
guest, I have configured its CPU as "Copy host CPU configuration",
so that now I have "Opteron_G3" as this CentOS guest's CPU type.
I have configured it with 2 CPUs.
The F15 host is quad socket, 12 cores each.
The flags I can see on the F15 host are:
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl
nonstop_tsc extd_apicid amd_dcm pni monitor cx16 popcnt lahf_lm
cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch
osvw ibs skinit wdt nodeid_msr arat npt lbrv svm_lock nrip_save
The flags on the CentOS 5.6 guest, from its cpuinfo, are:
processor : 1
vendor_id : AuthenticAMD
cpu family : 16
model : 2
model name : AMD Phenom(tm) 9550 Quad-Core Processor
stepping : 3
cpu MHz : 2200.024
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb
lm pni cx16 popcnt lahf_lm cmp_legacy svm cr8_legacy altmovcr8 abm
bogomips : 4400.61
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
I can successfully install linux nested guests.
But I have problems installing windows xp nested guests.
I try to configure them as "standard", i.e. with an IDE CD-ROM and the default network card.
The installation proceeds ok up to the first reboot, but then at that
time I only get a white cursor blinking at top left...
Any reason a win guest could have these problems?
Anyone already installed win nested guests?
Any log I can provide to understand?
I suspect that only a subset of CPU flags are "nestable", and so
instead of "Copy host CPU configuration" it could be better to pass only a
subset of them, as is possible from the virt-manager GUI... but I don't
know which ones they should be...
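If it helps, passing an explicit model with svm required (rather than
copying the host CPU) would look something like this in the libvirt domain
XML; the fragment is hypothetical, and the exact feature list is precisely
the open question:

```xml
<cpu match='exact'>
  <model>Opteron_G3</model>
  <!-- svm is the AMD virtualization extension; the L1 guest
       needs it exposed for nested KVM to work -->
  <feature policy='require' name='svm'/>
</cpu>
```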
Thanks in advance for suggestions
Next step could be to install a Fedora 15 guest and try nested
virtualization with it, to understand whether the problem is a
limitation of the virtualization codebase found in CentOS 5.6.
BTW: I saw a lot of activity at the beginning of May related to nested
virtualization arriving at the code level for Intel CPUs too... any
information about expected integration into Fedora's qemu/kvm versions
(or Rawhide, eventually)?
virt-manager worked fine until I added myself to the wheel group (for sudo);
now it complains about libvirtd not running. See below for details:
$ virt-manager --debug
2011-06-06 12:54:54,429 (error:66): dialog message: Unable to open a
connection to the libvirt management daemon.
Libvirt URI is: qemu:///system
- The 'libvirtd' daemon has been started
: authentication failed
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/connection.py", line 1055,
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 107, in
if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirtError: authentication failed
2011-06-06 12:54:57,619 (engine:477): Exiting app normally.
$ virsh -c qemu:///system
error: authentication failed
error: failed to connect to the hypervisor
$ sudo virsh -c qemu:///system
[sudo] password for athmane:
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
$ sudo systemctl status libvirtd.service
libvirtd.service - LSB: daemon for libvirt virtualization API
Loaded: loaded (/etc/rc.d/init.d/libvirtd)
Active: active (running) since Mon, 06 Jun 2011 12:50:40 -0100; 12min ago
Process: 3452 ExecStop=/etc/rc.d/init.d/libvirtd stop (code=exited,
Process: 3460 ExecStart=/etc/rc.d/init.d/libvirtd start (code=exited,
Main PID: 3468 (libvirtd)
├ 1320 /usr/sbin/dnsmasq --strict-order --bind-interfaces...
└ 3468 libvirtd --daemon
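One thing worth checking (my assumption, not a confirmed diagnosis):
libvirt authorizes qemu:///system connections for desktop users via
PolicyKit, and group membership changes only take effect in new login
sessions, so the running session may not actually be in wheel yet:

```shell
# 'id -nG' with no argument shows the groups of the *current*
# process, while 'id -nG <user>' looks them up in the account
# database; if "wheel" appears only in the second list, the
# session predates the group change -- log out and back in.
session_groups=$(id -nG)
account_groups=$(id -nG "$(whoami)")
echo "session: $session_groups"
echo "account: $account_groups"
```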
Using SPICE from virt-manager and selecting View -> Fullscreen, the
menu bar is still displayed, shifting the display downward.
Does anyone know of a workaround?
Ian Pilcher arequipeno(a)gmail.com
"If you're going to shift my paradigm ... at least buy me dinner first."