Hello,
The quantum network side is not covered in https://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL#Adding_a_... so I'm trying to clarify this.
On a second node I've followed the instructions in the wiki, minus the network part obviously, since I'm running quantum+openvswitch (hybrid). I'm applying some of the instructions found here: https://fedoraproject.org/wiki/QA:Testcase_Quantum_V2#Setup but when I run "quantum-server-setup --plugin openvswitch" it asks me to install mysql; the scripts should be modified to also ask whether there's a DB running elsewhere (i.e. on the controller).
Anyway, I let it do its thing, install mysql and set up nova.conf, then modified nova.conf with the quantum section from the controller's nova.conf. Now my question is: should quantum_admin_auth_url and quantum_url on the node point to localhost or to the controller's address?
I'm thinking quantum_admin_auth_url should point to the keystone service on the controller's IP and quantum_url should be http://localhost:9696/?
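For concreteness, this is the shape of the two options in nova.conf (a sketch only — 192.168.1.10 is a placeholder for the controller's address, the ports are the usual defaults, and whether quantum_url may stay on localhost is exactly the open question):

```ini
# nova.conf fragment on the compute node (illustrative values)
quantum_admin_auth_url = http://192.168.1.10:35357/v2.0
quantum_url = http://192.168.1.10:9696/
```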
Additionally:
quantum.conf - qpid_hostname should point to the controller's IP
quantum/api-paste.ini - [filter:authtoken] auth_host should be the controller
quantum/l3_agent.ini - auth_url should point to the controller
plugins/openvswitch/ovs_quantum_plugin.ini - sql_connection should use the controller's mysql server
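Spelled out as file fragments, the settings above would look something like this (a sketch: 192.168.1.10 is a hypothetical controller IP, and the mysql user, password and database name are placeholders):

```ini
# /etc/quantum/quantum.conf
[DEFAULT]
qpid_hostname = 192.168.1.10

# /etc/quantum/api-paste.ini
[filter:authtoken]
auth_host = 192.168.1.10

# /etc/quantum/l3_agent.ini
[DEFAULT]
auth_url = http://192.168.1.10:35357/v2.0

# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[DATABASE]
sql_connection = mysql://quantum:password@192.168.1.10/ovs_quantum
```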
Let me know if any of that is wrong, or if I missed some other settings that should be noted.
Regards, Lucian
Hi, please see my replies inline. Thanks Gary
On 11/20/2012 01:14 PM, Nux! wrote:
Hello,
The quantum network side is not covered in https://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL#Adding_a_... so I'm trying to clarify this.
On a second node I've followed the instructions in the wiki minus the network obviously since I'm running quantum+openvswitch (hybrid). I'm applying some of the instructions found here: https://fedoraproject.org/wiki/QA:Testcase_Quantum_V2#Setup but when I'm doing "quantum-server-setup --plugin openvswitch" it's asking me to install mysql, the scripts should be modified to also ask if there's a DB running elsewhere (i.e. on the controller).
You do not need to install and run the Quantum service again; it only needs to be run once, on a "controller node".
On the compute node you only need to run the Quantum agent. Please note that there is a utility script that you can use: quantum-node-setup
Thanks Gary
Anyway, I let it do its thing, install mysql and setup nova.conf, then modified nova.conf with the quantum section from the controller's nova.conf. Now my question is - in this case quantum_admin_auth_url and quantum_url on the node should they point to localhost or to the controller's address?
I'm thinking quantum_admin_auth_url should point to the keystone service on the controller's IP and quantum_url should be http://localhost:9696/?
Additionally:
quantum.conf - qpid_hostname should point to the controller's IP
This is done by the above-mentioned script
quantum/api-paste.ini - [filter:authtoken] auth_host should be the controller
No need
quantum/l3_agent.ini - auth_url should point to the controller
No need - you only need one l3 agent
plugins/openvswitch/ovs_quantum_plugin.ini - sql_connection should be using the controller's mysql server
This is handled by the script. Please note that the agent has no access to the quantum database; this is done via the message broker.
Let me know if any of that is wrong or I missed some other settings that should be notified.
Regards, Lucian
On 20.11.2012 11:26, Gary Kotton wrote:
Hi, Please see my inline. Thanks Gary
Wow, that's great!
So to conclude, when adding a compute node we need to run the following services: openstack-nova-compute, openstack-nova-api, quantum-openvswitch-agent
Looking at the script, I see it doesn't deal with chkconfig, so we must do that manually.
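A sketch of that manual step for the three services, printed as commands to review rather than run blindly, since the exact service names may differ between package releases:

```shell
# Enable at boot and start the services a compute node needs
# (names as listed above; adjust if your packages differ)
SERVICES="openstack-nova-compute openstack-nova-api quantum-openvswitch-agent"
for svc in $SERVICES; do
    echo "chkconfig $svc on"     # persist across reboots
    echo "service $svc start"    # start now
done
```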
cloud mailing list cloud@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/cloud
On 11/20/2012 01:47 PM, Nux! wrote:
Wow, that's great!
So to conclude, when adding a compute node we need to run the following services: openstack-nova-compute openstack-nova-api
I am not 100% sure about the nova api. Maybe others from Nova can chime in here.
quantum-openvswitch-agent
You will also need to ensure that your OVS is up and running on the host
Looking at the script I see it's not dealing with chkconfig so we must do it manually.
Yes (this is the same as with the other services)
Right, I've hit some problems. After installing openvswitch and quantum, creating the int and ex bridges, and following the instructions for a new compute node, I can see it successfully contacts the controller (nova-manage service list shows the new compute service), and it even attempts to build a VM, but I'm getting the following:
/var/log/nova/compute.log
2012-11-20 16:49:22 INFO nova.compute.resource_tracker [-] Compute_service record updated for node2242
2012-11-20 16:49:22 ERROR nova.network.quantumv2 [-] _get_auth_token() failed
2012-11-20 16:49:22 TRACE nova.network.quantumv2 Traceback (most recent call last):
2012-11-20 16:49:22 TRACE nova.network.quantumv2   File "/usr/lib/python2.6/site-packages/nova/network/quantumv2/__init__.py", line 38, in _get_auth_token
2012-11-20 16:49:22 TRACE nova.network.quantumv2     httpclient.authenticate()
2012-11-20 16:49:22 TRACE nova.network.quantumv2   File "/usr/lib/python2.6/site-packages/quantumclient/client.py", line 199, in authenticate
2012-11-20 16:49:22 TRACE nova.network.quantumv2     raise exceptions.Unauthorized(message=body)
2012-11-20 16:49:22 TRACE nova.network.quantumv2 Unauthorized: [Errno 111] ECONNREFUSED
2012-11-20 16:49:22 TRACE nova.network.quantumv2
2012-11-20 16:47:16 TRACE nova.network.quantumv2
2012-11-20 16:47:33 INFO nova.virt.libvirt.driver [req-ffb7c7cf-8c40-4622-828e-7e6916f608cc 3c5b8ffd534347b2b30474b34f57dd3f 6cb0773ff31d480ab04ad9264bdb5f44] [instance: b0f4956b-6416-4270-8026-9c28ec927f42] Injecting key into image 727b6e19-988e-43b3-ba7c-014beafb6d29
Who refuses the connection to quantum? :-/
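The ECONNREFUSED above means nothing accepted the TCP connection at whatever endpoint the quantum client tried. A small, generic check like this (plain Python, nothing OpenStack-specific; the hostname "controller" and the port list are placeholders for the endpoints a compute node typically needs — Keystone 5000/35357, Quantum 9696, qpid 5672) can narrow down which endpoint is unreachable:

```python
import socket

def can_connect(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        # covers ECONNREFUSED, timeouts and name-resolution failures
        return False
    sock.close()
    return True

# "controller" is a placeholder; substitute the real address
for port in (5000, 35357, 9696, 5672):
    print(port, can_connect("controller", port))
```

Run it from the compute node; any False pinpoints a service that is down, bound to the wrong interface, or firewalled.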
Then, also from compute log:
2012-11-20 16:47:58 ERROR nova.compute.manager [req-ffb7c7cf-8c40-4622-828e-7e6916f608cc 3c5b8ffd534347b2b30474b34f57dd3f 6cb0773ff31d480ab04ad9264bdb5f44] [instance: b0f4956b-6416-4270-8026-9c28ec927f42] Instance failed to spawn
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42] Traceback (most recent call last):
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 743, in _spawn
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     block_device_info)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     temp_level, payload)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     self.gen.next()
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     return f(*args, **kw)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1062, in spawn
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     block_device_info)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1888, in _create_domain_and_network
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     domain = self._create_domain(xml)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1867, in _create_domain
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     domain.createWithFlags(launch_flags)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     rv = execute(f,*args,**kwargs)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     rv = meth(*args,**kwargs)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 650, in createWithFlags
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42] libvirtError: Cannot get interface MTU on '': No such device
2012-11-20 16:47:58 TRACE nova.compute.manager [instance: b0f4956b-6416-4270-8026-9c28ec927f42]
2012-11-20 16:47:59 INFO nova.compute.resource_tracker [req-ffb7c7cf-8c40-4622-828e-7e6916f608cc 3c5b8ffd534347b2b30474b34f57dd3f 6cb0773ff31d480ab04ad9264bdb5f44] Aborting claim: [Claim b0f4956b-6416-4270-8026-9c28ec927f42: 512 MB memory, 0 GB disk, 1 VCPUS]
2012-11-20 16:47:59 ERROR nova.compute.manager [req-ffb7c7cf-8c40-4622-828e-7e6916f608cc 3c5b8ffd534347b2b30474b34f57dd3f 6cb0773ff31d480ab04ad9264bdb5f44] [instance: b0f4956b-6416-4270-8026-9c28ec927f42] Build error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 496, in _run_instance\n injected_files, admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 743, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped\n temp_level, payload)\n', ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\n return f(*args, **kw)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1062, in spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1888, in _create_domain_and_network\n domain = self._create_domain(xml)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1867, in _create_domain\n domain.createWithFlags(launch_flags)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit\n result = proxy_call(self._autowrap, f, *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call\n rv = execute(f,*args,**kwargs)\n', ' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker\n rv = meth(*args,**kwargs)\n', ' File "/usr/lib64/python2.6/site-packages/libvirt.py", line 650, in createWithFlags\n if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)\n', "libvirtError: Cannot get interface MTU on '': No such device\n"]
2012-11-20 16:48:18 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 2796
2012-11-20 16:48:18 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 225
2012-11-20 16:48:18 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 7
2012-11-20 16:48:18 INFO nova.compute.resource_tracker [-] Compute_service record updated for node2242.cumulus.coreix.net
2012-11-20 16:48:18 ERROR nova.network.quantumv2 [-] _get_auth_token() failed
2012-11-20 16:48:18 TRACE nova.network.quantumv2 Traceback (most recent call last):
2012-11-20 16:48:18 TRACE nova.network.quantumv2   File "/usr/lib/python2.6/site-packages/nova/network/quantumv2/__init__.py", line 38, in _get_auth_token
2012-11-20 16:48:18 TRACE nova.network.quantumv2     httpclient.authenticate()
2012-11-20 16:48:18 TRACE nova.network.quantumv2   File "/usr/lib/python2.6/site-packages/quantumclient/client.py", line 199, in authenticate
2012-11-20 16:48:18 TRACE nova.network.quantumv2     raise exceptions.Unauthorized(message=body)
2012-11-20 16:48:18 TRACE nova.network.quantumv2 Unauthorized: [Errno 111] ECONNREFUSED
2012-11-20 16:48:18 TRACE nova.network.quantumv2
/var/log/libvirt/libvirtd.log says:
2012-11-20 16:41:56.762+0000: 1497: info : libvirt version: 0.9.10, package: 21.el6_3.5 (CentOS BuildSystem http://bugs.centos.org, 2012-10-11-13:57:12, c6b9.bsys.dev.centos.org)
2012-11-20 16:41:56.762+0000: 1497: error : virNetDevBridgeCreate:224 : Unable to create bridge virbr0: Package not installed
2012-11-20 16:45:26.423+0000: 2488: info : libvirt version: 0.9.10, package: 21.el6_3.5 (CentOS BuildSystem http://bugs.centos.org, 2012-10-11-13:57:12, c6b9.bsys.dev.centos.org)
2012-11-20 16:45:26.423+0000: 2488: error : virNetDevBridgeSet:136 : Unable to set bridge virbr0 forward_delay: Operation not supported
2012-11-20 16:47:58.628+0000: 2482: error : virNetDevGetMTU:347 : Cannot get interface MTU on '': No such device
2012-11-20 16:47:58.653+0000: 2482: error : virNetDevGetIndex:657 : Unable to get index for interface vnet0: No such device
Selinux is in permissive mode.
What's wrong?
On 11/20/2012 06:56 PM, Nux! wrote:
[...]
Who refuses the connection to quantum? :-/
In order for Nova to deploy a VM it needs to interface with Quantum. Can you please check that /etc/nova.conf is updated correctly? It should have the Keystone credentials for Quantum (you can compare with the nova.conf that you have on the controller node).
[...]
Selinux is in permissive mode.
What's wrong?
Did you not have this working a few days ago?
On 20.11.2012 17:26, Gary Kotton wrote:
[...]
Who refuses the connection to quantum? :-/
In order for Nova to deploy a VM it needs to interface with Quantum. Can you please check that /etc/nova.conf is updated correctly? It should have the Keystone credentials for Quantum (you can compare with the nova.conf that you have on the controller node).
Yes, my network/quantum settings in nova.conf are the same as on the controller...
[...]
Selinux is in permissive mode.
What's wrong?
Did you not have this working a few days ago?
I did, on the controller, it works fine, but not on this compute node. I'll try to dig some more, probably I misconfigured something.
On 20.11.2012 17:41, Nux! wrote:
I did, on the controller, it works fine, but not on this compute node. I'll try to dig some more, probably I misconfigured something.
Any reason why the keystonerc from the controller (which works fine) shouldn't also work on the additional node? I did change the OS_AUTH_URL to: export OS_AUTH_URL=http://controller:5000/v2.0/
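For reference, a keystonerc of that shape — everything identical to the controller's copy except the auth URL (the credentials below are placeholders, and "controller" must resolve on this node):

```shell
# keystonerc on the additional node (placeholder credentials;
# only OS_AUTH_URL differs from the controller's copy)
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:5000/v2.0/
```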
Also, another thing that seems odd to me, there is no /var/glance on the additional node. Where are the images stored?
On 11/20/2012 08:26 PM, Nux! wrote:
Any reason why the keystonerc from the controller (which works fine) shouldn't also work on the additional node? I did change the OS_AUTH_URL to: export OS_AUTH_URL=http://controller:5000/v2.0/
Also, another thing that seems odd to me, there is no /var/glance on the additional node. Where are the images stored?
I need to update the wiki for a multi-host setup. Hopefully I'll get this done in the coming days.
In the above case you need to add the controller's IP to the hosts file.
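Concretely, that means a line like the one below in /etc/hosts on the compute node (the IP is a placeholder; the name must match what the keystonerc uses). The sketch works on a scratch copy so the format can be checked without touching the real file:

```shell
# Add a hosts entry so "controller" resolves (placeholder IP shown).
# Demonstrated on a scratch copy; on the real node, edit /etc/hosts as root.
cp /etc/hosts hosts.example
echo '192.168.1.10  controller' >> hosts.example
grep controller hosts.example
```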
Thanks Gary
On 20.11.2012 18:30, Gary Kotton wrote:
I need to update the wiki for a multi host setup. Hopefully I'll get this done in the coming days.
That'd be great. This is all quite confusing.
In the above case you need to add the controller's IP to the hosts file.
Yeah, name resolution works correctly. There's also not a firewall issue (checked manually with telnet).
I'll keep trying in the following days.