Hi fellow oVirters!
The network team and a few others have toyed in the past with several important changes, like using Open vSwitch, talking D-Bus to NetworkManager, making the network configuration non-persistent, etc.
It is with some of these changes in mind that we (special thanks go to Livnat Peer, Dan Kenigsberg and Igor Lvovsky) have worked on a proposal for a new architecture for vdsm's networking part. This proposal is intended to make our software more adaptable to new components and use cases, eliminate distro dependencies as much as possible, and improve the responsiveness and scalability of the networking operations.
To do so, it proposes an object oriented representation of the different elements that come into play in our networking use cases.
But enough of introductions: please go to the feature page that we have put together and help us with your feedback, questions, proposals and extensions.
http://www.ovirt.org/Feature/NetworkReloaded
Best regards,
Toni
Hi,
Bridges: via the "brctl" cmdline tool.
I was told that the canonical way to configure bridges in recent distros is also ip(8):
# ip link add br0 type bridge
# ip link set eth0 master br0
# ip link set eth0 nomaster
(RHEL6 still has to rely on brctl, though)
Alias
* Users have shown interest in the likes of eth0:4. We should find out if this is really required of oVirt.
If you would also consider ipfwadm support (a Linux 2.0 feature, replaced in 2.2 with something else), go on.
net-tools only pretend to work (add an IP to a device via ip(8) or direct kernel calls and then try to find the corresponding alias, for instance...) so we shouldn't contribute to keeping them on life support.
David
Thanks for the feedback, David.
----- Original Message -----
From: "David Jaša" djasa@redhat.com To: arch@ovirt.org, vdsm-devel@fedorahosted.org Sent: Friday, February 8, 2013 10:50:53 AM Subject: Re: [vdsm] vdsm networking changes proposal
Hi,
Bridges: via the "brctl" cmdline tool.
I was told that the canonical way to configure bridges in recent distros is also ip(8):
# ip link add br0 type bridge
# ip link set eth0 master br0
# ip link set eth0 nomaster
(RHEL6 still has to rely on brctl, though)
For F18+ and RHEL7 I think this point makes a lot of sense: we should use ip link instead of brctl.
Additionally, as the libnl3 Python bindings mature, it would be natural to move to those instead of the iproute2 tools.
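To make that concrete, here is a minimal sketch (not actual vdsm code; the helper is invented) of driving the ip(8) commands quoted above from Python, until the libnl3 bindings are an option:

    import subprocess

    def _ip_link(*args):
        # Shell out to ip(8); raises CalledProcessError on failure.
        subprocess.check_call(['ip', 'link'] + list(args))

    def add_bridge(bridge, nic):
        _ip_link('add', bridge, 'type', 'bridge')  # ip link add br0 type bridge
        _ip_link('set', nic, 'master', bridge)     # ip link set eth0 master br0

    def detach_port(nic):
        _ip_link('set', nic, 'nomaster')           # ip link set eth0 nomaster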
Hi Antoni,
Regarding 'Bond': how is it logical to have VLANs as slaves? Is it something we would like to support, or just be able to represent, if we ever come across this odd combination?
Also, can bond slaves have ipConfig set?
Regarding 'VLAN': is it possible to have a VLAN over a bridge, as you wrote? I'm not sure that's something supported currently, but is there a plan to support this mode?
Generally, it seems like these entities have a lot in common that could live in a common 'interface' entity.
Also, for OVS, I don't understand whether the current model will be extensible to support, for example, GRE tunnels, which aren't necessarily supported by other configurators.
Regards, Mike
Hello Antoni,
Great work! I am very excited we are going this route; it is the first of many steps that will allow us to run on different distributions. I apologize I got to this so late.
Notes on the model; I am unsure if someone has already noted these.
I think that the abstraction should be more than entity and properties.
For example:
a nic is a network interface
a bridge is a network interface plus ports (network interfaces)
a bond is a network interface plus slave network interfaces
a vlan is a network interface plus a vlan id
a network interface can have:
- name
- ip config
- state
- mtu
this way it would be easier to share common code that handles pure interfaces.
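A rough sketch of such a model could look like this (illustrative names only, not actual vdsm code):

    class NetworkInterface(object):
        # Common properties shared by all pure interfaces.
        def __init__(self, name, ip_config=None, state='down', mtu=1500):
            self.name = name
            self.ip_config = ip_config
            self.state = state
            self.mtu = mtu

    class Nic(NetworkInterface):
        pass

    class Bridge(NetworkInterface):
        def __init__(self, name, ports=(), **kwargs):
            super(Bridge, self).__init__(name, **kwargs)
            self.ports = list(ports)    # network interfaces attached as ports

    class Bond(NetworkInterface):
        def __init__(self, name, slaves=(), **kwargs):
            super(Bond, self).__init__(name, **kwargs)
            self.slaves = list(slaves)  # slave network interfaces

    class Vlan(NetworkInterface):
        def __init__(self, name, device, tag, **kwargs):
            super(Vlan, self).__init__(name, **kwargs)
            self.device = device        # the underlying network interface
            self.tag = tag              # the vlan id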
I don't quite understand the 'Team' configurator; are you suggesting a provider for each technology?
bridge
- iproute2 provider
- ovs provider
- ifcfg provider

bond
- iproute2
- team
- ovs
- ifcfg

vlan
- iproute2
- ovs
- ifcfg
So we can get a configuration of: bridge:iproute2, bond:team, vlan:ovs
?
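One hypothetical reading of this: a registry keyed by (object type, provider), plus a profile that picks one provider per object type. A sketch, with invented names (only one configurator registered here for brevity):

    CONFIGURATORS = {}  # (object_type, provider) -> configurator class

    def register(object_type, provider):
        def decorator(cls):
            CONFIGURATORS[(object_type, provider)] = cls
            return cls
        return decorator

    @register('bond', 'team')
    class TeamBondConfigurator(object):
        def configure(self, bond):
            pass  # would drive teamd here

    # One provider chosen per object type, e.g.:
    PROFILE = {'bridge': 'iproute2', 'bond': 'team', 'vlan': 'ovs'}

    def configurator_for(object_type):
        return CONFIGURATORS[(object_type, PROFILE[object_type])]()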
I also would like us to explore a future alternative: network configuration via crypto vpn directly from one qemu to another. The idea is to have a Kerberos-like key per layer 3 (or layer 2) destination; communication is encrypted in user space and sent over a flat network. The advantage of this is that we manage a logical network and not the physical network, while relying on the hardware to find the best route to the destination. The question is how, and if, we can provide this via the suggested abstraction. But maybe it is too soon to address this kind of future.
For the open questions:
1. Yes, I think the mode should be non-persistent; persistent providers should emulate non-persistent operations by diffing what they have against the goal.
2. Once vdsm is installed, the mode it runs in should be fixed. So the only question is which profile is selected during host deployment.
3. I think that if we can avoid aliases it would be nice.
4. Keeping the least persistent information would be the most flexible. I would love to see a zero-persistence mode available, for example if the management interface is DHCP or manually configured.
I am very fond of the iproute2 configuration, and don't mind if the administrator configures the management interface manually. I think this can supersede ifcfg quite easily in most cases. In the rare cases where the administrator uses oVirt to modify the management interface, we may consider delegating persistence to a totally different model. But as far as I understand, the problem is solely related to management connectivity, so we can implement a simple bootstrap in the non-persistent module that reconstructs the management network setup from vdsm configuration, instead of persisting it to the distribution-wide configuration.
Regards, Alon Bar-Lev
Hi,
Alon Bar-Lev wrote on Sun, 17 Feb 2013 at 15:57 -0500:
I don't quite understand the 'Team' configurator; are you suggesting a provider for each technology?
Team is a new implementation of bonding in the Linux kernel, IIRC.
I also would like us to explore a future alternative: network configuration via crypto vpn directly from one qemu to another [...]
Isn't it better to separate the two goals and persuade qemu developers to implement TLS for migration connections?
David
On 02/18/2013 05:23 PM, David Jaša wrote:
Isn't it better to separate the two goals and persuade qemu developers to implement TLS for migration connections?
+1 for implementing it in qemu
Sorry for coming to it so late. I have the following comments and questions about the proposal.
I suggest adding a 'top interface' field to the network, and applying IpConfig and mtu only to it.
The openvswitch configurator needs the assistance of iproute2, because it can't configure ip/netmask/gw and mtu (see the sketch below).
I can't see the point of letting different configurators coexist, except for openvswitch. It could cause unnecessary complexity.
In the proposal, the rollback mechanism can be used to persist configuration for iproute2. Why do we still need NetworkManager?
I think the solution of "iproute2 + openvswitch + serializing configuration objects" can meet all our requirements. I remember that Dan had a concern about adding a new standard for it in a previous discussion. Have we already got agreement on it?
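The sketch referenced above: an openvswitch configurator leaning on iproute2 for the parts OVS cannot set. The ovs-vsctl and ip commands are real; the Python helpers are invented for illustration:

    import subprocess

    def _run(*cmd):
        subprocess.check_call(list(cmd))

    def configure_ovs_bridge(bridge, nic, addr=None, mtu=None):
        _run('ovs-vsctl', 'add-br', bridge)         # OVS owns the bridge
        _run('ovs-vsctl', 'add-port', bridge, nic)  # and its ports,
        if addr:                                    # but ip/netmask/gw
            _run('ip', 'addr', 'add', addr, 'dev', bridge)
        if mtu:                                     # and mtu go via ip(8)
            _run('ip', 'link', 'set', bridge, 'mtu', str(mtu))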
Mark
On Thu, Feb 21, 2013 at 05:55:45PM +0800, Mark Wu wrote:
Sorry for coming to it so late.
Happy new year!
I have the following comments and questions about the proposal.
I suggest adding a 'top interface' field to the network, and applying IpConfig and mtu only to it.
I'm not sure how such an added field would help. Isn't the info already available within the structure of the interface objects? Or do you suggest a read-only field? I'd appreciate more details.
The openvswitch configurator needs the assistance of iproute2, because it can't configure ip/netmask/gw and mtu.
Thanks, added to wiki.
I can't see the point of letting different configurators coexist, except for openvswitch. It could cause unnecessary complexity.
I agree that we should decide on a very limited set of valid configurator combinations.
In the proposal, the rollback mechanism can be used to persist configuration for iproute2. Why do we still need NetworkManager?
We may need NetworkManager. It is present and running by default on our target platforms -- including my laptop -- and it can be a bit rude to other services that try to configure network devices not through it.
I think the solution of "iproute2 + openvswitch + serializing configuration objects" can meet all our requirements. I remember that Dan had a concern about adding a new standard for it in a previous discussion. Have we already got agreement on it?
Well, I'd say that I've caved in. I see no other way forward without introducing our own form of persisting network definitions. At least we keep our current setupNetworks API for that.
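For illustration only, "serializing configuration objects" could be as simple as dumping the desired definitions to JSON and replaying them on boot; the path and field names below are made up:

    import json

    def persist_networks(networks, path='/var/lib/vdsm/netconf.json'):
        # Serialize the desired configuration so it can be replayed on
        # boot, or used to compute a rollback diff.
        data = dict((net.name, {'bridge': net.bridge, 'vlan': net.vlan,
                                'bonding': net.bonding, 'mtu': net.mtu})
                    for net in networks)
        with open(path, 'w') as f:
            json.dump(data, f, indent=2)

    def load_networks(path='/var/lib/vdsm/netconf.json'):
        with open(path) as f:
            return json.load(f)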
Dan.
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Mark Wu" wudxw@linux.vnet.ibm.com Cc: arch@ovirt.org, vdsm-devel@fedorahosted.org Sent: Monday, February 25, 2013 12:49:19 PM Subject: Re: [vdsm] vdsm networking changes proposal
We may need NetworkManager. It is present and running by default on our target platforms -- including my laptop -- and it can be a bit rude to other services that try to configure network devices not through it.
Maybe the reason to have a NetworkManager provider is for pure debug/development purposes, so you can run it on your laptop.
I think that as we go forward we [might] see that the considerations for hypervisors outweigh those of the desktop distribution.
----- Original Message -----
From: "David Jaša" djasa@redhat.com To: vdsm-devel@fedorahosted.org, arch@ovirt.org Sent: Monday, February 18, 2013 11:23:21 AM Subject: Re: [vdsm] vdsm networking changes proposal
Isn't it better to separate the two goals and persuade qemu developers to implement TLS for migration connections?
Sure :) But someone/something will need to configure it... :)
On Sun, Feb 17, 2013 at 03:57:33PM -0500, Alon Bar-Lev wrote:
I think that the abstraction should be more than entity and properties.
I agree with you - even though OOD is falling out of fashion in certain circles.
I don't quite understand the 'Team' configurator; are you suggesting a provider for each technology?
Just as we may decide to move away from the standard Linux bridge to ovs-based bridging, we may switch from bonding to teaming. I do not think that we should do it now, but we should make sure that the design accommodates this.
So we can get a configuration of: bridge:iproute2 bond:team vlan:ovs
I do not think that such complex combinations are of real interest. The client should not (currently) be allowed to request them. Some say that the specific combination that is used by Vdsm to implement the network should be defined in a config file. I think that a python file is good enough for that, at least for now.
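Such a Python "config file" could be as small as a module with a single dict (module and names invented for illustration):

    # netprofile.py -- a fixed, code-reviewed combination, not a
    # free-form user setting: which provider implements each object type.
    PROVIDERS = {
        'bridge': 'ovs',
        'vlan': 'ovs',
        'bond': 'iproute2',
    }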
I also would like us to explore a future alternative: network configuration via crypto vpn directly from one qemu to another [...]
This is something completely different, as we say in Python. The nice thing about your idea is that in the context of host network configuration we need nothing more than our current bridge-bond-nic. The sad thing about your idea is that it would scale badly with the number of virtual networks. If a new VM comes live and sends an ARP who-has broadcast message - which VMs should be bothered to attempt to decrypt it?
I think that if we can avoid aliases it would be nice.
I wonder if everybody agrees that aliases are not needed.
Thanks, Dan.
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Antoni Segura Puimedon" asegurap@redhat.com, vdsm-devel@fedorahosted.org, arch@ovirt.org Sent: Monday, February 25, 2013 12:34:46 PM Subject: Re: vdsm networking changes proposal
I agree with you - even though OOD is falling out of fashion in certain circles.
If we develop software the way we follow dress fashion, we end up with software that works for a single season.
Just as we may decide to move away from the standard Linux bridge to ovs-based bridging, we may switch from bonding to teaming. I do not think that we should do it now, but we should make sure that the design accommodates this.
So there should be a separate provider for each object type, unless I am missing something.
I do not think that such complex combinations are of real interest. The client should not (currently) be allowed to request them. Some say that the specific combination that is used by Vdsm to implement the network should be defined in a config file. I think that a python file is good enough for that, at least for now.
I completely lost you; what does this have to do with Python, or with a file?
If we have an iproute2 implementation that does bridge, vlan and bond, but we would like to use ovs for bridge and vlan, how can we reuse the iproute2 provider for the bond?
If we register a provider per object type we allow easier reuse.
This, however, does not imply that the implementation is in Python (oh well...), nor whether it lives in a single file or in multiple files...
This is something completely different, as we say in Python. The nice thing about your idea is that in the context of host network configuration we need nothing more than our current bridge-bond-nic. The sad thing about your idea is that it would scale badly with the number of virtual networks. If a new VM comes live and sends an ARP who-has broadcast message - which VMs should be bothered to attempt to decrypt it?
This is easily filtered by a tag. Just like in MPLS.
On Tue, Feb 26, 2013 at 10:11:46AM -0500, Alon Bar-Lev wrote:
If we register a provider per object type we allow easier reuse.
Yes, this is the plan. However I do not think it is wise to support all conceivable combinations of provider/object. A fixed one, such as "ovs for bridge and vlan, iproute2 for bond" is good enough.
This is easily filtered by a tag. Just like in MPLS.
How is it different from a vlan tag, then? Or do you suggest that we trust qemu to do the tagging, instead of the host kernel?
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Antoni Segura Puimedon" asegurap@redhat.com, vdsm-devel@fedorahosted.org, arch@ovirt.org Sent: Tuesday, February 26, 2013 5:45:50 PM Subject: Re: vdsm networking changes proposal
Yes, this is the plan. However I do not think it is wise to support all conceivable combinations of provider/object. A fixed one, such as "ovs for bridge and vlan, iproute2 for bond" is good enough.
The whole point of the abstraction/provider thing is for vdsm *NOT* to be aware of the underlying technologies. I would not like to see 'if ovs then' or anything similar in vdsm code once we have this mechanism in place.
Not that I say every totally generic combination must work, but ovs for bridge and vlan should be compatible with iproute2 for bond, while iproute2 for bridge, vlan and bond should be compatible as well.
How is it different from a vlan tag, then? Or do you suggest that we trust qemu to do the tagging, instead of the host kernel?
I think that, like MPLS, there will be a vlan emulation server that qemu will register into. But I think I am way ahead of this specific discussion.
Alon
On Tue, Feb 26, 2013 at 10:51:12AM -0500, Alon Bar-Lev wrote:
The whole point of the abstraction/provider thing is for vdsm *NOT* to be aware of the underlying technologies. I would not like to see 'if ovs then' or anything similar in vdsm code once we have this mechanism in place.
Vdsm has to be aware of the underlying technologies, but this awareness has to be confined to two places:
- the providers;
- the thing that selects which provider should be used today.
Not that I say every totally generic combination must work, but ovs for bridge and vlan should be compatible with iproute2 for bond, while iproute2 for bridge, vlan and bond should be compatible as well.
Sure.
Dan.
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Antoni Segura Puimedon" asegurap@redhat.com, vdsm-devel@fedorahosted.org, arch@ovirt.org Sent: Wednesday, February 27, 2013 11:14:35 AM Subject: Re: vdsm networking changes proposal
On Tue, Feb 26, 2013 at 10:51:12AM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Antoni Segura Puimedon" asegurap@redhat.com, vdsm-devel@fedorahosted.org, arch@ovirt.org Sent: Tuesday, February 26, 2013 5:45:50 PM Subject: Re: vdsm networking changes proposal
On Tue, Feb 26, 2013 at 10:11:46AM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Antoni Segura Puimedon" asegurap@redhat.com, vdsm-devel@fedorahosted.org, arch@ovirt.org Sent: Monday, February 25, 2013 12:34:46 PM Subject: Re: vdsm networking changes proposal
On Sun, Feb 17, 2013 at 03:57:33PM -0500, Alon Bar-Lev wrote:
Hello Antoni,
Great work! I am very excited we are going this route, it is first of many to allow us to be run on different distributions. I apologize I got to this so late.
Notes for the model, I am unsure if someone already noted.
I think that the abstraction should be more than entity and properties.
For example:
nic is a network interface bridge is a network interface and ports network interfaces bound is a network interface and slave network interfaces vlan is a network interface and vlan id
network interface can have:
- name
- ip config
- state
- mtu
this way it would be easier to share common code that handles pure interfaces.
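For illustration only, a minimal Python sketch of that shape; the class names are hypothetical and not taken from the feature page:

# Hypothetical model: every object is a network interface, plus extras.
class NetworkInterface(object):
    def __init__(self, name, ip_config=None, state='down', mtu=1500):
        self.name = name
        self.ip_config = ip_config  # addresses, netmask, gateway, ...
        self.state = state
        self.mtu = mtu

class Bridge(NetworkInterface):
    def __init__(self, name, ports=(), **kwargs):
        super(Bridge, self).__init__(name, **kwargs)
        self.ports = list(ports)    # NetworkInterface objects

class Bond(NetworkInterface):
    def __init__(self, name, slaves=(), **kwargs):
        super(Bond, self).__init__(name, **kwargs)
        self.slaves = list(slaves)  # NetworkInterface objects

class Vlan(NetworkInterface):
    def __init__(self, name, device, tag, **kwargs):
        super(Vlan, self).__init__(name, **kwargs)
        self.device = device        # the underlying NetworkInterface
        self.tag = tag              # the vlan id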
I agree with you - even though OOD is falling out of fashion in certain circles.
If we develop software the way we follow dress fashion, we end up with software that works for a single season.
[...]
Vdsm has to be aware of the underlying technologies, but this awareness has to be confined to two places:
- the providers.
- the thing that selects which provider should be used today.
I don't understand the 2nd item... why is 'today' important? and what is 'thing'?
On Wed, Feb 27, 2013 at 06:06:30AM -0500, Alon Bar-Lev wrote:
[...]
I don't understand the 2nd item... why is 'today' important? and what is 'thing'?
It's not really difficult, nor important, but let me try again.
Assume we have code of two providers for bridge. One is ifcfg-based, the other is ovs-based. That's item 1.
Now we get a command to create a bridge. We do not intend to change the API to let Engine select which of the two providers should be used, so it is vdsm's obligation. That's item 2.
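For illustration only, the two places in a minimal Python sketch: the provider classes are item 1, and the module-level choice plus the API verb that consults it are item 2. All names here are invented:

# Item 1: the providers.
class IfcfgBridgeProvider(object):
    def create(self, name, ports):
        pass  # would write ifcfg-* files

class OvsBridgeProvider(object):
    def create(self, name, ports):
        pass  # would call ovs-vsctl

# Item 2: the single spot that knows which provider is in effect today.
_bridge_provider = OvsBridgeProvider()

def create_bridge(name, ports):
    # The verb Engine calls; Engine never names a provider.
    _bridge_provider.create(name, ports)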
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Alon Bar-Lev" alonbl@redhat.com Cc: "Antoni Segura Puimedon" asegurap@redhat.com, vdsm-devel@fedorahosted.org, arch@ovirt.org Sent: Wednesday, February 27, 2013 2:02:48 PM Subject: Re: vdsm networking changes proposal
[...]
It's not really difficult, nor important, but let me try again.
Assume we have code of two providers for bridge. One is ifcfg-based, the other is ovs-based. That's item 1.
Now we get a command to create a bridge. We do not intend to change the API to let Engine select which of the two providers should be used, so it is vdsm's obligation. That's item 2.
I think that the setting of which provider to use for which technology should be specified via configuration; vdsm should not hard-code anything regarding specific provider behavior or technology.
The obligation of having a sane configuration is on the administrator (or the host-deploy automation).
This approach will ensure we will be able to support multiple configurations without releasing new versions of vdsm, and to support a different provider layout per distribution or usage (development/production).
If vdsm is to be aware of specific provider behavior, this behavior should be exposed at the provider's interface.
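For illustration only, what such an administrator-owned configuration could look like; the file path, section and keys are invented for this sketch:

# Hypothetical /etc/vdsm/providers.conf:
#   [providers]
#   bridge = ovs
#   vlan = ovs
#   bond = iproute2

import ConfigParser  # Python 2, as vdsm was at the time

def load_provider_map(path='/etc/vdsm/providers.conf'):
    parser = ConfigParser.SafeConfigParser()
    parser.read(path)
    return dict(parser.items('providers'))
    # e.g. {'bridge': 'ovs', 'vlan': 'ovs', 'bond': 'iproute2'}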
Alon