Hi,
I submitted a patch for a text-based console: http://gerrit.ovirt.org/#/c/7165/
The issue I want to discuss is below: 1. fixed port vs. dynamic port
Use a fixed port for all VMs' consoles and connect to a console with 'ssh vmUUID@ip -p port', distinguishing VMs by vmUUID.
The current implementation is that vdsm allocates a port for the console dynamically and spawns a sub-process when the VM is created. In that sub-process the main thread accepts new connections and dispatches the console's output to each connection; when a new connection comes in, the main process creates a new thread for it. Dynamic allocation gives each VM its own port and so uses a whole port range, which isn't good for firewall rules.
So I got a suggestion to use a fixed port and connect to the console with 'ssh vmuuid@hostip -p fixport'. This is simpler for the user. We need one process to accept new connections on the fixed port and, when a new connection comes in, spawn a sub-process for each VM. But because a console can only be opened by one process, the main process has to be responsible for dispatching the console output of all VMs to all connections. So the code will be a little more complex than with dynamic ports.
So this is dynamic port vs. fixed port, and simple code vs. complex code.
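To make the fixed-port idea concrete, below is a minimal sketch of the dispatching structure, not the patch itself: it uses a plain TCP socket instead of ssh, takes the vmUUID as the client's first line instead of as the ssh login, and reads console pty paths from a hypothetical CONSOLES map. Only the one-listener, many-consoles fan-out is the point here.

    import os
    import socket
    import threading

    CONSOLES = {}   # vmUUID -> console pty path (hypothetical; vdsm would fill this in)
    CLIENTS = {}    # vmUUID -> set of client sockets attached to that console
    LOCK = threading.Lock()

    def read_line(sock):
        # Read the vmUUID handshake byte by byte so no console input is swallowed.
        buf = b''
        while not buf.endswith(b'\n'):
            ch = sock.recv(1)
            if not ch:
                break
            buf += ch
        return buf.decode(errors='replace').strip()

    def console_reader(vm_uuid, pty_path):
        # One thread per VM: read its console and fan the output out to every client.
        fd = os.open(pty_path, os.O_RDWR | os.O_NOCTTY)
        while True:
            data = os.read(fd, 1024)
            with LOCK:
                for sock in list(CLIENTS.get(vm_uuid, ())):
                    try:
                        sock.sendall(data)
                    except OSError:
                        CLIENTS[vm_uuid].discard(sock)

    def handle_client(sock):
        # First line names the VM; everything after that is keystrokes for the guest.
        vm_uuid = read_line(sock)
        if vm_uuid not in CONSOLES:
            sock.close()
            return
        with LOCK:
            CLIENTS.setdefault(vm_uuid, set()).add(sock)
        fd = os.open(CONSOLES[vm_uuid], os.O_RDWR | os.O_NOCTTY)
        while True:
            data = sock.recv(1024)
            if not data:
                break
            os.write(fd, data)

    def serve(port=2222):
        # A single fixed port: one firewall rule covers every VM on the host.
        for vm_uuid, pty in CONSOLES.items():
            threading.Thread(target=console_reader, args=(vm_uuid, pty), daemon=True).start()
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(('', port))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()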
Thanks!
On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote:
So this is dynamic port vs. fixed port, and simple code vs. complex code.
From a usability point of view, I think the fixed port suggestion is nicer.
This means that a system administrator needs only to open one port to enable remote console access. If your initial implementation limits console access to one connection per VM, would that simplify the code?
On Thu, Aug 30, 2012 at 04:26:31PM -0500, Adam Litke wrote:
On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote:
So this is dynamic port vs. fixed port, and simple code vs. complex code.
From a usability point of view, I think the fixed port suggestion is nicer.
This means that a system administrator needs only to open one port to enable remote console access. If your initial implementation limits console access to one connection per VM, would that simplify the code?
Yes, using a fixed port for all consoles of all VMs seems like a cooler idea. Besides the firewall issue, there's user experience: instead of calling getVmStats to find the VM's port and then using ssh, only one ssh call is needed. (Taking this one step further, it would make sense to add another layer on top, directing console clients to the specific host currently running the VM.)
I did not take a close look at your implementation, and did not research this myself, but have you considered using sshd for this? I suppose you could configure sshd to collect the list of known "users" from `getAllVmStats`, and force it to run a command that redirects the VM's console to the ssh client. It has the potential of being a more robust implementation.
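As a rough sketch of that idea (an illustration only, not a worked-out design), each VM could appear as an sshd login whose session is forced, via ForceCommand or a command= option in authorized_keys, into a small program that attaches the right console:

    #!/usr/bin/env python3
    # Hypothetical forced command for the sshd-based approach: sshd does the
    # authentication; this script only maps the login name (assumed to be the
    # VM UUID) onto that VM's serial console.
    import os
    import pwd
    import re
    import subprocess
    import sys

    UUID_RE = re.compile(r'^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$')

    def main():
        login = pwd.getpwuid(os.getuid()).pw_name
        if not UUID_RE.match(login):
            sys.exit('not a VM console user')
        # Attach the ssh session's stdin/stdout to the guest's serial console.
        sys.exit(subprocess.call(['virsh', '-c', 'qemu:///system', 'console', login]))

    if __name__ == '__main__':
        main()

How sshd would learn the set of vmUUID "users" without creating real accounts (an AuthorizedKeysCommand, an NSS module, or something else) is exactly the part that would need research.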
We may want to start thinking about migration. It would be great if we could have a smart console client that connects to the source and destination consoles and moves to the destination on-line, without losing a character.
Regards, Dan.
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Xu He Jie" xuhj@linux.vnet.ibm.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, September 3, 2012 10:33:42 AM Subject: Re: [vdsm] [RFC]about the implement of text-based console
Yes, using a fixed port for all consoles of all VMs seems like a cooler idea. Besides the firewall issue, there's user experience: instead of calling getVmStats to find the VM's port and then using ssh, only one ssh call is needed. (Taking this one step further, it would make sense to add another layer on top, directing console clients to the specific host currently running the VM.)
I did not take a close look at your implementation, and did not research this myself, but have you considered using sshd for this? I suppose you could configure sshd to collect the list of known "users" from `getAllVmStats`, and force it to run a command that redirects the VM's console to the ssh client. It has the potential of being a more robust implementation.
We need to consider what implications this might have for security on the node, for example Common Criteria certification.
I"ll see if we can pull someone from Red Hat's security team into the discussion who would understand the implications.
On 09/03/2012 10:33 PM, Dan Kenigsberg wrote:
Yes, using a fixed port for all consoles of all VMs seems like a cooler idea. Besides the firewall issue, there's user experience: instead of calling getVmStats to find the VM's port and then using ssh, only one ssh call is needed. (Taking this one step further, it would make sense to add another layer on top, directing console clients to the specific host currently running the VM.)
I did not take a close look at your implementation, and did not research this myself, but have you considered using sshd for this? I suppose you could configure sshd to collect the list of known "users" from `getAllVmStats`, and force it to run a command that redirects the VM's console to the ssh client. It has the potential of being a more robust implementation.
I have considered using sshd and an ssh tunnel. They can't implement a fixed port and a shared console. With the current implementation we can do anything we want.
We may want to start thinking about migration. It would be great if we could have a smart console client that connects to the source and destination consoles and moves to the destination on-line, without losing a character.
This is interesting. My first thought is that it's easy to implement on the client side. I think we will implement an ssh client in the web browser. The engine will know the VM was migrated, and it can tell the client to reconnect the console to the other host. I will try to think about whether there is a better idea.
----- Original Message -----
From: "Xu He Jie" xuhj@linux.vnet.ibm.com To: "Dan Kenigsberg" danken@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, 4 September, 2012 10:05:37 AM Subject: Re: [vdsm] [RFC]about the implement of text-based console
This is interesting. My first thought is that it's easy to implement on the client side. I think we will implement an ssh client in the web browser. The engine will know the VM was migrated, and it can tell the client to reconnect the console to the other host. I will try to think about whether there is a better idea.
If we implement this in a web client, we lose the use case of people without a GUI, who really have to use the serial text consoles.
If we really need a separate console for every VM, how about we keep a console server as a VM in the system, and that console server will be running sshd, with an open session to every VM. And in order to connect to a VM's serial console, we will actually ssh to this console server VM as a certain console user.
The way I see this is once a VM gets started, the console server will create a user/passwd for that VM, and once someone opens an ssh session to the console server as this user, it will automatically connect the ssh session to the console on whatever host the target VM is running on. When the VM stops, the user can be removed.
On 09/04/2012 10:14 AM, Dan Yasny wrote:
If we really need a separate console for every VM, how about we keep a console server as a VM in the system, and that console server will be running sshd, with an open session to every VM. And in order to connect to a VM's serial console, we will actually ssh to this console server VM as a certain console user.
The way I see this is once a VM gets started, the console server will create a user/passwd for that VM, and once someone opens an ssh session to the console server as this user, it will automatically connect the ssh session to the console on whatever host the target VM is running on. When the VM stops, the user can be removed.
I'm less concerned by single or multiple ports, since we already do multiple ports for vnc/spice. The main question to me is whether we can easily front the VM's serial console with an ssh server authenticating based on the setVmTicket flow (which qemu doesn't support). (Well, unless qemu adds built-in ssh support to the serial port.)
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Dan Yasny" dyasny@redhat.com Cc: "Xu He Jie" xuhj@linux.vnet.ibm.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, 4 September, 2012 10:39:18 AM Subject: Re: [vdsm] [RFC]about the implement of text-based console
I'm less concerned by single or multiple ports, since we already do multiple ports for vnc/spice. The main question to me is whether we can easily front the VM's serial console with an ssh server authenticating based on the setVmTicket flow (which qemu doesn't support).
We can connect directly to libvirt instead of qemu here; a remote call to `virsh console` with a VM name should work. We don't want to expose that to anyone directly, but if we use a console server, it can have access to virsh console, and the user will get into the console server by ssh.
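For what it's worth, the console server would not even have to shell out to `virsh console`; libvirt exposes the console through its streams API. A minimal, output-only sketch, with the URI and VM name as placeholders and error handling omitted:

    import sys
    import libvirt

    def dump_console(uri, vm_name):
        # Relay the guest's serial console output to stdout through a libvirt stream.
        conn = libvirt.open(uri)              # e.g. 'qemu+ssh://host/system'
        dom = conn.lookupByName(vm_name)
        stream = conn.newStream(0)
        dom.openConsole(None, stream, 0)      # None selects the first console device
        try:
            while True:
                data = stream.recv(1024)
                if not data:
                    break
                sys.stdout.buffer.write(data)
                sys.stdout.flush()
        finally:
            stream.finish()

    if __name__ == '__main__':
        dump_console('qemu:///system', sys.argv[1])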
(Well, unless qemu adds built-in ssh support to the serial port.)
That would be great, but...
On 09/04/2012 03:39 PM, Itamar Heim wrote:
I'm less concerned by single or multiple ports, since we already do multiple ports for vnc/spice. The main question to me is whether we can easily front the VM's serial console with an ssh server authenticating based on the setVmTicket flow (which qemu doesn't support). (Well, unless qemu adds built-in ssh support to the serial port.)
Yes, that is what I am trying to do. Both the single-port and the multiple-port approaches can be based on the setVmTicket flow.
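A minimal sketch of what a setVmTicket-based check could look like on the vdsm side; the names below are illustrative rather than the existing API, and the same check works for the single-port and the per-VM-port designs:

    import hmac
    import time

    TICKETS = {}    # vmUUID -> (password, expiry timestamp), set from the engine

    def set_vm_ticket(vm_uuid, password, valid_seconds):
        # Mirrors what setVmTicket already does for the graphical console.
        TICKETS[vm_uuid] = (password, time.time() + valid_seconds)

    def check_ticket(vm_uuid, presented):
        # Called by the console listener before attaching a client to the console.
        entry = TICKETS.get(vm_uuid)
        if entry is None:
            return False
        password, expiry = entry
        if time.time() > expiry:
            del TICKETS[vm_uuid]
            return False
        return hmac.compare_digest(password, presented)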
On 09/04/2012 03:14 PM, Dan Yasny wrote:
If we implement this in a web client, we lose the use case of people without a GUI, who really have to use the serial text consoles.
If we implement it at the client, we have engine command-line tools for the user, and we can also implement it in those tools.
If we really need a separate console for every VM, how about we keep a console server as a VM in the system, and that console server will be running sshd, with an open session to every VM. And in order to connect to a VM's serial console, we will actually ssh to this console server VM as a certain console user.
Thanks for your idea, but I think it's too heavy. The console server VM would be another centralized management server, just for managing VMs' consoles. If we use it, we need to think about how vdsm tells the console server VM the ticket when the engine calls setVmTicket for a VM, so we would need another communication channel between that VM and all the vdsm hosts. We also need to think about the guest OS running in that VM: is it a customized Linux or a full Fedora? And we couldn't use the text-based console when vdsm runs standalone.
----- Original Message -----
From: "Xu He Jie" xuhj@linux.vnet.ibm.com To: "Dan Yasny" dyasny@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Dan Kenigsberg" danken@redhat.com Sent: Tuesday, 4 September, 2012 11:42:04 AM Subject: Re: [vdsm] [RFC]about the implement of text-based console
Thanks for your idea, but I think it's too heavy. The console server VM would be another centralized management server, just for managing VMs' consoles. If we use it, we need to think about how vdsm tells the console server VM the ticket when the engine calls setVmTicket for a VM, so we would need another communication channel between that VM and all the vdsm hosts. We also need to think about the guest OS running in that VM: is it a customized Linux or a full Fedora? And we couldn't use the text-based console when vdsm runs standalone.
Basically, the idea is not mine; a virtual appliance is how this is done in other products.
I think a container/jail with a special sshd on the engine, doing basically the same thing, could also be implemented. This is all, of course, a lot of work.
On 09/04/2012 04:48 PM, Dan Yasny wrote:
Basically, the idea is not mine; a virtual appliance is how this is done in other products.
I think a container/jail with a special sshd on the engine, doing basically the same thing, could also be implemented. This is all, of course, a lot of work.
Yes, I see. I just think this needs more planning. For example: does any other service need a virtual appliance? If we build this mechanism only for the console server, I think it's a little heavy. :)
On Tue, Sep 04, 2012 at 03:05:37PM +0800, Xu He Jie wrote:
I have considered using sshd and an ssh tunnel. They can't implement a fixed port and a shared console.
Would you elaborate on that? Usually sshd listens on a fixed port, 22, and allows multiple users to have independent shells. What do you mean by "share console"?
With the current implementation we can do anything we want.
Yes, it is completely under our control, but there are downsides, too: we have to maintain another process, and another entry point, instead of configuring a universally-used, well-maintained and debugged application.
Dan.
----- Original Message -----
From: "Dan Kenigsberg" danken@redhat.com To: "Xu He Jie" xuhj@linux.vnet.ibm.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, 4 September, 2012 1:52:49 PM Subject: Re: [vdsm] [RFC]about the implement of text-based console
Yes, it is completely under our control, but there are downsides, too: we have to maintain another process, and another entry point, instead of configuring a universally-used, well-maintained and debugged application.
Not to mention, known to be secure.
* Dan Kenigsberg danken@redhat.com [2012-09-04 05:53]:
Yes, it is completely under our control, but there are downsides, too: we have to maintain another process, and another entry point, instead of configuring a universally-used, well-maintained and debugged application.
Think of the security implications of having another remote shell access point to a host. I'd much rather trust sshd if we can make it work.
On 09/04/2012 22:19, Ryan Harper wrote:
Think of the security implications of having another remote shell access point to a host. I'd much rather trust sshd if we can make it work.
Dan.
At first glance, the standard sshd on the host is stronger and more robust than a custom ssh server, but the risk of using the host sshd is higher. If we implement this feature via the host sshd and an attacker compromises sshd, he gets access to the host shell. After all, the custom ssh server is not for accessing the host shell, but just for forwarding data from the guest console (a host /dev/pts/X device). If we use a custom ssh server instead, the code in this server only does 1. auth and 2. data forwarding, so when an attacker breaks in, he only gets access to that virtual machine. Notice that there is no code for logging in to the host in the custom ssh server, and the custom ssh server can be confined by SELinux, allowing it to access only /dev/pts/X.
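To make the "auth plus data forwarding" point concrete, a minimal sketch of the forwarding core such a server would need once a client has authenticated might look like this (names and the pty handling are placeholders, not code from the patch under review):
import os
import select

def forward_console(pty_fd, client_sock):
    # Shuttle bytes between the guest console pty (/dev/pts/X) and one
    # authenticated client until either side closes the connection.
    while True:
        readable, _, _ = select.select([pty_fd, client_sock], [], [])
        if pty_fd in readable:
            data = os.read(pty_fd, 4096)
            if not data:
                break
            client_sock.sendall(data)      # console output -> client
        if client_sock in readable:
            data = client_sock.recv(4096)
            if not data:
                break
            os.write(pty_fd, data)         # client input -> console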
In fact, using a custom VNC server in qemu is as risky as a custom ssh server in vdsm. If we accept the former, then I can accept the latter. The considerations are how robust the custom ssh server is and how difficult it is to maintain. In He Jie's current patch, the ssh auth and transport library is an open-source third-party project; unless that project is well maintained and well proven, using it can be risky.
So my opinion is to use neither the host sshd nor a custom ssh server. Maybe we can apply the suggestion from Dan Yasny: run a standard sshd in a very small VM on every host, and forward data from this VM to the other guests' consoles. The ssh part is in the VM; our work is then just forwarding data from that VM, via virtio serial channels, to each guest console via its pty.
On Fri, Oct 12, 2012 at 04:55:20PM +0800, Zhou Zheng Sheng wrote:
[...]
I really dislike the idea of a service VM for something as fundamental as a VM console. The logistics of maintaining such a VM are a nightmare: provisioning, deployment, software upgrades, HA, etc.
Maybe we can start simple and provide console access locally only. What sort of functionality would the vdsm api need to provide to enable only local access to the console? Presumably, it would set up a connection and provide the user with a port/pty to use to connect locally. For now it would be "BYOSSH - bring your own SSH" as clients would need to access the hosts with something like:
ssh -t <host> "<connect command>"
The above command could be wrapped in a vdsm-tool command.
In the future, we can take a look at extending this feature via some sort of remote streaming API. Keep in mind that in order for this feature to be truly useful to ovirt-engine consumers, the console connection must survive a VM migration. To me, this means that vdsm will need to implement a generic streaming API like libvirt has.
* Adam Litke agl@us.ibm.com [2012-10-12 08:13]:
[...]
I think we have a patch that does local-only here but using virsh
http://gerrit.ovirt.org/#/c/8041/
I'd prefer to let the user tunnel that output over ssh if they need to.
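For example (host and VM name are placeholders), the tunnelled form would just be:
ssh -t root@ovirt-host 'virsh console some_vm'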
In the future, we can take a look at extending this feature via some sort of remote streaming API. Keep in mind that in order for this feature to be truly useful to ovirt-engine consumers, the console connection must survive a VM migration. To me, this means that vdsm will need to implement a generic streaming API like libvirt has.
Indeed.
On Fri, Oct 12, 2012 at 11:25:47AM -0500, Ryan Harper wrote:
[...]
I haven't been following this thread all that closely, so perhaps this idea has already been mentioned, but would the libvirt ssh transport help with this situation, i.e.,
virsh -c qemu+ssh://root@host/system console some_vm
Dave
On 10/15/2012 07:41 PM, Dave Allan wrote:
[...]
I hope this would work and be simple. Some questions: 1. Can clients script with/over this method like they can with ssh? 2. Can we make it support tickets like we have for spice/vnc?
thanks, Itamar
On Thu, Oct 18, 2012 at 02:21:44AM +0200, Itamar Heim wrote:
[...]
i hope this would work/be simple. some questions:
- can clients script with/over this method like they can with ssh?
The serial console API does not do things like
$ ssh root@bar 'hostname'
bar
since it's just a stream containing the console i/o, but I'm not sure that answers your question. Can you give me an example of what kind of script you're thinking of?
- can we make it support tickets like we have for spice/vnc?
The console API will use any authentication that the libvirtd on the target will accept, so it could authenticate with the same credentials as vdsm. The console does require a read-write connection to libvirt.
There's example python code in the libvirt git tree:
http://libvirt.org/git/?p=libvirt.git;a=blob;f=examples/python/consolecallba...
Dave
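For reference, a minimal sketch along the lines of that consolecallback example (URI, VM name and error handling are simplified here; treat it as illustrative rather than the exact code behind the link):
import libvirt

conn = libvirt.open('qemu+ssh://root@host/system')  # read-write connection
dom = conn.lookupByName('some_vm')
stream = conn.newStream(0)
dom.openConsole(None, stream, 0)    # None selects the first console device
data = stream.recv(1024)            # raw bytes from the guest's serial console
print(data)
stream.finish()
conn.close()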
On 10/18/2012 09:13 PM, Dave Allan wrote:
[...]
I don't think giving end users direct read/write access to libvirt is the way to go (even with libvirt having a permission model). We don't manage users/credentials at that level, only tickets. The thing I like about the spice approach is that it goes to the qemu process, via channels we already have with clients (spice).
* Itamar Heim iheim@redhat.com [2012-10-18 14:17]:
[...]
I don't think giving end-users direct read/write access to libvirt is the way to go (even with libvirt having a permission model). we don't manage users/credentials at that level, only tickets.
I agree about the additional credentials; I'd prefer not to have to manage them.
the thing i like about the spice approach is it goes to qemu process, via channels we already have with clients (spice).
We already have a device that works without additional clients or channels: the pty on the host where the VM runs.
The only question left is how to provide remote access to it. In the short term, we don't need remoting to make use of it. Having vdsm invoke virsh console for a local vdsClient is sufficient to start with.
For remoting, something we've been discussing is a console_read/console_write API call which would be something that a management application could consume and re-use some of those AJAX web-console widgets.
w.r.t permissions for console_read/console_write we could use the setTicket mechanism to enable access to those verbs. read/write API should also work well with VM migration.
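Purely to make this proposal concrete, a client of such verbs might look roughly like the following (the console_read/console_write names, arguments and return values are hypothetical - they are the verbs being proposed here, not an existing vdsm API - and the ticket call and transport details are simplified):
import xmlrpclib

server = xmlrpclib.ServerProxy('https://ovirt-host:54321')  # vdsm XML-RPC endpoint; TLS setup omitted
vm_id = '11111111-2222-3333-4444-555555555555'

server.setVmTicket(vm_id, 'one-time-ticket', 120)  # gate console access with a short-lived ticket
server.console_write(vm_id, 'root\n')              # hypothetical verb: send keystrokes
output = server.console_read(vm_id, 1024)          # hypothetical verb: read up to 1 KiB of output
print(output)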
On 10/19/2012 04:31 PM, Ryan Harper wrote:
- Itamar Heim iheim@redhat.com [2012-10-18 14:17]:
On 10/18/2012 09:13 PM, Dave Allan wrote:
On Thu, Oct 18, 2012 at 02:21:44AM +0200, Itamar Heim wrote:
On 10/15/2012 07:41 PM, Dave Allan wrote:
On Fri, Oct 12, 2012 at 11:25:47AM -0500, Ryan Harper wrote:
- Adam Litke agl@us.ibm.com [2012-10-12 08:13]:
> On Fri, Oct 12, 2012 at 04:55:20PM +0800, Zhou Zheng Sheng wrote: >> >> on 09/04/2012 22:19, Ryan Harper wrote: >>> * Dan Kenigsberg danken@redhat.com [2012-09-04 05:53]: >>>> On Tue, Sep 04, 2012 at 03:05:37PM +0800, Xu He Jie wrote: >>>>> On 09/03/2012 10:33 PM, Dan Kenigsberg wrote: >>>>>> On Thu, Aug 30, 2012 at 04:26:31PM -0500, Adam Litke wrote: >>>>>>> On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> I submited a patch for text-based console >>>>>>>> http://gerrit.ovirt.org/#/c/7165/ >>>>>>>> >>>>>>>> the issue I want to discussing as below: >>>>>>>> 1. fix port VS dynamic port >>>>>>>> >>>>>>>> Use fix port for all VM's console. connect console with 'ssh >>>>>>>> vmUUID@ip -p port'. >>>>>>>> Distinguishing VM by vmUUID. >>>>>>>> >>>>>>>> >>>>>>>> The current implement was vdsm will allocated port for console >>>>>>>> dynamically and spawn sub-process when VM creating. >>>>>>>> In sub-process the main thread responsible for accept new connection >>>>>>>> and dispatch output of console to each connection. >>>>>>>> When new connection is coming, main processing create new thread for >>>>>>>> each new connection. Dynamic port will allocated >>>>>>>> port for each VM and use range port. It isn't good for firewall rules. >>>>>>>> >>>>>>>> >>>>>>>> so I got a suggestion that use fix port. and connect console with >>>>>>>> 'ssh vmuuid@hostip -p fixport'. this is simple for user. >>>>>>>> We need one process for accept new connection from fix port and when >>>>>>>> new connection is coming, spawn sub-process for each vm. >>>>>>>> But because the console only can open by one process, main process >>>>>>>> need responsible for dispatching console's output of all vms and all >>>>>>>> connection. >>>>>>>> So the code will be a little complex then dynamic port. >>>>>>>> >>>>>>>> So this is dynamic port VS fix port and simple code VS complex code. >>>>>>> >From a usability point of view, I think the fixed port suggestion is nicer. >>>>>>> This means that a system administrator needs only to open one port to enable >>>>>>> remote console access. If your initial implementation limits console access to >>>>>>> one connection per VM would that simplify the code? >>>>>> Yes, using a fixed port for all consoles of all VMs seems like a cooler >>>>>> idea. Besides the firewall issue, there's user experience: instead of >>>>>> calling getVmStats to tell the vm port, and then use ssh, only one ssh >>>>>> call is needed. (Taking this one step further - it would make sense to >>>>>> add another layer on top, directing console clients to the specific host >>>>>> currently running the Vm.) >>>>>> >>>>>> I did not take a close look at your implementation, and did not research >>>>>> this myself, but have you considered using sshd for this? I suppose you >>>>>> can configure sshd to collect the list of known "users" from >>>>>> `getAllVmStats`, and force it to run a command that redirects VM's >>>>>> console to the ssh client. It has a potential of being a more robust >>>>>> implementation. >>>>> I have considered using sshd and ssh tunnel. They >>>>> can't implement fixed port and share console. >>>> Would you elaborate on that? Usually sshd listens to a fixed port 22, >>>> and allows multiple users to have independet shells. What do you mean by >>>> "share console"? >>>> >>>>> Current implement >>>>> we can do anything that what we want. 
>>>> Yes, it is completely under our control, but there are down sides, too: >>>> we have to maintain another process, and another entry point, instead of >>>> configuring a universally-used, well maintained and debugged >>>> application. >>> Think of the security implications of having another remote shell >>> access point to a host. I'd much rather trust sshd if we can make it >>> work. >>> >>> >>>> Dan. >> >> At first glance, the standard sshd on the host is stronger and more >> robust than a custom ssh server, but the risk using the host sshd is >> high. If we implement this feature via host ssd, when a hacker >> attacks the sshd successfully, he will get access to the host shell. >> After all, the custom ssh server is not for accessing host shell, >> but just for forwarding the data from the guest console (a host >> /dev/pts/X device). If we just use a custom ssh server, the code in >> this server only does 1. auth, 2. data forwarding, when the hacker >> attacks, he just gets access to that virtual machine. Notice that >> there is no code written about login to the host in the custom ssh >> server, and the custom ssh server can be protected under selinux, >> only allowing it to access /dev/pts/X. >> >> In fact using a custom VNC server in qemu is as risky as a custom >> ssh server in vdsm. If we accepts the former one, then I can accepts >> the latter one. The consideration is how robust of the custom ssh >> server, and the difficulty to maintain it. In He Jie's current >> patch, the ssh auth and transport library is an open-source >> third-party project, unless the project is well maintained and well >> proven, using it can be risky. >> >> So my opinion is using neither the host sshd, nor a custom ssh >> server. Maybe we can apply the suggestion from Dan Yasny, running a >> standard sshd in a very small VM in every host, and forward data > >from this VM to other guest consoles. The ssh part is in the VM, >> then our work is just forwarding data from the VM via virto serial >> channels, to the guest via the pty. > > I really dislike the idea of a service VM for something as fundamental as a VM > console. The logistics of maintaining such a VM are a nightmare: provisioning, > deployment, software upgrades, HA, etc. > > Maybe we can start simple and provide console access locally only. What sort of > functionality would the vdsm api need to provide to enable only local access to > the console? Presumably, it would set up a connection and provide the user with > a port/pty to use to connect locally. For now it would be "BYOSSH - bring your > own SSH" as clients would need to access the hosts with something like: > > ssh -t <host> "<connect command>" > > The above command could be wrapped in a vdsm-tool command.
I think we have a patch that does local-only here but using virsh
http://gerrit.ovirt.org/#/c/8041/
I'd prefer to let the user tunnel that output over ssh if they need to.
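For illustration, the local/virsh approach tunnelled over ssh amounts to something like the small wrapper below. This is a sketch only: the host and VM names, the use of plain ssh plus virsh, and the assumption that the remote user may talk to libvirtd are all placeholders, not the actual vdsm-tool command.

import subprocess
import sys

def connect_console(host, vm_name):
    # -t forces a pseudo-tty so the interactive virsh console session works
    cmd = ["ssh", "-t", host, "virsh", "console", vm_name]
    return subprocess.call(cmd)

if __name__ == "__main__":
    # e.g.: python connect_console.py root@host some_vm
    sys.exit(connect_console(sys.argv[1], sys.argv[2]))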
I haven't been following this thread all that closely, so perhaps this idea has already been mentioned, but would the libvirt ssh transport help with this situation, i.e.,
virsh -c qemu+ssh://root@host/system console some_vm
I hope this would work/be simple. Some questions:
- can clients script with/over this method like they can with ssh?
The serial console API does not do things like
$ ssh root@bar 'hostname'
bar
since it's just a stream containing the console i/o, but I'm not sure that answers your question. Can you give me an example of what kind of script you're thinking of?
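To make the question concrete, the kind of scripting being asked about would presumably look something like the sketch below, driving the console stream with pexpect. Everything here is an assumption for illustration only: that pexpect is available, that root@host can reach libvirt, that the guest runs a getty on its serial console, and that the prompts look as shown.

import pexpect

child = pexpect.spawn(
    "virsh -c qemu+ssh://root@host/system console some_vm")
child.expect("Escape character")      # virsh prints its escape-character banner
child.sendline("")                    # nudge the getty into printing a prompt
child.expect("login:")
child.sendline("root")
child.expect("Password:")
child.sendline("secret")
child.expect("# ")
child.sendline("hostname")
child.expect("# ")
print(child.before)                   # contains the output of 'hostname'
child.sendcontrol("]")                # detach from the console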
- can we make it support tickets like we have for spice/vnc?
The console API will use any authentication that the libvirtd on the target will accept, so it could accept the same credentials as vdsm. Note that the console does require a read-write connection to libvirt.
There's example python code in the libvirt git tree:
http://libvirt.org/git/?p=libvirt.git;a=blob;f=examples/python/consolecallba...
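For reference, a minimal consumer of that stream API might look like the sketch below (untested; the connection URI and VM name are placeholders, and it assumes the domain has a serial/console device and that the caller holds a read-write libvirt connection):

import os
import libvirt

conn = libvirt.open("qemu+ssh://root@host/system")   # read-write connection
dom = conn.lookupByName("some_vm")

st = conn.newStream(0)             # blocking stream
dom.openConsole(None, st, 0)       # None selects the first console device

# Copy whatever the guest writes on its console to our stdout.
while True:
    data = st.recv(1024)
    if not data:
        break
    os.write(1, data)
st.finish()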
I don't think giving end-users direct read/write access to libvirt is the way to go (even with libvirt having a permission model). we don't manage users/credentials at that level, only tickets.
I agree about the additional credentials; I'd prefer not to have to manage them.
The thing I like about the spice approach is that it goes to the qemu process, via channels we already have with clients (spice).
We already have a device that works without additional clients or channels: the pty on the host the VM runs on.
The only question left is how to provide remote access to this. In the short term, we don't need remoting to make use of it. Having vdsm invoke virsh console for a local vdsClient is sufficient to start with.
For remoting, something we've been discussing is a console_read/console_write API call, which a management application could consume to re-use some of those AJAX web-console widgets.
I thought the interesting use case for the serial console was the ability to script against it remotely, rather than a UI (which vnc/spice would give you anyway)?
W.r.t. permissions for console_read/console_write, we could use the setTicket mechanism to enable access to those verbs. A read/write API should also work well with VM migration.
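Just to sketch what such verbs could look like - purely hypothetical, since console_read/console_write do not exist yet; every name, the ticket check and the pty handling below are assumptions, not vdsm code:

import errno
import os

class VmConsole(object):
    def __init__(self, pty_path, ticket):
        # pty_path would be the /dev/pts/X device qemu allocated for the VM
        self._fd = os.open(pty_path, os.O_RDWR | os.O_NONBLOCK)
        self._ticket = ticket

    def _check(self, ticket):
        # setTicket-style gate: only the holder of the current ticket
        # may use the read/write verbs
        if ticket != self._ticket:
            raise RuntimeError("console ticket not valid")

    def console_read(self, ticket, max_bytes=1024):
        self._check(ticket)
        try:
            return os.read(self._fd, max_bytes)
        except OSError as e:
            if e.errno == errno.EAGAIN:
                return ""              # nothing pending on the console
            raise

    def console_write(self, ticket, data):
        self._check(ticket)
        return os.write(self._fd, data)

Since the state lives on the host side, a management client could simply poll console_read and push keystrokes with console_write, which is also why such calls could keep working across a VM migration once the destination host recreates the object.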
----- Original Message -----
From: "Itamar Heim" iheim@redhat.com To: "Ryan Harper" ryanh@us.ibm.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Sunday, 21 October, 2012 11:17:24 AM Subject: Re: [vdsm] [RFC]about the implement of text-based console
I thought the interesting use case for serial console was ability to script to it remotely, rather than a UI (which vnc/spice would give you anyway)?
Actually, the current production use cases (the ones I wrote the workaround for) are for a highly secure, no-GUI environment, where guest serial consoles are the only way to access a guest console. Scripting was never mentioned (though it might be another cool use case).
* Itamar Heim iheim@redhat.com [2012-10-21 04:19]:
I thought the interesting use case for serial console was ability to script to it remotely, rather than a UI (which vnc/spice would give you anyway)?
A text-based serial console has many use cases. We can build whatever we want on top of it, be that a UI, consoles/shells, etc.
----- Original Message -----
From: "Adam Litke" agl@us.ibm.com To: "Zhou Zheng Sheng" zhshzhou@linux.vnet.ibm.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Friday, 12 October, 2012 3:10:57 PM Subject: Re: [vdsm] [RFC]about the implement of text-based console
So my opinion is using neither the host sshd, nor a custom ssh server. Maybe we can apply the suggestion from Dan Yasny, running a standard sshd in a very small VM in every host, and forward data from this VM to other guest consoles. The ssh part is in the VM, then our work is just forwarding data from the VM via virto serial channels, to the guest via the pty.
I really dislike the idea of a service VM for something as fundamental as a VM console. The logistics of maintaining such a VM are a nightmare: provisioning, deployment, software upgrades, HA, etc.
Why? It really sounds like an easy path to me - provisioning of a virtual appliance is supposed to be simple, upgrades are hardly necessary - same as with ovirt-node, just a few config files preserved and the rest simply replaced - and HA is taken care of by the platform.
On the other hand, maintaining this on multiple hypervisors means they should all be up to date, compliant and configured. Not to mention the security implications of maintaining an extra access point on lots of machines vs a single virtual appliance VM. Bandwidth can be an issue, but I doubt serial console traffic can be that heavy especially since it's there for admin access and not routine work
Am I missing a point here?
On Mon, Oct 15, 2012 at 04:40:00AM -0400, Dan Yasny wrote:
Why? It really sounds like an easy path to me - provisioning of a virtual appliance is supposed to be simple, upgrades not necessary - same as with ovirt-node, just a bit of config files preserved and the rest simply replaced, and HA is taken care of by the platform
How do you get the VM image to the hypervisor in the first place? Is this an extra step at install time that the admin must follow? You say that the VM is simple and will not need to be upgraded but I don't completely believe you. Inevitably, we will need to upgrade that VM (to fix a bug someone finds, or sync it up with the latest vdsm/engine code, or fix a security flaw). How will we conduct that upgrade? How do we handle a host going in and out of Maintenance mode?
On the other hand, maintaining this on multiple hypervisors means they should all be up to date, compliant and configured. Not to mention the security implications of maintaining an extra access point on lots of machines vs a single virtual appliance VM. Bandwidth can be an issue, but I doubt serial console traffic can be that heavy especially since it's there for admin access and not routine work
Don't we already want hypervisors to be up to date, compliant, and configured? Allowing serial console access will add complexity in one way or another. In my opinion it would be simpler to support a streaming service than a service VM.
Are there any other uses for a service VM that could justify its complexity?
Adam Litke wrote on Mon, 15. 10. 2012 at 08:07 -0500:
How do you get the VM image to the hypervisor in the first place? Is this an extra step at install time that the admin must follow? You say that the VM is simple and will not need to be upgraded but I don't completely believe you. Inevitably, we will need to upgrade that VM (to fix a bug someone finds, or sync it up with the latest vdsm/engine code, or fix a security flaw). How will we conduct that upgrade? How do we handle a host going in and out of Maintenance mode?
IMO you could generate such a VM from the host system, similarly to how libvirt-sandbox works.
Also CCing Dan, who is the author of libvirt-sandbox.
David
On Mon, Oct 15, 2012 at 03:21:23PM +0200, David Jaša wrote:
IMO you could generate such VM from host system similarly to how libvirt-sandbox works.
Also CCing Dan who is author of libvirt-sandbox
In fact you do not even generate a VM image at all. The key idea behind virt-sandbox is that you do *not* want the extra overhead of maintaining extra OS installations, precisely because of the upgrade pain described above.
Thus, the way virt-sandbox works is that the VM boots using the host's filesystem as the guest root filesystem (in read-only mode). You then tell virt-sandbox to over-mount a private writable filesystem in certain desired places. So for running an SSH daemon in a "service VM" you could do something like:
# mkdir -p /var/lib/vdsm/sshd-service/$VMUUID/etc/ssh
# cp ...some template sshd_config... /var/lib/vdsm/sshd-service/$VMUUID/etc/ssh/sshd_config
# virt-sandbox --host-bind /etc/ssh=/var/lib/vdsm/sshd-service/$VMUUID/etc/ssh /usr/bin/sshd
So the only extra thing you are maintaining here is the SSH daemon config. Depending on how paranoid you are, you do not even need to use KVM for your service VMs - you can tell virt-sandbox to make use of LXC, which gives you even lower overheads. You still get a sVirt-confined VM, but you share the same kernel.
When you upgrade SSH on the host OS, all you need to do is restart the service VMs and they'll be running the new code.
IMHO the kind of scenario you are describing here is quite a good fit for the virt-sandbox concept - you get the security benefits of virtualization without the management pain of extra OS installs.
Regards, Daniel
----- Original Message -----
From: "Adam Litke" agl@us.ibm.com To: "Dan Yasny" dyasny@redhat.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org, "Zhou Zheng Sheng" zhshzhou@linux.vnet.ibm.com Sent: Monday, 15 October, 2012 3:07:47 PM Subject: Re: [vdsm] [RFC]about the implement of text-based console
How do you get the VM image to the hypervisor in the first place?
Place the appliance in an export domain and import it into the setup as a VM.
Is this an extra step at install time that the admin must follow?
Import the appliance, start it up, enter some initial config options (basically, just a few steps to hook it to the engine)
You say that the VM is simple and will not need to be upgraded but I don't completely believe you. Inevitably, we will need to upgrade that VM (to fix a bug someone finds, or sync it up with the latest vdsm/engine code, or fix a security flaw). How will we conduct that upgrade?
I don't say it won't need to be upgraded; I am suggesting we implement a process similar to what happens with ovirt-node.
How do we handle a host going in and out of Maintenance mode?
The appliance, just like any other VM, will migrate to another host and, as other VMs migrate as well, will provide a console for them. At first it is probably not too critical to keep the sessions running; reconnecting should be good enough.
As for console access, the appliance can talk to the engine API's MLA calls to see which user has access to which VMs' consoles, but those are details already.
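For what it's worth, the MLA lookup the appliance would need is presumably just a query of the VM's permissions in the engine REST API. A rough sketch follows; the /api/vms/<id>/permissions path, the credentials and the XML layout are assumptions about the engine API, not verified here.

import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/api"      # assumed engine address
AUTH = ("admin@internal", "password")          # assumed credentials

def users_with_access(vm_id):
    # Assumed: each VM exposes a permissions sub-collection in the REST API
    resp = requests.get("%s/vms/%s/permissions" % (ENGINE, vm_id),
                        auth=AUTH, verify=False)
    resp.raise_for_status()
    users = []
    for perm in ET.fromstring(resp.content).findall("permission"):
        user = perm.find("user")
        if user is not None:
            users.append(user.get("id"))
    return users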
Don't we already want hypervisors to be up to date, compliant, and configured?
We do, but having spent the last umpteen years supporting systems that contain multiple servers, you don't really expect this to be happening all the time, everywhere, and you try to keep things as smooth as possible within these constraints. For example, one of the larger datacenters I worked with a few years ago had a maintenance window every 18 months, dedicated to all upgrades and updates.
Allowing serial console access will add complexity in one way or another. In my opinion it would be simpler to support a streaming service than a service VM.
On every hypervisor, instead of a single proxy? I still can't see how - no irony intended, I really want to understand why you consider having an extra service on every hypervisor less complex than having this service in a single VM.
Are there any other uses for a service VM that could justify its complexity?
I can think of quite a few, actually: a universal scheduler appliance with an easy-to-script facility to orchestrate common API tasks, a proxy for spice connections, or a separate set of VMs providing the engine services (once the engine is actually modularized, of course)...
Once we know we have a surplus of compute resources that can be used to provide interesting features, where each feature can live in a separate small VM, deployed or not according to requirements, ideas just keep flooding in - any datacentre or infrastructure service can become an appliance - DNS/DHCP/MTA/OpenFlow controllers/$openstack_module_name/etc. I do get carried away here, but looking at the Avocent serial console appliance, it does look like a nice solution, easily deployed and maintained.
Am I missing a point here?
Maybe we can start simple and provide console access locally only. What sort of functionality would the vdsm api need to provide to enable only local access to the console? Presumably, it would set up a connection and provide the user with a port/pty to use to connect locally. For now it would be "BYOSSH - bring your own SSH" as clients would need to access the hosts with something like:
ssh -t <host> "<connect command>"
The above command could be wrapped in a vdsm-tool command.
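A minimal sketch of such a wrapper, assuming 'virsh console' is the connect command on the host; the entry point and argument handling are illustrative, not an existing vdsm-tool verb:

#!/usr/bin/env python
# Hypothetical "vm-console" wrapper around the BYOSSH command above.
# Assumes the caller has ssh access to the host and that
# "virsh console <vm>" is the connect command; both are illustrative.
import subprocess
import sys

def open_console(host, vm_name, user='root'):
    # -t forces a pty so the interactive console works end to end.
    cmd = ['ssh', '-t', '%s@%s' % (user, host),
           'virsh -c qemu:///system console %s' % vm_name]
    return subprocess.call(cmd)

if __name__ == '__main__':
    sys.exit(open_console(sys.argv[1], sys.argv[2]))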
In the future, we can take a look at extending this feature via some sort of remote streaming API. Keep in mind that in order for this feature to be truly useful to ovirt-engine consumers, the console connection must survive a VM migration. To me, this means that vdsm will need to implement a generic streaming API like libvirt has.
-- Adam Litke agl@us.ibm.com IBM Linux Technology Center
--
Regards,
Dan Yasny Red Hat Israel +972 9769 2280
* Dan Yasny dyasny@redhat.com [2012-10-15 03:41]:
Dan.
At first glance, the standard sshd on the host is stronger and more robust than a custom ssh server, but the risk using the host sshd is high. If we implement this feature via host ssd, when a hacker attacks the sshd successfully, he will get access to the host shell. After all, the custom ssh server is not for accessing host shell, but just for forwarding the data from the guest console (a host /dev/pts/X device). If we just use a custom ssh server, the code in this server only does 1. auth, 2. data forwarding, when the hacker attacks, he just gets access to that virtual machine. Notice that there is no code written about login to the host in the custom ssh server, and the custom ssh server can be protected under selinux, only allowing it to access /dev/pts/X.
In fact using a custom VNC server in qemu is as risky as a custom ssh server in vdsm. If we accepts the former one, then I can accepts the latter one. The consideration is how robust of the custom ssh server, and the difficulty to maintain it. In He Jie's current patch, the ssh auth and transport library is an open-source third-party project, unless the project is well maintained and well proven, using it can be risky.
So my opinion is using neither the host sshd, nor a custom ssh server. Maybe we can apply the suggestion from Dan Yasny, running a standard sshd in a very small VM in every host, and forward data from this VM to other guest consoles. The ssh part is in the VM, then our work is just forwarding data from the VM via virto serial channels, to the guest via the pty.
I really dislike the idea of a service VM for something as fundamental as a VM console. The logistics of maintaining such a VM are a nightmare: provisioning, deployment, software upgrades, HA, etc.
Why? It really sounds like an easy path to me - provisioning of a virtual appliance is supposed to be simple, upgrades not necessary - same as with ovirt-node, just a bit of config files preserved and the rest simply replaced, and HA is taken care of by the platform
On the other hand, maintaining this on multiple hypervisors means they should all be up to date, compliant and configured. Not to mention the security implications of maintaining an extra access point on lots of machines vs a single virtual appliance VM. Bandwidth can be an issue, but I doubt serial console traffic can be that heavy especially since it's there for admin access and not routine work
So, we're replacing a single daemon with a complete operating system ? which somehow we'll ensure the service VM is connected and running on all of the networks between the various clusters and datacenters within oVirt so that it can provide a single point of failure to the console of each VM?
Am I missing a point here?
Keep it simple. We can re-use existing services that are already present on all of the hosts: virsh and ssh for remoting. By re-using existing services, there is no additional exposure.
----- Original Message -----
From: "Ryan Harper" ryanh@us.ibm.com To: "Dan Yasny" dyasny@redhat.com Cc: "Adam Litke" agl@us.ibm.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Monday, 15 October, 2012 10:55:21 PM Subject: Re: [vdsm] [RFC]about the implement of text-based console
- Dan Yasny dyasny@redhat.com [2012-10-15 03:41]:
Dan.
At first glance, the standard sshd on the host is stronger and more robust than a custom ssh server, but the risk using the host sshd is high. If we implement this feature via host ssd, when a hacker attacks the sshd successfully, he will get access to the host shell. After all, the custom ssh server is not for accessing host shell, but just for forwarding the data from the guest console (a host /dev/pts/X device). If we just use a custom ssh server, the code in this server only does 1. auth, 2. data forwarding, when the hacker attacks, he just gets access to that virtual machine. Notice that there is no code written about login to the host in the custom ssh server, and the custom ssh server can be protected under selinux, only allowing it to access /dev/pts/X.
In fact using a custom VNC server in qemu is as risky as a custom ssh server in vdsm. If we accepts the former one, then I can accepts the latter one. The consideration is how robust of the custom ssh server, and the difficulty to maintain it. In He Jie's current patch, the ssh auth and transport library is an open-source third-party project, unless the project is well maintained and well proven, using it can be risky.
So my opinion is using neither the host sshd, nor a custom ssh server. Maybe we can apply the suggestion from Dan Yasny, running a standard sshd in a very small VM in every host, and forward data from this VM to other guest consoles. The ssh part is in the VM, then our work is just forwarding data from the VM via virto serial channels, to the guest via the pty.
I really dislike the idea of a service VM for something as fundamental as a VM console. The logistics of maintaining such a VM are a nightmare: provisioning, deployment, software upgrades, HA, etc.
Why? It really sounds like an easy path to me - provisioning of a virtual appliance is supposed to be simple, upgrades not necessary
same as with ovirt-node, just a bit of config files preserved and the rest simply replaced, and HA is taken care of by the platform
On the other hand, maintaining this on multiple hypervisors means they should all be up to date, compliant and configured. Not to mention the security implications of maintaining an extra access point on lots of machines vs a single virtual appliance VM. Bandwidth can be an issue, but I doubt serial console traffic can be that heavy especially since it's there for admin access and not routine work
So, we're replacing a single daemon with a complete operating system ?
a daemon on all hosts vs a single VM. It looks to me like a single access point for consoles can provide less of an attack surface. Especially when the virtual appliance comes pre-secured.
which somehow we'll ensure the service VM is connected and running on all of the networks between the various clusters and datacenters within oVirt so that it can provide a single point of failure to the console of each VM?
Well, I don't see an SPOF here - a VM can be set up as HA, right? Moreover, since it doesn't really need to be powerful to push text consoles through, you can have one per DC or even per cluster if you have too complex a network; a single appliance with a minimal amount of RAM and a single CPU should not be a problem.
Thinking about it, not every cluster even requires a serial console appliance normally, you'd probably use it in clusters of Linux server VMs, but not deploy it with VDI and windows VMs I suppose
Am I missing a point here?
Keep it simple. We can re-use existing services that are already present on all of the hosts: virsh and ssh for remoting. By re-using existing services, there is no additional exposure.
I have always assumed opening ssh on all the hosts is something lots of organizations frown upon. I mean I'm all for using sshd if we can't come up with something more elegant - it's a known and tested technology, but I have seen enough business demands that ssh be closed by default. And remote libvirt access is also something to be very careful with
-- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh@us.ibm.com
* Dan Yasny dyasny@redhat.com [2012-10-15 23:42]:
Hi Dan,
Why? It really sounds like an easy path to me - provisioning of a virtual appliance is supposed to be simple, upgrades not necessary
same as with ovirt-node, just a bit of config files preserved and the rest simply replaced, and HA is taken care of by the platform
On the other hand, maintaining this on multiple hypervisors means they should all be up to date, compliant and configured. Not to mention the security implications of maintaining an extra access point on lots of machines vs a single virtual appliance VM. Bandwidth can be an issue, but I doubt serial console traffic can be that heavy especially since it's there for admin access and not routine work
So, we're replacing a single daemon with a complete operating system ?
a daemon on all hosts vs a single VM. It looks to me like a single access point for consoles can provide less of an attack surface.
All of the hosts already run ssh, you're not turning that off, so the surface is the same.
Especially when the virtual appliance comes pre-secured.
I don't even know what that means. There isn't any magic just because we call it a virtual appliance.
which somehow we'll ensure the service VM is connected and running on all of the networks between the various clusters and datacenters within oVirt so that it can provide a single point of failure to the console of each VM?
Well, I don't see an SPOF here - a VM can be set up HA, right? Moreover, since it doesn't really need to be powerful to push text consoles through, you can have one per DC or even cluster, if you have too complex a network, a single appliance with minimal amount of RAM and a single cpu should not be a problem.
Thinking about it, not every cluster even requires a serial console appliance normally, you'd probably use it in clusters of Linux server VMs, but not deploy it with VDI and windows VMs I suppose
All of this is still too much for just tunneling a connection to virsh. We can get remote console text from the VMs *today*. No need to make a "virtual appliance", no need to deploy it, no need to figure out how many we need. No need to figure out how to distribute the VM or dynamically build it. No need for new features to have it implemented.
This is a solved problem, we just need to connect to the existing solutions.
Am I missing a point here?
Keep it simple. We can re-use existing services that are already present on all of the hosts: virsh and ssh for remoting. By re-using existing services, there is no additional exposure.
I have always assumed opening ssh on all the hosts is something lots of organizations frown upon. I mean I'm all for using sshd if we can't come up with something more elegant - it's a known and tested
What I don't want to do is wait around for the best possible solution. Since we have these tools today, I'd like to use them (virsh + ssh). Over time, if we develop this more elegant design, we can transition to that.
technology, but I have seen enough business demands that ssh be closed by default. And remote libvirt access is also something to be very careful with
If we were the last users of ssh on hosts, then I'd agree with you. But I don't see that being the case.
----- Original Message -----
From: "Ryan Harper" ryanh@us.ibm.com To: "Dan Yasny" dyasny@redhat.com Cc: "Ryan Harper" ryanh@us.ibm.com, "Adam Litke" agl@us.ibm.com, "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, 16 October, 2012 2:45:25 PM Subject: Re: [vdsm] [RFC]about the implement of text-based console
- Dan Yasny dyasny@redhat.com [2012-10-15 23:42]:
Hi Dan,
Why? It really sounds like an easy path to me - provisioning of a virtual appliance is supposed to be simple, upgrades not necessary
same as with ovirt-node, just a bit of config files preserved and the rest simply replaced, and HA is taken care of by the platform
On the other hand, maintaining this on multiple hypervisors means they should all be up to date, compliant and configured. Not to mention the security implications of maintaining an extra access point on lots of machines vs a single virtual appliance VM. Bandwidth can be an issue, but I doubt serial console traffic can be that heavy especially since it's there for admin access and not routine work
So, we're replacing a single daemon with a complete operating system ?
a daemon on all hosts vs a single VM. It looks to me like a single access point for consoles can provide less of an attack surface.
All of the hosts already run ssh, you're not turning that off, so the surface is the same.
Generally speaking, with ovirt-node based hypervisors you don't need ssh at all, and IIRC by default it's off. For Fedora hosts you only need ssh during setup; afterwards it can also be disabled.
Especially when the virtual appliance comes pre-secured.
I don't even know what that means. There isn't any magic just because we call it a virtual appliance.
When you can supply a prepackaged distribution, with security as tight as possible, it's always appealing for the end user to just use it, instead of actually doing the tightening manually.
which somehow we'll ensure the service VM is connected and running on all of the networks between the various clusters and datacenters within oVirt so that it can provide a single point of failure to the console of each VM?
Well, I don't see an SPOF here - a VM can be set up HA, right? Moreover, since it doesn't really need to be powerful to push text consoles through, you can have one per DC or even cluster, if you have too complex a network, a single appliance with minimal amount of RAM and a single cpu should not be a problem.
Thinking about it, not every cluster even requires a serial console appliance normally, you'd probably use it in clusters of Linux server VMs, but not deploy it with VDI and windows VMs I suppose
All of this is still too much for just tunneling a connection to virsh. We can get remote console text from the VMs *today*. No need to make a "virtual appliance", no need to deploy it, no need to figure out how many we need. No need to figure out how to distribute the VM or dynamically build it. No need for a new features to have it implemented.
This is a solved problem, we just need to connect to the existing solutions.
Of course, if you want to keep it at the level it is today, I posted the solution to the wiki months ago[1]. But if this is to be a product, and a product that looks like it was designed for humans, not engineers (C), we need to think ahead a bit: stick to making the ssh workaround work for now, and plan for something better in the future.
Am I missing a point here?
Keep it simple. We can re-use existing services that are already present on all of the hosts: virsh and ssh for remoting. By re-using existing services, there is no additional exposure.
I have always assumed opening ssh on all the hosts is something lots of organizations frown upon. I mean I'm all for using sshd if we can't come up with something more elegant - it's a known and tested
What I don't want to do is wait around for the best possible solution. Since we have these tools today, I'd like to use them (virsh + ssh). Over time, if we develope this more elegant design, we can transition to that.
Agreed, if your argument is for the speed of a solution delivery, then I'm all for the basic ssh.
technology, but I have seen enough business demands that ssh be closed by default. And remote libvirt access is also something to be very careful with
If we were the last users of ssh on hosts, then I'd agree with you. But I don't see that being the case.
Suffice it to say, I have seen a few virtualized DCs where ssh can only be started on servers during a maintenance window, and after some ugly bureaucracy.
-- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx ryanh@us.ibm.com
[1] http://wiki.ovirt.org/wiki/Features/Serial_Console_in_CLI#Currently_operatio...
On Tue, Oct 16, 2012 at 07:45:25AM -0500, Ryan Harper wrote:
- Dan Yasny dyasny@redhat.com [2012-10-15 23:42]:
Hi Dan,
Why? It really sounds like an easy path to me - provisioning of a virtual appliance is supposed to be simple, upgrades not necessary
same as with ovirt-node, just a bit of config files preserved and the rest simply replaced, and HA is taken care of by the platform
On the other hand, maintaining this on multiple hypervisors means they should all be up to date, compliant and configured. Not to mention the security implications of maintaining an extra access point on lots of machines vs a single virtual appliance VM. Bandwidth can be an issue, but I doubt serial console traffic can be that heavy especially since it's there for admin access and not routine work
So, we're replacing a single daemon with a complete operating system ?
a daemon on all hosts vs a single VM. It looks to me like a single access point for consoles can provide less of an attack surface.
All of the hosts already run ssh, you're not turning that off, so the surface is the same.
That's not necessarily true. I can well imagine that there would be different access rules for admins SSH'ing to the host vs users SSH'ing to access a VM text console, e.g. host SSH access may be firewall-restricted to a special admin VLAN only, while VM console SSH can be open to the LAN or WAN as a whole. So just because both use SSH does not imply the attack surface is the same for both usages.
Regards, Daniel
On 10/12/2012 21:10, Adam Litke wrote:
On Fri, Oct 12, 2012 at 04:55:20PM +0800, Zhou Zheng Sheng wrote:
on 09/04/2012 22:19, Ryan Harper wrote:
- Dan Kenigsberg danken@redhat.com [2012-09-04 05:53]:
On Tue, Sep 04, 2012 at 03:05:37PM +0800, Xu He Jie wrote:
On 09/03/2012 10:33 PM, Dan Kenigsberg wrote:
On Thu, Aug 30, 2012 at 04:26:31PM -0500, Adam Litke wrote: > On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote: >> Hi, >> >> I submited a patch for text-based console >> http://gerrit.ovirt.org/#/c/7165/ >> >> the issue I want to discussing as below: >> 1. fix port VS dynamic port >> >> Use fix port for all VM's console. connect console with 'ssh >> vmUUID@ip -p port'. >> Distinguishing VM by vmUUID. >> >> >> The current implement was vdsm will allocated port for console >> dynamically and spawn sub-process when VM creating. >> In sub-process the main thread responsible for accept new connection >> and dispatch output of console to each connection. >> When new connection is coming, main processing create new thread for >> each new connection. Dynamic port will allocated >> port for each VM and use range port. It isn't good for firewall rules. >> >> >> so I got a suggestion that use fix port. and connect console with >> 'ssh vmuuid@hostip -p fixport'. this is simple for user. >> We need one process for accept new connection from fix port and when >> new connection is coming, spawn sub-process for each vm. >> But because the console only can open by one process, main process >> need responsible for dispatching console's output of all vms and all >> connection. >> So the code will be a little complex then dynamic port. >> >> So this is dynamic port VS fix port and simple code VS complex code. > >From a usability point of view, I think the fixed port suggestion is nicer. > This means that a system administrator needs only to open one port to enable > remote console access. If your initial implementation limits console access to > one connection per VM would that simplify the code? Yes, using a fixed port for all consoles of all VMs seems like a cooler idea. Besides the firewall issue, there's user experience: instead of calling getVmStats to tell the vm port, and then use ssh, only one ssh call is needed. (Taking this one step further - it would make sense to add another layer on top, directing console clients to the specific host currently running the Vm.)
I did not take a close look at your implementation, and did not research this myself, but have you considered using sshd for this? I suppose you can configure sshd to collect the list of known "users" from `getAllVmStats`, and force it to run a command that redirects VM's console to the ssh client. It has a potential of being a more robust implementation.
I have considered using sshd and ssh tunnel. They can't implement fixed port and share console.
Would you elaborate on that? Usually sshd listens to a fixed port 22, and allows multiple users to have independet shells. What do you mean by "share console"?
Current implement we can do anything that what we want.
Yes, it is completely under our control, but there are down sides, too: we have to maintain another process, and another entry point, instead of configuring a universally-used, well maintained and debugged application.
Think of the security implications of having another remote shell access point to a host. I'd much rather trust sshd if we can make it work.
Dan.
At first glance, the standard sshd on the host is stronger and more robust than a custom ssh server, but the risk using the host sshd is high. If we implement this feature via host ssd, when a hacker attacks the sshd successfully, he will get access to the host shell. After all, the custom ssh server is not for accessing host shell, but just for forwarding the data from the guest console (a host /dev/pts/X device). If we just use a custom ssh server, the code in this server only does 1. auth, 2. data forwarding, when the hacker attacks, he just gets access to that virtual machine. Notice that there is no code written about login to the host in the custom ssh server, and the custom ssh server can be protected under selinux, only allowing it to access /dev/pts/X.
In fact using a custom VNC server in qemu is as risky as a custom ssh server in vdsm. If we accepts the former one, then I can accepts the latter one. The consideration is how robust of the custom ssh server, and the difficulty to maintain it. In He Jie's current patch, the ssh auth and transport library is an open-source third-party project, unless the project is well maintained and well proven, using it can be risky.
So my opinion is using neither the host sshd, nor a custom ssh server. Maybe we can apply the suggestion from Dan Yasny, running a standard sshd in a very small VM in every host, and forward data from this VM to other guest consoles. The ssh part is in the VM, then our work is just forwarding data from the VM via virto serial channels, to the guest via the pty.
I really dislike the idea of a service VM for something as fundamental as a VM console. The logistics of maintaining such a VM are a nightmare: provisioning, deployment, software upgrades, HA, etc.
Maybe we can start simple and provide console access locally only. What sort of functionality would the vdsm api need to provide to enable only local access to the console? Presumably, it would set up a connection and provide the user with a port/pty to use to connect locally. For now it would be "BYOSSH - bring your own SSH" as clients would need to access the hosts with something like:
ssh -t <host> "<connect command>"
The above command could be wrapped in a vdsm-tool command.
In the future, we can take a look at extending this feature via some sort of remote streaming API. Keep in mind that in order for this feature to be truly useful to ovirt-engine consumers, the console connection must survive a VM migration. To me, this means that vdsm will need to implement a generic streaming API like libvirt has.
Hi Adam, could you explain in more detail how a streaming API can survive a VM migration?
If we want to support migration, I think we should implement the console server outside of vdsm. It will actually work like a proxy, so let's call it consoleProxy for now. The consoleProxy could be deployed on the same machine as the engine, standalone, or in a virtual machine. I think its working flow is as below (a rough sketch of the relay part follows at the end of this message):
1. The user requests a console from the engine.
2. The engine calls setTicket(uuid, ticket, hostofvm) on the consoleProxy; the consoleProxy needs to provide an API to the engine.
3. The engine returns the ticket to the user.
4. The user runs 'ssh UUID@consoleProxy' with the ticket.
5. The consoleProxy connects with 'virsh -c qemu+tls://hostofvm/system console'; the host running the consoleProxy should have certificates for all vdsm hosts.
6. The consoleProxy redirects the output of 'virsh -c qemu+tls://hostofvm/system console' over the ssh protocol, same as the current implementation. We can use the system sshd or paramiko; if we use paramiko, it can almost reuse the code of the consoleServer that I have already written.
After the VM is migrated:
1. The engine tells the consoleProxy that the VM was migrated. I guess the engine can know when the VM finished migration? And how does the engine push the migration-finished event to the consoleProxy? The engine only has a REST API and doesn't support event push; can a streaming API solve this problem?
2. The consoleProxy kills 'virsh console'.
3. It reconnects to the VM's new host with 'virsh console' again. Some characters will be missing if the reconnection isn't fast enough; this is hard to solve unless ssh is implemented in qemu. I guess a streaming API has this problem too.
4. It continues redirecting 'virsh console'.
Actually, if we implement the consoleProxy outside of vdsm, we don't need to decide now whether it will run on a physical or a virtual machine.
There are a lot of details to think through; I haven't covered every problem, and I don't have code to prove this works yet - it's just reasoning so far.
Does this make sense?
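A minimal sketch of the proxy's host-facing side, assuming the engine notifies the proxy after migration and that 'virsh console' is used to reach the console; the class and method names are only illustrative, and the ssh-facing side and ticket handling are left out:

# Illustrative consoleProxy core: one "virsh console" child per VM,
# restarted against the new host when the engine reports a migration.
import os
import subprocess
import threading

class ConsoleRelay(object):
    def __init__(self, vm_uuid, host):
        self.vm_uuid = vm_uuid
        self.host = host
        self.proc = None
        self.lock = threading.Lock()

    def _spawn(self):
        # The proxy host is assumed to hold TLS certificates for every
        # vdsm host.  In practice 'virsh console' may want a pty rather
        # than plain pipes; pipes keep the sketch short.
        uri = 'qemu+tls://%s/system' % self.host
        self.proc = subprocess.Popen(
            ['virsh', '-c', uri, 'console', self.vm_uuid],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def start(self):
        with self.lock:
            self._spawn()

    def on_migrated(self, new_host):
        # Called when the engine reports that the VM finished migrating.
        # Output produced during the gap is lost, as noted above.
        with self.lock:
            if self.proc is not None:
                self.proc.terminate()
            self.host = new_host
            self._spawn()

    def write(self, data):
        with self.lock:
            self.proc.stdin.write(data)
            self.proc.stdin.flush()

    def read(self, size=4096):
        return os.read(self.proc.stdout.fileno(), size)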
On Tue, Oct 16, 2012 at 12:51:23AM +0800, Xu He Jie wrote:
[SNIP] Hi, Adam, Could you explain more detail about how streaming API can survive a VM migration?
If we want to support migration, I think we should implement console server out of vdsm. Actually, It will work like proxy. So we call it as consoleProxy now. That consoleProxy can deploy on same machine with engine, or standalone, or virtual machine. I think its' working flow as below:
- user request open console to engine.
- engine setTicket(uuid, ticket, hostofvm) to consoleProxy. consoleProxy need provide api to engine.
- engine return ticket to user.
- user 'ssh UUID@consoleProxy' with ticket.
- consoleProxy connect 'virsh -c qemu+tls://hostofvm/system console'. the host of running consoleProxy should have certificates of all
vdsm host. 6. consoleProxy redirect output of 'virsh -c qemu+tls://hostofvm/system console' with ssh protocol. Same with currently implement. we can use system sshd or paramiko. If we use paramiko, it almost reuse the code of consoleServer that I have already writen.
After vm migrated:
- engine tell consoleProxy that vm was migrated. I guess engine can know vm finished migration? And engine how to push the event of vm finished migration to
consoleProxy? Engine only have rest api didn't support event push? Is streaming api can resolve this problem? 2. consoleProxy kill 'virsh console'. 3. reconnect to new host of vm with 'virsh console' again. There will missing some character if the reconnection isn't enough fast. This is hardly to resolve except implement ssh in qemu. I guess streaming api have some problem too. 4. continue redirect 'virsh console'.
Actually if we implement consoleProxy out of vdsm, we don't need decide it will run on physical machine or virtual machine now.
A lot detail need to think. I'm not cover all problem. And I haven't code to prove that work now. Just depend on thinking.
Is this make sense?
How is this handled with current displays like VNC and Spice?
Ewoud Kohl van Wijngaarden wrote on Mon, 15. 10. 2012 at 22:46 +0200:
On Tue, Oct 16, 2012 at 12:51:23AM +0800, Xu He Jie wrote:
[snip]
How is this handled with current displays like VNC and Spice?
Extending spice to provide just serial console remoting actually seems the easiest way to provide remote text-only console because most of the code path is already mature (used for client to guest agent communication) and e.g. spicy to just provide a device where e.g. screen could connect or just provide the console itself.
CCing spice-devel
David
David Jaša wrote on Tue, 16. 10. 2012 at 00:18 +0200:
Ewoud Kohl van Wijngaarden wrote on Mon, 15. 10. 2012 at 22:46 +0200:
On Tue, Oct 16, 2012 at 12:51:23AM +0800, Xu He Jie wrote:
[snip]
How is this handled with current displays like VNC and Spice?
Extending spice to provide just serial console remoting actually seems the easiest way to provide remote text-only console because most of the code path is already mature (used for client to guest agent communication) and e.g. spicy
extending e.g. spicy
to just provide a device where e.g. screen could connect or just provide the console itself.
CCing spice-devel
David
On 10/16/2012 12:18 AM, David Jaša wrote:
Ewoud Kohl van Wijngaarden wrote on Mon, 15. 10. 2012 at 22:46 +0200:
On Tue, Oct 16, 2012 at 12:51:23AM +0800, Xu He Jie wrote:
[snip]
How is this handled with current displays like VNC and Spice?
Extending spice to provide just serial console remoting actually seems the easiest way to provide remote text-only console because most of the code path is already mature (used for client to guest agent communication) and e.g. spicy to just provide a device where e.g. screen could connect or just provide the console itself.
CCing spice-devel
would it allow users to script with/over it like they can with ssh?
On 10/16/2012 12:18 AM, David Jaša wrote:
Ewoud Kohl van Wijngaarden wrote on Mon, 15. 10. 2012 at 22:46 +0200:
On Tue, Oct 16, 2012 at 12:51:23AM +0800, Xu He Jie wrote:
[snip]
How is this handled with current displays like VNC and Spice?
Extending spice to provide just serial console remoting actually seems the easiest way to provide remote text-only console because most of the code path is already mature (used for client to guest agent communication) and e.g. spicy to just provide a device where e.g. screen could connect or just provide the console itself.
CCing spice-devel
would it allow users to script with/over it like they can with ssh?
If I understand correctly the idea is to add another channel for spice that would connect to a char device in qemu that in turn connects to a serial port. The result is a spice client that can display and interact, but not a scripting extension. We could also create a unix domain socket to expose this connection on the client, and the client could then use that for scripting (but this will be instead of displaying, since you can't multiplex the console in a meaningful way - unless you run screen/tmux over it maybe):
remote-viewer --spice-console-unix-domain-socket /tmp/spice.uds (This option assumes we want a single console channel - if we have multiple we will need to name them too)
Anyone will be able to script it using for instance: socat UNIX-CONNECT:/tmp/spice.uds SYSTEM:"echo hello world"
We could also turn it into a pty (socat can do that).
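Scripting over such a client-side socket (Itamar's question) would not even need socat; assuming the hypothetical /tmp/spice.uds path from the example above, a small sketch:

# Sketch: drive the guest serial console exposed by the (hypothetical)
# client-side unix domain socket, e.g. run one command and read the output.
import socket

def run_command(uds_path, command, timeout=5.0):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(uds_path)
    s.settimeout(timeout)
    s.sendall((command + '\n').encode())
    output = b''
    try:
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            output += chunk
    except socket.timeout:
        pass  # no more output within the timeout window
    s.close()
    return output.decode(errors='replace')

print(run_command('/tmp/spice.uds', 'echo hello world'))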
On 10/18/2012 12:13 PM, Alon Levy wrote:
On 10/16/2012 12:18 AM, David Jaša wrote:
Ewoud Kohl van Wijngaarden wrote on Mon, 15. 10. 2012 at 22:46 +0200:
On Tue, Oct 16, 2012 at 12:51:23AM +0800, Xu He Jie wrote:
[snip]
How is this handled with current displays like VNC and Spice?
Extending spice to provide just serial console remoting actually seems the easiest way to provide remote text-only console because most of the code path is already mature (used for client to guest agent communication) and e.g. spicy to just provide a device where e.g. screen could connect or just provide the console itself.
CCing spice-devel
would it allow users to script with/over it like they can with ssh?
If I understand correctly the idea is to add another channel for spice that would connect to a char device in qemu that in turn connects to a serial port. The result is a spice client that can display and interact, but not a scripting extension. We could also create a unix domain socket to expose this connection on the client, and the client could then use that for scripting (but this will be instead of displaying, since you can't multiplex the console in a meaningful way - unless you run screen/tmux over it maybe):
remote-viewer --spice-console-unix-domain-socket /tmp/spice.uds (This option assumes we want a single console channel - if we have multiple we will need to name them too)
Anyone will be able to script it using for instance: socat UNIX-CONNECT:/tmp/spice.uds SYSTEM:"echo hello world"
We could also turn it into a pty (socat can do that).
I think using spice this way may be a very good solution to proxy a serial console. The only caveat is that it requires the client to install spice, vs. just using ssh.
Itamar Heim wrote on Thu, 18. 10. 2012 at 20:32 +0200:
On 10/18/2012 12:13 PM, Alon Levy wrote:
On 10/16/2012 12:18 AM, David Jaša wrote:
[snip]
Extending spice to provide just serial console remoting actually seems the easiest way to provide remote text-only console because most of the code path is already mature (used for client to guest agent communication) and e.g. spicy to just provide a device where e.g. screen could connect or just provide the console itself.
CCing spice-devel
would it allow users to script with/over it like they can with ssh?
If I understand correctly the idea is to add another channel for spice that would connect to a char device in qemu that in turn connects to a serial port. The result is a spice client that can display and interact, but not a scripting extension. We could also create a unix domain socket to expose this connection on the client, and the client could then use that for scripting (but this will be instead of displaying, since you can't multiplex the console in a meaningful way - unless you run screen/tmux over it maybe):
remote-viewer --spice-console-unix-domain-socket /tmp/spice.uds (This option assumes we want a single console channel - if we have multiple we will need to name them too)
Anyone will be able to script it using for instance: socat UNIX-CONNECT:/tmp/spice.uds SYSTEM:"echo hello world"
We could also turn it into a pty (socat can do that).
i think using spice this way may be a very good solution, to proxy a serial console. only caveat is it requires client to install spice, vs. just using ssh.
Jarda (To:) actually asked me if this feature (serial device pass through without any graphics) was feasible for purposes of connecting remotely to serial console. Jarda, would the solution outlined by Alon be good for you?
Alon, one problem comes to my mind though: it would need either a second spice server, or multi-client support (a limited one would be enough to allow one graphics user and one serial device user simultaneously). Do you think it is possible to implement such things without much effort?
David
Itamar Heim wrote on Thu, 18. 10. 2012 at 20:32 +0200:
On 10/18/2012 12:13 PM, Alon Levy wrote:
On 10/16/2012 12:18 AM, David Jaša wrote:
[snip]
Extending spice to provide just serial console remoting actually seems the easiest way to provide remote text-only console because most of the code path is already mature (used for client to guest agent communication) and e.g. spicy to just provide a device where e.g. screen could connect or just provide the console itself.
CCing spice-devel
would it allow users to script with/over it like they can with ssh?
If I understand correctly the idea is to add another channel for spice that would connect to a char device in qemu that in turn connects to a serial port. The result is a spice client that can display and interact, but not a scripting extension. We could also create a unix domain socket to expose this connection on the client, and the client could then use that for scripting (but this will be instead of displaying, since you can't multiplex the console in a meaningful way - unless you run screen/tmux over it maybe):
remote-viewer --spice-console-unix-domain-socket /tmp/spice.uds (This option assumes we want a single console channel - if we have multiple we will need to name them too)
Anyone will be able to script it using for instance: socat UNIX-CONNECT:/tmp/spice.uds SYSTEM:"echo hello world"
We could also turn it into a pty (socat can do that).
i think using spice this way may be a very good solution, to proxy a serial console. only caveat is it requires client to install spice, vs. just using ssh.
Jarda (To:) actually asked me if this feature (serial device pass through without any graphics) was feasible for purposes of connecting remotely to serial console. Jarda, would the solution outlined by Alon be good for you?
Alon, one problem comes to my mind though: it would need either second spice server, or multi-client support (limited one would be enough to have simultaneously one graphics user and one serial device user). Do you think it is possible to implement such things without much effort?
If we constrain it to one graphics-only client (no serial connection, i.e. the same channels as today) and one serial-only client (i.e. the main channel - since we have to have that - plus the new serial console channel), I think it should not pose any of the problems that keep a usable multiple-client mode from being implemented, i.e. handling clients of different speeds.
David
--
David Jaša, RHCE
SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24
On 09/04/2012 06:52 PM, Dan Kenigsberg wrote:
On Tue, Sep 04, 2012 at 03:05:37PM +0800, Xu He Jie wrote:
On 09/03/2012 10:33 PM, Dan Kenigsberg wrote:
On Thu, Aug 30, 2012 at 04:26:31PM -0500, Adam Litke wrote:
On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote:
Hi,
I submited a patch for text-based console http://gerrit.ovirt.org/#/c/7165/
the issue I want to discussing as below:
- fix port VS dynamic port
Use fix port for all VM's console. connect console with 'ssh vmUUID@ip -p port'. Distinguishing VM by vmUUID.
The current implement was vdsm will allocated port for console dynamically and spawn sub-process when VM creating. In sub-process the main thread responsible for accept new connection and dispatch output of console to each connection. When new connection is coming, main processing create new thread for each new connection. Dynamic port will allocated port for each VM and use range port. It isn't good for firewall rules.
so I got a suggestion that use fix port. and connect console with 'ssh vmuuid@hostip -p fixport'. this is simple for user. We need one process for accept new connection from fix port and when new connection is coming, spawn sub-process for each vm. But because the console only can open by one process, main process need responsible for dispatching console's output of all vms and all connection. So the code will be a little complex then dynamic port.
So this is dynamic port VS fix port and simple code VS complex code. From a usability point of view, I think the fixed port suggestion is nicer.
This means that a system administrator needs only to open one port to enable remote console access. If your initial implementation limits console access to one connection per VM would that simplify the code?
Yes, using a fixed port for all consoles of all VMs seems like a cooler idea. Besides the firewall issue, there's user experience: instead of calling getVmStats to tell the vm port, and then use ssh, only one ssh call is needed. (Taking this one step further - it would make sense to add another layer on top, directing console clients to the specific host currently running the Vm.)
I did not take a close look at your implementation, and did not research this myself, but have you considered using sshd for this? I suppose you can configure sshd to collect the list of known "users" from `getAllVmStats`, and force it to run a command that redirects VM's console to the ssh client. It has a potential of being a more robust implementation.
I have considered using sshd and ssh tunnel. They can't implement fixed port and share console.
Would you elaborate on that? Usually sshd listens to a fixed port 22, and allows multiple users to have independet shells. What do you mean by "share console"?
A sharable console is like qemu VNC: you can open multiple connections, but the picture is the same in all of them. virsh limits the console to a single user, so I think making it sharable is more powerful.
Hmm... for sshd, I think I was missing something. It could be implemented using sshd in the following way:
Add a new system user for that VM on setVmTicket, and change that user's login program to another program that can redirect the console. To share the console among multiple connections, we need a process that redirects the console to a local unix socket; then we can copy the console's output to each connection.
This is just an idea in my mind. I am going to give it a try. Thanks for your suggestion!
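A minimal sketch of the redirect-and-share part described above: one process owns the VM's console pty and copies its output to every client connected on a local unix socket. The paths and names are illustrative, not what the patch does:

# Sketch: fan the single console pty out to multiple local connections,
# so several ssh sessions can share one VM console (names are illustrative).
import os
import socket
import threading

def serve_console(console_pty_path, uds_path):
    console_fd = os.open(console_pty_path, os.O_RDWR)  # e.g. /dev/pts/X
    clients = []
    lock = threading.Lock()

    def pump_console():
        # Copy everything the guest writes to every connected client.
        while True:
            data = os.read(console_fd, 4096)
            if not data:
                break
            with lock:
                for c in list(clients):
                    try:
                        c.sendall(data)
                    except OSError:
                        clients.remove(c)

    def pump_client(conn):
        # Copy a client's keystrokes into the single console.
        while True:
            data = conn.recv(4096)
            if not data:
                break
            os.write(console_fd, data)
        with lock:
            if conn in clients:
                clients.remove(conn)

    threading.Thread(target=pump_console, daemon=True).start()
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    if os.path.exists(uds_path):
        os.unlink(uds_path)
    server.bind(uds_path)
    server.listen(5)
    while True:
        conn, _ = server.accept()
        with lock:
            clients.append(conn)
        threading.Thread(target=pump_client, args=(conn,), daemon=True).start()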
Current implement we can do anything that what we want.
Yes, it is completely under our control, but there are down sides, too: we have to maintain another process, and another entry point, instead of configuring a universally-used, well maintained and debugged application.
Dan.
On 09/04/2012 10:36 PM, Xu He Jie wrote:
On 09/04/2012 06:52 PM, Dan Kenigsberg wrote:
On Tue, Sep 04, 2012 at 03:05:37PM +0800, Xu He Jie wrote:
On 09/03/2012 10:33 PM, Dan Kenigsberg wrote:
On Thu, Aug 30, 2012 at 04:26:31PM -0500, Adam Litke wrote:
On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote:
Hi,
I submited a patch for text-based console http://gerrit.ovirt.org/#/c/7165/
the issue I want to discussing as below:
- fix port VS dynamic port
Use fix port for all VM's console. connect console with 'ssh vmUUID@ip -p port'. Distinguishing VM by vmUUID.
The current implement was vdsm will allocated port for console dynamically and spawn sub-process when VM creating. In sub-process the main thread responsible for accept new connection and dispatch output of console to each connection. When new connection is coming, main processing create new thread for each new connection. Dynamic port will allocated port for each VM and use range port. It isn't good for firewall rules.
so I got a suggestion that use fix port. and connect console with 'ssh vmuuid@hostip -p fixport'. this is simple for user. We need one process for accept new connection from fix port and when new connection is coming, spawn sub-process for each vm. But because the console only can open by one process, main process need responsible for dispatching console's output of all vms and all connection. So the code will be a little complex then dynamic port.
So this is dynamic port VS fix port and simple code VS complex code. From a usability point of view, I think the fixed port suggestion
is nicer. This means that a system administrator needs only to open one port to enable remote console access. If your initial implementation limits console access to one connection per VM would that simplify the code?
Yes, using a fixed port for all consoles of all VMs seems like a cooler idea. Besides the firewall issue, there's user experience: instead of calling getVmStats to tell the vm port, and then use ssh, only one ssh call is needed. (Taking this one step further - it would make sense to add another layer on top, directing console clients to the specific host currently running the Vm.)
I did not take a close look at your implementation, and did not research this myself, but have you considered using sshd for this? I suppose you can configure sshd to collect the list of known "users" from `getAllVmStats`, and force it to run a command that redirects VM's console to the ssh client. It has a potential of being a more robust implementation.
I have considered using sshd and ssh tunnel. They can't implement fixed port and share console.
Would you elaborate on that? Usually sshd listens to a fixed port 22, and allows multiple users to have independet shells. What do you mean by "share console"?
sharable console is like qemu vnc, you can open multiple connection, but picture is same in all connection. virsh limited only one user can open console, so I think make it sharable is more powerful.
Hmm... for sshd, I think I missing something. It could be implemented using sshd in the following way:
Add new system user for that vm on setVmTicket. And change that user's login program to another program that can redirect console. To share console among multiple connection, It need that a process redirects the console to local unix socket, then we can copy console's output to multiple connection.
This is just in my mind. I am going to give a try. Thanks for your suggestion!
I gave the system sshd a try. It can work, but I don't think adding a system user for each VM is good enough. So I looked into PAM, trying to find a way to skip creating a real user in the system, but it doesn't work: even if we can create a virtual user with PAM, we still can't tell sshd which user to use and which login program to run. That means sshd doesn't support this, and I didn't find any other solution, unless I am missing something.
I think creating users in the system isn't good: there are security implications too, it will mess up the system configuration, and we need to be careful to clean up all the VM users. So I am thinking again about implementing the console server ourselves. I want to ask: is that really unsafe? We just use the ssh protocol as the transfer protocol; it isn't a real sshd. It doesn't access any system resources or a shell; it can only redirect the VM's console after setVmTicket.
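To make the "ssh protocol only as a transfer protocol" idea concrete, here is a skeleton of the authentication and channel handling with paramiko (the library the patch already uses). vm_tickets and forward_console are placeholders for this sketch, not the actual consoleServer code:

# Sketch: a paramiko-based server that only checks the per-VM ticket and
# then hands the channel to a console-forwarding function; no shell access.
import paramiko

vm_tickets = {}  # vmUUID -> ticket, filled in by setVmTicket (illustrative)

class ConsoleOnlyServer(paramiko.ServerInterface):
    def __init__(self):
        self.vm_uuid = None

    def check_auth_password(self, username, password):
        # username is the vmUUID, password is the ticket set by the engine.
        if vm_tickets.get(username) == password:
            self.vm_uuid = username
            return paramiko.AUTH_SUCCESSFUL
        return paramiko.AUTH_FAILED

    def check_channel_request(self, kind, chanid):
        if kind == 'session':
            return paramiko.OPEN_SUCCEEDED
        return paramiko.OPEN_FAILED_ADMINISTRATIVELY_PROHIBITED

    def check_channel_pty_request(self, channel, term, width, height,
                                  pixelwidth, pixelheight, modes):
        return True

    def check_channel_shell_request(self, channel):
        # "shell" here never reaches the host: it only triggers forwarding
        # of the VM console to this channel.
        return True

def handle_connection(client_sock, host_key, forward_console):
    transport = paramiko.Transport(client_sock)
    transport.add_server_key(host_key)
    server = ConsoleOnlyServer()
    transport.start_server(server=server)
    channel = transport.accept(timeout=30)
    if channel is not None:
        forward_console(server.vm_uuid, channel)  # copy pty <-> channel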
Current implement we can do anything that what we want.
Yes, it is completely under our control, but there are down sides, too: we have to maintain another process, and another entry point, instead of configuring a universally-used, well maintained and debugged application.
Dan.
On 2012-8-31 5:26, Adam Litke wrote:
On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote:
Hi,
I submited a patch for text-based console http://gerrit.ovirt.org/#/c/7165/
the issue I want to discussing as below:
- fix port VS dynamic port
Use fix port for all VM's console. connect console with 'ssh vmUUID@ip -p port'. Distinguishing VM by vmUUID.
The current implement was vdsm will allocated port for console dynamically and spawn sub-process when VM creating. In sub-process the main thread responsible for accept new connection and dispatch output of console to each connection. When new connection is coming, main processing create new thread for each new connection. Dynamic port will allocated port for each VM and use range port. It isn't good for firewall rules.
so I got a suggestion that use fix port. and connect console with 'ssh vmuuid@hostip -p fixport'. this is simple for user. We need one process for accept new connection from fix port and when new connection is coming, spawn sub-process for each vm. But because the console only can open by one process, main process need responsible for dispatching console's output of all vms and all connection. So the code will be a little complex then dynamic port.
So this is dynamic port VS fix port and simple code VS complex code.
From a usability point of view, I think the fixed port suggestion is nicer. This means that a system administrator needs only to open one port to enable remote console access. If your initial implementation limits console access to one connection per VM would that simplify the code?
Another thing we need to take care of is security. Enabling one port will make all console output accessible through it. We should take care to ensure that one ordinary user cannot see the consoles of VMs belonging to another user.
User privilege is controlled by Engine: if the user has the privilege, Engine will call setVmTicket for them, and setVmTicket is per VM.
Hi all,
For now there is no agreement on the remote guest console solution, so I decided to do some investigation and continue the discussion.
Our goal: remote access to the VM serial console in CLI mode. That means the client runs without an X environment.
There are several proposals.
1. Sandboxed sshd
VDSM runs a new host sshd instance in a virtual machine/sandbox and redirects the virtio console to it.
2. Third-party sshd
VDSM runs a third-party sshd library/implementation and redirects the virtio console to it.
3. Spice
Extend Spice to support the console and implement a client that can run without a GUI environment.
4. oVirt shell -> Engine -> libvirt
The user connects to Engine via the oVirt CLI and issues a "serial-console" command, then Engine locates the host and connects to the guest console. Currently there is a workaround: it invokes "virsh -c qemu+tls://host/qemu console vmid" from the Engine side.
5. VDSM console streaming API
VDSM exposes getConsoleReadStream() and getConsoleWriteStream() via the XMLRPC binding, and the related client is implemented in vdsClient and Engine.
Detailed discussion
1. Sandboxes
Solutions 1 and 2 allow users to connect to the console using their favorite ssh client. The login name is the vmid, and the password is set by VDSM's setVmTicket() call. The connection will be lost during migration. This is similar to VNC in oVirt.
I took a look at several sandbox technologies, including libvirt-sandbox, lxc and selinux.
a) libvirt-sandbox boots a VM using the host kernel and initramfs, then passes the host file system through to the VM in read-only mode. We can also add extra bindings to the guest file system. It's very easy to use. To run a shell in a VM, one can just issue
virt-sandbox -c qemu:///session /bin/sh
Then the VM will be ready in a few seconds. However, it triggers some selinux violations. Currently there is no official selinux policy support from this project; on the project page this is listed as a todo item.
b) lxc utilizes Linux containers to run a process in a sandbox. It needs to be configured properly. In the lxc-templates package there is an example configuration file for running sshd in lxc.
c) The sandbox command in the policycoreutils-python package makes use of selinux to run a process in a sandbox, but there are no official or example policy files for sshd.
In short, for the sandbox technologies we have to configure the policies, file system bindings and network carefully, and test compatibility with the popular sshd implementation (openssh-server). When that sshd is upgraded, the policy must be upgraded by us at the same time. Since the policies are not maintained by whoever implements sshd, this is a burden for us.
Work to do: Write and maintain the policies. Find a way to hook the auth callback and redirect data to openssh-server.
Pros: Re-uses existing pieces and technologies (host sshd, sandbox). User friendly; users can keep their existing ssh clients.
Cons: The connection is lost on migration; this is not a big problem because 1) the VNC connection shares the same problem, and 2) the user can reconnect manually. It is not easy to maintain the sandbox policies/file system bindings/network for compatibility with sshd.
2. Third-party sshd implementations
Almost the same as solution 1, but with better flexibility. VDSM can import a third-party sshd library and let that library deal with auth and transport; VDSM just has to implement the data forwarding. Many people consider this insecure, but I think the ticket solution for VNC is not even as secure as this. Currently most of us only trust openssh-server and think the quality of third-party sshds is low. I searched for a while and found twisted.conch from the popular twisted project. I'm not familiar with twisted.conch, but I still put it in this mail to collect opinions from potential twisted.conch experts.
In short, I prefer the sandbox technologies to third-party sshd implementations unless there is an implementation as good as openssh-server.
Work to do: Integrate twisted.conch into VDSM.
Pros: Very flexible. If the library provides an auth callback to VDSM, then VDSM can just compare the login password to the VM ticket without knowing any SSH details.
Cons: Third-party implementations are not as secure and as carefully maintained as the sshd on the host (probably openssh-server).
3. Extend Spice to support the console
Is it possible to implement a Spice client that can run in pure text mode without a GUI environment? If we extend the protocol to support a console stream but the client must run in a GUI, it will be less useful.
Pros: No new VMs and no new server process; easy to maintain.
Cons: Must wait for the Spice developers to commit the support. Needs a special CLI client program, and the user may prefer an existing client like ssh; this is not a big problem because the feature can be put into the oVirt shell.
4. oVirt shell -> Engine -> libvirtd
This is the current workaround described in
http://wiki.ovirt.org/wiki/Features/Serial_Console_in_CLI#Currently_operatio...
The design is good, but I do not like Engine talking to libvirtd directly; hence the VDSM console streaming API below.
Work to do: Provide a console streaming API from Engine to be invoked by the oVirt shell. Implement the "serial-console" command in the oVirt shell.
Pros: Supports migration; Engine can reconnect to the guest automatically after migration while keeping the connection from the oVirt shell. Fits well into the current oVirt architecture: no new server process, no new VM, easy to maintain and manage.
Cons: Engine talking to libvirtd directly breaks the encapsulation of VDSM. Users can only get the console stream from Engine; they cannot connect directly to the host as they can with VNC and the two sshd solutions above.
5. VDSM console streaming API
Implement new APIs in VDSM to forward the raw data from the console. VDSM exposes getConsoleReadStream() and getConsoleWriteStream() via the XMLRPC binding, so Engine can get the console data stream through the API instead of connecting to libvirtd directly. Everything else is the same as solution 4.
Work to do: Implement getConsoleReadStream() and getConsoleWriteStream() in VDSM. Provide a console streaming API from Engine to be invoked by the oVirt shell. Implement the "serial-console" command in the oVirt shell. Optional: implement a client program in vdsClient to consume the stream API.
Pros: Same as solution 4.
Cons: We cannot allow an ordinary user to connect directly to VDSM and invoke the stream API, because there is no ACL in VDSM: once a client cert is set up for an ordinary user, he can call all the APIs in VDSM and get total control. So the ordinary user can only get the stream from Engine, and we leave the ACL to Engine.
I like solution 4 best.
Thanks for the writeup, Zhou Zheng Sheng! This is a very nice explanation of the possible approaches that are on the table. I would like to add my thoughts on each approach inline below.
On Tue, Nov 27, 2012 at 05:22:20PM +0800, Zhou Zheng Sheng wrote:
Hi all,
For now in there is no agreement on the remote guest console solution, so I decide to do some investigation continue the discussion.
Our goal VM serial console remote access in CLI mode. That means the client runs without X environment.
There are several proposals.
- Sandboxed sshd
VDSM runs a new host sshd instance in virtual machine/sandbox and redirects the virtio console to it. 2. Third-party sshd VDSM runs third-party sshd library/implementation and redirects virtio console to it. 3. Spice Extend spice to support console and implement a client to be run without GUI environment 4. oVirt shell -> Engine -> libvirt The user connects to Engine via oVirt CLI, then issues a "serial-console" command, then Engine locates the host and connect to the guest console. Currently there is a workaround, it invokes "virsh -c qemu+tls://host/qemu console vmid" from Engine side. 5. VDSM console streaming API VDSM exposes getConsoleReadStream() and getConsoleWriteStream() via XMLRPC binding. Then implement the related client in vdsClient and Engine
Detailed discussion
- Sandboxes
Solution 1 and 2 allow users connect to console using their favorite ssh client. The login name is vmid, the password is set by setVmTicket() call of VDSM. The connection will be lost during migration. This is similar to VNC in oVirt.
I take a look at several sandbox technologies, including libvirt-sandbox, lxc and selinux. a) libvirt-sandbox boots a VM using host kernel and initramfs, then passthru the host file system to the VM in read only mode. We can also add extra binding to the guest file system. It's very easy to use. To run shell in a VM, one can just issues
virt-sandbox -c qemu:///session /bin/sh
Then the VM will be ready in several seconds. However it will trigger some selinux violations. Currently there is no official support for selinux policy configuration from this project. In the project page this is put in the todo list.
b) lxc utilize Linux container to run a process in sandbox. It needs to be configured properly. I find in the package lxc-templates there is an example configuration file for running sshd in lxc.
c) sandbox command in the package policycoreutils-python makes use of selinux to run a process in sandbox, but there is no official or example policy files for sshd.
In a word, for sandbox technologies, we have to configure the policies/file system binding/network carefully and test the compatibility with popular sshd implementations (openssh-server). When those sshd upgrade, the policy must be upgraded by us at the same time. Since the policies are not maintained by who implements sshd, this is a burden for us.
Work to do Write and maintain the policies. Find ways for auth callback and redirecting data to openssh-server.
pros Re-use existing pieces and technologies (host sshd, sandbox). User friendly, they can use existing ssh clients. cons Connection is lost in migration, this is not a big problem because
- VNC connection share the same problem, 2) the user can reconnect
manually. It's not easy to maintain the sandbox policies/file system binding/network for compatibility with sshd.
I find all of these sandbox techniques to be far too cumbersome to be useful. In each case, the dependencies on the base operating system (selinux, etc.) are too great to make this a maintainable option going forward.
- Third-party sshd implementations
Almost the same as solution 1 but with better flexibility. VDSM can import a third-party sshd library and let that library deal with auth and transport. VDSM just have to implement the data forwarding. Many people consider this is insecure but I think the ticket solution for VNC is even not as secure as this. Currently most of us only trust openssh-server and think the quality of third-party sshd is low. I searched for a while and found twisted.conch from the popular twisted project. I'm not familiar with twisted.conch, but I still put it in this mail to collect opinions from potential twisted.conch experts.
In a word, I prefer sandbox technologies to third-party sshd implementations unless there is a implementation as good as openssh-server.
Work to do Integrate twisted.conch into VDSM
pros Very flexible. If library provide auth callback to VDSM, then VDSM can just compares the login password to the VM ticket without knowing SSH detials. cons Third party implementations are not as secure and carefully maintained as sshd in the host (probably openssh-server).
As others have said previously, the security implications of relying on a third-party ssh implementation make this idea a non-starter for me.
- Extend Spice to support console
Is it possible to implement a spice client can be run in pure text mode without GUI environment? If we extend the protocol to support console stream but the client must be run in GUI, it will be less useful.
pros No new VMs and server process, easy for maintenance. cons Must wait for Spice developers to commit the support. Need special client program in CLI, the user may prefer existing client program like ssh. It not a big problem because this feature can be put in to oVirt shell.
Can someone familiar with spice weigh in on whether a console connection as described here could survive a live migration? In general, I really like this approach if it can be done cleanly. Spice is already oVirt's primary end-user application so in a deployed environment, we'd expect users to already have this program. If a scripted interface is required, I am sure that I/O redirection could be added either to the existing spice client or as part of a new spice-console program. This approach also works with a vdsm that is connected to ovirt-engine or running in standalone mode.
This seems like the best approach to me so long as the spice team agrees that it can and should be done.
- oVirt shell -> Engine -> libvirtd
This is the current workaround described in
http://wiki.ovirt.org/wiki/Features/Serial_Console_in_CLI#Currently_operatio...
The design is good but I do not like Engine talking to libvirtd directly, thus comes the VDSM console streaming API below.
Work to do Provide console streaming API from Engine to be invoked in oVirt shell. Implement the "serial-console" command in oVirt shell.
pros Support migration. Engine can reconnect to the guest automatically after migration while keeping the connection from oVirt shell. Fit well in the current oVirt architecture: no new server process introduced, no new VM introduced, easy to maintain and manage. cons Engine talking to libvirtd directly breaks the encapsulation of VDSM. Users only can get the console stream from Engine, they can not directly connect to the host as VNC and the above two sshd solutions do.
I agree that this is a layering violation and should not be pursued as the long-term solution. We do not want to expose the libvirt connection outside of the host.
- VDSM console streaming API
Implement new APIs in VDSM to forward the raw data from console. It exposes getConsoleReadStream() and getConsoleWriteStream() via XMLRPC binding. Then Engine can get the console data stream via API instead of directly connecting to libvirtd. Other things will be the same as solution 4.
Work to do Implement getConsoleReadStream() and getConsoleWriteStream() in VDSM. Provide console streaming API from Engine to be invoked in oVirt shell. Implement the "serial-console" command in oVirt shell. Optional: Implement a client program in vdsClient to consume the stream API.
pros Same as solution 4 cons We can not allow ordinary user directly connect to VDSM and invoke the stream API, because there is no ACL in VDSM, once a client cert is setup for the ordinary user, he can call all the APIs in VDSM and get total control. So the ordinary user can only get the stream from Engine, and we leave Engine to do the ACL.
One issue that was raised is console buffering. What happens if a client does not call getConsoleReadStream() fast enough? Will characters be dropped? This could create a reliability problem and would make scripting against this interface risky at best.
I like solution 4 best.
I will note again for others that you mentioned you like #5 (console streaming API) best. I think the spice approach is best based on weighing the following requirements:
1. Simple and easy to maintain
2. Can be accessed via the host or ovirt-engine
3. Scripting mode is possible
4. Reliable
The best solution would of course be 3 (or something similar that keeps the terminal state inside the VM memory so that migration works). Tunnelling screen can do that, but it requires having screen (or something similar) installed in the guest, which is hard to do.
But I think the more practical solution is 2, as it has semantics similar to VNC. Running a real sshd (i.e. 1) is problematic because we have less control over the daemon and there are more vectors the user can try to use to break out of the sandbox. Furthermore, setting up sandboxes is a bit problematic at the moment.
I don't really understand 5. What do those methods return, the virtio dev path?
----- Original Message -----
From: "Zhou Zheng Sheng" zhshzhou@linux.vnet.ibm.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 4:22:20 AM Subject: Re: [vdsm] [RFC]about the implement of text-based console
Hi all,
For now in there is no agreement on the remote guest console solution, so I decide to do some investigation continue the discussion.
Our goal VM serial console remote access in CLI mode. That means the client runs without X environment.
Do you mean like running qemu with -curses?
----- Original Message -----
From: "Saggi Mizrahi" smizrahi@redhat.com To: "Zhou Zheng Sheng" zhshzhou@linux.vnet.ibm.com Cc: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, 27 November, 2012 7:45:27 PM Subject: Re: [vdsm] [RFC]about the implement of text-based console
The best solution would of course be 3 (Or something similar that keeps the terminal state inside the VM memory so that migration works). Tunelling screen can do that but it requires having screen (or something similar) installed on the guest which is hard to do.
But I think the more practical solution is 2 as it has semantics similar to VNC. Running a real ssh (ie. 1) is problematic because we have less control over the daemon and there are more vectors the user can try and use to break out of the sandbox. Further more, setting up sandboxes is a bit problematic ATM.
+1
Hi all, in this mail I further explain how solution 5 (console streaming API) works, and propose a virtual HTTP server that lives inside the existing XMLRPC server behind a request router. You can have a look at
on 11/28/2012 01:09, Adam Litke wrote:
One issue that was raised is console buffering. What happens if a client does not call getConsoleReadStream() fast enough? Will characters be dropped? This could create a reliability problem and would make scripting against this interface risky at best.
on 11/28/2012 01:45, Saggi Mizrahi wrote:
I don't really understand 5. What does those methods return the virtio dev path?
As far as I know, HTTP supports persistent connections and data streaming; this is popular for AJAX applications and live video broadcasting servers. The client sends one GET request to the server, the server returns a data stream, and the client reads the stream continuously.
XMLRPC and REST calls rely on HTTP, so I was considering that getConsoleReadStream() could utilize the streaming feature of HTTP, with VDSM just forwarding the console data when it is called. Unfortunately I cannot find out how XMLRPC and REST support data streaming, because XML and JSON cannot contain a continuous stream object. It seems that to get a continuous stream of data, the client must call getConsoleReadStream() again and again. I think it is expensive to call getConsoleReadStream() very frequently, and it may cause a noticeable delay, which is not acceptable for an interactive console.
I am thinking of providing the console stream through HTTP(S) directly. A virtual server can forward the data from the guest serial console using the traditional HTTP streaming method (GET /consoleStream/vmid HTTP/1.0), forward the input data from the user via the POST method as well (POST /consoleStream/vmid HTTP/1.0), or forward the input and output streams at the same time in a single POST request. This virtual server can be further extended to serve downloads of guest crash core dumps, and we can provide flexible authentication policies in it. The auth for HTTP requests can be different from that for XMLRPC requests.
The normal XMLRPC requests are always "POST / HTTP/1.0" or "POST /RPC2 HTTP/1.0". So this virtual server can live inside the existing XMLRPC server, just behind a request router. I read the code implementing the XMLRPC binding and found that implementing a request router is not very complex. We can multiplex port 54321 and route the raw HTTP requests to the virtual server, while normal XMLRPC requests still go to the XMLRPC handler.
This means it can serve an XMLRPC request such as
vdsClient -s localhost getVdsCaps
and at the same time serve a wget client such as
wget --no-check-certificate \
     --certificate=/etc/pki/vdsm/certs/vdsmcert.pem \
     --private-key=/etc/pki/vdsm/keys/vdsmkey.pem \
     --ca-certificate=/etc/pki/vdsm/certs/cacert.pem \
     https://localhost:54321/console/vmid
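For illustration only, a minimal sketch of the request-router idea (this is not the actual gerrit patch; it assumes the stock Python 2 SimpleXMLRPCServer classes, and forward_console() is a made-up helper):

from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler

class RoutingRequestHandler(SimpleXMLRPCRequestHandler):
    def do_POST(self):
        if self.path in ('/', '/RPC2'):
            # Normal XMLRPC request: let the stock handler process it.
            SimpleXMLRPCRequestHandler.do_POST(self)
        elif self.path.startswith('/console/'):
            vmid = self.path.split('/')[-1]
            self.send_response(200)
            self.send_header('Content-type', 'application/octet-stream')
            self.end_headers()
            # Hypothetical helper: pump data between the raw client socket
            # and the VM console until either side closes.
            forward_console(vmid, self.connection)
        else:
            self.send_error(404)

    def do_GET(self):
        if self.path.startswith('/console/'):
            self.do_POST()          # same forwarding path for GET and POST
        else:
            self.send_error(404)

server = SimpleXMLRPCServer(('0.0.0.0', 54321),
                            requestHandler=RoutingRequestHandler)
# ... register the normal vdsm API functions here (getVdsCaps, etc.) ...
server.serve_forever()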
I tried to implement a simple request router at
If interested, you can have a look at it. It passes the recently added VDSM functional tests, and can serve wget requests at the same time. If we do not like this idea, I think only the solution of extending Spice will fulfill our requirements; there are obvious problems in the other solutions.
----- Original Message -----
From: "Zhou Zheng Sheng" zhshzhou@linux.vnet.ibm.com To: "VDSM Project Development" vdsm-devel@lists.fedorahosted.org Sent: Tuesday, November 27, 2012 4:22:20 AM Subject: Re: [vdsm] [RFC]about the implement of text-based console
Hi all,
For now in there is no agreement on the remote guest console solution, so I decide to do some investigation continue the discussion.
Our goal VM serial console remote access in CLI mode. That means the client runs without X environment.
Do you mean like running qemu with -curses?
I mean like "virsh console"
On 12/04/2012 00:40, Saggi Mizrahi wrote:
Sorry, it's probably the fact that I don't have enough time to go into the code, but I still don't get what you are trying to do. Having it in HTTP and XML-RPC is a bad idea, but I imagine the theoretical solution doesn't depend on either of them.
Could you just show some pseudo code of a client using the stream?
In my proposal, HTTP is just for "signaling". The stream data is forwarded back and forth over the socket. Could you have a look at http://gerrit.ovirt.org/#/c/10381 ? In this patch I implement the console forwarding based on the previous proposal. You can just run netcat to test it.
nc 127.0.0.1 54321
Then paste the following
POST /VM/paste-your-VM-uuid-here/dev/console HTTP/1.0
Hit enter twice, and you will see
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.3
Date: Wed, 26 Dec 2012 10:13:56 GMT
Content-type: application/octet-stream
This means the console is OK. Hit enter again, and you will see
Fedora release 17 (Beefy Miracle)
Kernel 3.3.4-5.fc17.x86_64 on an x86_64 (hvc0)
localhost login:
Now you can interact with the remote console as usual.
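As a rough pseudo-code answer to the earlier request for a client example (plain Python 2, sending the same request as the netcat test above; only the read direction is shown, the write direction is hinted at in a comment):

import socket
import sys

def console_client(host, vmid):
    sock = socket.create_connection((host, 54321))
    # "Signaling": one raw HTTP request selects the VM console...
    sock.sendall('POST /VM/%s/dev/console HTTP/1.0\r\n\r\n' % vmid)
    # ...then the same connection becomes the raw console stream
    # (the HTTP response header arrives first, followed by console output).
    while True:
        data = sock.recv(4096)
        if not data:
            break
        sys.stdout.write(data)
        sys.stdout.flush()
        # A real client would also put the terminal in raw mode and forward
        # sys.stdin to sock, e.g. in a second thread or a select() loop.

if __name__ == '__main__':
    console_client('127.0.0.1', sys.argv[1])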