== What are our use-cases? ==
Full disclosure: the use-cases I am listing below come from discussions I have had within Red Hat for what we would like to see in Fedora that we can best build on to become Red Hat Enterprise Linux 8. Please take this in the spirit it is given: I am disclosing what Red Hat wants up front, so we aren't accused of working towards hidden goals. I am also not committing us to covering any or all of these targets. I do believe however that the majority of these use-cases will be beneficial to both Fedora and Red Hat.
* Provide a platform for acting as a node in an OpenStack rack.
* Provide a platform and simple setup for certain infrastructure services, e.g.
  * FreeIPA Domain Controller
  * BIND DNS
  * DHCP
  * Database server (both free and commercial).
* Provide simple setup of a file-server (on par with Windows).
* Platform for deploying web applications with high-value frameworks.
  * Ruby on Rails
  * Django
  * Turbogears
  * Node.js
* Make Fedora the best platform to deploy JBoss applications.
* Come up with standardized mechanisms for centralized monitoring
* Come up with standardized mechanisms for centralized configuration and management
* Simple enrollment into FreeIPA and Active Directory domains
* Provide the best platform for secure application deployment
  * Isolation of OS from applications
  * Isolation of applications from each other
  * Isolation of application users from each other
  * Management of application resource consumption
* Simplify management and deployment.[1]
* Deliver the world's best leading edge DevOps platform.
== My initial thoughts ==
I am open to counter-arguments, naturally.
[1] Ideally, we want a mid-level Microsoft admin to be able to manage Fedora without much learning curve.
On Mon, 2013-10-28 at 08:55 -0400, Stephen Gallagher wrote:
== What are our use-cases? ==
Full disclosure: the use-cases I am listing below come from discussions I have had within Red Hat for what we would like to see in Fedora that we can best build on to become Red Hat Enterprise Linux 8. Please take this in the spirit it is given: I am disclosing what Red Hat wants up front, so we aren't accused of working towards hidden goals. I am also not committing us to covering any or all of these targets. I do believe however that the majority of these use-cases will be beneficial to both Fedora and Red Hat.
- Provide a platform for acting as a node in an OpenStack rack.
Isn't this the goal of the Cloud Product ?
- Provide a platform and simple setup for certain infrastructure
services, e.g.
- FreeIPA Domain Controller
I think we also need to help get a Samba AD Domain Controller working as well, although that is mostly blocked on upstream work, and I am not sure we can do much here beyond encouraging upstream.
- BIND DNS
- DHCP
- Database server (both free and commercial).
s/commercial/proprietary/ I guess ?
I do not think Fedora should care particularly about the proprietary ones. We shouldn't be hostile, but neither should we cater to them too much; most contributors wouldn't have the means to do it anyway, and proprietary vendors can more easily target RHEL and interact with RH.
- Provide simple setup of a file-server (on par with Windows).
There be dragons :-) But I agree.
- Platform for deploying web applications with high-value frameworks.
- Ruby on Rails
- Django
- Turbogears
- Node.js
Are we going to do this via Software Collections ? The main issue I see with these is that various apps at times require conflicting versions of the same base framework/language.
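For illustration, a rough sketch of how a collection keeps such a stack parallel to the base OS (the collection name "ruby200" is only an assumption here):

    # install the collection under /opt instead of /usr
    yum install ruby200

    # run one command with the collection's runtime on the path
    scl enable ruby200 'ruby -v'

    # or open a shell with the collection enabled
    scl enable ruby200 bash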
- Make Fedora the best platform to deploy JBoss applications.
- Come up with standardized mechanisms for centralized monitoring
- Come up with standardized mechanisms for centralized configuration
and management
- Simple enrollment into FreeIPA and Active Directory domains
- Provide the best platform for secure application deployment
- Isolation of OS from applications
- Isolation of applications from each other
- Isolation of application users from each other
- Management of application resource consumption
You mean containers and cgroups here ?
- Simplify management and deployment.[1]
- Deliver the world's best leading edge DevOps platform.
Uhm Server and DevOps are not necessarily aligned in my mind, but I guess it depends on how leading edge you make stuff.
I think the main issue is the amount of packages we try to cram in the "OS". If we keep the OS lean and move most of the rapidly evolving frameworks into collections, we can probably pull this off, but the balance will be critical.
== My initial thoughts == I am open to counter-arguments, naturally.
[1] Ideally, we want a mid-level Microsoft admin to be able to manage Fedora without much learning curve.
For this you need wizards and (web) UIs. I am not sure how much room there is in our charter for developing additional software; Fedora is usually mostly about packaging stuff.
Simo.
On Mon, Oct 28, 2013 at 09:06:14AM -0400, Simo Sorce wrote:
- Provide a platform for acting as a node in an OpenStack rack.
Isn't this the goal of the Cloud Product ?
We haven't figured this all out yet. It's definitely not *the* goal -- most of us on the cloud WG are thinking about cloud guest images, but there may be room for this case as well. And, on the other hand, the server product probably also wants to be able to run under virtualization including in cloud providers.
One possible way to think about the division is in the pets vs. cattle analogy (which is such a good one that it's quickly becoming worn thin -- bear with me if you've heard this too many times already).
Traditional servers have names, personalities, and are lovingly cared for. When they get sick, you diagnose the problem and carefully nurse them back to health, sometimes at great expense. Like pets. On the other hand, cattle are numbered, and thought of as basically identical, and if they get sick, you put them down and get another one.
With this distinction in mind, compute nodes may indeed better fit into the cloud product, even though they run on bare hardware. We'll see.
On Mon, 2013-10-28 at 09:22 -0400, Matthew Miller wrote:
On Mon, Oct 28, 2013 at 09:06:14AM -0400, Simo Sorce wrote:
- Provide a platform for acting as a node in an OpenStack rack.
Isn't this the goal of the Cloud Product ?
We haven't figured this all out yet. It's definitely not *the* goal -- most of us on the cloud WG are thinking about cloud guest images, but there may be room for this case as well. And, on the other hand, the server product probably also wants to be able to run under virtualization including in cloud providers.
One possible way to think about the division is in the pets vs. cattle analogy (which is such a good one that it's quickly becoming worn thin -- bear with me if you've heard this too many times already).
Traditional servers have names, personalities, and are lovingly cared for. When they get sick, you diagnose the problem and carefully nurse them back to health, sometimes at great expense. Like pets. On the other hand, cattle are numbered, and thought of as basically identical, and if they get sick, you put them down and get another one.
With this distinction in mind, compute nodes may indeed better fit into the cloud product, even though they run on bare hardware. We'll see.
This was my thinking. I thought the cloud WG is about building lean images that do not have much in the way of wizards, installers, and hand-holding software, but are focused on rapid deployment, either as lean hosts or as compute/elastic guests, all controlled via some configuration engine like puppet etc... basically single-task machines.
I see the server WG more about building heavier, long term, multipurpose servers (be it on bare metal or as guest).
Simo.
On 10/28/2013 01:29 PM, Simo Sorce wrote:
This was my thinking, I thought the cloud WG is about building lean images that do not have much in the way of wizards, installers, and hand-holding software, but are focused toward rapid deployment, either as lean hosts or as computing/elastic guests, all controlled via some configuration engine like puppet etc... basically single task machines.
I see the server WG more about building heavier, long term, multipurpose servers (be it on bare metal or as guest).
I would disagree. I would say single-purpose servers/VMs/containers all fall under the server WG as well, but I think we should be looking at this from the application standpoint, as in which of those services/daemons fall under the server WG; that means we could make up to about 500-550 applications or "products" that can be deployed on bare metal, in VMs, or in containers that we would be delivering.
JBG
On Mon, Oct 28, 2013 at 2:46 PM, "Jóhann B. Guðmundsson" johannbg@gmail.com wrote:
On 10/28/2013 01:29 PM, Simo Sorce wrote:
This was my thinking, I thought the cloud WG is about building lean images that do not have much in the way of wizards, installers, and hand-holding software, but are focused toward rapid deployment, either as lean hosts or as computing/elastic guests, all controlled via some configuration engine like puppet etc... basically single task machines.
I see the server WG more about building heavier, long term, multipurpose servers (be it on bare metal or as guest).
I would disagree I would say single purpose servers/vm/containers all fall under the server WG as well but I think we should be looking at this from application stand point as in which of those services/daemon fall under the server WG and that means we could make up to about 500 550 applications or "products" that can be deployed on bare metal in vm's or containers we would be delivering.
I think it would make most sense for Cloud and Server to "share applications", i.e. the same application package can be deployed either within a single-purpose Cloud image (automatically managed for horizontal scaling), or as a single instance within a Server (one of many applications running on this particular Server).
Given that, I think the Server WG should indeed choose a very limited set of "applications" / "services" to include within the Server product and to make management of this limited set of services really good. Mirek
On 10/29/2013 09:30 PM, Miloslav Trmač wrote:
On Mon, Oct 28, 2013 at 2:46 PM, "Jóhann B. Guðmundsson" johannbg@gmail.com wrote:
On 10/28/2013 01:29 PM, Simo Sorce wrote:
This was my thinking, I thought the cloud WG is about building lean images that do not have much in the way of wizards, installers, and hand-holding software, but are focused toward rapid deployment, either as lean hosts or as computing/elastic guests, all controlled via some configuration engine like puppet etc... basically single task machines.
I see the server WG more about building heavier, long term, multipurpose servers (be it on bare metal or as guest).
I would disagree I would say single purpose servers/vm/containers all fall under the server WG as well but I think we should be looking at this from application stand point as in which of those services/daemon fall under the server WG and that means we could make up to about 500 550 applications or "products" that can be deployed on bare metal in vm's or containers we would be delivering.
I think it would make most sense for Cloud and Server to "share applications", i.e. the same application package can be deployed either within a single-purpose Cloud image (automatically managed for horizontal scaling), or as a single instance within a Server (one of many applications running on this particular Server).
We see things quite differently here. I look at a server as servicing one or more server applications (including hosting the cloud), and at the cloud as first and foremost a delivery method for server applications.
So we (as in the server WG) handle all the server applications and the solutions surrounding them, while the cloud WG handles the deliverable and the configuration aspect of the server applications.
Given that, I think the Server WG should indeed choose a very limited set of "applications" / "services" to include within the Server product and to make management of this limited set of services really good.
I disagree
JBG
On Wed, Oct 30, 2013 at 12:49 AM, "Jóhann B. Guðmundsson" johannbg@gmail.com wrote:
On 10/29/2013 09:30 PM, Miloslav Trmač wrote:
On Mon, Oct 28, 2013 at 2:46 PM, "Jóhann B. Guðmundsson" johannbg@gmail.com wrote: I think it would make most sense for Cloud and Server to "share applications", i.e. the same application package can be deployed either within a single-purpose Cloud image (automatically managed for horizontal scaling), or as a single instance within a Server (one of many applications running on this particular Server).
We see things quite differently here I look at a server servicing one or more server application ( including hosting the cloud ) and the cloud first foremost a deliver method of server application.
I think we actually agree on this, in general.
So we ( as in the server WG ) handle all the server applications and the solution surrounding that while the cloud WG handles the deliverable and the configuration aspect of the server applications.
Here we might disagree: I think it's far easier to keep the application and the "configuration aspect" working well if they are both done by the same group. And "configuration" is not something specific to cloud - it needs to be done when deploying the application both to cloud and to the server; the special thing about cloud is the (automatic?) horizontal scaling.
As for "all the server applications", I do want as many as possible available, but to me some _are_ more important than others, and deserve specific attention of the Server WG, and specific integration within the product. Mirek
On 10/30/2013 04:59 PM, Miloslav Trmač wrote:
As for "all the server applications", I do want as many as possible available, but to me some_are_ more important than others, and deserve specific attention of the Server WG, and specific integration within the product.
We need to start the process somewhere so we can start with those that you hold dear to your heart ;) then gradually add those later.
JBG
On Mon, Oct 28, 2013 at 2:29 PM, Simo Sorce simo@redhat.com wrote:
On Mon, 2013-10-28 at 09:22 -0400, Matthew Miller wrote:
On Mon, Oct 28, 2013 at 09:06:14AM -0400, Simo Sorce wrote:
- Provide a platform for acting as a node in an OpenStack rack.
Isn't this the goal of the Cloud Product ?
We haven't figured this all out yet. It's definitely not *the* goal -- most of us on the cloud WG are thinking about cloud guest images, but there may be room for this case as well. And, on the other hand, the server product probably also wants to be able to run under virtualization including in cloud providers.
One possible way to think about the division is in the pets vs. cattle analogy (which is such a good one that it's quickly becoming worn thin -- bear with me if you've heard this too many times already).
Traditional servers have names, personalities, and are lovingly cared for. When they get sick, you diagnose the problem and carefully nurse them back to health, sometimes at great expense. Like pets. On the other hand, cattle are numbered, and thought of as basically identical, and if they get sick, you put them down and get another one.
With this distinction in mind, compute nodes may indeed better fit into the cloud product, even though they run on bare hardware. We'll see.
This was my thinking, I thought the cloud WG is about building lean images that do not have much in the way of wizards, installers, and hand-holding software, but are focused toward rapid deployment, either as lean hosts or as computing/elastic guests, all controlled via some configuration engine like puppet etc... basically single task machines.
I see the server WG more about building heavier, long term, multipurpose servers (be it on bare metal or as guest).
Yes, that's the way I think of this as well: Single-purpose systems with horizontal scaling are primarily a Cloud domain.
Or, to take it from the other end, something like the oVirt minimal VM host should, I think, be built from/by the Cloud product. For the Server product we'd be completely fine with using a little extra space (on the order of dozens of megabytes, or perhaps more) if that made server management noticeably easier. Mirek
Fedora Server Folks,

I think the group needs to focus on making things easier for 100% RPM installs. RPM is the package manager for Fedora so applications SHOULD be deployable with 100% RPM-based installs and NOT require another system to get them running or configure them (e.g. chef, puppet, CFengine). Here's a short list of things that I would like and would be rather easy to implement:
- Any daemon can be started and running with a "-on" package (e.g. httpd-on): just a wrapper with chkconfig on + service start
- /etc/fstab.d/ capability
- Moving /etc/yum.repos.d/*.repo files from fedora-release to an independent package that can optionally not be installed in the kickstart. THIS IS A BIG SECURITY PET PEEVE OF MINE.
* Database server (both free and commercial). s/commercial/proprietary/ I guess ?
We really need to fix the Oracle instant client mess. SpaceWalk tried but why can't Fedora make a really great wrapper RPM so that it just works. I'm sure most of you will say why can't Oracle just make good RPMs. I have my own wrapper but why make it so hard. Same with Perl-DBD-Oracle. This is a political effort more than technical. But, these are the kinds of things, I think, this group needs to work on. I am not proposing that Fedora host Oracle RPMs (wrong license) but work with Oracle to fix it for everyone in one place and just make it easy.
* Simplify management and deployment.[1]
Bottom line I think Fedora should provide the running building block or even full running applications like TurnKey Linux with a nice default configuration. e.g. I need a running webserver "yum install httpd-on". I need a running database "yum install postgresql-server-on".
So, a web application spec simply requires: postgresql-server-on httpd-on and done!
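A minimal sketch of what one of those wrapper packages' spec could look like (the package name, and using systemctl rather than chkconfig/service, are assumptions for illustration):

    Name:      httpd-on
    Version:   1.0
    Release:   1%{?dist}
    Summary:   Wrapper that enables and starts httpd when installed
    License:   MIT
    BuildArch: noarch
    Requires:  httpd

    %description
    Installing this package enables and starts the httpd service;
    removing it stops and disables the service again.

    %post
    # systemd equivalent of "chkconfig httpd on && service httpd start"
    systemctl enable httpd.service >/dev/null 2>&1 || :
    systemctl start httpd.service >/dev/null 2>&1 || :

    %preun
    # only on full removal (not upgrade): stop and disable the service
    if [ $1 -eq 0 ]; then
        systemctl stop httpd.service >/dev/null 2>&1 || :
        systemctl disable httpd.service >/dev/null 2>&1 || :
    fi

    %files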
Thanks Mike mrdvt92
On 10/28/2013 04:13 PM, Michael R. Davis wrote:
Fedora Server Folks,
I think the group needs to focus on making things easier for 100% RPM Installs. RPM is the package manager for Fedora so applications SHOULD be deployable with 100% RPM-based installs and NOT require another system to get them running or configure them (e.g. chef, puppet, CFengine).
This is already the ideal case for most things. Very few require things like this to configure a functional default. Katello and foreman are the only two that come to mind offhand.
There's a difference between initial install and config maintenance/management though.
Here's a short list of things that I would like and would be rather easy to implement.
- Any daemon can be started and running with a "-on" package (e.g.
httpd-on) just a wrapper with chkconfig on + service start
- /etc/fstab.d/ capability
Could you provide an example of why this might be useful? Do you programmatically change mount points enough for this?
- Moving /etc/yum.repos.d/*.repo files from fedora-release to an
independent package that can optionally not be installed in the kickstart. THIS IS A BIG SECURITY PET PEEVE OF MINE.
Could you elaborate a bit on this one as well please?
- Database server (both free and commercial).
s/commercial/proprietary/ I guess ?
We really need to fix the Oracle instant client mess. SpaceWalk tried but why can't Fedora make a really great wrapper RPM so that it just works. I'm sure most of you will say why can't Oracle just make good RPMs. I have my own wrapper but why make it so hard. Same with Perl-DBD-Oracle. This is a political effort more than technical. But, these are the kinds of things, I think, this group needs to work on. I am not proposing that Fedora host Oracle RPMs (wrong license) but work with Oracle to fix it for everyone in one place and just make it easy.
I've not found Oracle overly willing to help in most aspects. In most cases they've been condescending and arrogant.
- Simplify management and deployment.[1]
Bottom line I think Fedora should provide the running building block or even full running applications like TurnKey Linux with a nice default configuration. e.g. I need a running webserver "yum install httpd-on". I need a running database "yum install postgresql-server-on".
So, a web application spec simply requires: postgresql-server-on httpd-on and done!
Would there be an equivalent -off for folks who wish to manually tinker prior to enabling? If so, what would be the behavior if someone were to 'yum install httpd-on httpd-off' ?
I'm not convinced that yum should be in the business of enabling/disabling services like this.
- /etc/fstab.d/ capability
Could you provide an example of why this might be useful? Do you programmatically change mount points enough for this?
I have applications that need mount points. I need to deploy to every server. I'd like to do this easily in RPM, not in chef or puppet or augtool.
- Moving /etc/yum.repos.d/*.repo files from fedora-release to an independent package that can optionally not be installed in the kickstart. THIS IS A BIG SECURITY PET PEEVE OF MINE.
Could you elaborate a bit on this one as well please?
My IT guys will do a yum update before they remove the default repo files and we'll get Internet-installed RPMs on our systems, which is a no-no for us. We are required to only use the local repo, which may be behind the latest on the net, but that's what's been tested with the apps.
We really need to fix the Oracle instant client mess.
I've not found oracle overly willing to help in most aspects. In most cases they've been condescending and arrogant.
We still need to do the best we can to make it easy.
These are the RPMs that we have built. I think most are home-grown, but there is no need for every company to repeat this mess.
apr-util-oracle oracle-instantclient11.2-httpd oracle-instantclient11.2-wrapper oracle-instantclient11.2-bashrc oracle-instantclient11.2-ldconfig
Bottom line I think Fedora should provide the running building block or even full running applications like TurnKey Linux with a nice default configuration. e.g. I need a running webserver "yum install httpd-on". I need a running database "yum install postgresql-server-on".
Would there be an equivalent -off for folks who wish to manually tinker prior to enabling?
yum erase httpd-on turns it off; it's just a wrapper package...
but yum install httpd; service httpd start; would still work.
I'm not convinced that yum should be in the business of enabling/disabling services like this.
"yum" would not be. The spec "post" would actually "do" it. I think if we raise the bar we can start building mansions and the end application only need to build a room. If we raise the bar far enough there's no stopping us. Thanks, Mike mrdvt92
On 10/28/2013 05:55 PM, Michael R. Davis wrote:
- /etc/fstab.d/ capability
Could you provide an example of why this might be useful? Do you programmatically change mount points enough for this?
I have applications that need mount points. I need to deploy to every server. I'd like to do this easily in RPM, not in chef or puppet or augtool.
There's a compelling case for this, but I don't think you've made it.
- Moving /etc/yum.repos.d/*.repo files from fedora-release to an
independent package that can optionally not be installed in the kickstart. THIS IS A BIG SECURITY PET PEEVE OF MINE.
Could you elaborate a bit on this one as well please?
My IT guys will do a yum update before they remove the default repo files and we'll get Internet-installed RPMs on our systems, which is a no-no for us. We are required to only use the local repo, which may be behind the latest on the net, but that's what's been tested with the apps.
This is a local case, and just makes the case for proper config management via the tools you seem to want to use. I don't see how this provides a benefit to other users.
We really need to fix the Oracle instant client mess.
I've not found oracle overly willing to help in most aspects. In most cases they've been condescending and arrogant.
We still need to do the best we can to make it easy.
These are the RPMs that we have built. I think most are home-grown, but there is no need for every company to repeat this mess.
apr-util-oracle oracle-instantclient11.2-httpd oracle-instantclient11.2-wrapper oracle-instantclient11.2-bashrc oracle-instantclient11.2-ldconfig
As you pointed out, there is a licensing issue here that needs to be addressed.
Bottom line I think Fedora should provide the running building block or even full running applications like TurnKey Linux with a nice default configuration. e.g. I need a running webserver "yum install httpd-on". I need a running database "yum install postgresql-server-on".
Would there be an equivalent -off for folks who wish to manually tinker prior to enabling?
yum erase httpd-on; turns it off it's just a wrapper package...
but yum install httpd; service httpd start; would still work.
I'm not convinced that yum should be in the business of enabling/disabling services like this.
"yum" would not be. The spec "post" would actually "do" it.
I don't agree with this approach. This is clearly within the domain of config management tools and would break the traditional approach for no perceptible gain. With this approach, a user could have your httpd-on package installed, and 'chkconfig httpd off'. This would create confusion and a support issue. I would prefer to leave this in the capable hands of puppet/ansible/cfengine/chef/bcfg2/salt etc.
I think if we raise the bar we can start building mansions and the end application only need to build a room. If we raise the bar far enough there's no stopping us.
wat?
- /etc/fstab.d/ capability
There's a compelling case for this, but I don't think you've made it.
I'm not sure I have the skills to make that case. I know I don't have the skills to modify mount. I just think it would be a great capability.
----- Original Message ----- From: Jim Perrin jperrin@centos.org
> I would prefer to leave this in the capable hands of puppet/ansible/cfengine/chef/bcfg2/salt etc.
I would proffer that the reason there are so many "hands" in the pot is that there's too big of a gap between what RPM provides and what people need. Maybe you are correct: Fedora should only provide the lumber and let everyone else bring the glue. I was just hoping that if we provided well-built homes up front then we could save on the cost of glue at the edges.
As it stands, we will agree to disagree as to the scope of what package management should be. I think you understand my position.
Thanks, Mike
mrdvt92
On Mon, Oct 28, 2013 at 3:55 PM, Michael R. Davis mrdvt92@yahoo.com wrote:
- /etc/fstab.d/ capability
Could you provide an example of why this might be useful? Do you programmatically change mount points enough for this?
I have applications that need mount points. I need to deploy to every server. I'd like to do this easily in RPM, not in chef or puppet or augtool.
In Fedora, this is clearly the purview of mount and automount units with systemd. An RPM can drop them into /etc/systemd/system/*.mount and enable them. There is no need for fstab.d.
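For example, an RPM could carry a unit like the following and enable it in %post; the device, mount point, and filesystem here are purely illustrative (note that the unit file name has to match the escaped mount path, so /srv/appdata becomes srv-appdata.mount):

    # /etc/systemd/system/srv-appdata.mount
    [Unit]
    Description=Application data volume

    [Mount]
    What=/dev/disk/by-label/appdata
    Where=/srv/appdata
    Type=xfs
    Options=defaults

    [Install]
    WantedBy=multi-user.target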
On 28.10.2013 23:55, Michael R. Davis wrote:
- /etc/fstab.d/ capability
Could you provide an example of why this might be useful? Do you programmatically change mount points enough for this?
I have applications that need mount points. I need to deploy to every server. I'd like to do this easily in RPM, not in chef or puppet or augtool.
Isn't this already covered by systemd's mount units?
- Moving /etc/yum.repos.d/*.repo files from fedora-release to an
independent package that can optionally not be installed in the kickstart. THIS IS A BIG SECURITY PET PEEVE OF MINE.
Could you elaborate a bit on this one as well please?
My IT guys will do a yum update before they remove the default repo files and we'll get Internet-installed RPMs on our systems, which is a no-no for us. We are required to only use the local repo, which may be behind the latest on the net, but that's what's been tested with the apps.
We really need to fix the Oracle instant client mess.
I've not found oracle overly willing to help in most aspects. In most cases they've been condescending and arrogant.
We still need to do the best we can to make it easy.
These are the RPMs that we have built. I think most are home-grown, but there is no need for every company to repeat this mess.
apr-util-oracle oracle-instantclient11.2-httpd oracle-instantclient11.2-wrapper oracle-instantclient11.2-bashrc oracle-instantclient11.2-ldconfig
Have you approached Oracle about this? I don't think that this is something Fedora should concern itself with. If Oracle then needs some adjustments in Fedora that can be dealt with but until then this is really just between Oracle and its users.
Bottom line I think Fedora should provide the running building block or even full running applications like TurnKey Linux with a nice default configuration. e.g. I need a running webserver "yum install httpd-on". I need a running database "yum install postgresql-server-on".
Would there be an equivalent -off for folks who wish to manually tinker prior to enabling?
yum erase httpd-on; turns it off it's just a wrapper package...
but yum install httpd; service httpd start; would still work.
I'm not convinced that yum should be in the business of enabling/disabling services like this.
"yum" would not be. The spec "post" would actually "do" it.
Whether a service is started or not is a policy decision that is local to the site where the service is installed and as such shouldn't be hard-coded into a package. Site-local configuration policies lie completely outside the scope of package management, and as such littering the distribution with lots of "*-on" packages just to make life a tiny bit easier for lazy admins is a really terrible idea, I think.
Regards, Dennis
On Tue, 2013-10-29 at 16:12 +0100, Dennis Jacobfeuerborn wrote:
On 28.10.2013 23:55, Michael R. Davis wrote:
Would there be an equivalent -off for folks who wish to manually tinker prior to enabling?
yum erase httpd-on; turns it off it's just a wrapper package...
but yum install httpd; service httpd start; would still work.
I'm not convinced that yum should be in the business of enabling/disabling services like this.
"yum" would not be. The spec "post" would actually "do" it.
Whether a service is started or not is a policy decision that is local to the site where the service is installed and as such shouldn't be hard-coded into a package. Site-local configuration policies lie completely outside the scope of package management, and as such littering the distribution with lots of "*-on" packages just to make life a tiny bit easier for lazy admins is a really terrible idea, I think.
Well there is a case for distributing policy via RPMs, it is a fine distribution mechanism IMHO, but it is self-evidently a completely local one, and will be better served by custom admin RPMs rather than distribution provided ones.
Simo.
On 29.10.2013 16:22, Simo Sorce wrote:
On Tue, 2013-10-29 at 16:12 +0100, Dennis Jacobfeuerborn wrote:
On 28.10.2013 23:55, Michael R. Davis wrote:
Would there be an equivalent -off for folks who wish to manually tinker prior to enabling?
yum erase httpd-on; turns it off it's just a wrapper package...
but yum install httpd; service httpd start; would still work.
I'm not convinced that yum should be in the business of enabling/disabling services like this.
"yum" would not be. The spec "post" would actually "do" it.
Whether a service is started or not is a policy decision that is local to the site where the service is installed and as such shouldn't be hard-coded into a package. Site-local configuration policies lie completely outside the scope of package management, and as such littering the distribution with lots of "*-on" packages just to make life a tiny bit easier for lazy admins is a really terrible idea, I think.
Well there is a case for distributing policy via RPMs, it is a fine distribution mechanism IMHO, but it is self-evidently a completely local one, and will be better served by custom admin RPMs rather than distribution provided ones.
I guess the fact that a service comes with a default configuration file already counts as policy distribution but encapsulating something like "chkconfig X on && service X start" in its own distribution maintained package seems like overkill to me.
I don't think it's a good idea to make the line between package and configuration management too blurry.
Regards, Dennis
Simo Sorce (simo@redhat.com) said:
Well there is a case for distributing policy via RPMs, it is a fine distribution mechanism IMHO, but it is self-evidently a completely local one, and will be better served by custom admin RPMs rather than distribution provided ones.
... and if desired, can be accomplished by packaging up a systemd preset file already.
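For example, a site-local policy package would essentially just drop in a file like this (the file name and service list are illustrative):

    # /etc/systemd/system-preset/50-site-local.preset
    # local policy: enable these services automatically when their
    # packages are installed; everything else keeps the default policy
    enable httpd.service
    enable postgresql.service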
Bill
On Tue, 2013-10-29 at 11:36 -0400, Bill Nottingham wrote:
Simo Sorce (simo@redhat.com) said:
Well there is a case for distributing policy via RPMs, it is a fine distribution mechanism IMHO, but it is self-evidently a completely local one, and will be better served by custom admin RPMs rather than distribution provided ones.
... and if desired, can be accomplished by packaging up a systemd preset file already.
Absolutely, I do not care what mechanism, the point is that this is not something the distribution should provide as it is local policy.
Simo.
On 10/29/2013 03:36 PM, Bill Nottingham wrote:
Simo Sorce (simo@redhat.com) said:
Well there is a case for distributing policy via RPMs, it is a fine distribution mechanism IMHO, but it is self-evidently a completely local one, and will be better served by custom admin RPMs rather than distribution provided ones.
... and if desired, can be accomplished by packaging up a systemd preset file already.
That's not entirely correct, since what Michael was seeking was to install the unit, enable it, and start it in the same packaging transaction, and you won't achieve that by simply dropping a preset snippet in the correct place, afaik. Plus, that is something we will never do here in Fedora for obvious reasons, but he can certainly play with those explosives by himself in his own rpm package if he wants to.
JBG
On Mon, Oct 28, 2013 at 10:13 PM, Michael R. Davis mrdvt92@yahoo.com wrote:
I think the group needs to focus on making things easier for 100% RPM Installs. RPM is the package manager for Fedora so applications SHOULD be deployable with 100% RPM-based installs and NOT require another system to get them running or configure them (e.g. chef, puppet, CFengine).
I don't think RPM is a good fit for doing system management:
- As for the core functionality of RPM, "install a file" is not a good way to invoke an API: it's fairly resource-intensive to watch for new files to be installed, and the semantics are problematic: now that the file is in place, has the application parsed it correctly and is it actually being applied, or was it ignored because of a typo?
- As a way to deploy arbitrary shell scripts, RPM is difficult to use on all fronts: one has to write the script, wrap it in a spec file, create the RPM, deliver the RPM, and install the RPM, just to run it; and combine (yum history) with (rpm -q --scripts) to see what has been run. (ssh hostname 'the command here') with some logging would do essentially the same thing, but much more simply. (And, as I have argued elsewhere in the thread, shell is long-term not a good API language in any case.)
Mirek
On Mon, Oct 28, 2013 at 2:06 PM, Simo Sorce simo@redhat.com wrote:
On Mon, 2013-10-28 at 08:55 -0400, Stephen Gallagher wrote:
- Platform for deploying web applications with high-value frameworks.
- Ruby on Rails
- Django
- Turbogears
- Node.js
Are we going to do this via Software Collections ? The main issue I see with these is that various apps requires at times, conflicting versions of the same base framework/language.
I think it would make sense for the Base Design and Environments/Stacks WGs to define how to install, build and package the runtimes and the applications that use them (including the conflicting/multiple versions issue), without focusing on a specific use (e.g. treating web applications, GUI applications and CLI applications equally), and for the Server WG to handle deploying the web applications within the web server and managing deployed web applications.
I think the main issue is the amount of packages we try to cram in the "OS".
Would it be radical to suggest that "packages" should be invisible to an admin that doesn't want to see them? "Enable the DNS server", configure what it is serving, *product's magic here*, the DNS server runs.
[1] Ideally, we want a mid-level Microsoft admin to be able to manage Fedora without much learning curve.
For this you need wizards and (web) UIs, I am not sure how much in our charter there is for developing additional software, Fedora usually is about packaging stuff, mostly.
I strongly think Fedora needs to more actively develop missing functionality, or cause the missing functionality to be developed. "The Open Source community hasn't written it yet" is an excuse but not a reason for the users to put up with the resulting lack of functionality. Mirek
On 10/28/2013 12:55 PM, Stephen Gallagher wrote:
== My initial thoughts == I am open to counter-arguments, naturally.
I'm just going straight to the overlapping issues we have between the WG's as in we need to establish the fundamental approach of which applications belong to our group.
Basically, where I stand: any application that runs as a daemon/service, i.e. an application (or set of applications) that runs in the background waiting to be used or carrying out essential tasks on a physical machine, a VM, or a container; in other words, basically any systemd/upstart/sysv unit/service or container that can be started and enabled with the systemd and service commands, that is not part of the desktop/graphical target (such as gdm.service, which thus makes it part of the workstation group), and that is not part of the base/core OS (like device mapper etc.), belongs within the server WG.
JBG
On Mon, 2013-10-28 at 13:32 +0000, "Jóhann B. Guðmundsson" wrote:
On 10/28/2013 12:55 PM, Stephen Gallagher wrote:
== My initial thoughts == I am open to counter-arguments, naturally.
I'm just going straight to the overlapping issues we have between the WG's as in we need to establish the fundamental approach of which applications belong to our group.
Basically where I stand any application that runs daemon/service as in it's an application (or set of applications) that runs in the background waiting to be used, or carrying out essential tasks on an pyshical/vm or in container or in other words basically it's an systemd/upstart/sysv unit/service or an container that can be started and enable with the systemd and service commands and is not part of the desktop/graphical target ( such as gdm.service which thus makes it part of the workstation group ), as well is not part of the base/coreOS ( like device mapper etc ) it belongs within the server WG.
I tentatively agree, although I guess there may be desktop-oriented daemons we may not care about. Say a desktop-oriented backup daemon, that is sort of single user or anyway ill-suited for a multi-user server.
Also should we care much about Graphical UIs ? Or should we freeze early and maintain whatever version was considered stable in the Desktop WG at the start of the cycle ? And who is going to maintain it if we do so and happen to have a longer term cycle than the desktop ?
Simo.
On 10/28/2013 01:48 PM, Simo Sorce wrote:
On Mon, 2013-10-28 at 13:32 +0000, "Jóhann B. Guðmundsson" wrote:
On 10/28/2013 12:55 PM, Stephen Gallagher wrote:
== My initial thoughts == I am open to counter-arguments, naturally.
I'm just going straight to the overlapping issues we have between the WG's as in we need to establish the fundamental approach of which applications belong to our group.
Basically where I stand any application that runs daemon/service as in it's an application (or set of applications) that runs in the background waiting to be used, or carrying out essential tasks on an pyshical/vm or in container or in other words basically it's an systemd/upstart/sysv unit/service or an container that can be started and enable with the systemd and service commands and is not part of the desktop/graphical target ( such as gdm.service which thus makes it part of the workstation group ), as well is not part of the base/coreOS ( like device mapper etc ) it belongs within the server WG.
I tentatively agree, although I guess there may be desktop-oriented daemons we may not care about. Say a desktop-oriented backup daemon, that is sort of single user or anyway ill-suited for a multi-user server.
You never deploy a desktop on a server, so I would say we limit this to a base/core OS, a set of administrative tools, plus a single application and/or application stack, and the way we would deliver the products would be limited to netinstall + kickstart, or something that integrates well with provisioning tools.
JBG
On Mon, 2013-10-28 at 14:11 +0000, "Jóhann B. Guðmundsson" wrote:
On 10/28/2013 01:48 PM, Simo Sorce wrote:
On Mon, 2013-10-28 at 13:32 +0000, "Jóhann B. Guðmundsson" wrote:
On 10/28/2013 12:55 PM, Stephen Gallagher wrote:
== My initial thoughts == I am open to counter-arguments, naturally.
I'm just going straight to the overlapping issues we have between the WG's as in we need to establish the fundamental approach of which applications belong to our group.
Basically where I stand any application that runs daemon/service as in it's an application (or set of applications) that runs in the background waiting to be used, or carrying out essential tasks on an pyshical/vm or in container or in other words basically it's an systemd/upstart/sysv unit/service or an container that can be started and enable with the systemd and service commands and is not part of the desktop/graphical target ( such as gdm.service which thus makes it part of the workstation group ), as well is not part of the base/coreOS ( like device mapper etc ) it belongs within the server WG.
I tentatively agree, although I guess there may be desktop-oriented daemons we may not care about. Say a desktop-oriented backup daemon, that is sort of single user or anyway ill-suited for a multi-user server.
You never deploy a desktop on a server so I would say we would limit this to a base/coreOS a set of administrative tools + a single application and or a application stack and the way we would deliver the products would be something that we would limit to netinstall + ks or something that integrades well with provisioning tools
Sorry I am not sure what you mean here. If you mean the Server image will never have a graphical UI I don't think we are all on the same boat.
Not that I want to install such UI by default, but not all people are comfortable managing headless servers, so the option almost certainly needs to be there.
I think the cloud Product should probably be headless as it is a more specialized thing.
Simo.
On 10/28/2013 02:18 PM, Simo Sorce wrote:
Sorry I am not sure what you mean here. If you mean the Server image will never have a graphical UI I don't think we are all on the same boat.
For the first, we don't have any graphical tools to manage our application stack; secondly, we should not be delivering products that allow for a larger attack surface on servers than need be.
Not that I want to install such UI by default, but not all people are comfortable managing headless servers, so the option almost certainly needs to be there.
Name the desktop applications that exist to manage various server applications?
Let's see if you can count to more than, let's keep that number very low, 5-10, maybe 20 (out of 500-550 server applications or server application stacks).
In reality, if the user is not comfortable managing a headless server, he's not comfortable in the terminal, which in turn excludes the entire desktop.
I think the cloud Product should probably be headless as it is a more specialized thing.
No more than us.
JBG
On 10/28/2013 10:26 AM, "Jóhann B. Guðmundsson" wrote:
On 10/28/2013 02:18 PM, Simo Sorce wrote:
Sorry I am not sure what you mean here. If you mean the Server image will never have a graphical UI I don't think we are all on the same boat.
For the first we dont have any graphical tools to manage our application stack secondly we should not be delivering products that allow for larger attack surface on servers then need be.
Not that I want to install such UI by default, but not all people are comfortable managing headless servers, so the option almost certainly needs to be there.
I agree that there will be situations where an administrator will want to install a GUI on a server (even if it's just because they have one machine in a rack that they use to fix things up when things go sideways).
Here is my expectation:
We expect headless operation to be the norm, and if graphical interaction is needed, it will usually be done remotely via another system, possibly Fedora Workstation. However, we should also keep in mind the high likelihood that people are going to want to administer these systems from Windows, Macintosh or possibly tablet devices. With this in mind, I'd like to suggest that we focus our efforts around web-based applications and scriptable shell commands. (So things like Katello/Foreman, OpenLMI, FreeIPA WebUI and similar). These should all be consumable from any graphical client.
That all being said, I think we really want to coordinate with the Environments/Stacks and Workstation group to hopefully enable the deployment of a desktop environment as part of those stacks. That way, if someone REALLY wants a local GUI for their servers, they need only install the "GNOME Desktop" or "KDE Desktop" bundle. To the best of our ability, this bundle should be self-contained and not require modification of the system atop which it is being installed.
On Mon, 2013-10-28 at 10:44 -0400, Stephen Gallagher wrote:
On 10/28/2013 10:26 AM, "Jóhann B. Guðmundsson" wrote:
On 10/28/2013 02:18 PM, Simo Sorce wrote:
Sorry I am not sure what you mean here. If you mean the Server image will never have a graphical UI I don't think we are all on the same boat.
For the first we dont have any graphical tools to manage our application stack secondly we should not be delivering products that allow for larger attack surface on servers then need be.
Not that I want to install such UI by default, but not all people are comfortable managing headless servers, so the option almost certainly needs to be there.
I agree that there will be situations where an administrator will want to install a GUI on a server (even if it's just because they have one machine in a rack that they use to fix things up when things go sideways).
Not only that, there are some applications that, although they are 'server apps', require a GUI even if just to install them.
Stupid example, but IIRC there are things like game server install programs that would run only with a graphical UI.
I also remember proprietary packages that would have a server component that would need X to install.
Yes stupid things, but you can't discount them.
I am also thinking that installing openoffice in order to do server-side rendering (doc conversions and so on) is going to drag in at least X libraries.
And even just the ability to run GUI + Browser some times may be needed.
Of course you can try to export the browser via X, but it is probably better to have a full desktop you can access locally or via VNC, or whatever will be used with Wayland in the future.
Simo.
On 10/28/2013 02:51 PM, Simo Sorce wrote:
And even just the ability to run GUI + Browser some times may be needed.
Usually you can install lynx, or if you require a more modern browser like Firefox, install it and its dependencies and run it by ssh-ing into the server, firing it up with the no-remote startup option, and configuring the application from there; and/or you can connect to it from your own local desktop (if the port is open for that on the network, that is).
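Roughly (the host name is illustrative), that amounts to something like:

    # run the server's browser over X11 forwarding, no local desktop needed
    ssh -X admin@server.example.com firefox -no-remote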
Still, there is no hard requirement for a full-blown DE to be installed on the server, so you need to count the actual applications that provide a GUI (not web UI) frontend to the server applications.
JBG
On 10/28/2013 02:44 PM, Stephen Gallagher wrote:
We expect headless operation to be the norm, and if graphical interaction is needed, it will usually be done remotely via another system, possibly Fedora Workstation. However, we should also keep in mind the high likelihood that people are going to want to administer these systems from Windows, Macintosh or possibly tablet devices. With this in mind, I'd like to suggest that we focus our efforts around web-based applications and scriptable shell commands.
I disagree strongly and argue that our focus should be strictly on the transitioning process from packaged application to a product.
Once we have defined that process we can start transitioning applications, making the process depend on the relevant application; requiring an application to have a web frontend before doing so is pure nonsense.
JBG
On Mon, 2013-10-28 at 14:58 +0000, "Jóhann B. Guðmundsson" wrote:
On 10/28/2013 02:44 PM, Stephen Gallagher wrote:
We expect headless operation to be the norm, and if graphical interaction is needed, it will usually be done remotely via another system, possibly Fedora Workstation. However, we should also keep in mind the high likelihood that people are going to want to administer these systems from Windows, Macintosh or possibly tablet devices. With this in mind, I'd like to suggest that we focus our efforts around web-based applications and scriptable shell commands.
I disagree strongly and argue that our focus should be strictly on the transitioning process from packaged application to a product.
Once we have defined that process we can start transitioning applications, making the process depend on the relevant application; requiring an application to have a web frontend before doing so is pure nonsense.
Jóhann, I always welcome your input, but stop using phrases like "pure nonsense".
We are here to discuss technical details, you may not agree with someone, in which case an explanation of why you do not agree is important. But saying something is "pure nonsense" is not an explanation and is not useful to convey your position.
Simo.
On 10/28/2013 03:13 PM, Simo Sorce wrote:
Jóhann, I always welcome your input, but stop using phrases like "pure nonsense".
We are here to discuss technical details, you may not agree with someone, in which case an explanation of why you do not agree is important. But saying something is "pure nonsense" is not an explanation and is not useful to convey your position.
Our transitioning process needs to be able to cover 500+ applications or so (or in other words, be as generic as possible), so it obviously cannot depend on the existence of a web frontend; otherwise we would be excluding 99% of those server applications.
JBG
On 10/28/2013 11:25 AM, "Jóhann B. Guðmundsson" wrote:
On 10/28/2013 03:13 PM, Simo Sorce wrote:
Jóhann, I always welcome your input, but stop using phrases like "pure nonsense".
We are here to discuss technical details, you may not agree with someone, in which case an explanation of why you do not agree is important. But saying something is "pure nonsense" is not an explanation and is not useful to convey your position.
Our transitioning process needs to be able to cover 500+ applications or so (or in other words, be as generic as possible), so it obviously cannot depend on the existence of a web frontend; otherwise we would be excluding 99% of those server applications.
I'm not sure I agree with "cover 500+ applications". I'm not convinced (yet) that our responsibility is to be shipping the full set of server applications currently available in the greater Fedora universe.
I'd like for us to be focusing on a *platform* and a set of standard, visible APIs and working with the Base Design and Environments/Stacks groups to have service packages treated similarly to "apps" in other operating systems. We ourselves don't necessarily need to do all of the porting to accommodate this (though we will probably want to select a group of high-value servers that we use as examples, such as Apache HTTPD and BIND).
Also, I'd like for us to try to manage this separation so that we can allow our consumers to pick and choose which server they actually want, rather than necessarily the freshest upstream bits. Those fresh copies *must* be available (and probably the default if not otherwise specified), but it would be REALLY nice to be able to hang onto MyServer 2.4 after 3.0 comes out if your other applications aren't ready for it.
On 10/28/2013 03:32 PM, Stephen Gallagher wrote:
I'm not sure I agree with "cover 500+ applications". I'm not convinced (yet) that our responsibility is to be shipping the full set of server applications currently available in the greater Fedora universe.
Think of it this way...
If you want people to actually want to package and maintain server applications in the distribution, and have them work towards making those applications "product ready" so they can be presented, advertised, and made available as a "product", then the answer to that question is yes, we want to strive to cover 500+ applications.
If you don't want people to contribute to Fedora by not giving them that option, or just want to make products out of what RH maintains, or otherwise just want to put competing products at a disadvantage (think 389ds vs openldap, xen vs kvm, etc.) and put the community in endless dispute because you effectively decided to favour one application and its maintainers over another, then the answer to that question is no, we don't want to strive to cover 500+ applications.
If I as an administrator wanted to deploy Fedora plus some server application in the cloud, in a container, or simply on bare metal, it would be a) something I was familiar with or b) something that got decided in a general corporate meeting. It would not be a "product" that we (or some other entity within our community) decided on for me; it would be something that suited my or my corporation's needs and fit my infrastructure requirements, not what they thought my needs were or what fitted their infrastructure requirements. And in that context, having a larger product portfolio would be better, would it not...
JBG
On Mon, Oct 28, 2013 at 5:06 PM, "Jóhann B. Guðmundsson" johannbg@gmail.com wrote:
If you want people to actually want to package and maintain server applications in the distribution and have them work towards making them "product ready" and be presented and advertised and made available as an "product" the answer to that question is yes we want to strive to cover 500+ applications.
The way I think about it, we should strive to have 500+ "applications" available _for_ the Server (whether they are obscure Open Source services, or custom company-only functionality developed internally) ...
If you dont want people to contribute to Fedora by not giving them that option or just make products out of what RH maintains or otherwise just want to put competing products at disadvantage ( think 389ds vs openldap, xen vs kvm etc )
... but the Server shouldn't ship 500 "services" as an integrated part of the product (are there even that many services to provide?). Regarding the "competing products", I'd go as far as to say that the Server should give the users a "good LDAP server" without exposing which upstream project is internally providing the functionality - even possibly switching the upstream projects on an upgrade if one of them started to fall behind. Mirek
On 10/29/2013 09:58 PM, Miloslav Trmač wrote:
On Mon, Oct 28, 2013 at 5:06 PM, "Jóhann B. Guðmundsson" johannbg@gmail.com wrote:
If you want people to actually want to package and maintain server applications in the distribution and have them work towards making them "product ready" and be presented and advertised and made available as an "product" the answer to that question is yes we want to strive to cover 500+ applications.
The way I think about it, we should strive to have 500+ "applications" available _for_ the Server (whether they are obscure Open Source services, or custom company-only functionality developed internally) ...
Agreed we need to start small then gradually extend.
If you dont want people to contribute to Fedora by not giving them that option or just make products out of what RH maintains or otherwise just want to put competing products at disadvantage ( think 389ds vs openldap, xen vs kvm etc )
... but the Server shouldn't ship 500 "services" as an integrated part of the product
I'm not sure what you mean by that. I'm not talking about 500 services installed and available, but rather a core/base OS with one or more services.
(are there even that many services to provide?).
Yes, and around 100 unmigrated ones.
Regarding the "competing products", I'd go as far as to say that the Server should give the users a "good LDAP server" without exposing which upstream project is internally providing the functionality - even possibly switching the upstream projects on an upgrade if one of them started to fall behind.
That will never work.
JBG
Without trying to directly reply to anyone in particular, here are my primary interests:
* Container host management, including at scale (10k containers per host with container hibernation and migration capability)
* Working with upstream and packagers to have first-class systemd support, including native units and outsourcing privilege dropping, logging, and socket listening to systemd. Maximize use of isolation for capabilities, privileges, and namespaces. Continually raise the bar for uniformity of configuration and management tools.
* No packaged config shipping to /etc (system and services should use defaults with empty /etc)
Ideally, this would mean services would ship in a way packaged to install and run from a container, much like BIND sort of does now with chroot. This model is useful for security, traditional multi-purpose servers, and high-density compute usage.
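To make the systemd part of that concrete, here is a minimal sketch (purely illustrative; the "foo" daemon, user, and port are made up) of what a service looks like when socket listening, privilege dropping, and logging are all left to systemd rather than implemented in the daemon itself:

  # foo.socket -- systemd owns the listening socket
  [Unit]
  Description=Socket for the hypothetical foo service

  [Socket]
  ListenStream=8080

  [Install]
  WantedBy=sockets.target

  # foo.service -- started on demand via the socket above
  [Unit]
  Description=Hypothetical foo service

  [Service]
  ExecStart=/usr/bin/foo-daemon
  # privilege dropping is systemd's job; the daemon never runs as root
  User=foo
  Group=foo
  # stdout/stderr go straight to the journal; no in-daemon log files
  StandardOutput=journal
  StandardError=journal
  # basic isolation knobs
  PrivateTmp=yes
  NoNewPrivileges=yes

With that split, the package has nothing it needs to drop into /etc; the admin only overrides the shipped defaults when actually necessary.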
On 10/30/2013 12:23 AM, David Strauss wrote:
Without trying to directly reply to anyone in particular, here are my primary interests:
- Container host management, including at scale (10k containers per
host with container hibernation and migration capability)
- Working with upstream and packagers to have first-class systemd
support, including native units and outsourcing privilege dropping, logging, and socket listening to systemd. Maximize use of isolation for capabilities, privileges, and namespaces. Continually raise the bar for uniformity of configuration and management tools.
- No packaged config shipping to /etc (system and services should use
defaults with empty /etc)
Ideally, this would mean services would ship in a way packaged to install and run from a container, much like BIND sort of does now with chroot. This model is useful for security, traditional multi-purpose servers, and high-density compute usage.
Agreed. However, we need the ability to assign exclusive physical interfaces to a container, to create a dynamic link aggregation of several NICs in the parent/global container root and allow each container to create virtual interfaces on top of that aggregation, and to assign a virtual NIC to a container directly over some physical NIC on the global/parent container server (which of course requires networking to belong to systemd before we do so).
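Just to sketch what that could look like with the tools we have today, before any of it is integrated into systemd (plain iproute2; the interface names and $CONTAINER_PID are placeholders, and the bond mode is only an example):

  # aggregate two physical NICs in the host / global container root
  ip link add bond0 type bond mode 802.3ad
  ip link set eth0 down
  ip link set eth0 master bond0
  ip link set eth1 down
  ip link set eth1 master bond0
  ip link set bond0 up

  # create a virtual interface on top of the aggregation for one container
  ip link add link bond0 name mv0 type macvlan mode bridge

  # hand the virtual NIC to that container's network namespace
  ip link set mv0 netns $CONTAINER_PID

  # a physical NIC can be assigned exclusively to a container the same way
  ip link set eth2 netns $CONTAINER_PID

The point would be to have the container/init tooling drive this for us, instead of every site scripting it by hand.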
Once that has been achieved, we can start looking into supporting HA containers and other things, as well as properly integrating this into the cloud.
We also need to kill /etc/sysconfig (or deliver it empty, as you proposed) and rework all the units in the distribution with containers and virtualisation, as well as hardware activation, in mind.
JBG
On 10/30/2013 12:23 AM, David Strauss wrote:
Without trying to directly reply to anyone in particular, here are my primary interests:
- Container host management, including at scale (10k containers per
host with container hibernation and migration capability)
- Working with upstream and packagers to have first-class systemd
support, including native units and outsourcing privilege dropping, logging, and socket listening to systemd. Maximize use of isolation for capabilities, privileges, and namespaces. Continually raise the bar for uniformity of configuration and management tools.
- No packaged config shipping to /etc (system and services should use
defaults with empty /etc)
Ideally, this would mean services would ship in a way packaged to install and run from a container, much like BIND sort of does now with chroot. This model is useful for security, traditional multi-purpose servers, and high-density compute usage.
Agreed. We also need the ability to assign exclusive physical interfaces to a container, to create a dynamic link aggregation of several NICs in the parent/global container root and allow each container to create virtual interfaces on top of that aggregation, and to assign a virtual NIC to a container directly over some physical NIC on the global/parent container server.
Once that has been achieved, we can start looking into supporting HA containers and other things (we will need to rework all the units in the distribution, as well as integrate network handling into systemd).
JBG
On Mon, Oct 28, 2013 at 4:32 PM, Stephen Gallagher sgallagh@redhat.com wrote:
On 10/28/2013 11:25 AM, "Jóhann B. Guðmundsson" wrote:
Our transitioning process needs to be able to cover 500+ applications or so (in other words, be as generic as possible), so it obviously cannot depend on the existence of a web frontend; otherwise we would be excluding 99% of those server applications.
I'm not sure I agree with "cover 500+ applications". I'm not convinced (yet) that our responsibility is to be shipping the full set of server applications currently available in the greater Fedora universe.
I'd like for us to be focusing on a *platform* and a set of standard, visible APIs and working with the Base Design and Environments/Stacks groups to have service packages treated similarly to "apps" in other operating systems. We ourselves don't necessarily need to do all of the porting to accommodate this (though we will probably want to select a group of high-value servers that we use as examples, such as Apache HTTPD and BIND).
I generally agree - though I'd focus more on getting the "high-value servers" working well than on calling ourselves a "platform" - it's far too easy to make a platform that doesn't "work" without noticing when there are no major users of the platform.
Also, I'd like for us to try to manage this separation so that we can allow our consumers to pick and choose which server they actually want, rather than necessarily the freshest upstream bits.
For the "high-value" servers (which provide external functionality, not an application API), I strongly disagree. We should be managing the transitions (no functionality dropped, all configuration migrated) so that the user will never want to use the older version.
For APIs / runtimes, yes, we have no choice but to provide the older versions when the upstreams make ABI-incompatible changes. Mirek
On Mon, Oct 28, 2013 at 3:26 PM, "Jóhann B. Guðmundsson" johannbg@gmail.com wrote:
On 10/28/2013 02:18 PM, Simo Sorce wrote:
Sorry, I am not sure what you mean here. If you mean the Server image will never have a graphical UI, I don't think we are all in the same boat.
Firstly, we don't have any graphical tools to manage our application stack; secondly, we should not be delivering products that allow for a larger attack surface on servers than need be.
Not that I want to install such UI by default, but not all people are comfortable managing headless servers, so the option almost certainly needs to be there.
Name the desktop applications that exist to manage various server applications?
Historical note... about 15 years ago, Red Hat spent a lot of development effort on writing or integrating configuration GUIs, and they were available for a large part of the then-available functionality - even reimplemented in multiple generations (Tk based tools, linuxconf, redhat-config-*).
The latest generation, redhat-config-*, were, IIRC, written over a comparatively short period of time (ISTR they all happened within a year!), and covered basically all of the major server functions at the time (networking, httpd, bind, mail, ...).
This has been done in the past, and this could be done again, if we really tried.
(Historical corrections very much welcome. Also, while one lesson from the history is that "it can be done", another is "it has been done and it then has been abandoned" - it would be interesting to know _why_ the tools have been abandoned. I haven't been working at Red Hat at that time, so I don't know the reasons.) Mirek
On 10/29/2013 09:48 PM, Miloslav Trmač wrote:
This has been done in the past, and this could be done again, if we really tried.
Why waste our time with something that has never worked in the past?
Why focus on desktop on servers when Microsoft, the OS vendor you seem to be so desperately trying to imitate, is moving away from having a DE running on their servers?
Our primary task, as I see it, is deployment/HA/scalability/reliability, while we leave the desktop applications to the Workstation WG and the web frontend to the Cloud WG...
As well as extend the lifetime of the underlying Fedora instance hosting all the services.
JBG
On Wed, Oct 30, 2013 at 12:20:56AM +0000, "Jóhann B. Guðmundsson" wrote:
Why focus on desktop on servers
I think the idea of focusing on target users is important here. Will having a GUI for servers help us grow Fedora Server use? Is it where we should put effort over other things that would grow it *more*? Who are the users, and, short of market research, do we have the collective expertise to make a reasonable case?
On 10/30/2013 12:27 AM, Matthew Miller wrote:
On Wed, Oct 30, 2013 at 12:20:56AM +0000, "Jóhann B. Guðmundsson" wrote:
Why focus on desktop on servers
I think the idea of focusing on target users is important here.
Which is server administrators not desktop users.
Will having a GUI for servers help us grow Fedora Server use?
No, it won't. However, a locally installed application which you use to connect to your server remotely, to manage the server application you have running there, might.
Is it where we should put effort over other things that would grow it *more*?
No, it's not; we leave that up to the Workstation WG.
Who are the users.
Administrators
JBG
On Wed, Oct 30, 2013 at 12:51:48AM +0000, "Jóhann B. Guðmundsson" wrote:
Why focus on desktop on servers
I think the idea of focusing on target users is important here.
Which is server administrators not desktop users.
Absolutely.
Will having a GUI for servers help us grow Fedora Server use?
No, it won't. However, a locally installed application which you use to connect to your server remotely, to manage the server application you have running there, might.
Personally, and from my own experience, I agree with you. But my experience is as a Linux sysadmin in Unix/Linux-heavy university environments. Maybe there are places *outside* of my experience where a Fedora Server meant to appeal to desktop users would be very successful. I'm skeptical, but I don't know.
On 10/30/2013 01:06 AM, Matthew Miller wrote:
On Wed, Oct 30, 2013 at 12:51:48AM +0000, "Jóhann B. Guðmundsson" wrote:
Why focus on desktop on servers
I think the idea of focusing on target users is important here.
Which is server administrators not desktop users.
Absolutely.
Will having a GUI for servers help us grow Fedora Server use?
No, it won't. However, a locally installed application which you use to connect to your server remotely, to manage the server application you have running there, might.
Personally, and from my own experience, I agree with you. But my experience is as a Linux sysadmin in Unix/Linux-heavy university environments. Maybe there are places *outside* of my experience where a Fedora Server meant to appeal to desktop users would be very successful. I'm skeptical, but I don't know.
You don't have to know. Plus, we administrators (yes, I've been doing this as my day job for the last 10 years or so) work from deployment stories from the field, not marketing speeches.
If the need for graphical applications to manage your server or server service, either locally or remotely by connecting to it, was really there, an application to do so would have emerged by now, and one most likely already exists (or does on Android), before we can even figure out the process of transitioning a server application into a server product.
JBG
On Wed, Oct 30, 2013 at 1:20 AM, "Jóhann B. Guðmundsson" johannbg@gmail.com wrote:
On 10/29/2013 09:48 PM, Miloslav Trmač wrote:
This has been done in the past, and this could be done again, if we really tried.
Why waste our time with something that has never worked in the past?
Because the past has been different.
In the past, even most of the "simple" servers (... well, even the underlying multi-processing OS kernel, and the TCP/IP stack) were a Big Deal, very costly proprietary software. In the past, a company could have saved money by not buying that proprietary software and instead hiring a specialized administrator to deal with the Open Source alternative.
Nowadays, every server product will give you a HTTP server, a DNS server, a DHCP server, and the like, for a fraction of an admin's salary. Using the free product is no longer that good a deal if the software requires extra manual effort or extra training to manage (or googling to find the right man page).
Why focus on desktop on servers when Microsoft, the OS vendor you seem to be so desperately trying to imitate, is moving away from having a DE running on their servers?
A "desktop" is not necessary: What about (completely fictional) a terminal login prompt on one virtual console, a browser on another console, and on a third console a GUI combining immediate status display (CPU usage at 50%; your hard drive is failing! RAID out of sync!) with a login prompt to access a configuration GUI? No window manager or desktop is necessary to get this.
Our primary task, as I see it, is deployment/HA/scalability/reliability, while we leave the desktop applications to the Workstation WG and the web frontend to the Cloud WG...
Yes; a good UI is a part of the deployment/reliability story. The "desktop application" for managing a server is really out of scope for the Workstation WG as proposed. The management interface and the underlying API would be much better implemented by the same group. Mirek
On Wed, 2013-10-30 at 18:11 +0100, Miloslav Trmač wrote:
A "desktop" is not necessary: What about (completely fictional) a terminal login prompt on one virtual console, a browser on another console, and on a third console a GUI combining immediate status display (CPU usage at 50%; your hard drive is failing! RAID out of sync!) with a login prompt to access a configuration GUI? No window manager or desktop is necessary to get this.
Until you want non-latin input methods, a11y, etc...all that stuff around the edges that isn't "window management" but is actually really important. It's quite interesting to look at the evolution of ChromeOS in this aspect, and what they present in the "browser" versus what goes in the "chrome".
On Tue, Oct 29, 2013 at 10:48:09PM +0100, Miloslav Trmač wrote:
The latest generation, redhat-config-*, were, IIRC, written over a comparatively short period of time (ISTR they all happened within a year!), and covered basically all of the major server functions at the time (networking, httpd, bind, mail, ...).
These were all renamed to system-config-* in the early days of Fedora, and in fact they're still around (if, as you note, largely actually abandoned code).
(Historical corrections very much welcome. Also, while one lesson from the history is that "it can be done", another is "it has been done and it then has been abandoned" - it would be interesting to know _why_ the tools have been abandoned. I haven't been working at Red Hat at that time, so I don't know the reasons.)
I wasn't either and I equally don't know, but I can tell you why they weren't useful to me at the time:
1) No handling of multiple machines (or even remote connections to a single machine)
2) Little cohesive UX design (this did get iteratively better)
3) Some of them didn't work very well (*cough* samba)
4) Generally, you had to commit to using them and never touching the config files by hand
5) I was operating in an environment with Red Hat Linux, Debian, Other-Linux-Flavor-Of-The-Day, BSDI, NetBSD, Solaris, SunOS, IRIX, Tru64, and, um, VMS. All the various single-vendor GUIs just brought more pain.
I would certainly add "no API" to the list of complaints _now_, but I'm not ashamed to admit that that wasn't on the list of sysadmin concerns a decade ago.
Some of the above might be instructional, some of it might just be stories about the olden days. :)
Miloslav Trmač (mitr@volny.cz) said:
Historical note... about 15 years ago, Red Hat spent a lot of development effort on writing or integrating configuration GUIs, and they were available for a large part of the then-available functionality - even reimplemented in multiple generations (Tk based tools, linuxconf, redhat-config-*).
Ugh, linuxconf. Don't remind me.
The latest generation, redhat-config-*, were, IIRC, written over a comparatively short period of time (ISTR they all happened within a year!), and covered basically all of the major server functions at the time (networking, httpd, bind, mail, ...).
This has been done in the past, and this could be done again, if we really tried.
(Historical corrections very much welcome. Also, while one lesson from the history is that "it can be done", another is "it has been done and it then has been abandoned" - it would be interesting to know _why_ the tools have been abandoned. I haven't been working at Red Hat at that time, so I don't know the reasons.)
They were abandoned, more or less:
1) Because the hard core admins didn't use them anyway; they would either edit the configs by hand (old days) or just push out their configs with puppet/chef/ansible/cfengine/salt (these days)
2) Because the less hard core admins had enough other issues that this wasn't going to win them over
3) Because we were unable to create a larger upstream community that allowed us to drive development forward
4) Because chasing all the options that could be configured for something like this is actually somewhat significant work
5) Because there wasn't any encapsulation for common automation - it was just separate 'click here; do this' sort of tools
Bill
On Mon, Oct 28, 2013 at 3:18 PM, Simo Sorce simo@redhat.com wrote:
Sorry, I am not sure what you mean here. If you mean the Server image will never have a graphical UI, I don't think we are all in the same boat.
Not that I want to install such UI by default, but not all people are comfortable managing headless servers, so the option almost certainly needs to be there.
<flame_proof_suit> The UNIX CLI is, over time, less and less an acceptable interface.
It's good and efficient at what it has been originally designed for (manipulating line-oriented text by experts - all those two- and three-character commands and little languages).
It's not particularly great at interactive use when compared to an efficient GUI (the command names and options inconsistent, many and long; the configuration file format is even more inconsistent than the command names and options; to use either of them one has to either remember a lot of option names / config directives, or to constantly read documentation).
It's not a particularly good programming environment either - no IDE, no type checking / lint / code completion.
There are only two major interfaces worse than the UNIX CLI: A GUI that was not designed to be efficient, and an application with a GUI that doesn't have any programmable interface. </flame_proof_suit>
I think we should, over time, move towards a "G"UI (whether local or web is an implementation detail in this) for one-time use, and an actual API used by an actual, current-era, programming language, for automated use. The CLI will obviously stay, both because many users are comfortable with it, and because we can't replace it during this decade. Mirek
On 10/29/2013 05:41 PM, Miloslav Trmač wrote:
On Mon, Oct 28, 2013 at 3:18 PM, Simo Sorce simo@redhat.com wrote:
Sorry, I am not sure what you mean here. If you mean the Server image will never have a graphical UI, I don't think we are all in the same boat.
Not that I want to install such UI by default, but not all people are comfortable managing headless servers, so the option almost certainly needs to be there.
<flame_proof_suit> The UNIX CLI is, over time, less and less an acceptable interface.
It's good and efficient at what it has been originally designed for (manipulating line-oriented text by experts - all those two- and three-character commands and little languages).
It's not particularly great at interactive use when compared to an efficient GUI (the command names and options inconsistent, many and long; the configuration file format is even more inconsistent than the command names and options; to use either of them one has to either remember a lot of option names / config directives, or to constantly read documentation).
It's not a particularly good programming environment either - no IDE, no type checking / lint / code completion.
There are only two major interfaces worse than the UNIX CLI: A GUI that was not designed to be efficient, and an application with a GUI that doesn't have any programmable interface.
</flame_proof_suit>
I think we should, over time, move towards a "G"UI (whether local or web is an implementation detail in this) for one-time use, and an actual API used by an actual, current-era, programming language, for automated use. The CLI will obviously stay, both because many users are comfortable with it, and because we can't replace it during this decade.
I agree with this completely, and it's one of the principal drivers of the OpenLMI project[1] (full disclosure: I'm heavily involved in this effort).
Under the hood, we have abstracted large amounts of the underlying subsystems of the Linux system into a set of CIM object models and exposed them as a stable API. We've then gone and built a scripting environment (using the python language; nothing new to learn like PowerShell) and cli "meta-command" builder[2].
Right now, we don't have any public GUI consuming this interface because our research among Red Hat's customers strongly indicated that real-world administrators care more about a scriptable interface than they do a GUI. That's not to say that there is zero interest in such a GUI, but that it's secondary to getting day-to-day, repetitive tasks done.
Also, to head off some of the NIH concerns that may be in your mind after reading the earlier discussion about the system-config-* UIs, the OpenLMI project is currently being developed jointly by several companies, led by Red Hat but with contributors from Dell, SUSE and others.
[1] http://www.openlmi.org [2] http://www.openlmi.org/scriptdocs
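Just to give a flavour of what "scriptable" can look like from the Python side, here is a rough sketch. This is not lmishell itself, just a generic CIM/WBEM query using the pywbem module; the hostname, credentials, and the exact class/property names are placeholders that would need to match whatever providers are actually installed:

  import pywbem

  # connect to the remote CIMOM (URL and credentials are made up)
  conn = pywbem.WBEMConnection('https://server.example.com:5989',
                               ('admin', 'secret'),
                               default_namespace='root/cimv2')

  # list service instances exposed by the provider and whether they are running
  for svc in conn.EnumerateInstances('LMI_Service'):
      print("%s started=%s" % (svc['Name'], svc['Started']))

The lmishell layer then wraps this kind of plumbing in something much friendlier, which is the whole point.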
On 10/30/2013 12:47 PM, Stephen Gallagher wrote:
I agree with this completely, and it's one of the principal drivers of the OpenLMI project[1] (full disclosure: I'm heavily involved in this effort).
Under the hood, we have abstracted large amounts of the underlying subsystems of the Linux system into a set of CIM object models and exposed them as a stable API. We've then gone and built a scripting environment (using the python language; nothing new to learn like PowerShell) and cli "meta-command" builder[2].
Right now, we don't have any public GUI consuming this interface because our research among Red Hat's customers strongly indicated that real-world administrators care more about a scriptable interface than they do a GUI. That's not to say that there is zero interest in such a GUI, but that it's secondary to getting day-to-day, repetitive tasks done.
Also, to head off some of the NIH concerns that may be in your mind after reading the earlier discussion about the system-config-* UIs, the OpenLMI project is currently being developed jointly by several companies, led by Red Hat but with contributors from Dell, SUSE and others.
I'm not foreseeing anyone around these parts deploying and using the CIM behemoth, after attending the presentation of it at DevConf Brno this year.
Maybe too much learning curve and training for too little infrastructure to make it worthwhile played a part in reaching that conclusion, as well as the fact that, I think, it was mentioned in that presentation that there were cross-OS interoperability issues with it (if memory serves me correctly).
JBG
On 10/30/2013 09:30 AM, "Jóhann B. Guðmundsson" wrote:
On 10/30/2013 12:47 PM, Stephen Gallagher wrote:
I agree with this completely, and it's one of the principal drivers of the OpenLMI project[1] (full disclosure: I'm heavily involved in this effort).
Under the hood, we have abstracted large amounts of the underlying subsystems of the Linux system into a set of CIM object models and exposed them as a stable API. We've then gone and built a scripting environment (using the python language; nothing new to learn like PowerShell) and cli "meta-command" builder[2].
Right now, we don't have any public GUI consuming this interface because our research among Red Hat's customers strongly indicated that real-world administrators care more about a scriptable interface than they do a GUI. That's not to say that there is zero interest in such a GUI, but that it's secondary to getting day-to-day, repetitive tasks done.
Also, to head off some of the NIH concerns that may be in your mind after reading the earlier discussion about the system-config-* UIs, the OpenLMI project is currently being developed jointly by several companies, led by Red Hat but with contributors from Dell, SUSE and others.
I'm not foreseeing anyone around these parts deploying and using the CIM behemoth, after attending the presentation of it at DevConf Brno this year.
CIM was a behemoth, but we think that the new scripting interface we've built atop it simplifies it greatly. We've made a lot of progress on it in the last year.
Maybe too much learning curve and training for too little infrastructure to make it worthwhile played a part in reaching that conclusion, as well as the fact that, I think, it was mentioned in that presentation that there were cross-OS interoperability issues with it (if memory serves me correctly).
The learning curve of CIM is very high, but we're trying very hard to solve that problem in the lmishell python interface. It's significantly simplified, while still having the ability to talk to all of the complex capabilities as well.
As far as cross-os compatibility, the only concern is that older OSes (such as RHEL 6) will only get a subset of the capabilities because we use more modern interfaces. Some work is being done to backport a very large subset, though.
On 10/30/2013 01:46 PM, Stephen Gallagher wrote:
As far as cross-os compatibility, the only concern is that older OSes (such as RHEL 6) will only get a subset of the capabilities because we use more modern interfaces. Some work is being done to backport a very large subset, though.
I was referring to using CIM/OpenLMI with other OSes (Windows, *BSD, AIX, Solaris, etc.), or in other words to it being unusable in mixed environments, thus not making it suitable for deployments.
JBG
On Wed, 30 Oct 2013 14:12:30 +0000 "Jóhann B. Guðmundsson" johannbg@gmail.com wrote:
I was referring to using CIM/OpenLMI with other OSes (Windows, *BSD, AIX, Solaris, etc.), or in other words to it being unusable in mixed environments, thus not making it suitable for deployments.
I'm not quite sure what you mean by this. CIM/WBEM is present in Windows as well as in other systems (server firmware, network switches, NAS, etc.). And one of the aims of OpenLMI is to create tooling that would be universally usable across platforms. Not that we would be there yet but moving forward quite well. We want Fedora to be the first supported OS/platform but definitely not the only one.
Thanks and regards,
"Jóhann B. Guðmundsson" (johannbg@gmail.com) said:
You never deploy a desktop on a server, so I would say we would limit this to a base/core OS, a set of administrative tools, plus a single application and/or an application stack, and the way we would deliver the products would be limited to netinstall + kickstart, or something that integrates well with provisioning tools.
I wouldn't say "never", given that somewhere around 20% of the users of a Fedora-downstream distribution do install a desktop on their server, as of a random sampling of a user subset done a while back.
Given, that sample should be followed up on to see *why* they do it (unfamiliarity with CLI/used to Windows, bad defaults in installer, other reasons?). In fact, if we're being forward looking where *everything* is deployed at large-scale, then this number should be expected to go down as fewer people are doing single-digit deployments. But the need may still exist for some people.
Bill
On 10/28/2013 02:15 PM, Bill Nottingham wrote:
"Jóhann B. Guðmundsson" (johannbg@gmail.com) said:
You never deploy a desktop on a server, so I would say we would limit this to a base/core OS, a set of administrative tools, plus a single application and/or an application stack, and the way we would deliver the products would be limited to netinstall + kickstart, or something that integrates well with provisioning tools.
I wouldn't say "never", given that somewhere around 20% of the users of a Fedora-downstream distribution do install a desktop on their server, as of a random sampling of a user subset done a while back.
20% actually sounds a bit low, given what we see on the CentOS side.
Given, that sample should be followed up on to see *why* they do it (unfamiliarity with CLI/used to Windows, bad defaults in installer, other reasons?). In fact, if we're being forward looking where *everything* is deployed at large-scale, then this number should be expected to go down as fewer people are doing single-digit deployments. But the need may still exist for some people.
We also see this as a dependency requirement for various 3rd-party software. For example, Oracle, some print server software, ArcGIS, and several FlexLM-integrated apps all want at least some portion of a GUI to run. Not all of them handle a forwarded display properly. Some link directly to Firefox to display their documentation.
On 10/28/2013 07:15 PM, Bill Nottingham wrote:
"Jóhann B. Guðmundsson" (johannbg@gmail.com) said:
You never deploy a desktop on a server, so I would say we would limit this to a base/core OS, a set of administrative tools, plus a single application and/or an application stack, and the way we would deliver the products would be limited to netinstall + kickstart, or something that integrates well with provisioning tools.
I wouldn't say "never", given that somewhere around 20% of the users of a Fedora-downstream distribution do install a desktop on their server, as of a random sampling of a user subset done a while back.
Given, that sample should be followed up on to see *why* they do it (unfamiliarity with CLI/used to Windows, bad defaults in installer, other reasons?). In fact, if we're being forward looking where *everything* is deployed at large-scale, then this number should be expected to go down as fewer people are doing single-digit deployments. But the need may still exist for some people
Those users can always install their DE of choice first and then the server application. Of course, if a server application depends on the presence of a GUI in some form (or a browser), we ship *that* particular application with a dependency on it; but any third-party stuff is something we should not be worrying about, unless upstream comes to its senses and decides to package it, maintain it, and ship it downstream with us (which makes it our responsibility).
JBG
On 10/28/13 8:55 AM, Stephen Gallagher wrote:
- Come up with standardized mechanisms for centralized monitoring
- Come up with standardized mechanisms for centralized configuration
and management
I think the world has a number of those already. Perhaps making leading tools work well with Fedora (at this point mostly systemd & journald) would be a better goal. I for one know that systemd and puppet aren't the best of friends.
On 10/28/2013 09:56 AM, Jimmy Dorff wrote:
On 10/28/13 8:55 AM, Stephen Gallagher wrote:
- Come up with standardized mechanisms for centralized
monitoring * Come up with standardized mechanisms for centralized configuration and management
I think the world has a number of those already. Perhaps making leading tools work well with Fedora (at this point mostly systemd & journald) would be a better goal. I for one know that systemd and puppet aren't the best of friends.
Sorry, that may have been unclear. I didn't mean "develop new ones", I meant "pick the ones that are our official solutions and make those work really well".
On 10/28/2013 02:18 PM, Stephen Gallagher wrote:
On 10/28/2013 09:56 AM, Jimmy Dorff wrote:
On 10/28/13 8:55 AM, Stephen Gallagher wrote:
- Come up with standardized mechanisms for centralized
monitoring * Come up with standardized mechanisms for centralized configuration and management
I think the world has a number of those already. Perhaps making leading tools work well with Fedora (at this point mostly systemd & journald) would be a better goal. I for one know that systemd and puppet aren't the best of friends.
Sorry, that may have been unclear. I didn't mean "develop new ones", I meant "pick the ones that are our official solutions and make those work really well".
Yes, we need to define criteria and the process for transitioning a server application we ship into a "product" that we add to our product portfolio, market, and advertise.
JBG
On 10/28/2013 10:47 AM, Jimmy Dorff wrote:
On 10/28/13 8:55 AM, Stephen Gallagher wrote:
- Provide simple setup of a file-server (on par with Windows).
At the risk of "bloat"...
* iSCSI target and initiator
* NFS server and client
Sure, while they weren't specifically called out, I think we want to make sure those features are available. My phrasing may have made it appear that I was thinking of Samba (since I mentioned Windows), but I just want to be sure we have a comparable offering.
On Mon, Oct 28, 2013 at 1:55 PM, Stephen Gallagher sgallagh@redhat.com wrote:
I am also not committing us to covering any or all of these targets.
The question of how much energy we can devote and/or direct towards any of these goals is, in a sense, critical to setting the right ambitions. But we'll not know until we try...
- Come up with standardized mechanisms for centralized monitoring
- Come up with standardized mechanisms for centralized configuration
and management
I think these two are really important. I'd like the Server to be as close to zero "bullshit maintenance" as possible, automating everything that is automatable. E.g.:
* The administrator doesn't have to deal with N different configuration file formats, and N different semantics of how /usr/*, /etc/*, /usr/*.d interact.
* The administrator isn't required to type directives into a file and to deal with the fallout of a typo in the directive name.[1]
* Upgrades within a release always work without human intervention. (=> after we get some experience, they could even be automatic.)
* Upgrades between releases always work without human intervention unless the feature is completely removed (and in that case the user will be told before the upgrade starts). E.g. configuration and file formats are transparently and automatically updated.
* The administrator is never required to set up the same option in two places. (E.g. joining an IPA domain should automatically configure all services to use IPA. Perhaps even have "the services" (see below) preinstalled and allow the user to enable them, instead of install+enable as currently.)
* No alerts by a system that works correctly.
This is obviously impossible right now with the full universe of Open Source services, and with the full configuration possibilities of them. However, we probably should be able to decide on the major functions (HTTP, mail, DNS, ... == "the services" above) and build front-ends with such fully automated semantics for them (something like Yast keeping primary configuration in /etc/sysconfig in a common format, and using that to control the services' natural configuration). With the infrastructure built, other applications that are not "the services" could join.
Even in this more limited format, it would be a long-term project and a non-trivial amount of work. Is this direction achievable and interesting to the WG? Mirek
[1] Yes, this is essentially calling for abolishing schema-less text files as the primary storage mechanism (but not necessarily as a configuration editing mechanism for those who really want to use a text editor and shell to modify them). I'm firmly convinced that we need a common configuration semantics and a common, reliable way to edit the configuration from a tool that is not a general text editor, and we can't get that by taking the union of all features of all existing text configuration file formats.
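To illustrate the footnote with a toy example (the directive names and types are made up, and this is not a proposal for any particular format), the practical difference is that a schema-aware tool can reject a typo or a type error at edit time, which a plain text file never can:

  # a declared schema for one service's configuration
  HTTPD_SCHEMA = {
      "listen_port": int,
      "server_name": str,
      "enable_tls": bool,
  }

  config = {"listen_port": 80, "server_name": "www.example.org", "enable_tls": False}

  def set_option(name, value, schema=HTTPD_SCHEMA, cfg=config):
      if name not in schema:
          raise KeyError("unknown directive: %r" % name)      # catches typos immediately
      if not isinstance(value, schema[name]):
          raise TypeError("%s expects %s" % (name, schema[name].__name__))
      cfg[name] = value

  set_option("listen_port", 8080)        # fine
  try:
      set_option("listen_prot", 8080)    # typo: rejected instead of silently ignored
  except KeyError as error:
      print(error)

Whatever the real mechanism ends up being (a database, a structured file with a schema, an API), that edit-time feedback is the property worth having.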
On 10/28/2013 08:55 AM, Stephen Gallagher wrote:
== What are our use-cases? ==
Full disclosure: the use-cases I am listing below come from discussions I have had within Red Hat for what we would like to see in Fedora that we can best build on to become Red Hat Enterprise Linux 8. Please take this in the spirit it is given: I am disclosing what Red Hat wants up front, so we aren't accused of working towards hidden goals. I am also not committing us to covering any or all of these targets. I do believe however that the majority of these use-cases will be beneficial to both Fedora and Red Hat.
- Provide a platform for acting as a node in an OpenStack rack. *
Provide a platform and simple setup for certain infrastructure services, e.g. * FreeIPA Domain Controller * BIND DNS * DHCP * Database server (both free and commercial). * Provide simple setup of a file-server (on par with Windows). * Platform for deploying web applications with high-value frameworks. * Ruby on Rails * Django * Turbogears * Node.js * Make Fedora the best platform to deploy JBoss applications. * Come up with standardized mechanisms for centralized monitoring * Come up with standardized mechanisms for centralized configuration and management * Simple enrollment into FreeIPA and Active Directory domains * Provide the best platform for secure application deployment * Isolation of OS from applications * Isolation of applications from each other * Isolation of application users from each other * Management of application resource consumption * Simplify management and deployment.[1] * Deliver the world's best leading edge DevOps platform.
== My initial thoughts == I am open to counter-arguments, naturally.
[1] Ideally, we want a mid-level Microsoft admin to be able to manage Fedora without much learning curve.
There is a lot of content in this thread and much of it has been summarized in https://fedoraproject.org/wiki/Server/Use_Cases
I'd like to bring this discussion back around, because we're going to want to trim this list and get it into shape to put into the PRD very soon.
Please view the wiki link above and let's continue the discussion on the various questions raised there. I'd like to hammer this into a useful document before the next meeting.
On Tue, Dec 17, 2013 at 6:31 PM, Stephen Gallagher sgallagh@redhat.com wrote:
There is a lot of content in this thread and much of it has been summarized in https://fedoraproject.org/wiki/Server/Use_Cases
Please view the wiki link above and let's continue the discussion on the various questions raised there. I'd like to hammer this into a useful document before the next meeting.
So, comments on the draft use cases (mostly in the direction of pruning the list):
1. Do we have any proposal for the list of software that needs to work? If not, should we drop this? Or is this only to enable third-party software?
3. Is "email" valuable in here, or would most users deploy a third-party integrated product or a cloud solution anyway?
5. and 6. might be combined
10. is not a use case
11.-13. seem not to be use cases either
17. container migration is a rather big aspiration. Is it realistic?
18. is not a use case
19. is not a use case (and it's IMHO actively harmful to do more work on "config files", legitimizing and enshrining the mechanism, instead of focusing on the higher-level configuration management interfaces)
Also, https://fedoraproject.org/wiki/Server/Proposals/Server_Roles is something that should not be entirely forgotten (though that list does need some amount of changes, e.g. "failover clustering" is an infrastructure for a role, not a role in itself). Mirek
On fim 19.des 2013 14:56, Miloslav Trmač wrote:
Also, https://fedoraproject.org/wiki/Server/Proposals/Server_Roles is something that should not be entirely forgotten (though that list does need some amount of changes, e.g. "failover clustering" is an infrastructure for a role, not a role in itself).
I was thinking of "clustered roles" when I wrote "failover clustering", so yeah, it could be seen as more of an infrastructure for a role. But I would have thought that proposal of mine had been withdrawn, along with the rest of the proposals I made, after my departure from the WG.
JBG
On 12/19/2013 10:17 AM, "Jóhann B. Guðmundsson" wrote:
On fim 19.des 2013 14:56, Miloslav Trmač wrote:
Also, https://fedoraproject.org/wiki/Server/Proposals/Server_Roles is something that should not be entirely forgotten (though that list does need some amount of changes, e.g. "failover clustering" is an infrastructure for a role, not a role in itself).
I was thinking of "clustered roles" when I wrote "failover clustering", so yeah, it could be seen as more of an infrastructure for a role. But I would have thought that proposal of mine had been withdrawn, along with the rest of the proposals I made, after my departure from the WG.
Why would that happen? Some of your ideas were very good and we agreed with them. Why would we reject them simply because you stepped down? That would be irrational.
On fim 19.des 2013 15:24, Stephen Gallagher wrote:
Why would that happen? Some of your ideas were very good and we agreed with them. Why would we reject them simply because you stepped down?
You are not rejecting them; I chose to withdraw my proposals, and other of my work, upon my departure from the Server WG, as I clearly stated on the FOSS pages. So I would very much appreciate it if that were honored, since it would be the honorable thing to do.
What I wrote was written with the interests of everyone in the community at heart, not just those of the applications and application stacks that are maintained by, and best serve the interests of, Red Hat or teams therein, and it was not meant to be used to advance those applications or application stacks over community-maintained ones.
Eucalyptus or OpenStack, Postgres or MariaDB, 389ds or OpenLDAP, etc.: it was aimed at, and intended for, everybody in the community to benefit from, not just a single corporation that participates within our community.
To put it simply, I do not want my work to be associated with a process that does not have the best interest of the community at its heart.
That would be irrational.
What's irrational is the WG process
Thanks JBG