So we currently have a staging instance of OpenShift Container Platform -- what are the next actions needed to get a production deployment going?
(My understanding is that, as a new service, we're not blocked by infra freeze itself, but there may be other things in the way on top of just having cycles free.)
On Mon, Jul 03, 2017 at 01:12:48PM -0400, Paul W. Frields wrote:
So we currently have a staging instance of OpenShift Container Platform -- what are the next actions needed to get a production deployment going?
My understanding is that we are at least waiting on the wildcard cert for it.
Pierre
On 07/03/2017 11:17 AM, Pierre-Yves Chibon wrote:
On Mon, Jul 03, 2017 at 01:12:48PM -0400, Paul W. Frields wrote:
So we currently have a staging instance of OpenShift Container Platform -- what are the next actions needed to get a production deployment going?
My understanding is that we are at least waiting on the wildcard cert for it.
We were, but that arrived a week or two ago. ;)
The next steps are:
* Make sure our plan for managing apps/routers/etc will work. Basically we thought we would check in .json configs in ansible and then have it run oc commands loading those. We need an app using that method to confirm it works as we expect. I think relrod was going to try and do that with waiverdb or modernpaste, but we should confirm; he might have been working on the non-config part of this. ;)
* Figure out monitoring: whether we need nagios to check things or can somehow get alerts from OpenShift when an app dies/stops working.
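A minimal sketch of what that first step could look like as an ansible task (purely illustrative; the project name, file path, and task wording here are made up, not anything actually checked in):

```yaml
# Hypothetical ansible task: load a checked-in OpenShift object with oc.
# "modernpaste" and the file path are placeholders for illustration.
- name: apply the modernpaste deploymentconfig from the checked-in JSON
  command: oc apply -n modernpaste -f files/modernpaste/deploymentconfig.json
```

The idea being that the .json file lives in the ansible repo and re-running the playbook re-applies it, so the checked-in config stays the source of truth.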
Might be more, but at that point I think we could look at spinning up the prod one and getting a wildcard cert for it.
Then, after that and once we have our new cloud set up, we can set up another OpenShift there for contributors/dev/community use.
kevin
On Mon, Jul 03, 2017 at 11:51:03AM -0600, Kevin Fenzi wrote:
On 07/03/2017 11:17 AM, Pierre-Yves Chibon wrote:
On Mon, Jul 03, 2017 at 01:12:48PM -0400, Paul W. Frields wrote:
So we currently have a staging instance of OpenShift Container Platform -- what are the next actions needed to get a production deployment going?
My understanding is that we are at least waiting on the wildcard cert for it.
We were, but that arrived a week or two ago. ;)
[...]
Might be more, but at that point I think we could look at spinning up the prod one and getting a wildcard cert for it.
I meant the prod wildcard cert :)
Pierre
On Mon, Jul 03, 2017 at 08:01:16PM +0200, Pierre-Yves Chibon wrote:
On Mon, Jul 03, 2017 at 11:51:03AM -0600, Kevin Fenzi wrote:
On 07/03/2017 11:17 AM, Pierre-Yves Chibon wrote:
On Mon, Jul 03, 2017 at 01:12:48PM -0400, Paul W. Frields wrote:
So we currently have a staging instance of OpenShift Container Platform -- what are the next actions needed to get a production deployment going?
My understanding is that we are at least waiting on the wildcard cert for it.
We were, but that arrived a week or two ago. ;)
[...]
Might be more, but at that point I think we could look at spinning up the prod one and getting a wildcard cert for it.
I meant the prod wildcard cert :)
Do we have that prod wildcard cert now?
On 17 July 2017 at 12:23, Paul W. Frields stickster@gmail.com wrote:
On Mon, Jul 03, 2017 at 08:01:16PM +0200, Pierre-Yves Chibon wrote:
On Mon, Jul 03, 2017 at 11:51:03AM -0600, Kevin Fenzi wrote:
On 07/03/2017 11:17 AM, Pierre-Yves Chibon wrote:
On Mon, Jul 03, 2017 at 01:12:48PM -0400, Paul W. Frields wrote:
So we currently have a staging instance of OpenShift Container Platform -- what are the next actions needed to get a production deployment going?
My understanding is that we are at least waiting on the wildcard cert for it.
We were, but that arrived a week or two ago. ;)
[...]
Might be more, but at that point I think we could look at spinning up the prod one and getting a wildcard cert for it.
I meant the prod wildcard cert :)
Do we have that prod wildcard cert now?
Working on it.
On 17 July 2017 at 12:26, Stephen John Smoogen smooge@gmail.com wrote:
On 17 July 2017 at 12:23, Paul W. Frields stickster@gmail.com wrote:
On Mon, Jul 03, 2017 at 08:01:16PM +0200, Pierre-Yves Chibon wrote:
On Mon, Jul 03, 2017 at 11:51:03AM -0600, Kevin Fenzi wrote:
On 07/03/2017 11:17 AM, Pierre-Yves Chibon wrote:
On Mon, Jul 03, 2017 at 01:12:48PM -0400, Paul W. Frields wrote:
So we currently have a staging instance of OpenShift Container Platform -- what are the next actions needed to get a production deployment going?
My understanding is that we are at least waiting on the wildcard cert for it.
We were, but that arrived a week or two ago. ;)
[...]
Might be more, but at that point I think we could look at spinning up the prod one and getting a wildcard cert for it.
I meant the prod wildcard cert :)
Do we have that prod wildcard cert now?
Working on it.
Done and scripts updated to point to the new certs.
On 07/03/2017 01:51 PM, Kevin Fenzi wrote:
The next steps are:
- Make sure our plan for managing apps/routers/etc will work. Basically
we thought we would check in .json configs in ansible and then have it run oc commands loading those. We need an app using that method to confirm it works as we expect. I think relrod was going to try and do that with waiverdb or modernpaste, but we should confirm; he might have been working on the non-config part of this. ;)
I'm ready to move the-new-hotness over. Is the expectation that I deliver a JSON file that defines my project as part of a request for resources ticket?
Are we going to disable editing in the web UI and only let changes get applied by hand-editing the JSON and pushing that with Ansible? I'm not necessarily objecting to that approach, I'm just curious.
Would it be better for us to have web editing privileges and use the staging instance to tune our configurations in an OpenShift configured the same way the production instance will be, then export those for production, or are we better off all having development OpenShift instances for that? We'll all need to be careful to run the same version as stg/production since it sounds like this JSON isn't particularly well-documented, nor are there any schema or validation tools.
On Mon, Jul 3, 2017, at 11:05 PM, Jeremy Cline wrote:
On 07/03/2017 01:51 PM, Kevin Fenzi wrote:
The next steps are:
- Make sure our plan for managing apps/routers/etc will work. Basically
we thought we would check in .json configs in ansible and then have it run oc commands loading those. We need an app using that method to confirm it works as we expect. I think relrod was going to try and do that with waiverdb or modernpaste, but we should confirm; he might have been working on the non-config part of this. ;)
I'm ready to move the-new-hotness over. Is the expectation that I deliver a JSON file that defines my project as part of a request for resources ticket?
Are we going to disable editing in the web UI and only let changes get applied by hand-editing the JSON and pushing that with Ansible? I'm not necessarily objecting to that approach, I'm just curious.
Would it be better for us to have web editing privileges and use the staging instance to tune our configurations in an OpenShift configured the same way the production instance will be, then export those for production, or are we better off all having development OpenShift instances for that? We'll all need to be careful to run the same version as stg/production since it sounds like this JSON isn't particularly well-documented, nor are there any schema or validation tools.
Have we checked how the RH IT and RH OpenShift Online teams do this?
They may have an answer for us.
regards,
bex
-- Jeremy Cline XMPP: jeremy@jcline.org IRC: jcline
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-leave@lists.fedoraproject.org
On 07/04/2017 04:22 AM, Brian Exelbierd wrote:
On Mon, Jul 3, 2017, at 11:05 PM, Jeremy Cline wrote:
On 07/03/2017 01:51 PM, Kevin Fenzi wrote:
The next steps are:
- Make sure our plan for managing apps/routers/etc will work. Basically
we thought we would check in .json configs in ansible and then have it run oc commands loading those. We need an app using that method to confirm it works as we expect. I think relrod was going to try and do that with waiverdb or modernpaste, but we should confirm; he might have been working on the non-config part of this. ;)
I'm ready to move the-new-hotness over. Is the expectation that I deliver a JSON file that defines my project as part of a request for resources ticket?
Are we going to disable editing in the web UI and only let changes get applied by hand-editing the JSON and pushing that with Ansible? I'm not necessarily objecting to that approach, I'm just curious.
Would it be better for us to have web editing privileges and use the staging instance to tune our configurations in an OpenShift configured the same way the production instance will be, then export those for production, or are we better off all having development OpenShift instances for that? We'll all need to be careful to run the same version as stg/production since it sounds like this JSON isn't particularly well-documented, nor are there any schema or validation tools.
Have we checked how the RH IT and RH OpenShift Online teams do this?
They may have an answer for us.
Yes, and we could ask again now, but when we talked to folks at the hackfest in RDU pretty much the answer was that it was up in the air and some of them were using oc commands in ansible to make apps, etc...
kevin
On Tue, Jul 4, 2017, at 07:17 PM, Kevin Fenzi wrote:
On 07/04/2017 04:22 AM, Brian Exelbierd wrote:
On Mon, Jul 3, 2017, at 11:05 PM, Jeremy Cline wrote:
On 07/03/2017 01:51 PM, Kevin Fenzi wrote:
The next steps are:
- Make sure our plan for managing apps/routers/etc will work. Basically we thought we would check in .json configs in ansible and then have it run oc commands loading those. We need an app using that method to confirm it works as we expect. I think relrod was going to try and do that with waiverdb or modernpaste, but we should confirm; he might have been working on the non-config part of this. ;)
I'm ready to move the-new-hotness over. Is the expectation that I deliver a JSON file that defines my project as part of a request for resources ticket?
Are we going to disable editing in the web UI and only let changes get applied by hand-editing the JSON and pushing that with Ansible? I'm not necessarily objecting to that approach, I'm just curious.
Would it be better for us to have web editing privileges and use the staging instance to tune our configurations in an OpenShift configured the same way the production instance will be, then export those for production, or are we better off all having development OpenShift instances for that? We'll all need to be careful to run the same version as stg/production since it sounds like this JSON isn't particularly well-documented, nor are there any schema or validation tools.
Have we checked how the RH IT and RH OpenShift Online teams do this?
They may have an answer for us.
Yes, and we could ask again now, but when we talked to folks at the hackfest in RDU pretty much the answer was that it was up in the air and some of them were using oc commands in ansible to make apps, etc...
I didn't know how well the operational side of that team was represented at the hackfest. Regards,
bex
kevin
On 07/03/2017 03:05 PM, Jeremy Cline wrote:
On 07/03/2017 01:51 PM, Kevin Fenzi wrote:
The next steps are:
- Make sure our plan for managing apps/routers/etc will work. Basically
we thought we would check in .json configs in ansible and then have it run oc commands loading those. We need an app using that method to confirm it works as we expect. I think relrod was going to try and do that with waiverdb or modernpaste, but we should confirm; he might have been working on the non-config part of this. ;)
I'm ready to move the-new-hotness over. Is the expectation that I deliver a JSON file that defines my project as part of a request for resources ticket?
Sure? Or that we come up with one as part of the process...
Are we going to disable editing in the web UI and only let changes get applied by hand-editing the JSON and pushing that with Ansible? I'm not necessarily objecting to that approach, I'm just curious.
For production, I would think we would lock down changes there yes. For staging, we could leave it open and that might help us figure out how we want the app configured in prod?
Would it be better for us to have web editing privileges and use the staging instance to tune our configurations in an OpenShift configured the same way the production instance will be, then export those for production, or are we better off all having development OpenShift instances for that? We'll all need to be careful to run the same version as stg/production since it sounds like this JSON isn't particularly well-documented, nor are there any schema or validation tools.
Well, since we have no dev currently, I'd be ok with leaving stg more open to change... I am pretty sure you can get it to dump out the json of the existing config, so you could in theory set things up, adjust via the web interface and get it the way you want it, then dump the json and we can use that in prod.
Of course we are all still feeling our way here so we will need to adjust a bunch I think. :)
kevin
On Tue, 2017-07-04 at 11:16 -0600, Kevin Fenzi wrote:
I am pretty sure you can get it to dump out the json of the existing config, so you could in theory set things up, adjust via the web interface and get it the way you want it, then dump the json and we can use that in prod.
I think this is the right workflow, but I wanted to point out that the JSON will have to be "massaged" in some way. I recently took an OpenShift course at Red Hat and we learned that this JSON export/import process does require some hand editing, but the course was an intro course and so we didn't go into the details about what exactly needed to be edited or in what way.
Excerpts from Randy Barlow's message of 2017-07-05 13:42 -04:00:
On Tue, 2017-07-04 at 11:16 -0600, Kevin Fenzi wrote:
I am pretty sure you can get it to dump out the json of the existing config, so you could in theory set things up, adjust via the web interface and get it the way you want it, then dump the json and we can use that in prod.
I think this is the right workflow, but I wanted to point out that the JSON will have to be "massaged" in some way. I recently took an OpenShift course at Red Hat and we learned that this JSON export/import process does require some hand editing, but the course was an intro course and so we didn't go into the details about what exactly needed to be edited or in what way.
Speaking only from my very limited experience -- I think the reason it needs massaging is that when you export an object with oc, it includes various (read-only) status properties -- ones which reflect the current state in time and are not configurable. You wouldn't want those to be tracked in git or posted back when you update the object, because that's kind of meaningless.
Here is an example below (and btw I suggest YAML over JSON because it's easier to edit, oc happily deals in either format). You can see large parts are not configuration, for example the top-level "status" key, "creationTimestamp", the JSON blob in "kubectl.kubernetes.io/last-applied-configuration" annotation, "selfLink", "uid", etc.
I think oc knows to ignore those when you post an object back, but you would want to edit them out before you commit it to git I guess.
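To make the "edit them out before you commit" step concrete, here is a rough sketch of such a cleanup. This is not anything we run today; the field list is my guess based on what shows up in the route export in this thread, not an exhaustive or authoritative one:

```python
# Sketch: strip read-only/status properties from an exported OpenShift
# object before committing it to git. The field list below is a guess
# from the route example in this thread, not an exhaustive one.

NONCONFIG_METADATA = (
    "creationTimestamp", "namespace", "resourceVersion", "selfLink", "uid",
)

def strip_nonconfig(obj):
    obj = dict(obj)
    obj.pop("status", None)  # current state, not configuration
    metadata = dict(obj.get("metadata", {}))
    for key in NONCONFIG_METADATA:
        metadata.pop(key, None)
    annotations = dict(metadata.get("annotations", {}))
    # last-applied-configuration is bookkeeping kubectl/oc adds, not config
    annotations.pop("kubectl.kubernetes.io/last-applied-configuration", None)
    if annotations:
        metadata["annotations"] = annotations
    else:
        metadata.pop("annotations", None)
    obj["metadata"] = metadata
    return obj
```

You would load the output of `oc get ... -o yaml` (with pyyaml, say), run it through this, and commit the result, leaving just kind/apiVersion/the metadata we actually chose/spec.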
$ oc -n waiverdb-stg get route waiverdb-stg-web -o yaml
apiVersion: v1
kind: Route
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Route","apiVersion":"v1","metadata":{"name":"waiverdb-stg-web","creationTimestamp":null,"labels":{"app":"waiverdb"}},"spec":{"host":"waiverdb.stage.engineering.redhat.com","to":{"kind":"Service","name":"waiverdb-stg-web","weight":null},"port":{"targetPort":"web"},"tls":{"termination":"edge","insecureEdgeTerminationPolicy":"Redirect"}},"status":{"ingress":null}}'
  creationTimestamp: 2017-06-22T00:18:48Z
  labels:
    app: waiverdb
  name: waiverdb-stg-web
  namespace: waiverdb-stg
  resourceVersion: "6225347"
  selfLink: /oapi/v1/namespaces/waiverdb-stg/routes/waiverdb-stg-web
  uid: 5cc0dced-56e0-11e7-83a0-009b1a10019b
spec:
  host: waiverdb.stage.engineering.redhat.com
  port:
    targetPort: web
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: waiverdb-stg-web
    weight: 100
  wildcardPolicy: None
status:
  ingress:
  - conditions:
    - lastTransitionTime: 2017-06-22T00:18:48Z
      status: "True"
      type: Admitted
    host: waiverdb.stage.engineering.redhat.com
    routerName: router
    wildcardPolicy: None
  - conditions:
    - lastTransitionTime: 2017-06-30T09:50:58Z
      status: "True"
      type: Admitted
    host: waiverdb.stage.engineering.redhat.com
    routerName: storage-project-router
    wildcardPolicy: None
Excerpts from Dan Callaghan's message of 2017-07-06 09:05 +10:00:
Here is an example below (and btw I suggest YAML over JSON because it's easier to edit, oc happily deals in either format). You can see large parts are not configuration, for example the top-level "status" key, "creationTimestamp", the JSON blob in "kubectl.kubernetes.io/last-applied-configuration" annotation, "selfLink", "uid", etc.
Oh I just remembered... The oc get command *does* have an option --export which is supposed to strip out all this stuff. But it seems to leave some things in. For example the selfLink, status, last-applied-configuration are all still present (although there is certainly *less* gunk):
$ oc -n waiverdb-stg get route waiverdb-stg-web -o yaml --export
apiVersion: v1
kind: Route
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Route","apiVersion":"v1","metadata":{"name":"waiverdb-stg-web","creationTimestamp":null,"labels":{"app":"waiverdb"}},"spec":{"host":"waiverdb.stage.engineering.redhat.com","to":{"kind":"Service","name":"waiverdb-stg-web","weight":null},"port":{"targetPort":"web"},"tls":{"termination":"edge","insecureEdgeTerminationPolicy":"Redirect"}},"status":{"ingress":null}}'
  creationTimestamp: null
  labels:
    app: waiverdb
  name: waiverdb-stg-web
  selfLink: /oapi/v1/namespaces//routes/waiverdb-stg-web
spec:
  host: waiverdb.stage.engineering.redhat.com
  port:
    targetPort: web
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: waiverdb-stg-web
    weight: 100
  wildcardPolicy: None
status:
  ingress: null