Good Morning Everyone,
Our infrastructure is mostly a Python shop, meaning almost all our apps are
written in Python, most of them using WSGI.
However, in Python we are using a number of frameworks:
* Flask for most
* Pyramid for some of the biggest (bodhi, FAS3)
* Django (askbot, Hyperkitty)
* TurboGears2 (fedora-packages)
* aiohttp (Python 3, async app: mdapi)
While this sometimes makes things difficult, these are fairly standard frameworks
and most of our developers are able to help on all of them.
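For anyone less familiar with WSGI: it is just a calling convention between the web server and the app, which is what makes moving between these frameworks feasible at all. A minimal sketch (the `application` name and the greeting are illustrative, not from any of our apps):

```python
# A minimal WSGI application: the callable interface that Flask, Pyramid,
# Django, and TurboGears2 all ultimately present to the web server.
def application(environ, start_response):
    # environ is a dict of CGI-style request variables supplied by the server.
    path = environ.get("PATH_INFO", "/")
    body = ("Hello from %s" % path).encode("utf-8")
    # start_response takes the status line and a list of header tuples.
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    # The return value is an iterable of bytes.
    return [body]
```

Any WSGI server (mod_wsgi, gunicorn, or the stdlib wsgiref.simple_server) can host such a callable unchanged, which is why swapping frameworks or servers underneath an app is relatively painless.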
However, as I see us starting to look at JS for some of our apps (fedora-hubs,
wartaa...), I wonder if we could start the discussion early about the different
frameworks and eventually see if we can unify around one.
This would also allow those of us not familiar with any JS framework to look at
the recommended one instead of picking one up semi-randomly.
So, does anyone have experience with one or more JS frameworks? Is there one
you would recommend? Why?
Thanks for your inputs,
I wrote to devel some time ago regarding the deprecation of the apps.fp.o
index and the plan to move its content to the main docs. Kevin mentioned that it
could end up in the infrastructure docs and that the whole thing should be moved
to docs.fp.o at some point. I will take a look at both since I have wanted to
play with the new documentation pipeline for a while. I am not the best person
to meddle with the infrastructure docs, but I might as well do something useful
while playing with Antora. Tell me if that's not the case or if I missed something.
I might have something to show you at Flock if I have trouble sleeping in the
See you in Budapest,
On Mon, Apr 29, 2019 at 4:47 PM Kamil Paral <kparal(a)redhat.com> wrote:
> On Mon, Apr 29, 2019 at 11:39 AM Sinny Kumari <ksinny(a)gmail.com> wrote:
>> On Wed, Apr 24, 2019 at 12:19 AM Kevin Fenzi <kevin(a)scrye.com> wrote:
>>> Or could we move f29+ all to whatever is replacing it? (taskotron?)
>> It would be nice, but I am not aware of any other system in place which
>> replaces the checks performed by autocloud.
>> (CC'ed tflink and kparal)
>> Does Taskotron provide the capability to perform tests on Fedora cloud
>> images, like booting images and other basic checks?
> Theoretically it is possible using nested virt. However, Taskotron is
> going away as well. The replacement is Fedora CI:
Thanks Kamil! Yeah, it doesn't make sense to move to Taskotron if it is
going to be deprecated as well.
> I recommend to ask in the CI list:
> It should be possible for them to provide the infrastructure you need.
Hmm, I am not very sure we should spend time investigating and setting up
a replacement for autocloud unless we have use cases for the long run. Fedora
Atomic Host Two Week releases end with F29 EOL.
A few weeks ago I went and tagged a bunch of old releng and
infrastructure tickets with the 'backlog' tag. Clement added some more
the other day.
In our last meeting we did some simple voting on the list to determine priorities.
The idea is that we will take the top 1-3 per week and get small groups
of people to work on them (at the very least one person can do the work
and explain it to another to document/provide feedback on). Then at each
meeting we look at the list, confirm what we did and take some more.
Of course we have other priorities to deal with too, but focusing on a
few tasks and trying to spread knowledge around them will hopefully get
our backlog down over time.
As a side note, with this we are also copying these tickets to an internal
Jira instance to see if we can track backlog flow better and figure out
workflows, but this is purely a copy and the Pagure instances are where all
the work and comments are done, so anyone not on the CPE team can ignore
this for now. :)
So, for this week we decided to work on:
8178 provision new aarch64 builders xxxxxx
I have this one; it requires access to get them set up, but I am happy to
explain on IRC what I am doing and how the setup works. Assistance could
be used to add them to Ansible, as well as documentation if there's
anything special about them. I intend to work on this tomorrow morning.
I'll ping everyone who is interested in #fedora-admin.
8157 ansible: enable ansible-report as a hook xxxxxx
I'd love someone else to take this one. Any takers?
I can provide pointers...
8065 Move older koji builds to archive volumes xxxxxx
This is already in progress and has been for some time. I'd love to talk
to others, explain how it's being done, and get some documentation
written up so others know how to do it if I am not around.
We could also use some discussion about how to split the koji volume a
bit more. I might be able to work on this Wednesday morning. Anyone interested?
Thanks, and hopefully we can start cranking out some of this backlog!
Fedora Infrastructure currently has the majority of its hardware in a
datacenter in Arizona, USA. Red Hat leases this space for use by a number of
teams, including Fedora. However, they've been seeking a more modern and
cost-effective location for some time and have decided on one:
So, we will be migrating to a new datacenter located in Ashburn, Virginia.
FESCo has approved a 2 week window for the actual move to take place
( https://pagure.io/fesco/issue/2221 ): 2020-06-01 to 2020-06-15.
This window is after Fedora 32 is released, but before any major
Fedora 33 Milestones.
At a high level, our current plan is:
* Setup the new datacenter with networking/storage/management
* Populate the new datacenter with new hardware to replace old hardware that
either wouldn't survive the shipping or is due to be refreshed
* Ship a small amount of hardware from the old datacenter to the new one:
items that are not easily duplicated, like signing hardware,
alternative-arch builders, etc.
* Set up, by the early part of the outage window, a
Minimum Viable Fedora Infrastructure (see below) using new hardware
and some old.
* Function in this minimal state while all the rest of the hardware is
shipped to the new datacenter.
* Re-add hardware to return to normal state.
We want to maintain continuity of service as best we can,
so we have defined a Minimum Viable Fedora Infrastructure, which will move
in advance of the main hardware. Our intention is to reroute traffic to
this setup before moving the bulk of our hardware.
Our current list of what a Minimum Viable Fedora Infrastructure is:
* Mirroring fully functional. Users get metalinks, mirrors are crawled, etc.
* The complete package lifecycle must work,
from commit to update installed on users' machines.
We need this to push security and important bug fixes, as well as to allow
maintainers to work toward Fedora 33.
* Our production openshift cluster must be up and running normally.
(This cluster hosts FAS, Bodhi, and other important items.)
* Builders will likely be constrained,
i.e., fewer builders for most arches.
Capacity will be re-added as soon as the hardware for it arrives.
* Rawhide composes take place as normal.
* Nameservers functional
* rabbitmq/fedora-messaging should be up and functional.
* Internal proxies must be functional (used by builders and other internal items)
* Mailing lists must be functional
* Backups must be functional
* OpenQA must be available to test updates/rawhide composes
* Wiki must be available for common bugs / qa
Other services not listed may or may not be up depending on capacity
and issues with more important services.
And explicitly some things will NOT be available during that window:
* Staging. There will be no staging, so no rolling out new services.
* Full capacity/number of builders
* External proxies in the new datacenter
* HA for some services.
We are sending this announcement not only to let you all be aware of this move,
but to help us plan. If you see some service that you think is critical
to Fedora and cannot be down for 2 weeks, and isn't listed above
please let us know so we can adjust our plans.
We want to make sure things that are critical keep running
smoothly for the Fedora community.
Feedback by next Friday (2019-10-04) would be welcome.
Kevin for CPE and the Fedora Infrastructure team.
I’d like to introduce myself first, my name is Aoife Moloney and I recently
started with the Community Platform Engineering (CPE) team. My role within
this team is going to be a hybrid role of a Product Owner / Project Manager.
As part of that, I want to send a weekly update to the lists to give an
insight into what the CPE team have worked on over the past week or so.
This will also be mirrored as a higher-level blog post to give maximum visibility.
We, as a team, welcome your input and comments; please do let us know
how we can improve this community-facing information segment!
As you know, the CPE team looks after interests in both Fedora and CentOS,
so this update is also going to include work done in areas that may not be
your community, but for transparency we are including it. :)
High Level Project Updates:
Rawhide Gating: <https://github.com/fedora-infra/bodhi/projects/3>
First work around merging side tags has been completed
Open PR <https://github.com/fedora-infra/bodhi/pull/3498> that needs
to be finished
We need to update our staging environment which broke when we branched
F31 in production:
Tracked in: https://pagure.io/releng/issue/8838
Koji stuck/full: https://pagure.io/fedora-infrastructure/issue/8240
Overview page of the remaining blockers and dependencies organized at:
Input welcome for anything that would be missing
High-priority items in the “Ready” column are all hard dependencies
for pushing multi-builds to production
A race condition that was discovered has been fixed and tested
Test suite has a number of issues which are being triaged and worked
through at the moment
Performance testing is next on our agenda
Badges has had some further conversations with community members and we
are aligning on a handover date
Packagedb-cli is being retired this week
Documentation for onboarding contributors to Community OpenShift was
started with a good mail thread on Fedora Devel
Pastebin: Ongoing conversations with future maintainer, expecting an
update in the next 2 weeks
Elections: Ben Cotton is taking this over and the CPE team are assisting
with moving this to Communishift
Fedocal still needs your help; we need a maintainer here urgently
If no one steps up by October 15th, we will be looking at
Nuancier: we have possibly identified a maintainer here and the team is
engaging in a conversation
Misc highlights from various parts of the ecosystem:
FPDC had some light work on it, an instance of Kinto (
http://docs.kinto-storage.org/en/stable/) was deployed in staging at
Fedora Container base image update for F30, F31 and F32. (Dockerhub
Rawhide compose failures, due to podman gating, filed
https://github.com/fedora-infra/bodhi/issues/3512 on it
Announced the F31 Beta freeze is over, adjusted things
F31 Beta saw the team help in various places, everything went smoothly!
Fixed pagure stunnel to use TLS 1.1/1.2 and a valid cert
Fixed pagure event server to use valid cert with intermediate:
Mdapi app fully moved to OpenShift:
Branched versioned Fedora docs for F31
Worked on updates to Fedora contributor docs
Progressing more fedmsg → fedora-messaging conversions in our scripting
Fixed the Koji rpm.sign fedora-messaging message (
CentOS 8 was released this week :)
Core artifacts (repos and install media) were refreshed and sent to the mirrors
CentOS Stream was built and staged
Cloud and Container images are outstanding, coming up next
Sources and debuginfo content synced to the mirrors
Armhfp enablement coming soon
Activities related to releasing CentOS 7.7 (17 September) and
subsequently CentOS 8 (24 September) consumed the entire team's time!
Community Platform Engineering Team
Red Hat EMEA <https://www.redhat.com>