I launched an F17 instance in our Eucalyptus system.
It works. (Awesome!)
I have not done anything to try to break it. Let me know if I should. :)
--
Matthew Miller ☁☁☁ Fedora Cloud Architect ☁☁☁ <mattdm(a)fedoraproject.org>
Hi everyone,
Here is a little about me.
My name is Matthew Whittle and I currently work as a senior application
developer at VSP (Vision Service Plan), which is vision insurance. I have a
lovely wife and no kids (yet!). We live in Sacramento, California, in the US,
and I heard about this project through someone at JBoss at the JavaOne
conference in San Francisco. At my current job I develop backend services
for our membership using Java and WebSphere, and for fun I develop games for
the iPhone (I have about 8 out now). I have been using Fedora at home for
about 5 years now.
I joined this group because I have a desire to build a 3D operating
system. By tomorrow. Just kidding. Yes I know that is a huge task but I
thought I could start small by making a 3D desktop background... or maybe
I should start even smaller. But at least you know my short and long term
goals.
My IRC handle, I believe, is MattWhCaUs (it is the nickname I used).
(By the way, http://www.wikihow.com/Register-a-User-Name-on-Freenode lists
IRC at http://webchat.freenode.com/ but it is really
http://webchat.freenode.net/. Later on you can show me how to fix that
myself!)
Cool beans - looking forward to hearing from y'all.
Cheers,
Matthew Whittle
Hi,
I have been a member of this list for a few years now, but due to some
constraints I have not been able to actively participate. I would now
like to start being active in Fedora development. I have been using Fedora
as my first-choice OS since Fedora Core 6.
I would be very glad to have someone who can show me the ropes.
Regards,
Onalenna Junior Makhura
Couldn't see this in the Infra SOP:
What criteria govern the upgrading of major components of FedoraHosted
(FedoraN+, Trac, *sql*, the underlying infra),
not the hosted projects themselves?
--
Regards,
Frank
"Jack of all, fubars"
When running an FTBFS run on both euca and openstack I started to see some
pretty big differences in performance. CPU and mem were all the same or
close enough, so I decided to look at disk performance.
euca is backed by local disks in a raid/lvm layout and/or exported via
iscsi through the Storage Controller.
openstack is using replicated/distributed gluster for all disk back
ends, including ephemeral (local) and volume-backed (iscsi).
Results are here and are kinda staggering:
http://skvidal.fedorapeople.org/misc/cloudbench.txt
In short, gluster performance really bogs us down for building in the
cloud instances.
Thoughts on improving that performance? Or do we simply want to have
certain workloads in euca specifically b/c the disks are more disposable
and/or faster?
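For context, the kind of per-pattern numbers in the linked cloudbench file can be roughed out with a small script. This is only a sequential-write sketch, with illustrative (assumed) size and block parameters; a tool like fio would be the way to get the full sequential/random, read/write matrix, and it is not the exact method behind the linked results:

```python
# Rough sequential-write benchmark sketch. Size and block values are
# illustrative assumptions, not the parameters behind the linked report.
import os
import tempfile
import time

def seq_write_mb_per_s(size_mb=64, block=1024 * 1024):
    fd, path = tempfile.mkstemp()
    buf = b"\0" * block
    start = time.monotonic()
    with os.fdopen(fd, "wb") as f:
        for _ in range(size_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force data to disk, not just the page cache
    elapsed = time.monotonic() - start
    os.unlink(path)
    return size_mb / elapsed

print(f"sequential write: {seq_write_mb_per_s():.1f} MB/s")
```

The fsync matters: without it a short run can report page-cache speed rather than anything about the backing store, which would make gluster and local disk look misleadingly similar.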
-sv
On Mon, 8 Oct 2012, Jeff Darcy wrote:
>> http://skvidal.fedorapeople.org/misc/cloudbench.txt
>
> Two things jump out at me from these results. First is that using GlusterFS
> replication for ephemeral storage seems . . . strange. Is there some reason
> that the OpenStack setup can't use local storage like the Eucalyptus one does?
> Using remote storage for ephemeral is just always going to be sub-optimal no
> matter how well that remote storage works.
Our primary reasons for using gluster here were twofold:
1. to use all the space available on all of the nodes, compute and
storage/networking alike
2. to have a common disk backend for openstack to enable its live
migration capability, so we can migrate instances off of a compute node
in order to put it in downtime.
> The second thing is the ratios between Euca/iSCSI and OS/GlusterFS speeds.
> Here are the numbers worked out:
>
> sequential read: 5.2x to 6.0x
> random read: 3.1x to 6.0x
> sequential write: 5.7x to 8.2x
> random write: 1.7x to 2.0x
>
> This is a bit surprising, because I've always thought of random writes as one
> of our worst cases. Apparently it's also someone else's, though we still get
> beaten. It's also interesting that the worse numbers (for us) tend to be at
> the higher thread counts, which is kind of contrary to our being all about
> scalability rather than per-thread performance. These results are grim.
They are kind of brutal, but the conditions under which I was testing
were about the same between the two cloudlets.
> I'm inclined to think that the read results - as bad as they might be - aren't
> the problem here because reads can benefit from caching and locality of
> reference.
> Seth, let me know if that's not true for your workload. The real
> problem is writes. The report for random write seems inconsistent here, with
> very low throughput but also very low latency. I'll go with the throughput
> numbers and say that for 4KB requests we're looking at ~160 IOPS. Blech. If
> it were me, I'd stick with the Euca instances for this workload unless/until we
> figure out why GlusterFS is performing so horribly.
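The ~160 IOPS figure quoted above is just measured throughput divided by request size. A minimal sketch of that arithmetic, where the 655,360 B/s throughput is an assumed value chosen to match the quoted estimate, not a number taken from the report:

```python
# IOPS from throughput and request size: IOPS = throughput / request_size.
# The throughput value below is an assumed example, not from the report.
def iops(throughput_bytes_per_s: float, request_bytes: int) -> float:
    return throughput_bytes_per_s / request_bytes

print(round(iops(655_360, 4096)))  # 4 KB requests at ~0.66 MB/s -> 160
```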
Our goal is to use gluster to let us do live migrations. We've discussed
the possibility of having separate cloudlets on purpose - so we can have
use cases of disposable/fast instances and longer-term, reliable
instances. And I think we're all on board with that, actually. There's
just no harm in wanting cake and ice cream. Sometimes you might not get
them both. :)
Thanks for the analysis, Jeff.
-sv
Greetings.
We are working to finalize planning on the Security FAD:
https://fedoraproject.org/wiki/FAD_Infrastructure_Security_2012
If anyone would like to attend and would be helpful in achieving our
goal, please add your information asap.
If you have your info there, but have not updated it, we may drop you
from the list.
Thanks,
kevin
Hi all. Here's my intro...
My name is (obviously) Miguel and I'm from Portugal. Regarding IRC (and
pretty much everything else) I'll go by the nick of miguelcnf.
I'm currently a systems & operations engineer at an IT & Telco company in
Portugal.
I'm a Linux enthusiast and huge fan of Perl. I've been using Fedora as my
personal choice of OS since Cambridge.
I've recently joined the Portuguese L10N team and I'm looking forward to
contributing to the Infrastructure team as well.
I'm a RHCE and work on a daily basis with Red Hat/CentOS systems
administration so I think I could probably be helpful on the
sysadmin-web/sysadmin-tools FIGs.
Let me know what I can do to help!
Cheers
Miguel
The infrastructure team will be having its weekly meeting tomorrow,
2012-10-04 at 18:00 UTC in #fedora-meeting on the freenode network.
Suggested topics:
#topic New folks introductions and Apprentice tasks.
If any new folks want to give a quick one line bio or any apprentices
would like to ask general questions, they can do so in this part of the
meeting. Don't be shy!
#topic Applications status / discussion
Check in on status of our applications: pkgdb, fas, bodhi, koji,
community, voting, tagger, packager, dpsearch, etc.
If there are new releases, bugs we need to work around, or things to note.
#topic Sysadmin status / discussion
Here we talk about sysadmin related happenings from the previous week,
or things that are upcoming.
#topic Private Cloud status update
#topic Security FAD update
#topic Upcoming Tasks/Items
#info 2012-10-08 purge inactive fi-apprentices
#info 2012-10-08 - announce smolt retirement
#info 2012-10-09 to 2012-10-23 F18 Beta Freeze
#info 2012-10-23 F18 Beta release
#info 2012-11-01 nag fi-apprentices
#info 2012-11-07 - switch smolt server to placeholder code.
#info 2012-11-13 to 2012-11-27 F18 Final Freeze
#info 2012-11-20 FY2014 budget due
#info 2012-11-22 to 2012-11-23 Thanksgiving holiday
#info 2012-11-26 to 2012-11-29 Security FAD
#info 2012-11-27 F18 release.
#info 2012-11-30 end of 3rd quarter
#info 2012-12-24 to 2013-01-01 Red Hat Shutdown for holidays.
#info 2013-01-18 to 2013-01-20 FUDCON Lawrence
#topic Open Floor
Submit your agenda items as tickets in the trac instance and send a
note replying to this thread.
More info here:
https://fedoraproject.org/wiki/Infrastructure/Meetings#Meetings
Thanks
kevin
Hello team,
What can I do next for Fedora Infrastructure? Please let me know about
any issue that requires immediate attention, and I can help with
that. :)
thanks & regards,
--
Vipin K.
Research Engineer,
C-DOTB, India