Hey guys, so some of you have heard that the load today was pretty close to crippling for a while. We got it under 'control' for a bit, but it wasn't happy. Mike suggested making some of our wiki pages static, so we pulled them out using Firefox and then put them onto the site using Apache redirects. That worked out okay to bring the load down. We're still getting beaten pretty badly, but at least the load is manageable.
Let's assume for now that the next N releases will be just like this. That the world will melt down. I think we should probably put together some docs on what steps to take. Here are some items we did today:
1. iptables rate limiting on fpserv: 2-4 new connections per IP every 10-20 seconds.
2. Look at the logs, figure out where all the hits are going, and make those pages static. You can do this by using Firefox to save the page (it grabs images and CSS, too).
3. Turn off any unnecessary bits: zope (if it's not being used), rsyncd.
4. Trim your log output. Decrease the number of things writing to disk anywhere.
5. Get comfortable with /etc/init.d/httpd restart, just to get the load back under control.
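As a sketch of item 1, a per-IP limit on new HTTP connections can be done with the iptables `recent` match. The numbers below (4 new connections per 15 seconds) are illustrative only; the exact rule used on fpserv isn't shown in this thread:

```
# Drop a source IP's new connections to port 80 once it has opened
# more than 4 in the last 15 seconds (illustrative numbers).
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
    -m recent --name HTTP --update --seconds 15 --hitcount 4 -j DROP
# Otherwise record the attempt and let it through.
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
    -m recent --name HTTP --set -j ACCEPT
```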
What else should we be adding?
-sv
Add HW.
You need a dedicated load-balanced environment if you ever want to run the primary bits. We have 5 Dell servers for PRX and 4 for APP that fell over. On top of that, the load balancers died at one point due to load, and we were not even serving pages, just redirects!
These are Cisco CSM blades that failed today, not small load balancers.
We probably need to define well the roles people will play for administration, so that you do not have a free-for-all. Maybe even shifts of people, so Seth can take a shower or get food :)
seth vidal wrote:
Fedora-infrastructure-list mailing list Fedora-infrastructure-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-infrastructure-list
--
Stacy J. Brandenburg, Red Hat Inc.
Manager, Network Operations  sbranden@redhat.com
919-754-4313  http://www.redhat.com
Fingerprint 03F7 43BE 1150 CCFA F57B 54DD AEDB 1C27 1828 D94D
We could also implement a load balancer using the iptables "Nth" match; more details here:
http://www.netfilter.org/patch-o-matic/pom-base.html#pom-base-nth
If you want to balance the load across the three addresses 10.0.0.5, 10.0.0.6, and 10.0.0.7, you can do as follows:
iptables -t nat -A POSTROUTING -o eth0 -m nth --counter 7 --every 3 --packet 0 -j SNAT --to-source 10.0.0.5
iptables -t nat -A POSTROUTING -o eth0 -m nth --counter 7 --every 3 --packet 1 -j SNAT --to-source 10.0.0.6
iptables -t nat -A POSTROUTING -o eth0 -m nth --counter 7 --every 3 --packet 2 -j SNAT --to-source 10.0.0.7
I am sure this could help.
On 24/10/06, Stacy J. Brandenburg sbranden@redhat.com wrote:
Squid can do a similar task with reverse proxying + caching + LB.
The SNAT table in those Nth rules would possibly explode with the number of concurrent connections that we had today.
I think the most important thing would be to get a small set of static pages on a dedicated server or three that could take 90% of the hits.
-- Jason Watson
Damian Myerscough wrote:
We do have 4 proxy servers and 2 app servers at present, but the wiki is not on any of them. It's at Duke.
-Mike
On Tue, 2006-10-24 at 17:19 -0400, Stacy J. Brandenburg wrote:
Here's what we talked about on IRC:
1. get the websites and docs people to split out a structure in the wiki that is the final release layout. This will be frozen N days prior to release and static pages will be generated. This static content will be fedoraproject.org/
2. the static content will get mirrored to a new set of mirrors in the world that we will recruit people into. These will be simple http-only mirrors.
3. fedoraproject.org can be globally load balanced using (more or less) DNS round robin (more robust mechanisms are welcome)
4. we make the wiki server replicated and redundant by making use of multiple machines.
So that would mean we get:
1. more capacity for the wiki server
2. global capacity for the front page and key pages for a release
3. network/site redundancy
-sv
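The DNS round-robin idea in point 3 amounts to publishing several A records for one name and letting resolvers rotate through them. A minimal Python sketch of that rotation (the addresses are made up, reusing the 10.0.0.x examples from earlier in the thread):

```python
from itertools import cycle

# Hypothetical A records for one name; in real DNS these live in the
# zone file and the name server rotates the order of its answers.
a_records = ["10.0.0.5", "10.0.0.6", "10.0.0.7"]

def round_robin(records):
    """Yield addresses in rotating order, like round-robin DNS answers."""
    pool = cycle(records)
    while True:
        yield next(pool)

rr = round_robin(a_records)
first_six = [next(rr) for _ in range(6)]
# Six requests land twice on each of the three servers.
print(first_six)
```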
On 10/24/06, seth vidal skvidal@linux.duke.edu wrote:
On Tue, 2006-10-24 at 17:19 -0400, Stacy J. Brandenburg wrote:
Here's what we talked about on IRC:
I agree with this in theory; implementation will be complex, though.
- get the websites and docs people to split out a structure in the wiki
that is the final release layout. This will be frozen N days prior to release and static pages will be generated. This static content will be fedoraproject.org/
+1
- the static content will get mirrored to a new set of mirrors in the
world that we will recruit people into. These will be simple http-only mirrors.
If it's our infrastructure, it might be a good idea to do a nightly sync of the actual fp.o wiki in case the main server goes down.
- fedoraproject.org can be globally load balanced using (more or less)
dns round robin (more robust mechanisms are welcome)
I don't know of any good way to do this.
- we make the wiki server multiple and redundant by making use of
multiple machines.
So that would mean we get:
- more capacity for the wiki server
- global capacity for the front page and key pages for a release
- network/site redundancy.
-sv
I'm wondering if we could just request some budget from Max and the RH folks to grab some public mirrors during release time. I mean, digg.com has a dugmirrors site for hosts they DDoS; I believe other such services exist as well. I'm just trying to think of the easiest solution for our needs. It'd be nice to just say:
1. Freeze the wiki
2. Sync the mirrors
3. Cross fingers
4. Enjoy the FC7 release instead of working all day :)
-Mike
I *MIGHT* be able to get a dedicated Cisco 11503 Content Switch for use as an LB if you want. I can make no guarantees of its success under the onslaught, but it is better than a kick in the butt.
Mike McGrath wrote:
I fully agree, Mike.
Let's freeze the project, notify only the mirror owners, and release.
If I knew more about what is needed to be a mirror: I will have more disk space in January and unlimited traffic up to 1 TB at a DE provider, so maybe I can also mirror some stuff, if it does not kill my server (QoS or other limits).
Mario
On Tue, 2006-10-24 at 16:59 -0500, Mike McGrath wrote:
Did anyone look at the iptables Nth extension?
On 25/10/06, Mario Verbelen mario@verbelen.org wrote:
seth vidal wrote:
- get the websites and docs people to split out a structure in the wiki
that is the final release layout. This will be frozen N days prior to release and static pages will be generated. This static content will be fedoraproject.org/
I definitely think this is a good idea. We need to be able to make the most-hit pages static on the day of release. That should help reduce load a fair amount; we already saw a bit of that with the static pages put in place this afternoon.
I wouldn't think this to be a major issue, though we will want to be sure to let the websites and docs teams know well in advance what we would like to see.
- the static content will get mirrored to a new set of mirrors in the
world that we will recruit people into. These will be simple http-only mirrors.
Mirrored is good. I am more inclined to use resources we have control of, to prevent undue management complexity. Bandwidth didn't seem to be as much of an issue as server load. I am thinking we should use Duke, PHX, and ask Stacy nicely for a bit of space at the TPA DC. This helps disperse the load across several locations. We would still need the server hardware to back it up, of course.
The PHX DC has a few servers to throw at the problem: 4 proxy servers and 2 app servers. The Xen boxes could always be added to the mix on release week (guests shut down and Apache running on the host boxes) to assist with the increased load.
- fedoraproject.org can be globally load balanced using (more or less)
dns round robin (more robust mechanisms are welcome)
- we make the wiki server multiple and redundant by making use of
multiple machines.
I'm in agreement here as well.
--Jeffrey
On Tue, Oct 24, 2006 at 05:52:23PM -0400, seth vidal wrote:
- get the websites and docs people to split out a structure in the wiki
that is the final release layout. This will be frozen N days prior to release and static pages will be generated. This static content will be fedoraproject.org/
I've been looking at this a little. There are MoinMoin patches [1] to let it work more cleanly with a reverse proxy (the patches proposed are used by the Apache Software Foundation). MediaWiki has options [2] to let non-authenticated users hit static cached pages, with the cache updated when an authenticated user edits a page. I'm still looking for the same for MoinMoin. Together, barring a DDoS, that should reduce the load significantly.
[1] http://moinmoin.wikiwikiweb.de/MoinMoinPatch/CachingProxies
[2]
$wgUseFileCache = true;
$wgFileCacheDirectory = "/home/httpd/cache";
$wgShowIPinHeader = false;
$wgUseGzip = false;
We can always use squid for the static and some dynamic content. I.e., when we have the MirrorManagement project ready, we can cache the full list of mirrors for 30 minutes or something.
just my 2 cents,
Paulo
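A squid setup along the lines Paulo describes might look roughly like this; the backend hostname and the 30-minute mirror-list caching are assumptions for illustration, not our actual config:

```
# squid.conf fragment (sketch, squid 2.6-style accelerator setup)
http_port 80 accel defaultsite=fedoraproject.org
# hypothetical app server sitting behind the cache
cache_peer app1.example.org parent 80 0 no-query originserver
# cache the mirror list for ~30 minutes even if the app marks it dynamic
refresh_pattern -i /mirrors 30 20% 30 override-expire ignore-reload
```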
On 10/26/06, Matt Domsch Matt_Domsch@dell.com wrote:
On 10/26/06, Paulo Santos paulo.banon@googlemail.com wrote:
Actually, that part of mirror management is already there and being used. I think the main goal here is to spread out the load among many different machines; static content helps, but only so much.
-Mike
On Oct 26, 2006, at 7:01 AM, Mike McGrath wrote:
Are the docs/instructions/faq pages really being hit so much that a server or two with static pages and a well configured http server can't keep up?
- ask
On Thu, Oct 26, 2006 at 08:35:38AM -0500, Matt Domsch wrote:
On Tue, Oct 24, 2006 at 05:52:23PM -0400, seth vidal wrote:
Is there any reason we should not update to moin 1.5 and then check what additional steps need to be taken? This would ensure fast upstream integration and a not-too-heavily-patched local system.
regards,
Florian La Roche
On 10/26/06, Florian La Roche laroche@redhat.com wrote:
On Thu, Oct 26, 2006 at 08:35:38AM -0500, Matt Domsch wrote:
On Tue, Oct 24, 2006 at 05:52:23PM -0400, seth vidal wrote:
It's my understanding the web team has some code getting committed back to moin from the SoC. We've just been waiting on them.
-Mike
I have been looking into the issue. Has anyone thought of mod_backhand? This module could help with load balancing.
http://www.backhand.org/mod_backhand/
On 26/10/06, Mike McGrath mmcgrath@fedoraproject.org wrote:
Damian Myerscough wrote:
I think this will hurt (from their FAQ):
Question:
Does mod_backhand support Apache 2?
Answer:
Nope. There has been thought, but no real work. The design of mod_backhand is very specific to both UNIX and a multiprocess-model. Apache 2 does away with the latter, so mod_backhand 2.x will need to be written from the ground up.
On Oct 26, 2006, at 8:46 AM, Damian Myerscough wrote:
Perlbal is different, but it fits into similarly shaped holes as mod_backhand.
- ask
On Tue, Oct 24, 2006 at 05:09:24PM -0400, seth vidal wrote:
I think dynamic web pages are pretty overrated...
You can use mod_cache if your web application (Zope, wiki, ...) correctly generates pages with ETag, Last-Modified, or Expires headers. For more information, read:
http://www.us.apachecon.com/presentations/WE18/WE18_Performance_Up.pdf
There is a nice example with MoinMoin at wiki.apache.org.
What else should we be adding?
More servers + load balancing?
Karel
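Karel's mod_cache suggestion might look roughly like this in an Apache 2.x config; the paths are illustrative, and it only helps if the application really does emit correct ETag/Last-Modified/Expires headers:

```
# Sketch: disk-backed cache for the wiki URL space (Apache 2.0/2.2 style)
LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so
CacheEnable disk /wiki
CacheRoot /var/cache/httpd
CacheDirLevels 2
CacheDirLength 1
```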