So for those wondering what I've done with the wiki upgrade, I've made a simple diagram. As we get more hardware and resources, you can see where we're headed as far as HA goes in our environment. I'm still trying to acquire a NAS or SAN for us; this will make what we need to do much easier. Also, at the application layer, once we're in a Xen environment we can add and remove app servers easily without having to expand the number of proxy servers before they get overloaded. I'm still experimenting with various things, but right now app2 is our biggest SPOF[1], as it houses the wiki and shares it with app1.

The proxy servers are using mod_rewrite [P] to proxy services. Basically, the load balancer balances between proxy[1-2], and each proxy in turn proxies to app[1-2] — so each proxy can reach both app servers, rather than a fixed proxy1 -> app1, proxy2 -> app2 mapping. The proxy servers will also mount or contain copies of static content (like /extras, or favicon.ico).
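As an illustration of the mod_rewrite [P] setup described above — a sketch only, not our actual config; the hostnames, file paths, and map name are hypothetical — a rnd: rewrite map lets each proxy pick either app server per request:

```apache
# Requires mod_rewrite, mod_proxy, and mod_proxy_http.
# /etc/httpd/backends.map (hypothetical) contains one line:
#   app app1.example.com|app2.example.com
RewriteEngine On
RewriteMap backends rnd:/etc/httpd/backends.map

# Pick app1 or app2 at random for each request and proxy it [P],
# so each proxy box can reach both app servers.
RewriteRule ^/wiki/(.*)$ http://${backends:app}/wiki/$1 [P,L]

# Fix up Location headers from either backend on the way back out.
ProxyPassReverse /wiki/ http://app1.example.com/wiki/
ProxyPassReverse /wiki/ http://app2.example.com/wiki/
```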
There are many tweaks to be made to make this useful and hands-off HA, but this is a good first step for us. As always I'm interested in discussion so send it my way.
-Mike
[1] Our load balancer may also be a SPOF, not sure.
Mike McGrath wrote:
So for those wondering what I've done with the wiki upgrade, I've made a simple diagram.
Cool, what software did you use for the diagram?
As we get more hardware and resources, you can see where we're headed as far as HA goes in our environment. I'm still trying to acquire a NAS or SAN for us; this will make what we need to do much easier. Also, at the application layer, once we're in a Xen environment we can add and remove app servers easily without having to expand the number of proxy servers before they get overloaded. I'm still experimenting with various things, but right now app2 is our biggest SPOF[1], as it houses the wiki and shares it with app1. The proxy servers are using mod_rewrite [P] to proxy services.
* Which mod_cache are you using?
  o mod_disk_cache
    pro: able to share cache among worker threads
    con: a response will not go out until the full response is written to disk
  o mod_mem_cache
    pro: *fast*
    con: cache is not shared among workers
* We should really be using squid.
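For reference, the two variants differ in only a few directives (Apache 2.2-era names; the paths and sizes below are illustrative, not a recommendation):

```apache
# mod_disk_cache: the cache lives on disk and is shared by all
# workers, but a response is held until fully written to disk.
CacheEnable disk /wiki
CacheRoot /var/cache/httpd/proxy
CacheDirLevels 2
CacheDirLength 1

# mod_mem_cache: in-memory and fast, but each worker process
# keeps its own private cache (nothing is shared between them).
# CacheEnable mem /wiki
# MCacheSize 65536              # memory for the cache, in KBytes
# MCacheMaxObjectSize 1048576   # largest cacheable object, in bytes
```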
Basically the load balancer balances between proxy[1-2] and each proxy in turn proxies to app[1-2].
What load balancer are we using?
That is, each proxy reaches both app servers, rather than a fixed proxy1 -> app1, proxy2 -> app2 mapping. The proxy servers will also mount or contain copies of static content (like /extras, or favicon.ico).
There are many tweaks to be made to make this useful and hands-off HA, but this is a good first step for us. As always I'm interested in discussion so send it my way.
-Mike
[1] Our load balancer may also be a SPOF, not sure.
Unless there are multiple paths, I would expect so also.
;; QUESTION SECTION:
;fedoraproject.org.             IN      A

;; ANSWER SECTION:
fedoraproject.org.      959     IN      A       209.132.176.120
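A quick way to check is to count the A records — simulated below with the answer section from above so it runs offline; swap the here-string for a live `dig fedoraproject.org A` to test for real:

```shell
# One A record means a single network entry point (a likely SPOF);
# several would suggest DNS round-robin across multiple paths.
answers='fedoraproject.org. 959 IN A 209.132.176.120'
count=$(printf '%s\n' "$answers" | grep -c ' IN A ')
echo "A records: $count"
if [ "$count" -le 1 ]; then
    echo "single address: the balancer looks like a SPOF"
fi
```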
Looks to be the case, unless there is something at the network level.
So cool.
Jonathan
Jonathan Steffan wrote:
Mike McGrath wrote:
So for those wondering what I've done with the wiki upgrade, I've made a simple diagram.
Cool, what software did you use for the diagram?
dia
As we get more hardware and resources, you can see where we're headed as far as HA goes in our environment. I'm still trying to acquire a NAS or SAN for us; this will make what we need to do much easier. Also, at the application layer, once we're in a Xen environment we can add and remove app servers easily without having to expand the number of proxy servers before they get overloaded. I'm still experimenting with various things, but right now app2 is our biggest SPOF[1], as it houses the wiki and shares it with app1. The proxy servers are using mod_rewrite [P] to proxy services.
* Which mod_cache are you using?
  o mod_disk_cache
    pro: able to share cache among worker threads
    con: a response will not go out until the full response is written to disk
  o mod_mem_cache
    pro: *fast*
    con: cache is not shared among workers
* We should really be using squid.
We aren't caching anything at the moment, as it was determined we wouldn't gain anything by enabling caching — Moin doesn't allow for caching very well (see archives). Regarding Squid: it's great when you experience performance problems, and we really aren't. The biggest hit we take right now is saving on the wiki, and that's related to the way Moin sends email out (it iterates over all of our users). When we do start using proxy caching for our other sites, squid will be examined, but I'm a KISS guy when it comes to proxies. The fact is, right now we don't get hit that hard except on release day, and until Moin implements a better caching method, we won't gain much from it.
-Mike
infrastructure@lists.fedoraproject.org