Originally posted by chithanh
@Slashdot: When a site is slashdotted, Apache is rarely the tier that tips over. It's typically the middleware or database tier, and the failure surfaces to the user as an HTTP 500. I see magic numbers being thrown around like "15,000 requests per second". I don't believe that occurs in any real-world setting, with the exception of DDoS-style attacks.
I could see binary size being a factor for SoC embedded implementations of the pre-flash era, or on home-brew flash-based routers, but on x86? Seriously? I can see RPS being an issue if you have an optimized cluster serving static assets to users, but those assets would need to be very small for rates above 10,000 RPS to occur ("hello world" territory).
Does anyone here run a stack that exceeds 1,000 RPS, or 50 TPS? To put things in perspective, a Tier 2 network operator (AT&T, Verizon, Sprint) rarely exceeds 50 TPS, which works out to roughly 500-2,000 RPS on the front end depending on the workflow. I don't believe the Apple store exceeds 50 TPS.
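For anyone wanting to sanity-check that 50 TPS ≈ 500-2,000 RPS claim: the ratio depends entirely on how many front-end requests one back-end transaction fans out to. A back-of-the-envelope sketch (the 10-40 requests-per-transaction figures are my own illustrative assumptions, chosen to match the range above):

```python
# Back-of-the-envelope: front-end RPS implied by a back-end transaction rate.
# A single "transaction" (checkout, order, call setup) typically fans out to
# many HTTP requests: HTML, CSS, JS, images, XHR calls across several pages.
# The 10-40 requests-per-transaction ratios below are illustrative assumptions.

def frontend_rps(tps: float, requests_per_transaction: float) -> float:
    """Front-end requests/sec implied by a back-end transaction rate."""
    return tps * requests_per_transaction

# 50 TPS with 10-40 front-end requests per transaction
# reproduces the 500-2,000 RPS range.
for rpt in (10, 40):
    print(f"50 TPS x {rpt} req/txn = {frontend_rps(50, rpt):.0f} RPS")
```

The point being: even a generous 40 requests per transaction keeps a 50 TPS workload an order of magnitude below the "15,000 RPS" figures being quoted.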
I'm still confused, as I can't reconcile these numbers with any plausible real-world scenario. A go-kart is small and fast, but I have yet to see a scenario where it would be a good fit.
I'll drop my friends at Amazon an e-mail to see what their numbers look like.
F