
Thread: NGINX Might Be Included With Ubuntu Server ISOs

  1. #11
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,525

    Default

    That's interesting information. I am also considering setting up my own server eventually, and since my current website runs on Joomla, I'll want to keep it that way, which means using Apache or nginx. So far, nginx does sound interesting.

  2. #12
    Join Date
    Jun 2012
    Posts
    293

    Default

    Quote Originally Posted by curaga View Post
    With its embedded focus, it tends to match or outdo nginx in RAM use and static page serving.
    Btw, while nginx can run even on an embedded device (I launched it on my phone for fun), it also scales well, so if your project grows big, you will be ready to go big. It can load-balance across multiple backends, the load balancer itself can scale across many CPUs, it can cache dynamic pages as static versions, and so on. So it can withstand an impressive load on modern hardware, especially if the admin is willing to help it a bit. That's probably why busy sites prefer nginx over anything else these days.
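
    For illustration, a minimal sketch of that load-balancing side could look roughly like this (the worker settings, backend addresses and example.com hostname are placeholders I made up, not anything specific):

    Code:
        worker_processes auto;             # one worker per CPU core

        events {
            worker_connections 1024;
        }

        http {
            upstream app_backends {
                # hypothetical backend application servers
                server 10.0.0.11:8080;
                server 10.0.0.12:8080;
            }

            server {
                listen 80;
                server_name example.com;   # placeholder hostname

                location / {
                    # requests are distributed across the upstream servers
                    proxy_pass http://app_backends;
                }
            }
        }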

    That focus also means it doesn't do caching, period, leaving that to the far-longer-developed linux FS cache.
    Ha-ha, FAIL. Nginx can take a backend's response generated by scripts (PHP, Perl, .... whatever you use for dynamic page generation) and cache that response on disk as "just a file", using configurable rules to select what to cache and how to expire it. It can then serve it as a static page from disk, avoiding costly script re-interpretation, DB calls and so on. And since it is great at serving static content, it will do a great job serving that cached version as well. In some cases this lets you get by with much less hardware than you would otherwise need.

    Even if you cache a page for just 10 seconds, re-running the script in the interpreter once per 10 seconds is much easier than running it 1000 times a second when you face 1000 requests per second from clients. It's often pointless to re-generate a page 1000 times a second if the result is known to be the same (or rarely changes). Example: you can usually cache a wiki article as a static version rather than launching a script for each and every request to re-generate the article, making a dozen calls to the DB and so on. Since the page hasn't changed, that's just a waste of resources. The caveat is that some users could see a stale version of a page right after editing it, so setting up the cache can be a bit tricky. But if you manage to use it, you can serve half the planet on fairly weak hardware as long as your bandwidth allows it.

    You see, this has nothing to do with the Linux FS cache. It's an application-level cache that avoids extra calls to the dynamic page generation scripts, and dynamic page generation is usually the major resource hog on modern sites.
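
    For example, a minimal "microcaching" setup for a PHP backend might look roughly like this (the cache path, wiki.example.com hostname and PHP-FPM socket are assumptions on my part, and 10s is just an example lifetime):

    Code:
        http {
            # on-disk cache zone; path, size and zone name are illustrative only
            fastcgi_cache_path /var/cache/nginx/micro levels=1:2
                               keys_zone=microcache:10m max_size=1g inactive=60m;

            server {
                listen 80;
                server_name wiki.example.com;                  # placeholder hostname

                location ~ \.php$ {
                    include fastcgi_params;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    fastcgi_pass unix:/run/php/php-fpm.sock;   # assumed PHP-FPM socket

                    fastcgi_cache microcache;
                    fastcgi_cache_key $scheme$request_method$host$request_uri;
                    fastcgi_cache_valid 200 10s;   # re-run the script at most once per 10s per page
                }
            }
        }

    With something like that, nginx answers repeat requests for the same page straight from disk and only bothers the interpreter once the cached copy expires.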

  3. #13
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    4,994

    Default

    What you're describing is an awkward workaround to improper site design. For the proper way to do that (generate static pages in the first place), see my previous post.

    The web server has no knowledge of when things need to be refreshed. If your site is properly designed, you can trigger the refresh properly, instead of relying on some timeout that's bound to always be wrong or non-optimal.

  4. #14

    Default

    Quote Originally Posted by curaga View Post
    What you're describing is an awkward workaround to improper site design. For the proper way to do that (generate static pages in the first place), see my previous post.
    Reminds me of this:

    "There are only two hard problems in Computer Science:cache invalidation and naming things." -- Phil Karlton

    Have you ever tried your above-mentioned strategy? I once shared your view, but there are a couple of reasons why it may not be practical in many situations:

    - you need to model dependencies accurately, if you fuck up, you're in trouble

    - if you need to customize pages based on the visitors, this is easy to do in the JIT model, you just bypass the cache, but harder to do in a statically-compiled model

    - if you have a sidebar with aggregated info on all pages, you can't pre-assemble the finished pages, or else you'll have to regenerate all pages every time the aggregated info changes; this adds complexity

    - integrating POSTs and form handling can end up being awkward, unless you mix the two approaches; this also adds complexity

    I'm not saying static compilation is stupid or impossible; it just has its own set of constraints, and in many situations it's probably not worth it compared to just slapping a simple cache in front (something like the sketch below). If you do that and have < 100 pages, you can cache them for 10 seconds and get at most 10 req/s to the backend, since each page is regenerated at most once per 10 seconds. Even a small VPS should be able to handle 10 req/s.
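
    A rough sketch of what I mean, assuming a generic backend on 127.0.0.1:8080 and a hypothetical "session" cookie that marks logged-in visitors (their requests bypass the cache, so customized pages stay dynamic):

    Code:
        http {
            # cache path and zone name are made-up examples
            proxy_cache_path /var/cache/nginx/pages keys_zone=pages:10m;

            server {
                listen 80;

                location / {
                    proxy_pass http://127.0.0.1:8080;   # assumed application backend
                    proxy_cache pages;
                    proxy_cache_key $scheme$request_method$host$request_uri;
                    proxy_cache_valid 200 10s;          # ~10 req/s worst case for ~100 pages

                    # skip the cache whenever the (hypothetical) session cookie is present
                    proxy_cache_bypass $cookie_session;
                    proxy_no_cache     $cookie_session;
                }
            }
        }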

  5. #15
    Join Date
    Jun 2012
    Posts
    293

    Default

    Quote Originally Posted by curaga View Post
    What you're describing is an awkward workaround to improper site design.
    This is debatable IMO. You see, a fast state machine written in C which does the caching can show very impressive speeds, much higher than any scripting language could ever dream of. The only alternative I see is for the script backend to handle caching and dependencies itself, but then either it still has to hand the result to a fast C frontend to serve it, or it will be much slower. Not to mention this approach only works for things written from scratch, where the programmer is explicitly aware of the requirement. Which implies it is only going to work for a few custom-designed high-load projects where you have a lot of bucks to pay for a very custom software implementation made exclusively for you and your requirements. In many cases that's far too expensive and time-consuming an option.

    For the proper way to do that (generate static pages in the first place), see my previous post.
    Bah, this world is not ideal. We're living in the real world and have to deal with it as it is, with all its shortcomings. Sure, in an ideal world you could just go and rewrite something like MediaWiki. In the real world it would take far too much time/$$$/effort to do something comparable in terms of features, etc. That's why I like nginx. It does not lecture me about living properly in an abstract ideal world; it lets me achieve my goals in the real world with reasonable effort. Maybe sometimes it's a bit hackish. But at the end of the day it's the achieved goal that counts, not "proper design". And I don't have to rewrite everything from scratch, which saves me a really decent amount of time and bucks.

    The web server has no knowledge of when things need to be refreshed. If your site is properly designed, you can trigger the refresh properly, instead of relying on some timeout that's bound to always be wrong or non-optimal.
    This is a correct statement, and taking this effect into account can be a challenge. The system administrator must be explicitly aware of this shortcoming when using this feature. However, it still works for many types of already existing web software, and does it quite well. Sure, you will have trouble caching, say, a user's shopping cart page this way, and that would cause numerous problems. But it would work like a charm for a wiki article. And if the cache lifetime is short enough, nobody will even notice that caching is in effect. Yet it will handle the "hot spots", and the server can easily survive a "slashdot effect" on modest hardware. So unlike your solution for an ideal world, nginx works in the real world, here and now, with already existing software. That's what makes it an especially valuable ally on my side.
    Last edited by 0xBADCODE; 05-29-2013 at 06:33 PM.

  6. #16
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    4,994

    Default

    Have you ever tried your above-mentioned strategy? I once shared your view, but there are a couple of reasons why it may not be practical in many situations:
    Yes, a few times. I used GNU make to handle the dependencies and rsync to move only the updated content to the server. Cron jobs could also trigger a refresh from the server.

    - if you have a sidebar with aggregated info on all pages, you can't pre-assemble the finished pages, or else you'll have to regenerate all pages every time the aggregated info changes; this adds complexity
    Make the sidebar an iframe, frame, or content fetched via JS. Then only that part needs to be refreshed. I wouldn't say it adds complexity that way; I think it'd be better than embedded PHP in the page from a maintainer's view.

    POST handling and per-visitor customization do need to be dynamic, yes.

    @0xBADCODE

    I agree. If you need to set up some huge PHP framework quickly, such caching is the best option.
