NGINX Might Be Included With Ubuntu Server ISOs


  • #11
    What you're describing is an awkward workaround to improper site design. For the proper way to do that (generate static pages in the first place), see my previous post.

    The web server has no knowledge of when things need to be refreshed. If your site is properly designed, you can trigger the refresh at the right moment, instead of relying on some timeout that's bound to always be wrong or non-optimal.



    • #12
      Originally posted by curaga:
      What you're describing is an awkward workaround to improper site design. For the proper way to do that (generate static pages in the first place), see my previous post.
      Reminds me of this:

      "There are only two hard problems in Computer Science:cache invalidation and naming things." -- Phil Karlton

      Have you ever tried your above-mentioned strategy? I once shared your view, but there are a couple of reasons why it may not be practical in many situations:

      - you need to model dependencies accurately; if you fuck up, you're in trouble

      - if you need to customize pages per visitor, that's easy in the JIT model (you just bypass the cache) but harder in a statically-compiled model

      - if you have a sidebar with aggregated info on all pages, you can't fully pre-assemble the finished pages, or else you'll have to regenerate every page each time the aggregated info changes; this adds complexity

      - integrating POSTs and form handling can end up being awkward unless you mix the two approaches; this also adds complexity

      I'm not saying static compilation is stupid or impossible; it just has its own set of constraints, and in many situations it's probably not worth it compared to just slapping a simple cache in front. If you do that and have fewer than 100 pages, you can cache each of them for 10 seconds and get at most 10 req/s to the backend. Even a small VPS should be able to handle 10 req/s.
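      Roughly what such a 10-second micro-cache could look like in nginx (the upstream address, cache path and zone name below are just placeholders for illustration, not anyone's actual setup):

          # hypothetical cache storage: 10 MB of keys, unused entries dropped after 1 minute
          proxy_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=micro:10m
                           max_size=100m inactive=1m;

          server {
              listen 80;
              server_name example.com;

              location / {
                  proxy_pass            http://127.0.0.1:8080;  # the dynamic backend (PHP or whatever)
                  proxy_cache           micro;
                  proxy_cache_valid     200 301 10s;            # keep successful responses for 10 seconds
                  proxy_cache_use_stale error timeout updating; # serve the stale copy while refreshing
                  add_header            X-Cache-Status $upstream_cache_status;
              }
          }

      With proxy_cache_use_stale updating, only one request per page actually reaches the backend during a refresh; everyone else gets the slightly stale copy.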



      • #13
        Originally posted by curaga:
        What you're describing is an awkward workaround to improper site design.
        This is debatable IMO. You see, a fast state machine written in C that does the caching can show very impressive speeds, much higher than any scripting language could ever dream of. The only alternative I see is for the script backend to bother handling caching and dependencies itself, but then it either still has to interact with a fast C frontend to serve the result, or it will be much slower. Not to mention this approach only works for things written from scratch, where the programmer is explicitly aware of the requirement. Which implies it's only going to work for a few custom-designed high-load projects where you have a lot of bucks to pay for a very custom software implementation, made exclusively for you and your requirements. That is way too expensive and time-consuming an option in many cases.

        For the proper way to do that (generate static pages in the first place), see my previous post.
        Bah, this world is not ideal. We're living in the real world and have to deal with it. As it is. With all its shortcomings. Sure, in an ideal world you can just go and rewrite something like MediaWiki. In the real world it would take way too much time/$$$/effort to do something comparable in terms of features, etc. That's why I like nginx. It doesn't lecture me on how to live properly in an abstract ideal world. It allows me to achieve my goals in the real world with reasonable effort. Maybe sometimes it's a bit hackish or something. But at the end of the day it's the achieved goal, and not "proper design", that counts. And I don't have to rewrite everything from scratch. Which saves me a really decent amount of time and bucks.

        The web server has no knowledge of when things need to be refreshed. If your site is properly designed, you can trigger the refresh at the right moment, instead of relying on some timeout that's bound to always be wrong or non-optimal.
        This is a correct statement, and taking this effect into account can be a challenge. The system administrator must be explicitly aware of this shortcoming when using the feature. However, it still works for many types of already existing web software, and does it quite well. Sure, you will have trouble caching, say, a user's shopping cart page this way, and it would cause numerous problems. But it would work like a charm for a wiki article. And if the caching is short enough, nobody would even notice that caching is in effect at all. Yet it takes care of the "hot spots", so the server can easily handle a "slashdot effect" on modest hardware. So unlike your solution for an ideal world, Nginx works in the real world, here and now. With already existing software. That's what makes it an especially valuable ally on my side.
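        As a rough sketch of what I mean, here is how such short caching with a bypass for "personal" pages could be wired up in nginx (the cookie names, upstream address, paths and zone name are made up for the example):

            proxy_cache_path /var/cache/nginx/pages keys_zone=pages:10m inactive=5m;

            # any request carrying a session or cart cookie skips the cache entirely
            map $http_cookie $skip_cache {
                default                    0;
                "~*wiki_session|cart_id"   1;
            }

            server {
                listen 80;
                server_name wiki.example.com;

                location / {
                    proxy_pass          http://127.0.0.1:8080;
                    proxy_cache         pages;
                    proxy_cache_valid   200 10s;      # short enough that nobody notices the staleness
                    proxy_cache_bypass  $skip_cache;  # logged-in users / shopping carts get fresh pages
                    proxy_no_cache      $skip_cache;  # ...and their responses are never stored
                }
            }

        Anonymous readers hammering the same wiki article all get the cached copy, so a "slashdot effect" turns into roughly one backend request per article every 10 seconds.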
        Last edited by 0xBADCODE; 29 May 2013, 06:33 PM.



        • #14
          Have you ever tried your above-mentioned strategy? I once shared your view, but there are a couple of reasons why it may not be practical in many situations:
          Yes, a few times. I used GNU make to handle the dependencies and rsync to only move updated content to the server. Cron jobs could also trigger a refresh from the server.

          - if you have a sidebar with aggregated info on all pages, you can't fully pre-assemble the finished pages, or else you'll have to regenerate every page each time the aggregated info changes; this adds complexity
          Make the sidebar an iframe, frame, or content fetched via JS. Then only that part needs to be refreshed. I wouldn't say it adds complexity that way; I think it'd be better than embedded PHP in the page from a maintainer's view.
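          On the serving side, the split can be as simple as giving that one fragment its own expiry; something along these lines (the paths and timings are only illustrative):

              server {
                  listen 80;
                  server_name example.com;
                  root /var/www/static;            # everything here is generated ahead of time

                  location / {
                      expires 1h;                  # full pages only change when they're regenerated
                  }

                  location = /fragments/sidebar.html {
                      expires 30s;                 # the aggregated sidebar refreshes on its own schedule
                  }
              }

          The pages stay fully static; only the iframe/JS-fetched fragment carries the short lifetime.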

          POST and per-visitor customization do need to be dynamic, yes.

          @0xBADCODE

          I agree. If you need to set up some huge PHP framework quickly, such caching is the best option.

