> That focus also means it doesn't do caching, period, leaving that to the far-longer-developed Linux FS cache.

Ha-ha, FAIL. Nginx can take the backend's response generated by scripts (PHP, Perl, whatever you use for dynamic page generation) and cache that response on disk as "just a file", using configurable rules to decide what to cache and when to expire it. It can then serve it as a static page straight from disk, avoiding costly script re-interpretation, DB calls and so on. And since nginx is great at serving static content, it does a great job serving that cached version as well. In some cases this lets you use far less hardware than you would otherwise need.

Even if you cache a page for just 10 seconds, re-running the script in the interpreter once per 10 seconds is much cheaper than running it 1000 times a second when you're facing 1000 requests per second from clients. It's often pointless to regenerate a page 1000 times a second if the result is known to be the same (or rarely changes). Example: you can usually cache a wiki article as a static version instead of launching a script for each and every request, doing a dozen DB calls and so on, just to regenerate an article that hasn't changed. That's pure waste of resources. The caveat is that some users could see a stale version of a page right after editing it, so setting up the cache can be a bit tricky. But if you manage to use it, you can serve half the planet on fairly weak hardware, as long as your bandwidth allows it.

So you see, this has nothing to do with the Linux FS cache. It's an application-level cache that avoids extra calls to the dynamic page generation scripts, and dynamic page generation is usually the major resource hog on modern sites.
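To make this concrete, here's a rough sketch of that "cache for 10 seconds" setup in nginx config. This is an illustration, not a drop-in config: it assumes a PHP-FPM backend on 127.0.0.1:9000, and the zone name "microcache", the cache path, and the session cookie name are all made up for the example.

```nginx
# Cache storage: lookup keys in 10 MB of shared memory ("microcache" is an
# arbitrary name), cached response files written under /var/cache/nginx.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m
                   max_size=1g inactive=60s;

server {
    listen 80;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;   # assumed PHP-FPM backend
        include fastcgi_params;

        fastcgi_cache microcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

        # The rule that selects what to cache and how to expire it:
        # keep successful responses for 10 seconds, so the backend runs
        # at most once per URL per 10 s regardless of request rate.
        fastcgi_cache_valid 200 10s;

        # One way around the stale-page-after-edit caveat: skip the cache
        # for logged-in users (assumed cookie name "session"), so editors
        # always see a freshly generated page.
        fastcgi_cache_bypass $cookie_session;
        fastcgi_no_cache $cookie_session;
    }
}
```

With this in place, anonymous traffic is served from the on-disk cache as static files, and only one request every 10 seconds per URL actually reaches the interpreter and the DB.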