G-WAN Web Server Claims Speed Records, Features

  • #21
    Originally posted by balouba View Post
    every now and then someone discovers that apache+mod_php is way faster than nginx+fcgid+php due to mod_php not having any IPC.
    Well, "faster" is relative. I ran that setup on Ubuntu, and while trying to benchmark it, prefork ate up 10+ GB of memory. If it was fast before, it wasn't fast anymore. I'm pretty sure that for more complex PHP sites, mod_php is only suitable for a small number of users. Apache/nginx + PHP-FPM solved that with sane PHP process management.
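    For context, the "sane process management" PHP-FPM provides is configured per pool; a minimal sketch (the pool name and numbers are illustrative, not a recommendation):

    ```ini
    [www]
    pm = dynamic              ; scale workers between the limits below
    pm.max_children = 50      ; hard cap on PHP workers, which bounds memory use
    pm.start_servers = 5
    pm.min_spare_servers = 5
    pm.max_spare_servers = 10
    pm.max_requests = 500     ; recycle each worker after 500 requests to contain leaks
    ```

    Unlike prefork + mod_php, the PHP worker count is capped independently of the web server's connection handling.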


    And...
    G-WAN Web Server Claims Speed Records
    "Being a website about benchmarking, I just tell you what these claims are and do not examine them through benchmarks until enough people nag me to do it."
    I wonder how many people are also coming here mainly for the forums and not for the articles.



    • #22
      Let's see: why would anyone need G-WAN's source code? To do what? [snipped]
      I'm just tired of reading such excuses. If it's a matter of revenue, let the secret die with you; who cares?



      • #23
        Originally posted by moonlite View Post
        Having free access to the source is important if you are the least worried about the future of the software you are using. I find the comment ignorant and a bit absurd.
        Not only that, but what about security? With the code, you can verify what everything does, if that is a requirement of your organization. Having the community scrutinize the code also improves quality and security.

        This could very well be a trojan or god knows what.


        "Here, have this binary, it's really omgpwnwtf fast"

        I can understand that 25 years ago people were releasing stuff as 'freeware'; open source wasn't really something people were thinking about. "I wrote this program, it's really cool! Have it!" Calling it freeware back then made sense, since 'shareware' was about money in the end. But now? It's just ridiculous.

        What is in it for him? It's free for commercial and private use, so no money there. 'Support'-ware, i.e., pay for support? Fine, that I get. But keeping the source closed? Stupid.

        And the argument about someone stealing your work? Sue them; the GPL does not allow that. Meanwhile, I could take the G-WAN binary, use a hex editor to put my name on it, and sell it as my own.

        Sounds more like being ashamed of the crap you stole and wrote, imo.



        • #24
          I've been benchmarking G-WAN against NGINX for use in our company. G-WAN always comes out on top; at low core counts it is around 2 to 3x faster. At high core counts (8+) it doesn't scale as well, but it keeps at least a 30% lead. The downside is that it doesn't support proxying (of course we could code that, it's not difficult) or various other features. On the CPU side it eats a bit more than NGINX, and it seems to have a slow start: the first connections are served slower, and then it speeds up.

          For serving static content, it's the best we've had until now.

          I won't post benchmarks, since they are quite specific to what we need to deliver. I'll wait for the Phoronix ones, since their automated testing should produce fairer results.

          PS: We always tested with weighttp. And we don't benchmark every server around.
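          For reference, a typical weighttp run looks like this (the URL and numbers are illustrative, not our actual test parameters):

          ```shell
          # 100,000 keep-alive requests over 100 concurrent connections, 4 client threads
          weighttp -n 100000 -c 100 -t 4 -k "http://10.0.0.5:8080/index.html"
          ```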



          • #25
            The problem here isn't if it's faster, the problems are:

            - it runs only under Linux
            - it's x86 and x86_64 only
            - it's binary-only, so it may or may not keep running on future distributions if the company dies or stops updating it against newer libs
            - it lacks features, and nobody outside the company can add them
            - they aren't Microsoft. Microsoft delivers proprietary software, but Microsoft won't die tomorrow. These guys have a high chance of dying, so they aren't reliable, and a big company should NEVER rely on such "small" companies for crucial services.

            What else can I say: a smart company that looks at the future, and not only the present, will not rely on them. Only big companies can justify proprietary software (and even then not always). Small companies that deliver proprietary software are far more "dangerous"; no one should rely on them.



            • #26
              Originally posted by bulletxt View Post
              The problem here isn't if it's faster, the problems are:

              - it runs only under Linux
              I don't see that as a problem.

              - it's x86 and x86_64 only
              That could be a problem.

              - it's binary-only, so it may or may not keep running on future distributions if the company dies or stops updating it against newer libs
              That is a MAJOR problem.

              - it lacks features, and nobody outside the company can add them
              Another MAJOR problem.

              - they aren't Microsoft. Microsoft delivers proprietary software, but Microsoft won't die tomorrow. These guys have a high chance of dying, so they aren't reliable, and a big company should NEVER rely on such "small" companies for crucial services.
              Ballmer won't be around forever. They're massive, so changes won't be abrupt, but they're facing major problems now that the technology business is actually competitive. They've never had a real edge in servers; they've only had the desktop, and the desktop is fading. Slowly, but fading nevertheless.

              Of course, gwan might not be around tomorrow if whatsisname gets run over by a bus or loses interest.

              What else can I say: a smart company that looks at the future, and not only the present, will not rely on them. Only big companies can justify proprietary software (and even then not always). Small companies that deliver proprietary software are far more "dangerous"; no one should rely on them.
              Nortel was a pretty big company.... they're gone.
              RIM was pretty big... they're on their way out.
              MS was/is enormous.... they'll go away also.

              You can't put your future in ANYTHING proprietary, unless you're big enough to replicate that proprietary junk from the ground up, and even if you are that big, it still makes more sense to not have to.



              • #27
                Originally posted by droidhacker View Post
                Ballmer won't be around forever. They're massive, so changes won't be abrupt, but they're facing major problems now that the technology business is actually competitive. They've never had a real edge in servers; they've only had the desktop, and the desktop is fading. Slowly, but fading nevertheless.
                Even big companies can abruptly drop support for certain products.



                • #28
                  Originally posted by droidhacker View Post
                  I don't see that as a problem.
                  Of course, if you are using Linux.



                  • #29
                    Apache has a wide variety of supported workers/threading and configuration options. Whenever I see "BoB-httpd is faster than Apache-httpd", I always read it as "Honda is faster than General Motors". With such a wide variety of end configurations, and serving such a diverse number of usage profiles, it's almost completely impossible to create a general performance comparison. Even when you narrow the comparison criteria, platform, configuration, and requirements, the differences are still tough to reconcile.

                    My recommendation to SE/SA/CTOs is to start with Apache, optimize for your usage, and move to something else if Apache cannot meet your demands (performance/budget/resources).

                    My OLTP system uses Apache as a front end. It handles SSL negotiation, mod_security, and passes requests through to the inbound and/or human-interface tier via mod(_proxy/_jk/_weblogic). There's a bit of static serving, compression, custom error pages, and mod_rewrite, but nothing out of the ordinary. Aside from having to tweak the MPM and SSL settings to match our hardware and usage profile, I've never really encountered a performance issue with the core tech.
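                    The tier described above boils down to a handful of core directives; a hedged sketch (hostnames, ports, and tuning values are invented for illustration, not taken from our setup):

                    ```apache
                    # worker MPM sizing, tuned to hardware (values illustrative)
                    <IfModule mpm_worker_module>
                        ServerLimit        16
                        ThreadsPerChild    64
                        MaxClients         1024
                    </IfModule>

                    <VirtualHost *:443>
                        SSLEngine on
                        SSLCertificateFile    /etc/ssl/certs/example.crt
                        SSLCertificateKeyFile /etc/ssl/private/example.key

                        # hand dynamic requests to the application tier
                        ProxyPass        /app http://app-tier.internal:8080/app
                        ProxyPassReverse /app http://app-tier.internal:8080/app
                    </VirtualHost>
                    ```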

                    I feel left out. Are there HTTP shops out there that are exceeding the performance limits of Apache's core tech? If you don't mind me asking: how? (No, really, I am genuinely curious.)

                    F



                    • #30
                      Phew, what marketing brainwashing!

                      You see, nginx has no trouble saturating a 1 Gbps link on common desktop hardware, and would do 10 Gbps on decent server hardware, leaving plenty of resources for other tasks, provided that other I/O like HDDs can keep up with such speeds. So in fact, unless you do things horribly wrong, in the real world you end up I/O-limited anyway, and there is no way to gain more without a further hardware upgrade.

                      Not to mention that nginx features a very cool cache system, which can save the day if you're slashdotted. Serving a static copy is almost instant; running a PHP (or whatever) script is not. This way, average cheap hardware can easily withstand the Slashdot effect.
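                      That cache is only a few directives in nginx; a minimal sketch using the FastCGI cache (paths, zone name, socket, and timings are illustrative):

                      ```nginx
                      fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:10m
                                         max_size=1g inactive=10m;

                      server {
                          listen 80;

                          location ~ \.php$ {
                              include fastcgi_params;
                              fastcgi_pass unix:/var/run/php-fpm.sock;

                              fastcgi_cache       phpcache;
                              fastcgi_cache_key   $scheme$request_method$host$request_uri;
                              fastcgi_cache_valid 200 1m;   # serve the cached copy for up to a minute
                              fastcgi_cache_use_stale error timeout updating;
                          }
                      }
                      ```

                      With `fastcgi_cache_use_stale`, a stale static copy is served while the backend is overloaded or regenerating, which is exactly what helps under a traffic spike.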

                      And yes, it comes with source, so I have it everywhere, right down to my ARM-based NAS and MIPS-based router, where low resource consumption counts ten times as much as on x86.

                      P.S. The lack of source implies vendor lock-in and the inability to choose any OS or CPU arch except those "approved" by the people who build the blob. Plus a ton of other artificial restrictions. That's just stupid. FAIL.
                      Last edited by 0xBADCODE; 20 July 2012, 08:54 PM.

