HTTP/2 "Rapid Reset" DDoS Attack Disclosed By Google, Cloudflare & AWS


  • #11
    Originally posted by bug77 View Post

    On the other hand, besides dev environments and stress test setups, where do you run your servers without a rate limit?
    Well, having rate limits is also one way people can DDoS you, so it's no panacea either.

    Comment


    • #12
      "We had problems because we hadn't done rate limiting properly" vs. "we helped the global internet community to deal with a novel threat".

      Comment


      • #13
        Originally posted by F.Ultra View Post

        Well, having rate limits is also one way people can DDoS you, so it's no panacea either.
        Obviously, but your ops team should be equipped to deal with that. As opposed to whatever random way your web server decides to crash in the absence of those limits.

        Comment


        • #14
          Originally posted by mb_q View Post
          "We had problems because we hadn't done rate limiting properly" vs. "we helped the global internet community to deal with a novel threat".
          Fwiw, both can be true at the same time.

          Comment


          • #15
            Originally posted by bug77 View Post

            Obviously, but your ops team should be equipped to deal with that. As opposed to whatever random way your web server decides to crash in the absence of those limits.
            Well, the issue is that both outcomes achieve exactly what the attacker is after: you can no longer serve content to actual users, only to the bots. There really is no way to win here unless you have Google-scale bandwidth and machines.

            Comment


            • #16
              Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away. And since there's a lot that could be taken away from HTTP/2...

              Design deficiencies come about because designers demonise one aspect of an old protocol so much that they are overly influenced when designing the replacement and end up blinded by it.
              While head-of-line blocking in HTTP/1.1 certainly is one issue, you can just open a few more TCP connections. Those too can be an issue when all of them eventually HOL-block, but the solution isn't to demonise HOL, declare a single TCP socket the ultimate solution, and implement a complex stream multiplexer three (OSI) layers above. If all five of your HTTP/1.1 connections manage to HOL-block and you are denied opening a sixth TCP connection, maybe that's a sign the client is unwelcome.
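
              A minimal sketch of that per-client connection cap in Go, assuming a plain net/http server: a net.Listener wrapper that refuses a client's sixth concurrent TCP connection. The cap of 5 and the port are invented for the example, not taken from the thread or the disclosure.

              package main

              import (
                  "fmt"
                  "net"
                  "net/http"
                  "sync"
              )

              // maxPerIP mirrors the "five HTTP/1.1 connections" above; an invented default.
              const maxPerIP = 5

              type limitedListener struct {
                  net.Listener
                  mu     sync.Mutex
                  counts map[string]int
              }

              func (l *limitedListener) Accept() (net.Conn, error) {
                  for {
                      c, err := l.Listener.Accept()
                      if err != nil {
                          return nil, err
                      }
                      ip, _, _ := net.SplitHostPort(c.RemoteAddr().String())
                      l.mu.Lock()
                      if l.counts[ip] >= maxPerIP {
                          l.mu.Unlock()
                          c.Close() // the sixth connection is simply refused
                          continue
                      }
                      l.counts[ip]++
                      l.mu.Unlock()
                      return &trackedConn{Conn: c, ip: ip, l: l}, nil
                  }
              }

              // trackedConn decrements the per-IP count exactly once on close.
              type trackedConn struct {
                  net.Conn
                  ip   string
                  l    *limitedListener
                  once sync.Once
              }

              func (c *trackedConn) Close() error {
                  c.once.Do(func() {
                      c.l.mu.Lock()
                      c.l.counts[c.ip]--
                      c.l.mu.Unlock()
                  })
                  return c.Conn.Close()
              }

              func main() {
                  inner, err := net.Listen("tcp", ":8080")
                  if err != nil {
                      panic(err)
                  }
                  ln := &limitedListener{Listener: inner, counts: map[string]int{}}
                  http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                      fmt.Fprintln(w, "hello")
                  }))
              }

              Note this only bounds whole connections; Rapid Reset churns streams inside a single HTTP/2 connection, so a per-connection stream/reset budget is still needed on top of it.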

              Comment


              • #17
                Add Microsoft to the "big cloud providers": their October cumulative patch addresses this flaw as well. No doubt just about every web server that implements HTTP/2 needs some kind of mitigation, whether that means changing defaults or shipping new logic.
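
                For illustration, the "change defaults" half of that looks roughly like this in Go with the golang.org/x/net/http2 package; the stream cap and cert paths are placeholders. On its own it is not enough, because Rapid Reset stays under any concurrent-stream cap by cancelling each stream immediately, which is why the real patches also track how fast streams are reset.

                package main

                import (
                    "fmt"
                    "net/http"

                    "golang.org/x/net/http2"
                )

                func main() {
                    srv := &http.Server{
                        Addr: ":8443",
                        Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                            fmt.Fprintln(w, "proto:", r.Proto)
                        }),
                    }
                    // Bound how many streams one connection may hold open at once;
                    // 100 is an illustrative value, not a recommendation.
                    if err := http2.ConfigureServer(srv, &http2.Server{
                        MaxConcurrentStreams: 100,
                    }); err != nil {
                        panic(err)
                    }
                    // cert.pem/key.pem are placeholders; browsers only speak HTTP/2 over TLS.
                    panic(srv.ListenAndServeTLS("cert.pem", "key.pem"))
                }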

                Comment


                • #18
                  Originally posted by F.Ultra View Post
                  Well, the issue is that both outcomes achieve exactly what the attacker is after: you can no longer serve content to actual users, only to the bots. There really is no way to win here unless you have Google-scale bandwidth and machines.
                  Rate-limiting the bot's connection will only improve bandwidth for real users.

                  Comment


                  • #19
                    Huh, web servers like nginx have supported various rate limits for years, and every website admin worth their salt knows how to set this up in a minute.
                    I also don't see the problem. You want fast protocols, but then complain that they are too fast?!

                    I set up limits on all my servers years ago. As for requests/s: without a limit, any user with a low RTT gains an advantage over everyone else anyway, even without this "trick".
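
                    For anyone who wants the application-level equivalent of nginx's limit_req in their own server, a minimal per-IP token-bucket sketch in Go using golang.org/x/time/rate; the rate, burst and port are invented for the example.

                    package main

                    import (
                        "net"
                        "net/http"
                        "sync"

                        "golang.org/x/time/rate"
                    )

                    var (
                        mu       sync.Mutex
                        limiters = map[string]*rate.Limiter{}
                    )

                    // limiterFor returns the token bucket for one client IP, creating it on first use.
                    func limiterFor(ip string) *rate.Limiter {
                        mu.Lock()
                        defer mu.Unlock()
                        l, ok := limiters[ip]
                        if !ok {
                            l = rate.NewLimiter(rate.Limit(50), 100) // 50 req/s, burst of 100
                            limiters[ip] = l
                        }
                        return l
                    }

                    func rateLimit(next http.Handler) http.Handler {
                        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                            ip, _, _ := net.SplitHostPort(r.RemoteAddr)
                            if !limiterFor(ip).Allow() {
                                http.Error(w, "too many requests", http.StatusTooManyRequests)
                                return
                            }
                            next.ServeHTTP(w, r)
                        })
                    }

                    func main() {
                        ok := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                            w.Write([]byte("ok\n"))
                        })
                        http.ListenAndServe(":8080", rateLimit(ok))
                    }

                    A production version would also evict idle limiters and, behind a proxy, key on the forwarded client address rather than r.RemoteAddr.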

                    Comment


                    • #20
                      HTTP/2 is dead, long live HTTP/3.

                      Seriously, I always thought HTTP/2 was a broken-by-design protocol.

                      Sorry, adding / always seemed stupid to me. That was always a red flag.

                      Someday, something like a mix of IPFS, Tor, PSYNC2+, Tox, BitTorrent, Kademlia and mesh networks will be used to power the Internet, along with the best features of other protocols. Even Matrix has deficiencies, but it's a bit better than the outdated IRCv3+ protocol.

                      Comment
