A New Internet Draft Of HTTP 2.0 Published

  • A New Internet Draft Of HTTP 2.0 Published

    Phoronix: A New Internet Draft Of HTTP 2.0 Published

    The Hypertext Transfer Protocol Bis Working Group of the IETF has published a new Internet draft of the HTTP 2.0 protocol...


  • #2
    So is this based on Google SPDY?

    • #3
      It also introduces unsolicited push of representations from servers to clients.
      That's like heaven for black hats and script kiddies around the world, including but not limited to organisations like the NSA.

      In other words: how retarded are the people that design something like that? A 5-year-old knows that is just wrong.

      Also, I really, really hope that someone manages to get the people responsible for this onto a data-limited plan and sets a site that transmits a few gibibytes per page request as their start page...
      Last edited by Detructor; 09 July 2013, 12:59 PM.

      • #4
        Originally posted by Detructor View Post
        That's like heaven for black hats and script kiddies around the world, including but not limited to organisations like the NSA.

        In other words: how retarded are the people that design something like that? A 5-year-old knows that is just wrong.

        Also, I really, really hope that someone manages to get the people responsible for this onto a data-limited plan and sets a site that transmits a few gibibytes per page request as their start page...
        Would you like to explain further what this is and why it is, or even point me in the direction to study about it.

        • #5
          Originally posted by AJenbo View Post
          So is this based on Google SPDY?
          Basically yes, but while Google's solution makes webpages load faster, HTTP 2.0 makes pages load even faster, before you hit Enter.

          • #6
            Originally posted by AJenbo View Post
            So is this based on Google SPDY?
            Somewhat.
            From the abstract, it compresses the headers, multiplexes the connection, and preemptively sends data to the client (called Server Push, implemented with PUSH_PROMISE frames; this requires the use of GET).
            SPDY compresses the headers, multiplexes the connection, preemptively sends data to the client, and prioritizes requests (http://tools.ietf.org/html/draft-mbe...ttpbis-spdy-00).
            So, at first glance, the only difference is the prioritization of requests... BUT section 5.3 of HTTP/2.0 talks about stream priority, which addresses this issue. The odd thing to me is that SPDY uses only 3 bits to represent the various priorities while HTTP/2.0 uses a full 31 bits.

            IOW, to a layman, they look pretty damn similar.
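The 3-bit vs 31-bit gap mentioned above is easy to put in numbers. A trivial sketch (plain Python, just arithmetic derived from the two field widths):

```python
# SPDY carries stream priority in a 3-bit field; the HTTP/2.0 draft
# (section 5.3) uses a 31-bit value instead.
spdy_priority_levels = 2 ** 3      # 8 distinct priority levels
http2_priority_levels = 2 ** 31    # 2147483648 distinct levels

print(spdy_priority_levels)   # → 8
print(http2_priority_levels)  # → 2147483648
```

So the change is not about expressing more nuance a human would ever pick by hand; it mostly gives intermediaries room to rescale priorities without collisions.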

            • #7
              Originally posted by mark45 View Post
              Basically yes, but while Google's solution makes webpages load faster, HTTP2.0 makes pages load even faster, before you hit Enter.
              How would that work? The web server doesn't know you are going to connect before you... connect. Or is it more like internal links on a site? We should already be able to handle that using JS to prefetch the content.

              • #8
                Originally posted by Detructor View Post
                That's like heaven for black hats and script kiddies around the world, including but not limited to organisations like the NSA.

                In other words: how retarded are the people that design something like that? A 5-year-old knows that is just wrong.

                Also, I really, really hope that someone manages to get the people responsible for this onto a data-limited plan and sets a site that transmits a few gibibytes per page request as their start page...
                If your web client does not want the data from the server it doesn't have to look at it. It's no more dangerous than the requested data responses.

                As for extra data blowing bandwidth caps... what is the difference between normal HTTP/1.1 and this? Do you review and approve each URL request before it transmits? Since Firebug shows most sites making 20-30 requests, I doubt that you do. Any current web site using HTTP/1.1 could already load gigabytes of images or JavaScript data, and most people would never realize it was happening.
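To put the "20-30 requests" point in concrete terms, here is a rough sketch (Python stdlib only; the tag/attribute heuristic and the sample page are mine, not from the thread) that tallies how many subresource fetches a single HTML page triggers without the user approving any of them:

```python
from html.parser import HTMLParser

class ResourceCounter(HTMLParser):
    """Count tags that trigger extra HTTP requests (rough heuristic)."""
    FETCHING = {"img": "src", "script": "src", "link": "href", "iframe": "src"}

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attr = self.FETCHING.get(tag)
        if attr:
            for name, value in attrs:
                if name == attr and value:
                    self.urls.append(value)

page = """<html><head><link href="a.css"><script src="a.js"></script></head>
<body><img src="1.png"><img src="2.png"><iframe src="ad.html"></iframe></body></html>"""

counter = ResourceCounter()
counter.feed(page)
print(len(counter.urls))  # → 5 subresource requests from one page load
```

Even this toy page fires five unreviewed requests; a real page with ad networks and CDNs easily reaches the 20-30 Firebug shows.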

                • #9
                  Originally posted by Zan Lynx View Post
                  If your web client does not want the data from the server it doesn't have to look at it. It's no more dangerous than the requested data responses.

                  As for extra data blowing bandwidth caps...what is the difference between normal HTTP 1.1 and this? Do you review and approve each URL request before it transmits? Since Firebug shows most sites with 20-30 requests, I doubt that you do. Any current web site using HTTP/1.1 could already load gigabytes of images or Javascript data and most people would never realize it was happening.
                  I don't have to review every link: I can outright disable all images on a site and download only the specific ones I want.

                  I do it all the time when using tethering.

                  • #10
                    With a download you started yourself, you can press "Stop". With a push download in the background, will there even be any indication that a transfer is ongoing, let alone a way to stop it? That's the concern.
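For what it's worth, the protocol does define a way to stop an unwanted push: the client can refuse a PUSH_PROMISE by resetting the promised stream with a RST_STREAM frame carrying the CANCEL error code. A rough sketch of that frame's wire layout (using the 9-byte frame header and the constants from the finalized spec, RFC 7540; drafts from this era differed in details, and the helper name here is made up):

```python
import struct

FRAME_TYPE_RST_STREAM = 0x3  # RST_STREAM frame type code
ERROR_CANCEL = 0x8           # "the stream is no longer needed"

def rst_stream_frame(stream_id: int) -> bytes:
    """Build a RST_STREAM frame refusing the given (pushed) stream."""
    payload = struct.pack(">I", ERROR_CANCEL)        # 4-byte error code
    length = struct.pack(">I", len(payload))[1:]     # 24-bit frame length
    type_and_flags = struct.pack(">BB", FRAME_TYPE_RST_STREAM, 0)
    stream = struct.pack(">I", stream_id & 0x7FFFFFFF)  # R bit + 31-bit id
    return length + type_and_flags + stream + payload

frame = rst_stream_frame(2)  # server-pushed streams use even stream ids
print(len(frame))  # → 13 (9-byte frame header + 4-byte payload)
```

Whether the browser UI surfaces any of this to the user is a separate question, and that part of the concern stands.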
