Firefox 125 Adds AV1 Support In Encrypted Media Extensions, Other New Features

  • #21
    The killer feature for me is the built-in translation. Perhaps in Firefox 126 or 127.
    I read a note from April 10 saying that the language model is already being trained.

  • #22
    Originally posted by cassiofb-dev View Post
    AV1 encrypted media? Can someone explain how it works and where it is useful?
    The TL;DR answer is that it's an AV1 support update for the Encrypted Media Extensions in the W3C standards, which means DRM. It's basically a three-way development between Google, Microsoft, and Netflix. This applies more to the Android OS than it does to any regular Linux distro.

  • #23
    Originally posted by dlq84 View Post

    4 bytes out of a 32-byte SHA256 hash, literally nothing is the answer.
    Google is the one that created the initial series of hashes. They know which sites match which hashes. If a 4-byte hash matches, they know which one it is, and they can see the IP of the person who sent it. They know who is visiting what sites.

  • #24
    Originally posted by Daktyl198 View Post

    Google is the one that created the initial series of hashes. They know which sites match which hashes. If a 4-byte hash matches, they know which one it is, and they can see the IP of the person who sent it. They know who is visiting what sites.
    The browser hashes the domain AND path (but not the protocol or query parameters), and a cryptographically secure hash is fundamentally designed to spread any change as widely as possible throughout the resulting hash.

    ...and that's not counting things like https://www.fanfiction.net/s/2160751/1/Uhhhhh-ok where the URL may or may not have the post slug portion, depending on which link you clicked to get to it.

    Change the one digit that indicates the chapter and you get a completely different SHA256 hash, so the first four bytes of the SHA256 hash will match sites from all across the Internet and will have no association with a particular site.

    ...and then they layered "Fastly sees who's sending the requests but doesn't have the database. Google has the database but has no idea which requests come from the same user." on top of that to prevent Google from trying to use statistical clustering to narrow down which site you're on.
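
    To make the truncation concrete, here's a minimal Python sketch of the idea being described: hash the host plus path with the scheme and query string stripped, then keep only the first 4 bytes. This is an illustration of the concept, not Google's exact Safe Browsing canonicalization algorithm.

    ```python
    # Illustration only: not the exact Safe Browsing canonicalization algorithm.
    import hashlib
    from urllib.parse import urlsplit

    def hash_prefix(url: str) -> str:
        parts = urlsplit(url)
        canonical = parts.netloc + parts.path      # drop scheme and query parameters
        digest = hashlib.sha256(canonical.encode("utf-8")).digest()
        return digest[:4].hex()                    # 4-byte (32-bit) prefix

    # Changing the single chapter digit yields a completely unrelated prefix:
    print(hash_prefix("https://www.fanfiction.net/s/2160751/1/Uhhhhh-ok"))
    print(hash_prefix("https://www.fanfiction.net/s/2160751/2/Uhhhhh-ok"))
    ```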

  • #25
    Anux Daktyl198 SHA256 uses 256-bit hashes, which have 2^256 possible combinations. If you send 4 bytes (32 bits), then all you've done is narrow down the possible options to 2^224, which is still completely infeasible to brute force. And that's not considering hash collisions.
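
    As a quick sanity check on those numbers (plain arithmetic, nothing Safe-Browsing-specific):

    ```python
    # A 256-bit hash space with a 32-bit (4-byte) prefix revealed.
    total_hashes = 2 ** 256
    prefixes = 2 ** 32
    per_prefix = total_hashes // prefixes   # full hashes sharing each prefix
    assert per_prefix == 2 ** 224
    print(f"{per_prefix:.3e} possible full hashes per 4-byte prefix")  # ~2.696e+67
    ```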

  • #26
    Originally posted by EphemeralEft View Post
    Anux Daktyl198 SHA256 uses 256-bit hashes, which have 2^256 possible combinations. If you send 4 bytes (32 bits), then all you've done is narrow down the possible options to 2^224, which is still completely infeasible to brute force. And that's not considering hash collisions.
    If you have 2 URLs, hash them, and trim to 4 bytes, I can pretty easily see which URL corresponds to which 4-byte hash. As long as there are not too many URLs, you can reconstruct this stuff.
    But I don't know how many are in the list.

  • #27
    Originally posted by Anux View Post
    If you have 2 URLs, hash them, and trim to 4 bytes, I can pretty easily see which URL corresponds to which 4-byte hash. As long as there are not too many URLs, you can reconstruct this stuff.
    But I don't know how many are in the list.
    But the point is that each thread on Phoronix would get a hash completely unrelated to the others on the same site, and the same holds true for every URL on the Internet that gets submitted to Google. Truncating the hash is specifically designed to increase the probability of hash collisions, so that every query corresponds to many, many unrelated URLs that have been submitted to Google. And on top of that, the partnership with Fastly is intended to make it so Google just sees a stream of requests without knowing which ones correspond to the same user.

    It's not a question of how many URLs are in Google's database but, rather, how many URLs are being queried, globally.
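
    A back-of-envelope way to see that: with a 32-bit prefix space, the average number of distinct queried URLs landing on any given prefix grows linearly with the size of the globally queried set. The corpus sizes below are made-up orders of magnitude, not real Safe Browsing statistics.

    ```python
    # Hypothetical corpus sizes; real Safe Browsing query volumes are unknown here.
    PREFIX_SPACE = 2 ** 32

    for n_urls in (10**6, 10**9, 10**12):
        avg = n_urls / PREFIX_SPACE
        print(f"{n_urls:>16,} URLs -> ~{avg:,.4f} URLs per prefix on average")
    ```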

  • #28
    Originally posted by ssokolow View Post
    But the point is that each thread on Phoronix would get a hash completely unrelated to the others on the same site, and the same holds true for every URL on the Internet that gets submitted to Google.
    That sounds unbelievable. How is Google supposed to check a URL that has a random ID in it? I'm pretty confident they only check domain URLs and maybe subdomains (this should be easy to test by visiting a blocked site and just appending random stuff to the URL).

  • #29
    Originally posted by Anux View Post
    That sounds unbelievable. How is Google supposed to check a URL that has a random ID in it? I'm pretty confident they only check domain URLs and maybe subdomains (this should be easy to test by visiting a blocked site and just appending random stuff to the URL).
    1. The algorithm they set out for generating the hashes says to strip the protocol (i.e. http vs. https) and the query parameters.
    2. Google doesn't "check" anything. The browser requests an intentionally over-broad query, then uses the full hash locally to check whether any results it received were false positives. It's similar to how Bloom filters are used. Querying by hash instead of bare URL causes the results, which may or may not be false positives, to be evenly distributed across the entire space of valid URLs instead of clustered within a single domain.
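
    A rough sketch of that client-side flow, where fetch_matching_hashes() is a hypothetical stand-in for the network round trip (not a real API):

    ```python
    import hashlib

    def url_is_flagged(canonical_url: str, fetch_matching_hashes) -> bool:
        """Send only a 4-byte prefix; weed out false positives locally."""
        full_hash = hashlib.sha256(canonical_url.encode("utf-8")).digest()
        prefix = full_hash[:4]
        candidates = fetch_matching_hashes(prefix)  # intentionally over-broad answer
        return full_hash in candidates              # server never sees the full hash

    # Toy in-memory "server" standing in for the real service (hypothetical data):
    blocklist = {hashlib.sha256(b"evil.example/malware").digest()}
    fake_server = lambda p: {h for h in blocklist if h.startswith(p)}
    print(url_is_flagged("evil.example/malware", fake_server))   # True
    print(url_is_flagged("phoronix.com/forums", fake_server))    # False
    ```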
