ATI R600g Gains Mip-Map, Face Culling Support


  • Originally posted by droidhacker View Post
    Mpeg2 doesn't take too much CPU to decode. I take it the intention is ultimately to take on h.264 and similar?
    Edit: I wish I knew more about video decoding.... got a couple of engineering degrees here.
    Yes, that's the plan.
    Well, I started reading up on Gallium and afterwards started learning about MPEG-2 decoding. I have had some lessons at university about decoding, though.
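For context on what an MPEG-2 decoder has to do: the pipeline is roughly entropy (VLC) decoding, inverse quantisation, inverse DCT, and motion compensation. Below is a minimal, illustration-only sketch of the reference 8x8 inverse DCT — the kind of data-parallel stage a shader-assisted decoder typically offloads to the GPU. Pure Python, not how a real decoder (or the Gallium state tracker) implements it:

```python
import math

def idct_8x8(F):
    """Reference 2-D inverse DCT for one 8x8 coefficient block, per the
    textbook MPEG-2 definition. Real decoders use a fast factored form
    (or run this stage on the GPU), but the maths is the same."""
    def c(k):
        # Normalisation factor: 1/sqrt(2) for the DC term, 1 otherwise.
        return 1.0 / math.sqrt(2.0) if k == 0 else 1.0

    out = [[0.0] * 8 for _ in range(8)]
    for y in range(8):
        for x in range(8):
            s = 0.0
            for v in range(8):
                for u in range(8):
                    s += (c(u) * c(v) * F[v][u]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[y][x] = s / 4.0
    return out
```

A DC-only block (all coefficients zero except F[0][0]) reconstructs to a flat block, which is a handy sanity check for any IDCT implementation.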



    • Just tried out Doom 3 on r600g.
      http://imgur.com/fFxWD.jpg

      That's the menu, and I can't get past it because I can't see anything, but at least the cursor and console work.



      • @bridgman

        About the last 5% missing from the spec that's needed to reach the full speed of Catalyst:

        I think I can imagine what kind of stuff this is.

        There are some patented techniques for saving the calculation of triangles in hidden areas behind other geometry, like the PowerVR KYRO's "tile-based deferred rendering".

        I think the R600-R800 do have some secret "hidden surface determination / occlusion culling (OC) / visible surface determination (VSD)" part,

        and maybe AMD can't open-source this part... :-(



        • We may end up opening the stuff we held back as well, it was mostly things where we weren't sure about IP ownership or about the relationship to in-flight patent applications. I expect we'll end up releasing some of what we held back later anyways, once we have a chance to spend more time on it.

          Alex pushed out some r5xx docco revisions quite recently, adding some things we had initially held back but decided were OK to release after spending more time with them.

          If everyone is real good we'll show you how to turn off the "corrupt data and intermittently lock up" bit



          • Originally posted by bridgman View Post
            We may end up opening the stuff we held back as well, it was mostly things where we weren't sure about IP ownership or about the relationship to in-flight patent applications. I expect we'll end up releasing some of what we held back later anyways, once we have a chance to spend more time on it.

            Alex pushed out some r5xx docco revisions quite recently, adding some things we had initially held back but decided were OK to release after spending more time with them.

            If everyone is real good we'll show you how to turn off the "corrupt data and intermittently lock up" bit
            Why not tell us an example of a held-back technique?

            I have tried to make an example.



            • Originally posted by Qaridarium View Post
              Why not tell us an example of a held-back technique?

              I have tried to make an example.
              According to pdftotext+diff, in 1.5 they added information on the storage format of MSAA buffers, on how to do AA resolve, fast color/z clears, and how to use multiple render targets.

              It seems these are the vaguely unique features exposed:
              - Using the Z buffer hardware (idle due to fast z clears) to clear half of the color buffer
              - The exact swizzling/layout of the MSAA buffer
              - The ability to apply the MSAA sampling offsets without MSAA to achieve rotated grid supersampling (and perhaps also DX9 vs GL rasterization rules, even though the doc doesn't mention that)
              - Hardware gamma correct AA resolve with the color write units by drawing a primitive over the region and specifying the resolve buffer explicitly
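For readers unfamiliar with the term: rotated-grid supersampling (RGSS) takes the sample points of a regular grid and rotates the pattern so that no two samples share a row or column, which anti-aliases near-horizontal and near-vertical edges far better than an ordered grid. A sketch with a common 4x RGSS pattern — these offsets are illustrative, not the ones the R600 documentation specifies:

```python
# 4x rotated-grid supersampling: four sample points inside a unit pixel,
# placed so no two share an x or a y coordinate (a rotated 2x2 grid).
# Illustrative pattern only, not hardware-specific sample positions.
RGSS_OFFSETS = [(0.375, 0.125), (0.875, 0.375),
                (0.125, 0.625), (0.625, 0.875)]

def pixel_coverage(px, py, inside):
    """Estimate how much of the pixel at integer coords (px, py) is
    covered, by averaging a binary inside/outside test over the four
    RGSS sample points."""
    hits = sum(inside(px + dx, py + dy) for dx, dy in RGSS_OFFSETS)
    return hits / len(RGSS_OFFSETS)
```

For a vertical edge at x = 0.5, the pixel containing the edge comes out at exactly half coverage, while fully outside pixels report zero — the same averaging a resolve pass performs in hardware.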



              • Originally posted by tball View Post
                Yes, that's the plan.
                Well, I started reading up on Gallium and afterwards started learning about MPEG-2 decoding. I have had some lessons at university about decoding, though.
                Is there any chance that hardware-accelerated video decoding support could go beyond mpeg2, beyond h264, and extend to Theora and/or WebM?

                Pretty please?

                It would be legendary if that could be done. It would remove all of the impetus of claims that "open codecs have no hardware support". It could be a real boost to open video, IMO.



                • Originally posted by hal2k1 View Post
                  Is there any chance that hardware-accelerated video decoding support could go beyond mpeg2, beyond h264, and extend to Theora and/or WebM?

                  Pretty please?

                  It would be legendary if that could be done. It would remove all of the impetus of claims that "open codecs have no hardware support". It could be a real boost to open video, IMO.
                  From what I've read, a lot of the h264 stuff can be leveraged for VP8 (WebM is just the container and needs no acceleration), so it is a natural progression once h264 is working -- i.e., ffmpeg already has a functional decoder that blows Google's out of the water because it reuses a lot of code from its h264 decoder.

                  Theora shouldn't be much of a priority -- it was never in a place where it could be considered "successful", and with VP8 now being free, it doesn't look like it ever will be.


                  I wonder if it would be possible to get any kind of support for this from Google and/or ffmpeg? One would think that Google would jump at the opportunity to get out some free, universal VP8 acceleration, and it seems right up ffmpeg's alley. tball: have you considered asking either of them for assistance or funding?



                  • Originally posted by Qaridarium View Post
                    Why not tell us an example of a held-back technique? I have tried to make an example.
                    One essential part of holding something back is, well, holding it back. I'm not going to give you a list of the stuff we *didn't* release.

                    re: video decode, one of the cool things about shader-assisted decode is that once you have it working with one API it can be adapted to other APIs fairly easily. The key point though is that you want to be able to lean on an existing pure-SW decoder since some of the processing is going to stay on the CPU and you don't want to have to write all that code from scratch for each new standard.

                    Did I mention how much I hate having to delete and re-post every time I want to edit something?
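The "lean on an existing pure-SW decoder" idea above can be sketched as a software pipeline where only the data-parallel stages get overridden with GPU-backed versions, so the per-codec control flow and CPU-side code are never rewritten from scratch. Class and stage names below are hypothetical, and the toy stages just transform lists so the structure is runnable:

```python
class SoftwareDecoder:
    """Stand-in for an existing pure-software decoder: every stage has
    a CPU implementation, and decode() wires the stages together."""
    def entropy_decode(self, data):
        return [b * 2 for b in data]    # toy stage, stays on the CPU

    def idct(self, coeffs):
        return [c + 1 for c in coeffs]  # toy data-parallel stage

    def decode(self, data):
        return self.idct(self.entropy_decode(data))


class ShaderAssistedDecoder(SoftwareDecoder):
    """Overrides only the data-parallel stage; entropy decoding and the
    overall decode() control flow are inherited from the software
    decoder, producing identical output by construction."""
    def idct(self, coeffs):
        # In a real driver this would dispatch a GPU shader; here it
        # just mirrors the CPU reference implementation.
        return [c + 1 for c in coeffs]
```

The design point is that adding a new codec (or a new API on top) only requires plugging new stages into the same skeleton, not a second full decoder.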



                    • Originally posted by bridgman View Post
                      re: video decode, one of the cool things about shader-assisted decode is that once you have it working with one API it can be adapted to other APIs fairly easily. The key point though is that you want to be able to lean on an existing pure-SW decoder since some of the processing is going to stay on the CPU and you don't want to have to write all that code from scratch for each new standard.
                      And one of the *complexities* of it is balancing the decode functions between the CPU and the GPU such that you can leverage as much shader-assist as that GPU is capable of without overloading it to the point of inadequate performance. This needs to be dynamic, since you have a huge range of GPU capabilities, from fairly weak IGPs to insanely powerful discrete parts (which can obviously handle a much greater portion of the work).
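A toy sketch of that balancing idea, with made-up stage names, thresholds, and capability scores: entropy decoding stays on the CPU because it is serial, bit-oriented work, while the later stages are data-parallel and offload well, so a stronger GPU takes more of them.

```python
# Usual MPEG-2-style pipeline order; a stronger GPU takes a longer
# suffix of this list. All thresholds here are invented for illustration.
STAGES = ["entropy_decode", "inverse_quant", "idct",
          "motion_comp", "colorspace_convert"]

def pick_split(gpu_score):
    """Return (cpu_stages, gpu_stages) for a rough GPU-capability score.
    Entropy decoding is always kept on the CPU."""
    if gpu_score < 10:        # weak IGP: offload only the final stage
        n_gpu = 1
    elif gpu_score < 50:      # mid-range: offload IDCT onward
        n_gpu = 3
    else:                     # powerful discrete part: everything parallel
        n_gpu = 4
    return STAGES[:-n_gpu], STAGES[-n_gpu:]
```

A dynamic version would re-run this decision from measured frame times rather than a static score, but the shape of the split is the same.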

