ATI R600g Gains Mip-Map, Face Culling Support

  • We may end up opening the stuff we held back as well, it was mostly things where we weren't sure about IP ownership or about the relationship to in-flight patent applications. I expect we'll end up releasing some of what we held back later anyways, once we have a chance to spend more time on it.

    Alex pushed out some r5xx docco revisions quite recently, adding some things we had initially held back but decided were OK to release after spending more time with them.

    If everyone is real good we'll show you how to turn off the "corrupt data and intermittently lock up" bit



    • Originally posted by Qaridarium
      why not tell us an example of a held-back technique?

      I have tried to make an example.
      According to pdftotext+diff, in 1.5 they added information on the storage format of MSAA buffers, on how to do AA resolve and fast color/Z clears, and on how to use multiple render targets.

      It seems these are the vaguely unique features exposed:
      - Using the Z buffer hardware (idle due to fast z clears) to clear half of the color buffer
      - The exact swizzling/layout of the MSAA buffer
      - The ability to apply the MSAA sampling offsets without MSAA to achieve rotated grid supersampling (and perhaps also DX9 vs GL rasterization rules, even though the doc doesn't mention that)
      - Hardware gamma correct AA resolve with the color write units by drawing a primitive over the region and specifying the resolve buffer explicitly
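The rotated-grid trick in that list is easy to sketch: render with the MSAA sample offsets applied to the pixel center but without MSAA storage, then average. A minimal Python sketch, assuming the common 4x RGSS (±1/8, ±3/8) pattern; the function names and offsets here are illustrative, not the actual register values from the docs:

```python
# Hypothetical sketch of 4x rotated-grid supersampling (RGSS):
# evaluate the shading function at four rotated-grid offsets around
# the pixel center, then average. The (±1/8, ±3/8) pattern below is
# the textbook one; real hardware reads its offsets from the MSAA
# sample-position registers.

RGSS_OFFSETS = [(1/8, 3/8), (3/8, -1/8), (-1/8, -3/8), (-3/8, 1/8)]

def rgss_resolve(shade, px, py):
    """Average the shading function at the four RGSS sample points."""
    samples = [shade(px + dx, py + dy) for dx, dy in RGSS_OFFSETS]
    return sum(samples) / len(samples)

# Example: a hard vertical edge at x = 0.5 gets a graded coverage value.
edge = lambda x, y: 1.0 if x < 0.5 else 0.0
print(rgss_resolve(edge, 0.5, 0.5))  # two samples land on each side -> 0.5
```

The rotated grid matters because near-vertical and near-horizontal edges, the common worst case for ordered grids, see four distinct x and y positions instead of two.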



      • Originally posted by tball View Post
        Yes, that's the plan.
        Well I started reading up on gallium and afterwards started to learn about mpeg2 decoding. Have had some lessons at the university about decoding, though.
        Is there any chance that hardware-accelerated video decoding support could go beyond mpeg2, beyond h264, and extend to Theora and/or WebM?

        Pretty please?

        It would be legendary if that could be done. It would remove all of the impetus of claims that "open codecs have no hardware support". It could be a real boost to open video, IMO.



        • Originally posted by hal2k1 View Post
          Is there any chance that hardware-accelerated video decoding support could go beyond mpeg2, beyond h264, and extend to Theora and/or WebM?

          Pretty please?

          It would be legendary if that could be done. It would remove all of the impetus of claims that "open codecs have no hardware support". It could be a real boost to open video, IMO.
          From what I've read, a lot of the h264 stuff can be leveraged for VP8 (webm is just the container and needs no acceleration), so it is a natural progression once h264 is working -- i.e., the reason ffmpeg already has a functional decoder that blows google's out of the water is that it reuses a lot of code from its h264 decoder.

          Theora shouldn't be much of a priority -- it was never in a place where it could be considered "successful", and with VP8 now being free, it doesn't look like it ever will be.


          *** I wonder if it would be possible to get any kind of support for this from google and/or ffmpeg? One would think that google would jump at the opportunity to get out some free universal VP8 acceleration, and it seems right up ffmpeg's alley. tball: have you considered asking either of them for assistance or funding?



          • Originally posted by Qaridarium
            why not tell us an example of a held-back technique? I have tried to make an example.
            One essential part of holding something back is, well, holding it back. I'm not going to give you a list of the stuff we *didn't* release.

            re: video decode, one of the cool things about shader-assisted decode is that once you have it working with one API it can be adapted to other APIs fairly easily. The key point though is that you want to be able to lean on an existing pure-SW decoder since some of the processing is going to stay on the CPU and you don't want to have to write all that code from scratch for each new standard.

            Did I mention how much I hate having to delete and re-post every time I want to edit something?



            • Originally posted by bridgman View Post
              re: video decode, one of the cool things about shader-assisted decode is that once you have it working with one API it can be adapted to other APIs fairly easily. The key point though is that you want to be able to lean on an existing pure-SW decoder since some of the processing is going to stay on the CPU and you don't want to have to write all that code from scratch for each new standard.
              And one of the *complexities* of it is balancing the decode functions between the CPU and the GPU such that you can leverage as much shader-assist as that GPU is capable of without overloading it such that you end up with inadequate performance. This needs to be dynamic since you have a huge range of GPU capabilities, from the fairly weak IGPs to the insanely powerful discrete cards (which can obviously handle a much greater portion of the work).
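That dynamic balancing idea can be sketched as a simple greedy policy: offload stages from the end of the pipe until the GPU's budget runs out. A hedged Python sketch; the stage list, costs, and capability budget are all invented for illustration (a real driver would measure throughput at runtime and adapt):

```python
# Hypothetical load-balancing sketch: decide which decode stages to
# offload based on a rough GPU capability budget. Stages are ordered
# from the end of the pipe backwards (offload deblocking first, then
# motion comp, then IDCT); costs and the budget scale are made up.

STAGES = [("deblock", 20), ("motion_comp", 35), ("idct", 25)]

def choose_gpu_stages(gpu_budget):
    """Greedily offload stages from the end of the pipe until the
    (made-up) GPU budget is exhausted; the rest stays on the CPU."""
    gpu, cpu = [], []
    for name, cost in STAGES:
        if cost <= gpu_budget:
            gpu.append(name)
            gpu_budget -= cost
        else:
            cpu.append(name)
    return gpu, cpu

# A weak IGP only fits deblocking; a big discrete card takes everything.
print(choose_gpu_stages(25))   # (['deblock'], ['motion_comp', 'idct'])
print(choose_gpu_stages(100))  # (['deblock', 'motion_comp', 'idct'], [])
```

Working from the end of the pipe keeps the CPU-to-GPU handoff at a single point, whichever split is chosen.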



              • Agreed. The thing that works in our favour, though, is that as long as anyone working on shader decode starts at the end of the pipe and works backwards, they'll probably run out of development time at about the same time the smallest GPUs run out of shader power.

                I haven't had time to tinker with any code yet but my feeling is that everything from bitstream parsing to IDCT and intra-prediction should stay on the CPU, while motion comp (inter-prediction) and deblock filtering should go on the GPU. That seems like a good split in the sense that computationally expensive stuff would be on the GPU while "moving fiddly little bits around" would stay on the CPU.

                It's not clear that moving more of the work onto the GPU (ie going further back up the decode pipe) would be a win anyways.
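The split described above can be sketched as a two-phase pipeline: serial, branchy work runs on the CPU in order, then the data-parallel tail is dispatched to the GPU. A minimal Python sketch; the stage names and runner callbacks are stand-ins, not any real decoder's API:

```python
# Hypothetical sketch of the CPU/GPU split: fiddly serial work
# (bitstream parsing, entropy decode, IDCT) stays on the CPU, while
# the data-parallel stages (motion compensation, deblocking) are
# handed to the GPU afterwards.

CPU_STAGES = ["parse_bitstream", "entropy_decode", "idct"]
GPU_STAGES = ["motion_comp", "deblock"]

def decode_frame(frame, run_cpu, run_gpu):
    """Run the CPU stages in order, then pass the result to the GPU stages."""
    for stage in CPU_STAGES:
        frame = run_cpu(stage, frame)
    for stage in GPU_STAGES:
        frame = run_gpu(stage, frame)
    return frame

# Trace which side handles each stage.
log = []
decode_frame("bits",
             lambda s, f: log.append(("cpu", s)) or f,
             lambda s, f: log.append(("gpu", s)) or f)
print(log)
```

This is also why leaning on an existing pure-SW decoder helps: the CPU half of the pipeline is exactly the code such a decoder already has.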



                • Originally posted by droidhacker View Post
                  tball: have you considered asking either of them for assistance or funding?
                   No, I haven't. This is a spare-time project only, and I don't want to be bound by any promises. I am going to the USA for some time in a couple of months, and I don't know if I'll have time to develop GPU decoding over there.



                  • Originally posted by Hans View Post
                    No, I haven't. This is a spare-time project only, and I don't want to be bound by any promises. I am going to the USA for some time in a couple of months, and I don't know if I'll have time to develop GPU decoding over there.
                    I guess I am confusing people here. Well, Hans = tball :-)
                    I created the user Hans because I forgot my password for tball. Luckily Firefox had stored the password, so I am now back as tball.

                    Once in a while I use the Chromium browser, which logs in automatically as Hans :-)



                    • You don't need to make any promises. If you tell them of your interest, they may HIRE you, or hire someone to do some grunt work, and/or even take over the project and do it FOR you.

                      Wishful thinking? It never hurts to ask. Especially google -- this could be exactly what they need to get VP8 really off the ground... and they have TONS of money to throw around.

