OpenCL 1.1 Specification Released


  • #31
    Originally posted by bridgman View Post
    I haven't actually seen any shader-assisted H.264 decode implementations in public yet, have you ?

    It's very common to use shaders for render aka presentation (scaling, colour space conversion, post processing etc...) but I haven't seen anything that does MC, deblock and loop filtering on shaders and everything further upstream on CPU.
    "I haven't actually seen any shader-assisted H.264 decode implementations in public yet, have you ? "

    sure? radeon HD2900 aka pure R600 do not have a UVD unit...

    the HD2900 is the pure SHADER based decode implementation in public !
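
    For context, the render/presentation work mentioned above (scaling, colour space conversion) is the part that maps most naturally onto shaders. As a minimal illustrative sketch, assuming a generic OpenCL-style kernel and not anything from AMD's actual stack, a BT.601 YUV 4:2:0 to RGB conversion could look like this:

    /* Hypothetical OpenCL C kernel: BT.601 limited-range YUV 4:2:0 -> RGB,
       the kind of colour space conversion commonly done on shaders at
       presentation time. Illustrative only, not any vendor's code. */
    __kernel void yuv420_to_rgb(__global const uchar *y_plane,
                                __global const uchar *u_plane,
                                __global const uchar *v_plane,
                                __global uchar4 *rgb,
                                int width, int height)
    {
        int x = get_global_id(0);
        int y = get_global_id(1);
        if (x >= width || y >= height)
            return;

        /* 4:2:0 chroma: one U/V sample per 2x2 block of luma samples. */
        float Y = (float)y_plane[y * width + x] - 16.0f;
        float U = (float)u_plane[(y / 2) * (width / 2) + (x / 2)] - 128.0f;
        float V = (float)v_plane[(y / 2) * (width / 2) + (x / 2)] - 128.0f;

        float r = 1.164f * Y + 1.596f * V;
        float g = 1.164f * Y - 0.392f * U - 0.813f * V;
        float b = 1.164f * Y + 2.017f * U;

        /* Saturated conversion clamps each channel to uchar's 0..255 range. */
        rgb[y * width + x] = convert_uchar4_sat((float4)(r, g, b, 255.0f));
    }

    Decode-side stages such as motion compensation and deblocking can be put on shaders too, as claimed for the HD 2900, but they are considerably more involved than this presentation step.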



    • #32
      D'oh !!

      Good point, I'll check into that. Thanks !



      • #33
        Originally posted by bridgman View Post
        D'oh !!



        • #34
          Yeah, like that...

          ... although I have more hair.



          • #35
            Originally posted by bridgman View Post
            D'oh !!
            Good point, I'll check into that. Thanks !
            "I'll check into that."

            means this bridgman openup the r600 h264 shader part of the catalyst driver?

            yes amd can't spec the UVD unit but amd can release the sourcecode of the R600-shader-based-h264 video acceleration

            ;-) gogogogo bridgman openup the code



            • #36
              s/gogogogogo/nononono/

              There was a discussion going on about whether hardware-based decoding (eg UVD) produced higher quality than CPU-based decoding. My position was that the quality would generally be the same, and that it was processing further downstream (but before Xv/GL) that made the difference.

              Put differently, I was saying that the apparent quality difference between UVD decode and CPU decode was that the proprietary drivers, which typically had the serious post-processing, also used hardware decode since it was available to the developers.

              The post processing is considered "secret sauce" and it's highly unlikely we would open up that code. On the other hand, implementing it just requires video knowledge, not any more hardware knowledge than we have already released, so there's no reason something similar could not be implemented in the open drivers.
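
              To make the "post processing" part concrete: a textbook example of this class of filter is an unsharp-mask sharpen. The sketch below is generic, illustrative OpenCL C for a luma-plane sharpen; it is an assumption for illustration and has nothing to do with the proprietary code being discussed.

              /* Hypothetical OpenCL C kernel: unsharp-mask sharpening of a
                 luma plane, a textbook stand-in for the kind of video
                 post-processing discussed above. Illustrative only. */
              __kernel void unsharp_luma(__global const uchar *src,
                                         __global uchar *dst,
                                         int width, int height,
                                         float amount)
              {
                  int x = get_global_id(0);
                  int y = get_global_id(1);
                  if (x >= width || y >= height)
                      return;

                  /* Pass border pixels through unchanged. */
                  if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                      dst[y * width + x] = src[y * width + x];
                      return;
                  }

                  /* 3x3 box blur as the low-pass estimate. */
                  float blur = 0.0f;
                  for (int dy = -1; dy <= 1; dy++)
                      for (int dx = -1; dx <= 1; dx++)
                          blur += (float)src[(y + dy) * width + (x + dx)];
                  blur /= 9.0f;

                  /* Add back a scaled copy of the high-frequency detail. */
                  float center = (float)src[y * width + x];
                  dst[y * width + x] =
                      convert_uchar_sat(center + amount * (center - blur));
              }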



              • #37
                Anyways, bottom line here is that running an r600 against a UVD-based GPU (say an rv670 to keep everything else reasonably close) on Windows *might* be an interesting way to see whether UVD actually contributes to video quality the way that some people are suggesting.



                • #38
                  Originally posted by bridgman View Post
                  On the other hand, implementing it just requires video knowledge, not any more hardware knowledge than we have already released, so there's no reason something similar could not be implemented in the open drivers.
                  Yes, there is NO "reason something similar could not be implemented in the open drivers."

                  But it costs money and manpower, and that brings me to another point of view...

                  Originally posted by bridgman View Post
                  The post processing is considered "secret sauce" and it's highly unlikely we would open up that code.
                  yes "sauce" like Tomato sauce

                  its really sauce because its pointless!

                  if you build an opensource version of this 'secret' 'source-code' based on the spec the code does exact the same!

                  means there is no 'Secret'

                  its just do the same work again for the same and costs money and manpower!


                  means AMD just lost 'Money' if they don't touch the pointless Secret HoT-Pepper Tomato Sauce



                  Originally posted by bridgman View Post
                  s/gogogogogo/nononono/
                  Oh nooooooooo, the lawyers kill the open-source driver over a pointless money-burning move.

                  Call the fire fighters..


                  Originally posted by bridgman View Post
                  There was a discussion going on about whether hardware-based decoding (eg UVD) produced higher quality than CPU-based decoding. My position was that the quality would generally be the same, and that it was processing further downstream (but before Xv/GL) that made the difference.

                  Put differently, I was saying that the apparent quality difference between UVD decode and CPU decode was that the proprietary drivers, which typically had the serious post-processing, also used hardware decode since it was available to the developers.
                  Yes, if you do the same post-processing, the CPU-based path should have the same quality.

                  But in my view a CPU can deliver even higher quality --> vector-based motion-detection supersampling.

                  You could render a 1920x1200, 24 fps video at 4000x2000 pixels, do motion-detection upsampling to 60 fps based on motion vectors, and then downsample to the 1920x1200 monitor resolution.
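
                  A rough sketch of what that motion-vector-based interpolation could look like, again in illustrative OpenCL C: the per-block motion-vector field is assumed to come from a separate motion-estimation pass, and the frame dimensions are assumed to be padded to a multiple of 16. None of this reflects any real driver code.

                  /* Hypothetical OpenCL C kernel: build an intermediate frame
                     halfway between frames A and B by sampling each along its
                     per-block motion vector, i.e. a crude motion-compensated
                     frame-rate upsampling step. Illustrative only. */
                  __kernel void mc_interpolate(__global const uchar *frame_a,
                                               __global const uchar *frame_b,
                                               __global const int2 *mv, /* one vector per 16x16 block */
                                               __global uchar *out,
                                               int width, int height)
                  {
                      int x = get_global_id(0);
                      int y = get_global_id(1);
                      if (x >= width || y >= height)
                          return;

                      int2 v = mv[(y / 16) * (width / 16) + (x / 16)];

                      /* Sample frame A half a vector forward and frame B
                         half a vector back, then average the two. */
                      int ax = clamp(x + v.x / 2, 0, width - 1);
                      int ay = clamp(y + v.y / 2, 0, height - 1);
                      int bx = clamp(x - v.x / 2, 0, width - 1);
                      int by = clamp(y - v.y / 2, 0, height - 1);

                      out[y * width + x] = (uchar)(((int)frame_a[ay * width + ax] +
                                                    (int)frame_b[by * width + bx] + 1) / 2);
                  }

                  The supersampling idea above would amount to running the same pipeline over a 4000x2000 intermediate buffer and scaling down at the end, which multiplies the bandwidth and compute cost accordingly.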



                  • #39
                    Originally posted by bridgman View Post
                    I haven't actually seen any shader-assisted H.264 decode implementations in public yet, have you ?
                    There is also a shader-based implementation for the Xbox 360's Xenos GPU.



                    • #40
                      Bridgman, I think that would make a lot of sense. If AMD opened up the shader code for R600 (feel free to remove post-processing), it would be another big sweep of good FOSS PR.

                      Surely it would also be rather fast for a qualified dev to strip out any secret post-processing; sure, there's bound to be a legal review after that, but for shader code it should be lighter than for actual specs. Much faster than writing one from the ground up, to be sure.

                      Could you tell whether it's written in a standard language (GLSL, OpenCL...) or in something ATI-specific? Even if it only ran on R600+ GPUs, it would make a great headline, wink wink.



                      • #41
                        The shader code is *only* post processing AFAIK, I don't think we decode on shaders in the proprietary stack today - it's either CPU (r600) or UVD (everything else).

                        My recollection is that the code starts in a high-level shader language (probably HLSL) but is then hand-tweaked in some places.



                        • #42
                          Oh, OK. I misunderstood that as the UVD interfaces being implemented in shaders for that card.



                          • #43
                            I'm not 100% sure myself. All the interesting questions seem to come up on the weekends, when I can't wander over and pick brains.



                            • #44
                              Originally posted by bridgman View Post
                              Yeah, like that...

                              ... although I have more hair.
                              Yeah, like this much, right?:



                              • #45
                                That's more like it.

