AMD Releases Open-Source UVD Video Support


  • #91
    Originally posted by Veerappan View Post
    Technically yes, but it's not quite the same as the UVD2 in the r7xx discrete chips. At the moment it's not working.



    • #92
      Originally posted by Veerappan View Post
      AFAIK there are enough differences that the code which works on other UVD2 implementations doesn't work yet on the UVD in the IGPs. I believe Christian posted about this earlier in the thread.



      • #93
        Originally posted by [Knuckles] View Post
        Had to be said!

        Good job!
        Would be nice to get UVD1 on my X1300.
        Thanks, that made my day.



        • #94
          Originally posted by droidhacker View Post
          That's absurd. The open source drivers are very close to the blobs. You're way behind the times.

          Also, there's no such thing as "twenty times slower". "times" means multiply. You can't multiply "slowness".
          I think that the previous poster is referring to the fact that the Llano (and maybe Trinity) APUs default to a low power state in the VBIOS, whereas the AMD Desktop cards default to a high power state. In order to get full performance out of Llano/Trinity, you need to set the power profile to high, or you need to set the power method to dynpm.



          • #95
            Originally posted by Veerappan View Post
            I think that the previous poster is referring to the fact that the Llano (and maybe Trinity) APUs default to a low power state in the VBIOS, whereas the AMD Desktop cards default to a high power state. In order to get full performance out of Llano/Trinity, you need to set the power profile to high, or you need to set the power method to dynpm.
            No, that's not it. Setting the profile or enabling dynpm will do nothing, because the driver simply does not allow higher clocks. There is no way to get the full clock working on mobile APUs without hacks.
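
            For reference, these are the (pre-DPM) radeon sysfs knobs being discussed. A minimal sketch, assuming the GPU is card0 and a root shell; per the above, expect no effect on mobile APUs:

              # Force the highest power state the driver allows:
              echo high > /sys/class/drm/card0/device/power_profile
              # Or let the driver reclock dynamically based on load:
              echo dynpm > /sys/class/drm/card0/device/power_method
              # With debugfs mounted, check which clocks were actually selected:
              cat /sys/kernel/debug/dri/0/radeon_pm_info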



            • #96
              Originally posted by brent View Post
              No, that's not it. Setting the profile or enabling dynpm will do nothing, because the driver simply does not allow higher clocks. There is no way to get the full clock working on mobile APUs without hacks.
              Good to know. I've got a Llano (3-core, maybe A6-3500) in my HTPC, but since I'm not using it for heavy 3D, I've never really investigated the clock speed/performance issue.



              • #97
                Originally posted by bridgman View Post
                AFAIK there are enough differences that the code which works on other UVD2 implementations doesn't work yet on the UVD in the IGPs. I believe Christian posted about this earlier in the thread.
                Yeah, I hadn't finished reading the thread yet. Now I'm all caught up.

                Hopefully he figures the differences out. I've got a Radeon 6850, A6-3500, HD4200, and an HD3200 in various systems at home. The HD3200 is in a file-server, but the rest of them could end up doing video decoding duties at any time in the future, and the A6-3500 spends an average of 2 hours a night playing back recorded HD TV episodes.
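
                For what it's worth, a rough way to check whether UVD decode is usable on one of these boxes. A sketch, assuming the vdpauinfo tool and a VDPAU-capable player are installed; the file name is just a placeholder:

                  # Did the kernel driver load the UVD firmware?
                  dmesg | grep -i uvd
                  # Does the VDPAU driver advertise H.264 decoder profiles?
                  vdpauinfo | grep -i h264
                  # Try hardware-accelerated playback (mpv syntax):
                  mpv --hwdec=vdpau recorded-episode.ts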



                • #98
                  Joining the "Thank you AMD" chorus!



                  • #99
                    Serious Sam works on r600g the same as on Catalyst, at least for my humble 5730M. (Badly in both cases :P)



                    • Originally posted by artivision View Post
                      I'm really trying hard to understand what you're saying.

                      1) Rasterizers inside GPU drivers are unified (as the vendors say). They can execute shaders and draw graphics from multiple shader languages, given a simple front end plus a compiler back-end so that a compiler can target the GPU.

                      2) When I say SSE4.2 or AVX, I mean at least 6-issue processors with 7-9.5 Dhrystone DMIPS/MHz of single-thread performance.

                      3) Are you a programmer? Have you ever tried compiling GLSL source to GLSL bytecode and then to GLSL machine code? It takes 2-15 minutes for simple shader programs, most of it in the first half. Now add the HLSL-bytecode-to-GLSL-source step and there you have it. The problem isn't the corner cases. The only possibility here is to write sub-extensions for OpenGL extensions that will compile the D3D cases: something like a sub-compiler targeting the open and closed GLSL compilers inside the GPU drivers, and this sub-compiler would be LLVM-friendly.

                      4) MS has already lost the court fight over HLSL implementations. We only ask that MS-D3D (via Wine) can see the GPU directly, without translations.
                      1.) Of course the GPU internally can execute any form of shader, as long as it uses opcodes supported by the hardware.
                      2.) Well, my point is that no game uses SSE4.2/AVX these days (maybe Unreal 4, but not sure yet), and single-thread performance is very relative. After all, most games' bottlenecks are on the bandwidth/GPU side more than the CPU, and where the CPU does affect the max framerate, the FPS is normally high enough not to care. So these days the CPU point is moot for most games, unless you want to break a benchmark record or play with multi-monitor 3D (neither of which is properly supported on Linux yet).
                      3.) I am, and those timings are insanely high; you probably have a huge time eater in your code unrelated to HLSL-to-GLSL.
                      4.) Again, my point is that Wine performance is close enough, but Wine is already exploring the option of an external HLSL compiler, especially for DX10/11: http://wiki.winehq.org/HLSLCompiler. And again, the current Wine implementation for DX9 can handle very taxing games like Crysis 2 at very high settings, fluidly enough for me not to care.
                      5.) Handling the GPU directly is probably not a good idea at all.

                      Here you can see how Wine shaders work: http://wiki.winehq.org/DirectX-Shaders
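
                      As a concrete example of switching that wined3d shader path: at the time of this thread, the GLSL backend could be toggled through a Direct3D registry key. A sketch; UseGLSL is the historical key name and was later replaced by newer settings:

                        # Fall back to the ARB shader backend instead of GLSL:
                        wine reg add 'HKCU\Software\Wine\Direct3D' /v UseGLSL /d disabled /f
                        # Remove the override to return to the default GLSL path:
                        wine reg delete 'HKCU\Software\Wine\Direct3D' /v UseGLSL /f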
