AMD Releases Open-Source UVD Video Support

  • #81
    Originally posted by artivision View Post
What matters first is Wine behavior. There is no reason to buy an extra GPU for non-gaming purposes; I would prefer a cheap integrated one. In that field Nvidia takes first place with a quality closed driver, while AMD sits in fourth place, behind Intel with its open driver and probably even Imagination (Atom). So if you are a hardcore gamer, you need extreme single-thread performance (Intel AVX or at least SSE4.2, overclocked) and a 64-bit Nvidia GPU (even a small one, 96-384 cores). If not, you will buy from a Mesa contributor, and for now that is only Intel. There is no reason for anything else. A safe conclusion: the first GPU vendor to support a native HLSL compiler in their Linux driver probably wins forever. They must give us the power to run MS D3D via Wine natively, without GLSL translation. That means shipping an HLSL compiler back-end targeting the GPU with the Linux driver, plus a recognition front-end for HLSL if needed. If they don't, we must think up our own solution short of an open HLSL compiler: extract the missing driver parts from the Windows drivers via Winetricks and let MS D3D see them (with some small Wine code).
1) Well, I'm not sure that's even legal, and technically it makes no sense, since the HLSL types aren't compatible with GLSL, forcing you to add D3D support to the driver too (not going to happen).
2) AVX/AVX2/SSE4.2 in games?? LOL. As far as I'm informed, even the most advanced game engines mostly target SSE2 (and sometimes SSE3/SSSE3) and scale up to about 3 cores, while the cheap game engines hardly use SIMD at all. (A small detection sketch follows at the end of this post.)
3) As far as I know, Wine is close to Windows performance in most cases (on Nvidia, as far as I can tell), and the HLSL-to-GLSL translation layer is quite fast too: +/-10% difference in most scenarios.

Actually, Wine's issues don't come from the translation layers but from the Windows API itself, or from corner cases that fall back to CPU rendering, since many engines are very hackish; and obviously Wine doesn't support the DX10/11 APIs yet.

Michael wrote an article about it some time ago http://www.phoronix.com/scan.php?pag...in7_2010&num=4 and so far only Unigine Heaven [which is extremely taxing and very well written] showed a significant FPS drop, due to a software fallback that was fixed in the 1.4 series, I believe.

So stop spreading FUD and investigate before you start bitching. OK, Michael didn't test BF3, but I can tell you Crysis Warhead and Crysis 2 run very close to Windows these days [Wine 1.5.26 and the Nvidia blob].

What Wine will probably do is help the FOSS drivers improve many corner cases and performance bottlenecks, since it stresses the driver far more than Xonotic or OpenArena can.
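
As an aside on the SSE2-baseline point in 2) above, here is a hypothetical sketch of the kind of runtime SIMD probe an engine can run before choosing code paths. It uses GCC/Clang's __builtin_cpu_supports() and is not taken from any real engine:

    /* Hypothetical sketch: report which SIMD levels the CPU offers.
     * SSE2 is part of the x86-64 baseline, so engines can assume it;
     * newer sets are optional dispatch targets. Build: gcc -O2 simd.c */
    #include <stdio.h>

    int main(void)
    {
        printf("sse2:   %d\n", __builtin_cpu_supports("sse2"));
        printf("ssse3:  %d\n", __builtin_cpu_supports("ssse3"));
        printf("sse4.2: %d\n", __builtin_cpu_supports("sse4.2"));
        printf("avx:    %d\n", __builtin_cpu_supports("avx"));
        return 0;
    }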

    Comment


    • #82
      Originally posted by droidhacker View Post
Why would anybody possibly want to use catalyst??? Radeon performs close to (and in some cases *ahead* of) catalyst, and supports everything that catalyst does.
Try playing games like Serious Sam 3 on an HD 5750 with the open-source drivers.

      Comment


      • #83
        Originally posted by brent View Post
However, I still think something is really broken (and frustrating for all involved) if these processes take so long and have such unpredictable outcomes.
Why do you say that? We're talking about hardware blocks that were not designed with open-source compatibility in mind, so it's luck of the draw whether we can expose the IP or not.

Yes, it is slow and painful, but so far I think we have been getting IP out more quickly than any of our competitors, relative to the start of the open-source driver effort.

        Comment


        • #84
I don't have much experience with this kind of thing, but from the outside it just looks really frustrating, not only for users but for developers as well.
If you are doing well compared to competitors with similar plans, that's nice of course, but on the other hand it makes me even more depressed about stupid intellectual property issues.

          Comment


          • #85


            Had to be said!

            Good job
            Would be nice to get UVD1 on my X1300.

            Comment


            • #86
              Originally posted by brent View Post
I don't have much experience with this kind of thing, but from the outside it just looks really frustrating, not only for users but for developers as well.
If you are doing well compared to competitors with similar plans, that's nice of course, but on the other hand it makes me even more depressed about stupid intellectual property issues.
              Oh, it *is* really *frustrating*... I'm just saying that's not the same as "broken"

              Comment


              • #87
Originally posted by Knuckles View Post
Would be nice to get UVD1 on my X1300.
I don't think the X1300 has UVD. The R5xx family did some decode acceleration on the shaders, IIRC.

                I believe we only made one X1xxx part with UVD (rv550 or something like that) and it was only used in a couple of laptops before we moved to r6xx.
                Last edited by bridgman; 03 April 2013, 11:30 AM.
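
For anyone curious whether their own stack exposes UVD decoding: the open radeon stack fronts UVD through VDPAU, so a minimal sketch using the standard libvdpau API (build with -lvdpau -lX11; error handling trimmed) would look roughly like this:

    /* Sketch: ask the VDPAU driver whether it can decode H.264 High. */
    #include <stdio.h>
    #include <vdpau/vdpau.h>
    #include <vdpau/vdpau_x11.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        VdpDevice dev;
        VdpGetProcAddress *gpa;
        if (vdp_device_create_x11(dpy, DefaultScreen(dpy), &dev, &gpa) != VDP_STATUS_OK)
            return 1;

        /* Fetch the capability-query entry point through get_proc_address. */
        VdpDecoderQueryCapabilities *query;
        gpa(dev, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES, (void **)&query);

        VdpBool ok; uint32_t level, mbs, w, h;
        if (query(dev, VDP_DECODER_PROFILE_H264_HIGH, &ok, &level, &mbs, &w, &h) == VDP_STATUS_OK)
            printf("H.264 High decode: %s (max %ux%u)\n", ok ? "yes" : "no", w, h);
        return 0;
    }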

                Comment


                • #88
                  Originally posted by JS987 View Post
A discrete AMD card gets less than 50% of the frame rate of an Intel GPU with the open-source drivers.
In Xonotic 0.6, Low:
Radeon HD 4830 - 69 fps
Intel HD 4000 - 163 fps


                  http://www.phoronix.com/scan.php?pag...ar_r4830&num=5
There is a better result with a faster CPU, but the Radeon 4830 should still be faster:
Radeon HD 4830 - 114 fps


The open-source driver delivers only 30-50% of the closed driver's performance with a Radeon HD 5830:

                  Xonotic Low
                  open - 123 fps
                  closed - 262 fps

                  Xonotic High
                  open - 50 fps
                  closed - 148 fps



                  Comment


                  • #89
                    Originally posted by jrch2k8 View Post
1) Well, I'm not sure that's even legal, and technically it makes no sense, since the HLSL types aren't compatible with GLSL, forcing you to add D3D support to the driver too (not going to happen).
2) AVX/AVX2/SSE4.2 in games?? LOL. As far as I'm informed, even the most advanced game engines mostly target SSE2 (and sometimes SSE3/SSSE3) and scale up to about 3 cores, while the cheap game engines hardly use SIMD at all.
3) As far as I know, Wine is close to Windows performance in most cases (on Nvidia, as far as I can tell), and the HLSL-to-GLSL translation layer is quite fast too: +/-10% difference in most scenarios.

Actually, Wine's issues don't come from the translation layers but from the Windows API itself, or from corner cases that fall back to CPU rendering, since many engines are very hackish; and obviously Wine doesn't support the DX10/11 APIs yet.

Michael wrote an article about it some time ago http://www.phoronix.com/scan.php?pag...in7_2010&num=4 and so far only Unigine Heaven [which is extremely taxing and very well written] showed a significant FPS drop, due to a software fallback that was fixed in the 1.4 series, I believe.

So stop spreading FUD and investigate before you start bitching. OK, Michael didn't test BF3, but I can tell you Crysis Warhead and Crysis 2 run very close to Windows these days [Wine 1.5.26 and the Nvidia blob].

What Wine will probably do is help the FOSS drivers improve many corner cases and performance bottlenecks, since it stresses the driver far more than Xonotic or OpenArena can.


I'm really trying hard to understand what you're saying.

1) Rasterizers inside GPU drivers are unified (as the vendors say). They can execute shaders and draw graphics from multiple shader languages, given a simple front-end plus a compiler back-end that a compiler can use to target the GPU.

2) When I say SSE4.2 or AVX, I mean processors that issue at least 6 instructions per cycle, with 7-9.5 Dhrystone MIPS per MHz single-thread.

3) Are you a programmer? Have you ever tried to compile GLSL source to GLSL bytecode and then to GLSL machine code? It takes 2-15 minutes for simple shader programs, most of that in the first half. Now add the HLSL-bytecode-to-GLSL-source step on top, and there you have it. The problem isn't the corner cases. The only possibility here is to write some sub-extensions to OpenGL extensions that compile the D3D cases: something like a sub-compiler that targets the open and closed GLSL compilers inside the GPU drivers, and that sub-compiler should be LLVM-friendly. (A minimal compile-and-check sketch follows after this list.)

4) MS has already lost the court fight over HLSL implementation. We only ask that MS D3D (via Wine) be able to see the GPU directly, without translations.
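
On point 3, a minimal sketch of the runtime compile-and-check step being argued about, using the standard OpenGL 2.0 shader API; it assumes a current GL context with entry points loaded (e.g. via GLEW) and only illustrates where the driver's GLSL compiler runs:

    #include <stdio.h>
    #include <GL/glew.h>

    GLuint compile(GLenum stage, const char *src)
    {
        GLuint sh = glCreateShader(stage);
        glShaderSource(sh, 1, &src, NULL); /* hand the GLSL source to the driver */
        glCompileShader(sh);               /* driver front-end + back-end run here */

        GLint ok = GL_FALSE;
        glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(sh, sizeof log, NULL, log);
            fprintf(stderr, "compile failed: %s\n", log);
        }
        return sh;
    }

    /* usage: compile(GL_FRAGMENT_SHADER,
     *                "void main() { gl_FragColor = vec4(1.0); }"); */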

                    Comment


                    • #90
                      Originally posted by wargames View Post
Great news, but no RS880 [Radeon HD 4200] support? I can live without video acceleration, because most processors can handle video decoding these days, but I cannot live without proper power management and OpenCL support. I don't even care about 3D games, but please, AMD, consider OpenCL... because even though your processors are not the best nowadays, your graphics cards are way better than Nvidia's for GPGPU. I would really love to buy a 7990 if it had proper OpenCL support.
                      Umm, doesn't RS880 have UVD2? http://en.wikipedia.org/wiki/Unified_Video_Decoder

Also, OpenCL support exists for r600g and radeonsi (the GCN Gallium driver). It's just not complete yet... We're working on it (speaking as someone who's not an AMD employee).
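
For anyone who wants to check whether Clover picks up their card, a minimal sketch using the standard OpenCL 1.x API (build with -lOpenCL; my own example, not AMD code):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id plats[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, plats, &nplat);

        for (cl_uint i = 0; i < nplat; i++) {
            char name[256];
            clGetPlatformInfo(plats[i], CL_PLATFORM_NAME, sizeof name, name, NULL);
            printf("platform: %s\n", name);

            cl_device_id devs[8];
            cl_uint ndev = 0;
            if (clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_GPU, 8, devs, &ndev) != CL_SUCCESS)
                continue;
            for (cl_uint j = 0; j < ndev; j++) {
                clGetDeviceInfo(devs[j], CL_DEVICE_NAME, sizeof name, name, NULL);
                printf("  GPU: %s\n", name);
            }
        }
        return 0;
    }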

                      Comment
