AMD Releases Open-Source UVD Video Support


  • Originally posted by Hugh View Post
    If you publish specs, you might be endangering some trade secret. You certainly cannot endanger a copyright or a patent. We agree that trademarks aren't on the table. Your comment suggests to me that you are not talking enough with your lawyers. That surprises me since I picture you (or your predecessors) talking to them for half a dozen years.
    Hal2k1 was mentioning copyright; I included it in the larger list for completeness. Re: patents, the concern there is IP that is currently protected as a trade secret but for which we have patent applications in flight internally.

    Originally posted by Hugh View Post
    ...

    The sad thing is that in those five years, Intel has reached sufficient performance for video that it becomes the obvious choice. I bought a few cute Brazos systems, but even the closed-source drivers have made the experience disappointing. Stupid little things like not keeping up with the kernel, caused by not being "in-tree".
    Intel had been working on releasing materials to open source for years before we started. If you had said "the sad thing is that in 10 years (or maybe 8, not sure) Intel has reached sufficient performance for video..." I would agree. I'm sure we will reach at least the same level in the same time.

    Comment


    • Originally posted by arekm View Post
      I've played with UVD on my E-350 APU and I'm quite happy with the results. Tested with mplayer (mplayer -vo vdpau -vc ffh264vdpau)
      and also with the new VDPAU XBMC code. 1080p movies now play fine.

      I also tested the Adobe Flash plugin with
      OverrideGPUValidation = 1
      EnableLinuxHWVideoDecode=1
      set, and had render *and* decode hardware accelerated! 1080p YouTube now plays nicely where before I could only watch 480p.

      vdpauinfo reports such capabilities:

      Decoder capabilities:

      name              level    macbs  width  height
      ------------------------------------------------
      MPEG1                16  1048576  16384   16384
      MPEG2_SIMPLE         16  1048576  16384   16384
      MPEG2_MAIN           16  1048576  16384   16384
      H264_BASELINE        16     9216   2048    1152
      H264_MAIN            16     9216   2048    1152
      H264_HIGH            16     9216   2048    1152
      VC1_SIMPLE           16     9216   2048    1152
      VC1_MAIN             16     9216   2048    1152
      VC1_ADVANCED         16     9216   2048    1152
      MPEG4_PART2_SP       16     9216   2048    1152
      MPEG4_PART2_ASP      16     9216   2048    1152
      Thanks, arekm, for testing on your E-350, and thanks to the AMD team for making this happen. Like a lot of users, I thought UVD would be too encumbered ever to be released. This announcement was absolutely astonishing.
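      For anyone who wants to reproduce the Flash test: those two switches belong in Adobe's global Flash config. A minimal sketch, assuming the standard /etc/adobe/mms.cfg path (whether Flash actually uses hardware decode varies by version):

      # append the two switches from the post above to Adobe's global
      # Flash config file (standard path on Linux)
      sudo sh -c 'printf "EnableLinuxHWVideoDecode=1\nOverrideGPUValidation=1\n" >> /etc/adobe/mms.cfg'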

      Comment


      • Can anyone please, please, please point out how to test it? Any pointers?? Please! Too eager! I have been struggling with XvBA on Catalyst for far too long, and unfortunately I have no idea how to build the packages from the latest code... Do I need linux-next + mesa-devel + radeon-git? How do I test it?

        Comment


        • Originally posted by xgt001 View Post
          Can anyone please, please, please point out how to test it? Any pointers?? Please! Too eager! I have been struggling with XvBA on Catalyst for far too long, and unfortunately I have no idea how to build the packages from the latest code... Do I need linux-next + mesa-devel + radeon-git? How do I test it?
          I'm wondering myself... but here's what I can figure out.

          linux-next won't cut it, AFAICT. You may need to go to
          http://cgit.freedesktop.org/~deathsimple/linux/
          git clone that repo, check out the uvd-v3.8.4 branch, then make localmodconfig.
          You'll probably need the radeon microcode from http://people.freedesktop.org/~agd5f/radeon_ucode/
          and Mesa from
          http://cgit.freedesktop.org/~deathsimple/mesa/
          I'm not sure what to do about the Xorg driver, but it looks like any recent one will do (probably thanks to the video-over-shaders work).
          And you probably need a new libdrm, just so Mesa builds... roughly the sequence sketched below.
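          A hedged walk-through of those steps. The clone URLs are guesses from the cgit pages above (use whatever clone URL the cgit page actually lists), and the microcode file name depends on your chip:

          # kernel with the UVD patches (branch name per the post above)
          git clone git://people.freedesktop.org/~deathsimple/linux   # URL is a guess
          cd linux
          git checkout uvd-v3.8.4
          make localmodconfig && make -j4
          sudo make modules_install install

          # UVD microcode: fetch the *_uvd.bin for your chip from agd5f's page;
          # SUMO_uvd.bin here is illustrative (it should cover the E-350's Palm GPU)
          sudo cp SUMO_uvd.bin /lib/firmware/radeon/

          # fresh libdrm so Mesa will build
          git clone git://anongit.freedesktop.org/mesa/drm
          (cd drm && ./autogen.sh && make -j4 && sudo make install)

          # Mesa with the Gallium VDPAU state tracker enabled
          git clone git://people.freedesktop.org/~deathsimple/mesa    # URL is a guess
          cd mesa
          ./autogen.sh --enable-vdpau --with-gallium-drivers=r600
          make -j4 && sudo make install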

          Comment


          • Originally posted by artivision View Post
            1) What I don't want: an OpenGL-based D3D compiler, or translating D3D to OGL. When you try to map HLSL to GLSL, a word-for-word translation is often impossible; that's why we call it emulation. Something that needs 100 FLOPs to produce a result in HLSL needs 120+ FLOPs once translated or compiled to GLSL, so it's inefficient.

            2) What I want: vendors to provide an HLSL front-end for their IL. Then if I write an HLSL compiler it won't be OGL-based (inefficient, plus ten times the work). For the IL, what's missing is a front-end; for a compiler, the same thing is actually a back-end. A simpler job is to write some Wine code to make MS's D3D work with Wine (WineD3D works on Windows), then point the back-end extracted from the Windows driver (using Winetricks) at it to create IL code, all inside a Wine session. That's far less work than anything else.
            1.) Yep, you always pay a penalty doing it that way.

            2.) WineD3D works on Windows because Windows supports DirectX. I think you are looking at the problem from a high-level perspective, and that is not where the problem resides. As I linked before, there are already many HLSL compilers and converters, and Wine can already handle most of the DirectX fixed functions. The problem is that you need the whole DirectX driver ported to Linux, not just the HLSL compiler or the IL; the IL or compiler alone is useless on Linux and cannot be interpreted without the full driver. And again, the driver is not in the DirectX code but entirely outside it, in the WDDM layer:

            The WDDM layer is equivalent to:
            Linux KMS / kernel DRM / Gallium winsys / Gallium state tracker / xf86-video-XXX / libva-intel / etc.

            The DirectX layer is equivalent to:
            Mesa's libGL / Xorg server / libVA / libVDPAU / libXv / SDL / pixman / cairo.

            DirectX is the high-level layer that defines the protocol; it does not touch the GPU in any way. WDDM is the low-level layer that contains the driver and produces the actual IL/IR/ASM. The problem is not just to dump some IL and voila (if it were that easy, someone would have done it ages ago); the problem is to port the WDDM layer to Linux and implement a libD3D (the libGL equivalent for DirectX), because otherwise there is no way to interpret that IL.

            Understand that the fact that OpenGL and DirectX can both support tessellation (for example) does not mean they do it the same way at the internal ASM level, which is what you wrongly assume. You assume that because both protocols can render almost the same output, they must work the same internally, and that is not true. DirectX and OpenGL handle types, allocations, swizzling, texture types, framebuffer addressing, command-stream checking and packing, object linking, swapping, vblank, offscreen rendering, etc. differently enough that it is impossible to just dump ASM from one into the other and expect it to render.

            GPUs don't have anything as convenient as "send 0x11111 to address 0x3332ffa to activate a tessellation render pass" or "pass 0xffff2 plus the TGA texture to address 0xfff12345 to get an FBO surface". It is more like PIC ASM, with very low-level opcodes, where it takes hundreds or thousands of lines of code to get anything working. OpenGL and DirectX are protocols that define functional expectations as a high-level API, not a low-level hardware standard. For example, both support RGBA textures, but OpenGL might swizzle from the bottom-right and allocate the first N bytes as identifiers plus the texel data contiguously at a high address, while DirectX might swizzle from the top-left and allocate the identifier and texel data non-contiguously at a low address. Either way you get the pretty pixmap you uploaded, but an HLSL compiler will look for that texture the DirectX way, and the OpenGL driver will never find it, because OpenGL never allocates a texture that way. See the problem now?

            So to run HLSL shaders directly on the GPU you need the rest of the API doing things internally the way DirectX does (meaning full driver support); otherwise it will never run.

            Comment


            • Originally posted by arekm View Post
              I've played with UVD on my E-350 APU and I'm quite happy with the results. Tested with mplayer (mplayer -vo vdpau -vc ffh264vdpau)
              and also with the new VDPAU XBMC code. 1080p movies now play fine.

              I also tested the Adobe Flash plugin with
              OverrideGPUValidation = 1
              EnableLinuxHWVideoDecode=1
              set, and had render *and* decode hardware accelerated! 1080p YouTube now plays nicely where before I could only watch 480p.

              vdpauinfo reports such capabilities:

              Decoder capabilities:

              name              level    macbs  width  height
              ------------------------------------------------
              MPEG1                16  1048576  16384   16384
              MPEG2_SIMPLE         16  1048576  16384   16384
              MPEG2_MAIN           16  1048576  16384   16384
              H264_BASELINE        16     9216   2048    1152
              H264_MAIN            16     9216   2048    1152
              H264_HIGH            16     9216   2048    1152
              VC1_SIMPLE           16     9216   2048    1152
              VC1_MAIN             16     9216   2048    1152
              VC1_ADVANCED         16     9216   2048    1152
              MPEG4_PART2_SP       16     9216   2048    1152
              MPEG4_PART2_ASP      16     9216   2048    1152
              Thanks for testing! I will get on with testing right away as well. What puzzles me, however, is how you got MPEG4_PART2 to be reported as supported. From what I have read it is part of UVD3, which was not part of the code drop. That would probably require an explanation here from the AMD guys.

              And what about the deinterlacing support asked about earlier in the thread? It would be great if that worked as well.
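              One hedged way to check whether MPEG-4 Part 2 decode is real rather than just advertised: force mplayer's MPEG-4 VDPAU codec entry (ffodivxvdpau) and see whether it sticks or falls back:

              # force the MPEG-4 ASP VDPAU decoder; if UVD can't handle the
              # stream, mplayer errors out or drops to software decoding
              mplayer -vo vdpau -vc ffodivxvdpau some-divx-clip.avi   # file name is illustrative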

              Comment


              • Originally posted by ryszardzonk View Post
                Thanks for testing! I will get on with testing right away as well. What puzzles me, however, is how you got MPEG4_PART2 to be reported as supported. From what I have read it is part of UVD3, which was not part of the code drop. That would probably require an explanation here from the AMD guys.

                And what about the deinterlacing support asked about earlier in the thread? It would be great if that worked as well.
                The drop was UVD2 and later, i.e. from the Radeon HD 4000 series up to the Radeon HD 7000 series (Southern Islands); the code that didn't land was UVD1.

                Comment


                • Originally posted by artivision View Post
                  Anyway thanks for the UVD support, but GPUs are all about gaming.
                  Games are for children. Games do not make the world work. Games are a pointless waste of time. Totally irrelevant.

                  Comment


                  • Originally posted by jrch2k8 View Post
                    The drop was UVD2 and later, i.e. from the Radeon HD 4000 series up to the Radeon HD 7000 series (Southern Islands); the code that didn't land was UVD1.
                    So it was UVD2 + UVD3 then, right? From what I understood it was more like UVD2 features working on HD 4000 and up, but not the UVD3 ones.

                    Comment


                    • Originally posted by ryszardzonk View Post
                      Thanks for testing! I will get on with testing right away as well. What puzzles me, however, is how you got MPEG4_PART2 to be reported as supported. From what I have read it is part of UVD3, which was not part of the code drop. That would probably require an explanation here from the AMD guys.
                      First off, you're reading Wikipedia.
                      Second, it doesn't say that MPEG-4 Part 2 was introduced in UVD3; it says that H.263 was introduced in UVD3, and that this support came via something (MPEG-4 Part 2) that may have existed before UVD3.
                      Third, the E-350, I think, is UVD3.

                      And what about the deinterlacing support asked about earlier in the thread? It would be great if that worked as well.
                      Do you know what post-processing is? UVD doesn't do post-processing. UVD does decoding.
                      Last edited by droidhacker; 04-05-2013, 04:06 PM.

                      Comment


                      • Cool. Now what would be a nice fanless board supported by this driver for running XBMC?

                        Comment


                        • And what about the deinterlacing support asked about earlier in the thread? It would be great if that worked as well.
                          Any information about deinterlacing capabilities would really be appreciated.

                          Comment


                          • Originally posted by droidhacker View Post
                            First off, you're reading Wikipedia.
                            A source as good as any other, and unlike most, if you saw any omissions or wrong content, perhaps it would be wise to make use of that edit button.
                            Originally posted by droidhacker View Post
                            Second, it doesn't say that MPEG-4 Part 2 was introduced in UVD3; it says that H.263 was introduced in UVD3, and that this support came via something (MPEG-4 Part 2) that may have existed before UVD3.
                            Yes, I stand corrected: MPEG-4 Part 2 was probably introduced earlier and only extended with UVD3. However, IMHO the question still remains, hence I repeat it. I did not ask whether the code works on UVD3 machines (it has been said many times that it does) but whether it accelerates the features that were introduced with UVD3.
                            Originally posted by droidhacker View Post
                            Third, the E-350, I think, is UVD3.
                            Yes, the hardware supports it.
                            Originally posted by droidhacker View Post
                            Do you know what post-processing is? UVD doesn't do post-processing. UVD does decoding.
                            So what if it is part of something other than decoding? Others like Nvidia and Intel do deinterlacing as part of post-processing on their dedicated video engines, and my question, like everyone else's, was whether AMD does that or not. It would be great if it did; as the thread shows, I am not the only one who would like to know...

                            Comment


                            • Thread going downhill fast!

                              In any event, a real big thank-you to all of the people at AMD involved in this release.

                              A few suggestions for further announcements on Phoronix:
                              1. Strive to publish a list of all supported chips/architectures from the start. It would prevent a lot of forum posts.
                              2. Publish links to the relevant code in a clear manner. Some of us don't follow AMD closely enough to have links to all of the open-source sites handy, and frankly AMD's main web site sucks royally when you're trying to find information.
                              3. Related to the above, lobby management at AMD to clean up their main web site. It used to be rather easy to find support areas for documentation and product offerings. The last time I was there looking for APU information it was one brick wall after another; in that case I was looking for E-Series/Brazos info. The site is generally hostile to technical users. (As a side note, this was several months ago.)
                              4. Items 1-3 above highlight the fact that many of us have very long periods between hardware purchases during which our interest in following the technology slides. When it comes time to look for new hardware, excessive friction in finding relevant information can push people toward the smoother path.

                              Comment


                              • Originally posted by ryszardzonk View Post
                                So what if it is part of something other than decoding? Others like Nvidia and Intel do deinterlacing as part of post-processing on their dedicated video engines, and my question, like everyone else's, was whether AMD does that or not. It would be great if it did; as the thread shows, I am not the only one who would like to know...
                                Deinterlacing is part of VDPAU. I think the Mesa VDPAU state tracker should handle it (through post-processing shaders), but I can't say I've ever tested it or know for sure.
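                                If someone wants to probe it from the client side, a sketch: check what the driver advertises, then request VDPAU deinterlacing through mplayer (its deint suboption takes 0-4 per the man page; 3 is motion-adaptive temporal). Whether the state tracker honours the temporal mixers is exactly the open question:

                                # see which video-mixer features the driver claims to support
                                vdpauinfo | grep -i deinterlace

                                # request temporal deinterlacing on an interlaced clip
                                mplayer -vo vdpau:deint=3 -vc ffh264vdpau interlaced-clip.ts   # file name is illustrative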

                                Comment
