ATI, please release an Open UVD API

  • #61
    Originally posted by droidhacker View Post
    ... did you have a point to make?
    I was just a bit sarcastic. Never mind.

    • #62
      HokTar
      Quote:
      Originally Posted by droidhacker
      UVD is VERY VERY quickly starting to look uninteresting. XvMC on G3D!! WOO!!
      This is what the AMD devs wish for. In that case they wouldn't have to bother with the documentation and people would stop bashing them for it...
      Originally posted by droidhacker View Post
      ... did you have a point to make?
      it seems he made his point perfectly clear.

      you seem very easy to impress, even though there's no production-ready XvMC on G3D yet, and don't forget we are told that the UVD fixed-function ASIC was really only there for 'super terminal' use in the workstation marketplace, where AMD makes its money.

      OC there's always the flip side of using that cheap fixed-ASIC UVD option, where/when the 'workstation' markets start having more flexible video demands and yet need to reduce costs at the start of the next financial quarter/year...

      they may in fact move over to the integrated Intel Sandy Bridge CPU at that time, and that means every single super 'workstation' will have the internal encode/decode engine and GFX core included in the price, no discrete GFX card required anymore. That's a bit of a pickle for AMD, as UVD3 (3B?) won't be integrated into the new AMD CPUs going into the super workstation market, or will it? And can they meet Intel's Sandy Bridge timelines?

      but UVD3b (everyone likes a freebie) or whatever does not have (SD/HD x264-type visual quality?) encode capability anyway. As I say, a bit of a pickle, so a price war in the super terminal market it is then, hmmm...

      • #63
        Not sure I understand your point. Are you suggesting that the Sandy Bridge GPU (whether with open source drivers or a hypothetical workstation-oriented proprietary driver) will have comparable 3D performance to a midrange or high-end discrete GPU with proprietary drivers (which is what the workstation market generally buys)?

        • #64
          Originally posted by bridgman View Post
          Not sure I understand your point. Are you suggesting that the Sandy Bridge GPU (whether with open source drivers or a hypothetical workstation-oriented proprietary driver) will have comparable 3D performance to a midrange or high-end discrete GPU with proprietary drivers (which is what the workstation market generally buys)?
          Keeping in mind we are in the "ATI, please release an Open UVD API" thread, of course.

          Well, that's the point, isn't it: we don't really know what the quality of the SB encode/decode is yet, as the x264 patch is not available just yet, and I don't think you have taken out a commercial x264 licence yet and are working under NDA to produce something interesting and far more useful outside the workstation market.

          And AMD DID put the UVD decoder in there to satisfy this very same workstation market, yes?

          The new Sandy Bridge GPU, again, is really an unknown right now, but at least it has a lot of devs porting code to it and running tests to get it working to a good standard. So the question seems to be: IS it 'good enough' for this key market, and will they be inclined to save the budget by using it?

          The question of "comparable 3D performance to a midrange or high-end discrete GPU with proprietary drivers" doesn't seem relevant to this UVD versus SB encode/decode engine comparison for the workstation market. UVD is what you chose to put on there because it meets their need, and they use UVD right now, yes!

          • #65
            Originally posted by popper View Post
            it seems he made his point perfectly clear.
            You are very confused.
            He didn't say anything new, and he didn't provide a new level of understanding of anything. He simply stated what is already VERY WELL KNOWN.

            you seem very easy to impress, even though there's no production-ready XvMC on G3D yet,
            No, we have one that is marginally functional, but functional nevertheless.

            No, I am NOT easy to impress, but when changes happen that ARE VERY IMPRESSIVE, I certainly will be.

            Do you want to know what is impressive?
            Got a little teaser message on Friday. Come back on Monday, and the thing is FUNCTIONAL! That is WICKED fast.

            and don't forget we are told that the UVD fixed-function ASIC was really only there for 'super terminal' use in the workstation marketplace, where AMD makes its money.
            VERY VERY VERY confused, you are..... the workstation market is where AMD makes its money ***IN LINUX***. The UVD was put there for WINDOZE REGULAR USERS who want to watch Blu-ray discs.

            OC there's always the flip side of using that cheap fixed-ASIC UVD option, where/when the 'workstation' markets start having more flexible video demands and yet need to reduce costs at the start of the next financial quarter/year...
            This is totally indecipherable. Could you please rewrite that, though?

            they may in fact move over to the integrated Intel Sandy Bridge CPU at that time, and that means every single super 'workstation' will have the internal encode/decode engine and GFX core included in the price, no discrete GFX card required anymore. That's a bit of a pickle for AMD, as UVD3 (3B?) won't be integrated into the new AMD CPUs going into the super workstation market, or will it? And can they meet Intel's Sandy Bridge timelines?
            Intel crippleGPU won't be impressing any workstation graphics users, so that theory goes straight out the window.

            Also, a fixed-setting H.264 encoder won't be impressing anyone doing professional video encoding. Get that cheesy Sony look on your next BD movie....

            but UVD3b (everyone likes a freebie) or whatever does not have (SD/HD x264-type visual quality?) encode capability anyway. As I say, a bit of a pickle, so a price war in the super terminal market it is then, hmmm...
            Again, there appears to be a major disconnect between your thoughts and your keyboard. Are you saying something about UVD not having an encoder? Well guess what? Gallium CAN be an encoder!

            You just argued for Gallium, while the rest of your rant was about how great Intel is. FOCUS!!!

            • #66
              Originally posted by popper View Post
              The new Sandy Bridge GPU, again, is really an unknown right now,
              The rumors point to performance similar to an HD5500, which is a big improvement over current Intel GPUs. Most impressive is that the next generation, Intel Ivy Bridge, is expected to double Sandy Bridge GPU performance.

              • #67
                Originally posted by Jimbo View Post
                The rumors point to performance similar to an HD5500, which is a big improvement over current Intel GPUs. Most impressive is that the next generation, Intel Ivy Bridge, is expected to double Sandy Bridge GPU performance.
                As you say... RUMORS!!!

                Intel graphics are a massive turd; don't expect them to suddenly make something that's actually mediocre -- mediocre is far better than Intel.

                • #68
                  (A bit off-topic:) Intel graphics hardware has performance FAR, FAR away from what we get with a discrete ATI or NVIDIA graphics card... So I don't expect anyone interested in very heavy video editing to use Intel graphics hardware on their workstations... (Most people on Linux will (still) use Quadro FX or FireGL hardware for video editing).

                  About the subject: I'd rather have a proprietary yet functional UVD API with H.264, VC-1 and MPEG-2 decoding (similar to what NVIDIA actually provides us with VDPAU) than an open-source implementation that could be risky to implement due to patent infringement (personally, I hate patents). So I think the best way to provide us an "Open UVD API" would be, instead of releasing the UVD API to the open-source community, to use the graphics card's shader capabilities to decode H.264, VC-1 and MPEG-2 formats in hardware...

                  These were my "2 cents"...

                  Cheers

                  • #69
                    Or you could always do what I did... buy a Broadcom BCM970015, which is a video decoder with OPEN SOURCE DRIVERS and fully redistributable firmware -- to the point that the whole thing works as-is in Fedora (970015 support comes in F14, to be released in a few days), without even the need to add weird repos. Lets you have that out-of-the-box *EVERYTHING* experience. And at less than 1 watt....

                    Now if only the post office would hurry up with it...


                    The big question I have is this:
                    How is it that Broadcom, typically known for being tough to deal with in open source, is able to provide the source code for this device (which supports all that DRM nonsense on the Windoze side of things), but AMD, known for being a happy player in the world of open source, is not able to provide similar code for its implementation of virtually the exact same thing?

                    • #70
                      Originally posted by evolution View Post
                      About the subject: I'd rather have a proprietary yet functional UVD API with H.264, VC-1 and MPEG-2 decoding (similar to what NVIDIA actually provides us with VDPAU) than an open-source implementation that could be risky to implement due to patent infringement (personally, I hate patents). So I think the best way to provide us an "Open UVD API" would be, instead of releasing the UVD API to the open-source community, to use the graphics card's shader capabilities to decode H.264, VC-1 and MPEG-2 formats in hardware...
                      Cheers
                      Imho it would (for the moment) be fine if they released an open API, but kept it closed source. That would end up as a freely available header file and a binary "XvBA" .so file that is tied to the fglrx driver. Yes, it's not perfect for "fully open source minded people" (me included), but it's better than the current state and still allows AMD to close whatever the hell they want to close.

                      As for the argument (from AMD, not you) that it would be too easy to reverse engineer it: I say that's a non-argument, since we already have the current somewhat-working XvBA implementation, which uses the exact same functions from the core XvBA lib that AMD tries to protect so much. So WHY isn't the API documentation released yet? You don't need to provide UVD specifics! Just the XvBA usage docs.

                      Note: AMD has already shipped XvBA, since it's been in every Catalyst driver for... some time. All they haven't released yet are the function names and the header to make use of it.

                      While looking at the file libXvBAW.so.1.0 with the ldd command (to find which libraries it links to), it's neat to see that it's not relying on a single AMD lib!:

                      linux-vdso.so.1 => (0x00007fff39fab000)
                      libm.so.6 => /lib/libm.so.6 (0x00007f4032fbb000)
                      librt.so.1 => /lib/librt.so.1 (0x00007f4032db3000)
                      libGL.so.1 => /usr/lib/libGL.so.1 (0x00007f4032b3d000)
                      libc.so.6 => /lib/libc.so.6 (0x00007f40327e0000)
                      libpthread.so.0 => /lib/libpthread.so.0 (0x00007f40325c3000)
                      libXext.so.6 => /usr/lib/libXext.so.6 (0x00007f40323b0000)
                      libXdamage.so.1 => /usr/lib/libXdamage.so.1 (0x00007f40321ae000)
                      libXfixes.so.3 => /usr/lib/libXfixes.so.3 (0x00007f4031fa9000)
                      libXxf86vm.so.1 => /usr/lib/libXxf86vm.so.1 (0x00007f4031da3000)
                      libX11-xcb.so.1 => /usr/lib/libX11-xcb.so.1 (0x00007f4031ba2000)
                      libX11.so.6 => /usr/lib/libX11.so.6 (0x00007f4031865000)
                      libxcb-glx.so.0 => /usr/lib/libxcb-glx.so.0 (0x00007f4031650000)
                      libxcb.so.1 => /usr/lib/libxcb.so.1 (0x00007f4031435000)
                      libdrm.so.2 => /usr/lib/libdrm.so.2 (0x00007f403122a000)
                      libdl.so.2 => /lib/libdl.so.2 (0x00007f4031025000)
                      /lib/ld-linux-x86-64.so.2 (0x00007f4033619000)
                      libXau.so.6 => /usr/lib/libXau.so.6 (0x00007f4030e23000)
                      libXdmcp.so.6 => /usr/lib/libXdmcp.so.6 (0x00007f4030c1d000)
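
                      (For reference, the listing above came from something like the command below; the exact path is an assumption on my part, since fglrx may install the lib somewhere else on your distro:)
                      ldd /usr/lib/libXvBAW.so.1.0
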
                      So, I bet you can access the UVD chip if you just know how to use that lib. And I bet this is the function you need for that:
                      00000000000d4120 T UVDCommand::Initialize(Device const*)
                      (found with the command: nm --special-syms -D -C libAMDXvBA.so.1.0) Sadly, the XvBA functions didn't spit out their required arguments:
                      00000000000e9a20 T XVBACreateContext
                      00000000000eb360 T XVBACreateDecode
                      00000000000eaff0 T XVBACreateDecodeBuffers
                      00000000000ec150 T XVBACreateGLSharedSurface
                      00000000000e9ce0 T XVBACreateSurface
                      00000000000eb680 T XVBADecodePicture
                      00000000000e9b80 T XVBADestroyContext
                      00000000000eb590 T XVBADestroyDecode
                      00000000000eb150 T XVBADestroyDecodeBuffers
                      00000000000e9eb0 T XVBADestroySurface
                      00000000000eb740 T XVBAEndDecodePicture
                      00000000000eb210 T XVBAGetCapDecode
                      00000000000e9bb0 T XVBAGetSessionInfo
                      00000000000ea570 T XVBAGetSurface
                      00000000002b7f20 D XVBAGetSurfaceCap
                      00000000000e9a00 T XVBAQueryExtensionEx
                      00000000000eb5c0 T XVBAStartDecodePicture
                      00000000000e9ee0 T XVBASyncSurface
                      00000000000eac70 T XVBATransferSurface
                      00000000000ea070 T XVBAUpdateSurface
                      And interesting while browsing through libAMDXvBA.so.1.0 is a bunch of new functions for the 6xxx series (I see a lot of barts* functions). But what's even more wanted here now is the advanced deinterlacing:
                      MotionAdaptiveDeinterlacingFilter::MotionAdaptiveDeinterlacingFilter
                      So the neat deinterlacing IS in the XvBA API!
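
                      (If you want to hunt for those symbols yourself, a quick filter over the same nm dump does the job; the grep pattern is just my guess at what to match:)
                      nm --special-syms -D -C libAMDXvBA.so.1.0 | grep -iE 'barts|deinterlac'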

                      If someone is a bit more knowledgeable than me about pulling a .so file apart, please do so! I seriously wonder if it's possible to reconstruct the header file... I haven't found anything yet.
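
                      (The closest thing I can offer is a rough sketch like the one below: dump the exported XVBA* entry points with nm and emit placeholder declarations. The "extern int" return type and the placeholder argument lists are pure guesses on my part, which is exactly the problem -- the names can be recovered, but the real signatures can't:)
                      nm -D --defined-only libAMDXvBA.so.1.0 \
                        | awk '$2 == "T" && $3 ~ /^XVBA/ { print "extern int " $3 "(/* arguments unknown */);" }' \
                        > xvba_guess.h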

                      The thing that pains me in this is that all the functions are there to give perfect hardware-accelerated video decoding, and even a range of different deinterlacing methods is there! All we need is the information to make use of these functions.

                      I'm guessing gbeauche and bridgman are under strict NDA rules here and can't say a thing?
