AMD Releases Open-Source R600/700 3D Code


  • #61
    Originally posted by Extreme Coder
    As far as I understand, there should be no difference in playback performance between fglrx and the radeon/radeonhd drivers, since both just use Xv for playback and the CPU has to do the decoding in both cases.
    You'd be surprised what a difference you can make when you properly optimize that CPU usage, though. It's entirely possible for there to be a vast gulf between the two drivers' performance achievements.



    • #62
      Thanks for the Hard Work

      I bought a 3850 a while back before the 4800 series came out. My only regret is not buying a 3870.

      Thanks for the hard work bringing this to the community. I hope the community repays AMD/ATI in kind.

      Trying to do my part. Just bought a 780G motherboard (Foxconn A7GM-S) and an X2 CPU. The board I replaced had an Nvidia chipset (Biostar 6100-M9).
      Been very happy with it so far.

      Now if only I could use the 3850 for graphics while offloading other tasks (like physics... hint, hint) to the onboard 3200.

      Anyway, thanks again.



      • #63
        Seriously, effort is spent MUCH better elsewhere than on trying to implement video acceleration, ESPECIALLY useless MPEG-2 acceleration.

        Even my 2 GHz socket-754 Athlon 64 can decode 1080i/p MPEG-2. What does that tell me? Today we have an enormously larger amount of CPU power available. The CPUs that are becoming mainstream can almost do 1080p H.264 in realtime, and that is despite the fact that the ffmpeg branch with properly threaded H.264 decoding hasn't even been merged yet. Once ffmpeg merges it, and with everyone constantly getting faster CPUs, 1080p H.264 acceleration will matter as little to us as MPEG-2 acceleration did once our CPUs could handle DVD playback!! We are talking at MOST one year, and as enthusiasts, which most of us here probably qualify as, we can already simply choose capable hardware when we buy; most of us probably have it already.
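        If you want to see where your own CPU stands, here is a rough, purely illustrative benchmark sketch in Python. It just times ffmpeg doing a software decode to the null muxer with different thread counts; it assumes ffmpeg is on your PATH and that "sample_1080p.ts" is some local test clip (both names are placeholders, nothing official).

        import os
        import subprocess
        import time

        CLIP = "sample_1080p.ts"  # hypothetical local test clip

        def timed_decode(threads):
            # Decode only (null muxer, no output file); suppress ffmpeg's console chatter.
            devnull = open(os.devnull, "wb")
            start = time.time()
            subprocess.call(
                ["ffmpeg", "-threads", str(threads), "-i", CLIP, "-f", "null", "-"],
                stdout=devnull, stderr=devnull)
            devnull.close()
            return time.time() - start

        for n in (1, 2, 4):
            print("%d decode thread(s): %.1f s" % (n, timed_decode(n)))

        If even the single-threaded run keeps up with the clip's duration, hardware decode would buy you nothing.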

        So it's simply not worth the trouble to do these things when so many other, more important, more FUN areas could be worked on.

        Also consider that eventually, with Gallium3D, GPGPU and so on, we are simply going to implement the decoding on top of THOSE facilities (work on which is already in progress by at least one person, with promising results). That further strengthens the case for simply not bothering with conventional video acceleration right now.

        What will bring the most gain is just getting plain old Xv working.




        • #64
          Thank you AMD/Bridgeman/Alex/Egbert/others so much for this!
          This is a really great moment and a big step.

          re: ongoing 5xx love, I think the plan is to implement KMS/MM all the way back to R100 and Gallium3D/GL2 as far back as R300, so no worries.
          It is so great that you don't forget about "old" devices. Many people still have these and are so happy about improvements.

          What are the next steps?
          Are you going to extend GEM so that it can be used with radeon/radeonhd?
          Is DRI2 for flicker-free video playback coming, too?
          For video decoding there are many possibilities, and you probably know best what makes the most sense... via Gallium3D, an extended XvMC, VA-API, or OpenCL... I'm sure you will make a wise decision.
          Power management would be really sweet, too, but for that a new interface to user space would be needed in order to control the usage and so on...
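          For the curious, here is a tiny, crude sketch (plain Python, nothing official) of how one might check from user space which of those interfaces a system currently exposes. It only shells out to the xvinfo and vainfo utilities and looks for telltale strings in their output, and it assumes those tools are installed; treat it purely as an illustration.

          import subprocess

          def tool_reports(cmd, needle):
              # Run a diagnostic tool and look for a telltale string in its output.
              try:
                  out = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                         stderr=subprocess.STDOUT).communicate()[0]
              except OSError:
                  return False  # tool not installed
              return needle in out.decode("utf-8", "replace")

          print("Xv adaptors present: %s" % tool_reports(["xvinfo"], "Adaptor #"))
          print("VA-API profiles reported: %s" % tool_reports(["vainfo"], "VAProfile"))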


          Thank you, thank you, thank you. I can't say it too often.



          • #65
            Originally posted by Nille
            I have tried it, but I think I did something wrong.

            If I try to move a window, the CPU usage climbs to 100%, and video playback shows only a black field.

            PS: Something is spamming the Xorg.log; it is about 31 MB o_O, with stuff like this: [link]

            NOTE: I use a Radeon HD 3850 with Ubuntu 8.10
            As mentioned in the release notes, this is targeted at developers rather than users. Most features are incomplete at this point.



            • #66
              I'm sure this sounds really bad, but you guys should be thanking the companies that are customers of AMD and pressured them into doing all this, because otherwise, AMD wouldn't give a rat's ass about you and me (= the Linux user) :P



              • #67
                > Bridgman is probably talking about developer interest,
                > not general interest.

                I'm talking about customer interest. If AMD/ATI wants to sell me a GPU, it *must* have FLOSS support for MPEG decode (and working Xv and S-Video out code as well). Code that actually works: none of this tearing/purple/crashing stuff I read about with the half-baked Xv code.

                I realize that the profit margins are higher selling high-end chips to extreme gamers. But the volumes are higher for video; nearly everyone watches TV.

                AMD/ATI puts out binary-only drivers that decode video. Obviously the chips can do it, and code exists to do it. There is no reason AMD/ATI can't put out FLOSS code to do it.

                > What's so significant about 2009-02-17?

                Analog TVs and VCRs in the US become boat anchors. The obvious solution is to add ATSC tuner(s) to your computer and have it be a DVR. There are FLOSS drivers for some ATSC tuners. With ATSC the data arrives already compressed (an MPEG-2 transport stream), so it takes very little CPU to record: you basically just copy the data from the tuner to a disk file (a rough sketch of that copy loop follows at the end of this post).

                The problem is playback. Most channels are HD, so you have to decode HD regardless of the display's resolution, and that takes a LOT of CPU if you decode on the CPU. Yes, the latest CPUs may be fast enough, but if your CPU is 2-3 years old you have a problem. Buy a new CPU? AMD keeps changing the sockets, so you have to buy at *least* a new mainboard and memory as well as the CPU, and maybe a new power supply as well. At that point you might as well get a new case, so you can use the old machine to google for answers to the problems encountered bringing the new machine up. Maybe you don't have the time and money to invest in an entire new computer during Great Depression 2.0? Maybe you object to wasting CPU on a job better done by a GPU? Maybe you have other things for your CPU to be doing? A CPU takes more energy to decode video than a GPU does, so decoding with the CPU isn't very green. Buying an entire new computer instead of just a video card isn't very green either.

                Or... AMD/ATI's developers who write the binary-only drivers could write some FLOSS code and submit it to X.org or whoever.
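                To illustrate how little work the recording side is, here is a rough sketch of that copy loop in Python. It assumes a Linux DVB ATSC tuner that has already been tuned (for example with azap -r) so that the demuxer is feeding /dev/dvb/adapter0/dvr0; the device path and output file name are only illustrative.

                DVR_DEVICE = "/dev/dvb/adapter0/dvr0"  # assumed adapter number
                OUTPUT = "recording.ts"                # illustrative output name

                # "Recording" is just copying the already-compressed MPEG-2 transport
                # stream from the tuner's DVR device to a file; stop with Ctrl-C.
                src = open(DVR_DEVICE, "rb")
                dst = open(OUTPUT, "wb")
                try:
                    while True:
                        chunk = src.read(188 * 1024)  # multiple of the 188-byte TS packet size
                        if not chunk:
                            break
                        dst.write(chunk)
                except KeyboardInterrupt:
                    pass
                finally:
                    src.close()
                    dst.close()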



                • #68
                  Originally posted by Dieter
                  AMD/ATI puts out binary-only drivers that decode video.
                  Obviously the chips can do it, and code exists to do it.
                  There is no reason AMD/ATI can't put out FLOSS code to
                  do it.
                  I don't know the specifics, but there are certainly MANY reasons why AMD might not be allowed to put out FOSS code for it. For starters, the video decode block might not even have been designed by AMD's engineers; it might instead be a third-party part used by AMD under an agreement to keep its interfaces secret.

                  There also might be patent issues at stake, such as the chip interface or the driver code using something that AMD had to license under very restrictive (and expensive) terms.

                  Finally, there's the security issue that was brought up -- the chip might have some design flaws that allow parts of it to be used to bypass system security under Windows (I can think of some contrived examples of how that might happen, but again I don't have any specifics), and the sad fact is that enabling FOSS drivers at the expense of losing Windows certification would be corporate suicide.

                  It may be totally possible and safe for AMD to release specs to use the video decode hardware in FOSS drivers. We can't assume that's the case, though.



                  • #69
                    Originally posted by elanthis
                    I don't know the specifics, but there are certainly MANY reasons why AMD might not be allowed to put out FOSS code for it. For starters, the video decode block might not even have been designed by AMD's engineers; it might instead be a third-party part used by AMD under an agreement to keep its interfaces secret.

                    There also might be patent issues at stake, such as the chip interface or the driver code using something that AMD had to license under very restrictive (and expensive) terms.

                    Finally, there's the security issue that was brought up -- the chip might have some design flaws that allow parts of it to be used to bypass system security under Windows (I can think of some contrived examples of how that might happen, but again I don't have any specifics), and the sad fact is that enabling FOSS drivers at the expense of losing Windows certification would be corporate suicide.

                    It may be totally possible and safe for AMD to release specs to use the video decode hardware in FOSS drivers. We can't assume that's the case, though.
                    Nope, it's not possible.
                    If I remember what bridgman said correctly, it took a lot of time to make sure it is "impossible" to reverse-engineer certain parts of the GPU from the released documentation, so that AMD/ATI won't face legal problems...
                    And on top of that, this video decoding can be done with Gallium3D in a "few" months, so it's not that bad!



                    • #70
                      You're both right

                      What I said was that we would spend some time to see if it was possible to open up the UVD hardware, but unless you hear me say otherwise you should assume that it's not going to happen.

                      On the other hand, MC can be done with the information we have already released, and I'm pretty sure we will be able to open up the IDCT hardware on 6xx-and-earlier parts. That said, I'm not convinced that the old IDCT hardware is worth using unless you have a *really* old CPU and GPU.

                      My guess is that a shader-based implementation of IDCT will be the way to go as well, particularly since all of the subsequent functions (filtering, MC, render stuff like CSC, scaling, etc.) can be done in shaders as well, so there is no need to push information back and forth between system and video memory. Anything upstream of IDCT will probably stay on the CPU, but AFAIK that runs pretty efficiently on CPUs today.

                      The only thing I'm not sure about is IDCT performance in shaders, since it will probably require some texture indirection on most current GPUs, and I don't have a good feel for the performance implications of that. If it turns out that IDCT in shaders does run efficiently (as I expect), then I think we will probably go in that direction before trying to open up the old IDCT hardware.
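                      For anyone wondering what the IDCT step actually computes, here is a small reference sketch in Python/NumPy; it is just the textbook 8x8 inverse DCT that a shader implementation would have to evaluate per block, not anything from the driver code.

                      import numpy as np

                      N = 8

                      def dct_basis(n=N):
                          # Orthonormal DCT-II basis: C[k, x] = alpha(k) * cos((2x + 1) * k * pi / (2n))
                          c = np.zeros((n, n))
                          for k in range(n):
                              alpha = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
                              for x in range(n):
                                  c[k, x] = alpha * np.cos((2 * x + 1) * k * np.pi / (2 * n))
                          return c

                      C = dct_basis()

                      def idct_8x8(coeffs):
                          # 2-D inverse DCT in matrix form (C^T * Y * C); separable into row and column passes.
                          return C.T.dot(coeffs).dot(C)

                      # Round-trip sanity check: forward DCT of a random block, then inverse.
                      block = np.random.randint(0, 256, (N, N)).astype(float)
                      coeffs = C.dot(block).dot(C.T)
                      assert np.allclose(idct_8x8(coeffs), block)
                      print("8x8 IDCT round-trip OK")

                      The interesting question for shaders is not the math itself but how those per-block lookups map onto texture fetches, which is exactly the indirection cost mentioned above.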
                      Last edited by bridgman; 30 December 2008, 04:27 PM.

