AMD working on XvMC for r300g?


  • #16
    Hahahaha, well they got me.



    • #17
      Originally posted by ghost_o View Post
      The "bird test" is meant to be a joke, in case anyone was wondering. It has been re-encoded at over 100 Mb/s at points; the max for any currently supported codec is still 40 Mb/s at 1080p.

      That has been a joke "benchmark" floating around for quite some time.

      If you re-encode it to proper levels, it will play just fine.
      Err, no, that's a nice bit of FUD or a wrong assumption there, ghost_o.

      To be clear, it is not a joke "benchmark". It in fact plays fine on a PS3, a Popcorn Hour, and the right PC hardware and decoder, as it stands right now (putting the video and audio into another container is perfectly fine if you need to, of course, but no re-encoding the video to cheat). No re-encoding down to a lower spec; that's the point.

      Sure, it's hard to decode, but it can be decoded and played back smoothly today; CoreAVC helps, of course.

      If your H.264 decoder can't do it, then it's time to sit down and seriously refactor your software decoder until it can, just like BetaBoy and his team did.

      There's a reason that clip is used as a benchmark: it's hard to play but not impossible on today's PC hardware. Don't blame the clip; use it to see if your hardware and software decoders are good enough, and if not, make them good enough like others have done. Simples.



      • #18
        Originally posted by BlackStar View Post
        Lol, even my VDPAU-enabled laptop cannot keep up with the birds video (Core 2 @1.8GHz, Quadro NVS135M, 195.30 drivers, reading from a *ramdisk*).
        Try it again with an i7, or even an i5 or i3; I don't think you will have too many problems then.



        • #19
          Will someone please set the one-minute edit window for the board back to how it used to be? Once more than a minute passes it won't let you edit your posts to add to or fix them. Ten minutes is a good generic limit if you really must set one.



          • #20
            Originally posted by popper View Post
            There's a reason that clip is used as a benchmark: it's hard to play but not impossible on today's PC hardware.
            But is he right that at points the clip is encoded at bit rates so high that they don't reflect any real-world material out there? Because if he is, it might be interesting to a select group of people whether or not the clip is playable by H.264 decoder X, but most people won't give a toss.



            • #21
              Originally posted by popper View Post
              Try it again with an i7, or even an i5 or i3; I don't think you will have too many problems then.
              Nope, no go, sorry. It's the VDPAU decoder that fails to keep up here, not the CPU.
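
              For anyone wanting to confirm where the bottleneck sits, the card's VDPAU decoder advertises hard limits (profile, level, macroblock throughput, max frame size) that can be read back through libvdpau. Here is a minimal sketch of such a query; it is my own illustration, not code from anyone in this thread, and it assumes the libvdpau and X11 development headers are installed. The file name is hypothetical.

              /* query_caps.c: ask the VDPAU driver what it claims to handle
                 for the H.264 High profile.
                 Build with something like: cc query_caps.c -lvdpau -lX11 */
              #include <stdio.h>
              #include <stdint.h>
              #include <vdpau/vdpau.h>
              #include <vdpau/vdpau_x11.h>
              #include <X11/Xlib.h>

              int main(void)
              {
                  Display *dpy = XOpenDisplay(NULL);
                  if (!dpy) {
                      fprintf(stderr, "cannot open X display\n");
                      return 1;
                  }

                  /* Bootstrap a VDPAU device and the proc-address loader. */
                  VdpDevice dev;
                  VdpGetProcAddress *get_proc;
                  if (vdp_device_create_x11(dpy, DefaultScreen(dpy), &dev,
                                            &get_proc) != VDP_STATUS_OK) {
                      fprintf(stderr, "VDPAU device creation failed\n");
                      return 1;
                  }

                  /* Fetch the decoder capability query entry point. */
                  VdpDecoderQueryCapabilities *query;
                  if (get_proc(dev, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES,
                               (void **)&query) != VDP_STATUS_OK)
                      return 1;

                  VdpBool ok;
                  uint32_t level, mbs, w, h;
                  if (query(dev, VDP_DECODER_PROFILE_H264_HIGH,
                            &ok, &level, &mbs, &w, &h) != VDP_STATUS_OK)
                      return 1;

                  printf("H.264 High: supported=%u max_level=%u "
                         "max_macroblocks=%u max_size=%ux%u\n",
                         ok, level, mbs, w, h);
                  return 0;
              }

              If the advertised level or macroblock rate is below what the birds clip demands, no CPU upgrade will change the outcome while decoding is pinned to the VDPAU path.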



              • #22
                Originally posted by monraaf View Post
                But is he right that at points the clip is encoded at bit rates so high that they don't reflect any real-world material out there? Because if he is, it might be interesting to a select group of people whether or not the clip is playable by H.264 decoder X, but most people won't give a toss.
                He is. My laptop plays every 1080p movie I've thrown at it without batting an eyelid (and I've thrown many).

                Frankly, I couldn't care less if it drops frames on a benchmark when every real-world test works smoothly.



                • #23
                  Originally posted by monraaf View Post
                  But is he right that at points the clip is encoded at bit rates so high that they don't reflect any real-world material out there? Because if he is, it might be interesting to a select group of people whether or not the clip is playable by H.264 decoder X, but most people won't give a toss.
                  It seems Dark_Shikari, the x264 developer, took it directly off the BBC Planet Earth "Birds" Blu-ray disc as far as I know, rather than the studio masters, as the visual quality isn't that good.

                  Go ask him about it over on Freenode #x264 if you're that interested in the reasons, and he will perhaps tell you why it's so hard to decode and potential ways to make your decoder and processing chain better. So it is real-world material; go buy the Blu-ray and try it yourself.



                  • #24
                    Originally posted by BlackStar View Post
                    Nope, no go, sorry. It's the VDPAU decoder that fails to keep up here, not the CPU.
                    AFAIK Dark_Shikari's i7 works with it, and while I can't find his post right now, I'm pretty sure his i7 laptop has the NVIDIA VDPAU ASIC in there; I don't know whether he uses that for his testing, though.

                    That i7 laptop is used for refactoring his x264 patches to get better throughput etc., of course.



                    • #25
                      Originally posted by BlackStar View Post
                      He is. My laptop plays every 1080p movie I've thrown at it without batting an eyelid (and I've thrown many).

                      Frankly, I couldn't care less if it drops frames on a benchmark when every real-world test works smoothly.
                      You seem to miss the point. It's just an H.264 clip that's hard to decode; it has nothing to do with "a benchmark" app or whatever.

                      It is you who chooses which stages of decode you make that clip pass through to get the end result: be it patching XvMC and related code to make it play that clip smoothly in the future, or taking Younes Manton's Summer of Code 2008 Gallium patches, which he left to rot way back on January 18, 2009 rather than finishing them to a usable state by SoC 2009.

                      Or even go the way Bridgman prefers today: OpenCL over the new Gallium rather than the UVD way...

                      Either way it involves refactoring or coding new ways to play that AVC 1080p HD clip, and so laying the groundwork for future 2K and 4K super-HD. Those who actually make the hardware and code the patches make the future market. Simples.



                      • #26
                        Of course, Younes Manton's Summer of Code 2008 unfinished Gallium proof-of-concept patches (POC, as he couldn't be bothered to actually finish them and get paid?) stand as they are. He's not interested in them any more, and he's almost invisible today since he went back to Nouveau Mesa patches etc., so it's up to other devs to take that code or not and do something that actually works and is usable by the end user. Will that dev or group of devs be you?



                        • #27
                          Now I'm out of my depth here and this may be off-topic in this thread (r300g?). If so, my apologies, but being interested in open-source drivers and support for hardware acceleration...

                          ATI's new low-end Radeon HD 5450 looks like an interesting fanless card, and Anandtech's review (http://www.anandtech.com/video/showdoc.aspx?i=3734&p=4) pointed out some decoding issues using "a specially crafted heavily interlaced 1080i MPEG-2 file called Cheese Slices, made by blaubart of the AV Science Forum". The review was comparing vector adaptive deinterlacing against motion adaptive deinterlacing.

                          Where are the ATI open drivers in terms of hardware acceleration support? Is there a status chart or table somewhere showing different families/generations and their supported capabilities at present and planned? Such a table could be very useful for laymen wishing to stick with the open drivers while hoping to squeeze the maximum hardware-supported features out of their graphics hardware.



                          • #28
                            Well, don't get me wrong, but I think the birds example is not a good example for now, though I agree it is a good test case for the future. Why?

                            Well, it's just a dream to get an efficiently accelerated decoder out of thin air; in reality that is just not going to happen.

                            So the best solution, I think, is to make a modular system, begin by accelerating the most critical parts, and then start the full optimization process.

                            On Linux we don't have any acceleration, so anything that starts to provide some acceleration, even if little at the beginning, will be just peachy, even using shaders and later migrating to OpenCL.

                            Even a shader-accelerated Xv at the beginning can help reduce CPU overhead a bit; the next step could be semi-GPU-accelerated codec routines in libavcodec, for example, plus some more OpenMP magic (a toy sketch follows below), and so on. Little by little we would eventually end up with one hell of a video decoding system.
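
                            To make the "OpenMP magic" less hand-wavy, here is a toy sketch of my own (an assumption about the approach, not anyone's shipping code). Real decoders parallelize at the slice or frame level, but the shape is the same: spreading per-pixel work on a decoded plane across all cores with a single pragma.

                            /* omp_luma.c: hypothetical post-processing pass over an 8-bit
                               luma plane, with rows split across cores by OpenMP.
                               Build with something like: cc -fopenmp omp_luma.c */
                            #include <stdint.h>
                            #include <stdlib.h>

                            static void brighten_luma(uint8_t *luma, int width, int height,
                                                      int stride)
                            {
                                /* Each row is independent, so rows can be handed out
                                   to threads with no locking. */
                                #pragma omp parallel for schedule(static)
                                for (int y = 0; y < height; y++) {
                                    uint8_t *row = luma + (size_t)y * stride;
                                    for (int x = 0; x < width; x++) {
                                        int v = row[x] + 16;      /* lift brightness */
                                        row[x] = v > 255 ? 255 : (uint8_t)v;
                                    }
                                }
                            }

                            int main(void)
                            {
                                /* One 1080p luma plane, zero-initialized for the demo. */
                                int w = 1920, h = 1080;
                                uint8_t *plane = calloc((size_t)w * h, 1);
                                if (!plane)
                                    return 1;
                                brighten_luma(plane, w, h, w);
                                free(plane);
                                return 0;
                            }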

                            So even if you are right that my torrented BD H.264 sucks compared to birds, we need acceleration for those first; people mostly use these techs to watch videos, not for ultra-pro jobs. So once a normal BD plays just fine, that kind of ultra-high-bitrate video is the next step.



                            • #29
                              Originally posted by misGnomer View Post
                              Where are the ATI open drivers in terms of hardware acceleration support? Is there a status chart or table somewhere showing different families/generations and their supported capabilities at present and planned? Such a table could be very useful for laymen wishing to stick with the open drivers while hoping to squeeze the maximum hardware-supported features out of their graphics hardware.
                              http://wiki.x.org/wiki/RadeonFeature



                              • #30
                                Well d'oh (for me ;-)

                                Thanks agd5f. The table lists ATI hardware features up to the R700 series (aka Radeon HD 4xxx), so I presume the current HD 5xxx line is called "Evergreen".

                                Being the well-intentioned, demanding and nice fellow that I am, I'd also find it nice to have regular post-release, human-readable "State of the Union of Radeon" summaries aimed at layman users. Something referring to the model numbers such as HD 4xxx, 5xxx etc., with simple explanations of what's there and what's not (or is yet to come): 2D, 3D, partial/full hardware acceleration, TV-out, power saving and so forth, with some recommendations too.

                                I realize that much is still in the pipeline (pun unintended), so this might be something for the Phoronix editors to ponder in the future.

                                Meanwhile, for me as an end user it looks like none of the HD 4xxx or 5xxx generation cards will be reasonably usable in the near future. Maybe a cast-off 3xxx card from an upgrading Windows user...

