XvMC support

  • #76
    Originally posted by bridgman View Post
    In the meantime, a lot of the computationally expensive work associated with H.264 decode and encode can be done with shaders, (...)
    This might be a very short question with a very long and complicated answer, but: What shader power is required to match the custom hardware?

    I did follow some attempts to implement generic shader decoding this summer, like here: http://www.bitblit.org/gsoc/g3dvl/ - the other was Rudd on the XBMC team, but that didn't seem to go anywhere, and the g3dvl project was only able to do 854x480 in real time.

    Surely AMD looked a bit into whether regular shaders could do the work before deciding to go with dedicated hardware, so is it realistic to do Blu-ray streams on shaders, or is that just exaggerating the power of shaders?

    It's also a question of what class of chips can do this - would it be something for an integrated chip with 10 shaders, or only something for a 48xx class card? Some ballpark idea of that would be nice.



    • #77
      Yep, the answer is long and complicated

      First off, let's get one thing clear. Implementing decode on shaders is not a complete substitute for dedicated hardware, but it is more flexible (fixed function hardware is picky about encoding details) and should be able to reduce CPU utilization enough to make a lot more systems able to decode in real time.

      There are a bunch of activities in H.264 decode (bitstream parsing, entropy decode, spatial prediction) which don't lend themselves to being implemented on shaders, so that work is going to have to stay on the CPU anyway; the per-pixel, data-parallel stages (motion compensation and the like) are the part a shader implementation would pick up - there's a rough sketch at the end of this post. Fixed function hardware can handle the entire decode operation and use less power when doing the decoding.

      In terms of hardware required, quick answer is "nobody knows for sure until the code is written". I doubt that the 40-ALU parts (HD2400, HD34xx, 780) will have enough power since the 3D engine is also being used for render accel (colour space conversion, scaling, deinterlacing etc..). I have always suggested that anyone wanting to run the open source drivers go with at least a 120-ALU part (2600, 3650 etc..) to have some shader power left over for decode work.

      Again, this is all hypothetical right now anyway. I am just trying to give everyone an idea of what the likely scenarios are -- we are going to look into opening up UVD; I just can't make any commitments until we have actually gone through the investigation, and it won't be quick. We have 6xx/7xx 3D code out now, so IMO the next priority should be basic power management.
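
      To make the split concrete, here is a rough, purely illustrative sketch (CUDA-style, with every name made up -- this is not our driver code) of the kind of per-pixel, data-parallel work that does map well onto shaders: whole-pel luma motion compensation, where each output pixel is just a clamped fetch from the reference frame at its macroblock's motion vector.

      #include <cuda_runtime.h>

      // Illustrative only: whole-pel luma motion compensation for 16x16
      // macroblocks.  Each thread produces one predicted pixel by reading the
      // reference frame at an offset given by its macroblock's motion vector --
      // embarrassingly parallel, which is why this kind of stage suits shaders.
      struct MotionVec { int dx, dy; };      // hypothetical per-macroblock vector

      __global__ void mc_whole_pel(const unsigned char *ref,   // reference luma plane
                                   unsigned char *pred,        // prediction output
                                   const MotionVec *mvs,       // one vector per macroblock
                                   int width, int height, int mbs_per_row)
      {
          int x = blockIdx.x * 16 + threadIdx.x;   // one 16x16 thread block per macroblock
          int y = blockIdx.y * 16 + threadIdx.y;
          if (x >= width || y >= height) return;

          MotionVec mv = mvs[blockIdx.y * mbs_per_row + blockIdx.x];
          int sx = min(max(x + mv.dx, 0), width  - 1);   // clamp at frame edges
          int sy = min(max(y + mv.dy, 0), height - 1);
          pred[y * width + x] = ref[sy * width + sx];
      }

      // Launched roughly as mc_whole_pel<<<dim3(width/16, height/16), dim3(16, 16)>>>(...)
      // after the CPU has already done the bitstream parsing and entropy decode.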



      • #78
        Originally posted by bridgman View Post
        In terms of hardware required, quick answer is "nobody knows for sure until the code is written". I doubt that the 40-ALU parts (HD2400, HD34xx, 780) will have enough power since the 3D engine is also being used for render accel (colour space conversion, scaling, deinterlacing etc..). I have always suggested that anyone wanting to run the open source drivers go with at least a 120-ALU part (2600, 3650 etc..) to have some shader power left over for decode work.
        What about R5xx parts? Are the shaders on those units usable for shader-assisted decode?



        • #79
          Yep, there's nothing special about the 6xx/7xx shaders in that regard. That's one of the interesting things about a shader-based implementation -- it can't accelerate as much as dedicated hardware, but it can work on GPUs which don't *have* dedicated hardware. You would probably need a fairly high-end card though -- the RV530 was the first time we started cranking up the ALU:TEX and ALU:ROP ratios (partly for more shader-intensive games, and partly for video processing), and realistically you would probably need an X8xx, X18xx or X19xx.

          Again, until something is implemented these are all SWAGs.
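
          As an illustration of the render-accel / video-processing work competing for those ALUs (the colour space conversion mentioned above), here's an equally hypothetical CUDA-style sketch -- again just a sketch, with made-up names and a simplified planar 4:4:4 layout, not anything shipping -- of full-range BT.601 YCbCr-to-RGB conversion, one thread per pixel:

          #include <cuda_runtime.h>

          // Illustrative only: full-range BT.601 YCbCr -> interleaved RGB.
          // Real video is usually 4:2:0, which adds a chroma upsampling step.
          __global__ void csc_bt601(const unsigned char *Y, const unsigned char *Cb,
                                    const unsigned char *Cr, unsigned char *rgb,
                                    int width, int height)
          {
              int x = blockIdx.x * blockDim.x + threadIdx.x;
              int y = blockIdx.y * blockDim.y + threadIdx.y;
              if (x >= width || y >= height) return;

              int i      = y * width + x;
              float luma = (float)Y[i];
              float cb   = (float)Cb[i] - 128.0f;
              float cr   = (float)Cr[i] - 128.0f;

              float r = luma + 1.402f * cr;                  // BT.601 coefficients
              float g = luma - 0.344f * cb - 0.714f * cr;
              float b = luma + 1.772f * cb;

              rgb[3 * i + 0] = (unsigned char)fminf(fmaxf(r, 0.0f), 255.0f);
              rgb[3 * i + 1] = (unsigned char)fminf(fmaxf(g, 0.0f), 255.0f);
              rgb[3 * i + 2] = (unsigned char)fminf(fmaxf(b, 0.0f), 255.0f);
          }
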
          Last edited by bridgman; 01-08-2009, 12:51 PM.



          • #80
            "we are going to look into opening up UVD, I just can't make any commitments until we have actually gone through the investigation and it won't be quick. We have 6xx/7xx 3d code out now, so IMO the next priority should be basic power management. "


            That's a shame; we are looking at months at the very least then!

            "I think the attraction of the [NV cuda] library is that it makes it easy to retrieve the decoded frame, while most of the decoder implementations supplied by HW vendors tend to only output to the screen simply because that was the main requirement.

            We make a similar capability available to ISVs :

            http://www.cyberlink.com/eng/press_room/view_1756.html
            "

            Yep, that about covers it for basic needs, it appears. Your average dev, and indeed pro coders such as BetaBoy and the CoreAVC team, don't really need that much help once they have the right library and docs access, it seems. BetaBoy said he wanted to support ATI UVD in CoreAVC and related apps, but you don't give them or the open source coders access to the ATI UVD.

            "I suspect the library uses the DXVA framework in the NVidia drivers, so having DXVA die might be a bit inconvenient, but that's just a guess "

            I think it's just entry points into and out of the generic DSP "black box" they put on their cards/SoC chips, TBO...

            I don't really see why ATI/AMD couldn't also make such a "black box" UVD available as a stop-gap measure to help multi-OS devs in the short term, TBO!

            I don't know why (other than saving pennies they could recoup in the retail cost) you HW vendors don't just move away from these antiquated DSP SoCs and start using current, faster and vastly more expandable FPGAs for your UVD. You (or indeed anyone) could then simply reprogram them on the fly for many other HW-assisted tasks and market sectors.

            Imagine the open source and even closed add-on FPGA (Field Programmable Gate Array) code you could market and sell into the generic mass market.

            Simply taking the chance and putting a current fast, low-power FPGA on every ATI/AMD gfx and related card/motherboard would perhaps bring the world's FPGA prices right down in line with or below the cheap DSPs favoured today, fostering lots of innovative, cost-effective uses in the near and long term for everyone's benefit.

            And there wouldn't be any problems as regards DRM-laden code.

            Given the apparently long potential wait for anything ATI UVD related, perhaps it's finally time to move over to NV cards for now as the only viable option for many people worldwide today, as CoreAVC has a Linux library available and has released a test HW-assisted CUDA/VS2 CoreAVC on Windows that apparently gives it a massive (x2-x4) decoding boost. I don't know if it will be usable on Linux x86 as yet though.
            Last edited by popper; 01-08-2009, 08:06 PM.



            • #81
              Originally posted by popper View Post
              That's a shame; we are looking at months at the very least then!
              For open source, yes, but I expect fglrx will have it sooner.

              Originally posted by popper View Post
              I don't know why (other than saving pennies they could recoup in the retail cost) you HW vendors don't just move away from these antiquated DSP SoCs and start using current, faster and vastly more expandable FPGAs for your UVD. You (or indeed anyone) could then simply reprogram them on the fly for many other HW-assisted tasks and market sectors. Imagine the open source and even closed add-on FPGA (Field Programmable Gate Array) code you could market and sell into the generic mass market. Putting an FPGA on every ATI/AMD gfx and related card/MB would perhaps bring FPGA prices right down in line with or below the cheap DSPs favoured today...
              FPGAs need an insane number of random-logic transistors (i.e. they take a bunch of die area per transistor) to perform the same functionality as application-specific logic. I think UVD would end up roughly the same size as the RV770 shader core if we did it in an FPGA. It's a nice idea, but the cost would probably be a lot higher than you expect.

              Originally posted by popper View Post
              Given the apparently long potential wait for anything ATI UVD related, perhaps it's finally time to move over to NV cards for now as the only viable option for many people worldwide today, as CoreAVC has a Linux library available and has released test HW-assisted CUDA/VS2 on Windows,
              Again, I was talking about UVD support in the open source drivers.

              Originally posted by popper View Post
              I don't know if it will be usable on Linux x86 as yet though.
              The nvcuvid library that everything else builds on is Windows only AFAIK.
              Last edited by bridgman; 01-08-2009, 07:58 PM.



              • #82
                If you want an FPGA-based graphics card you should look at the Open Graphics Project. Their initial developer cards are FPGA based, and note that they use a fairly beefy FPGA just to do a "simple" graphics card. You can also gasp at the price of one of those boards (though it is true that the fact that it is niche keeps costs up).

                However, even the Open Graphics people want to transition to / create a wide-release ASIC version. FPGAs use much more power and silicon, and are less powerful than an equivalent ASIC; as a result, their primary markets are developers and applications where one needs specific functionality for which there is no equivalent ASIC.
                Last edited by _txf_; 01-08-2009, 08:42 PM.



                • #83
                  > If you want an FPGA-based graphics card you should look at the Open Graphics Project.

                  http://www.traversaltech.com/



                  • #84
                    The reason I mentioned FPGAs in passing was not with a view to making the whole core of the gfx card, but rather to finally tapping into the far broader mass markets where many gfx apps and datasets could benefit IF there were an FPGA on a common board such as a gfx card or a common-or-garden motherboard, for instance...

                    NV and ATI are using a static ASIC today for their black-box video engine; I think an FPGA in this case would be better (other than, as bridgman outlined, the die space it would take in your black-box video AV SoC [System on a Chip]), of course.

                    The "FPGAs use much more power and silicon, and are less powerful than an equivalent ASIC" point doesn't seem to hold as much today; for instance, take this:

                    http://www.videsignline.com/products...PCKH0CJUNN2JVN

                    True, it's a large chip, but it's billed as "... are claimed to offer the industry's largest density, highest performance, highest system bandwidth, and lowest power among high-end FPGA solutions."

                    They, as you might expect, also offer conversion to Altera's transceiver-based HardCopy IV GX ASICs.

                    But that's not my point. If all your new HW had a generally available FPGA onboard, making it just as common as a USB port, then you could start to use that to set yourself apart in the mass market, and offer lots of innovative ways for your customer base to interact with that FPGA for their own use, for instance.

                    I'm trying to get away from the old-school thinking: prototype on FPGA and copy to your slightly cheaper static mass ASIC, only to later find you need to fix a bug or limitation you didn't imagine and run off a new batch/revision.

                    Using FPGAs in all your kit could remove that problem, bring that convenience to the masses of home devs, and also give you the ability to fix your design errors/limitations later.

                    In the case above, as bridgman says, of course, it increases die area in a specialist SoC, so it's not as versatile for that case, but there's nothing stopping you putting a separate off-the-shelf version on your main PCB and linking it in to your main SoC that way, to use as you see fit...

                    Generally speaking, it's become cheaper now to take other people's SoCs and mix and match them to get your desired end result, rather than have the ASIC made yourself, so it's just the logical step to forget the limited static ASIC and use flexible FPGAs en masse to bring the prices down there; they are as fast or faster, and would only get better (as CPUs and gfx did) if HW vendors started using them en masse in the PC realm.

                    Or that's how I see it today, and how it could be tomorrow with some insight and innovative thinking.

                    I like http://www.pldesignline.com/ for getting you thinking...

                    If you wanted to get really twisted, of course, there's also the KiloCORE PPC-based FPGA with 256 and 1024 cores from way back in 2006; mixed PPC/AltiVec and x86, now that's a sure-fire headline even today.

                    http://www.rapportincorporated.com/p...20Platform.pdf
                    Last edited by popper; 01-09-2009, 02:06 AM.



                    • #85
                      Might be considered ranting, rather than helping but...

                      Where exactly lies Linux's appeal for the average Windows convert (being one myself)? Can't play some fancy 3D games - OK, I can live with that. Must learn a lot about things I never even knew were needed. For example, what's the thing with XRandR - everyone gets excited about (gasp!) changing resolution, multiple displays and rotating them? Well, that's an amazing feature - guess it shouldn't be taken for granted in the third millennium.

                      Going on - want to play an mp3? Good luck finding your way through names like GStreamer, Xine, Phonon, PulseAudio... Fancy window rendering? No problem, first you have to install fglrx or radeonhd or radeon or ati, and then configure your Composite flag in xorg.conf, but you can't do it if you don't su/sudo first. Fglrx? Fglrx? WTF is fglrx? Compiz? Metacity? KWin? People, I just want them kube rotatin'n'shit

                      ACPI on laptops? hald, acpid - there's always something wrong with them; deeper C-states on processors have problems; can't have power management on laptop IGPs (without fglrx); standby and hibernate woes.

                      OK - most regular Redmond-style users quit at this point. Me - not. I read, I learned, compiled quite a bit, changed text files, patched, recompiled. Slow and steady - one day I'll even manage to do it all without a blink. Not so soon, though.

                      But... at the end of the day, what do I get, where's the satisfaction? Can't play games - checked. Can't have video under Compiz - checked. Can't do web development right (IEs4Linux - not so good; Flash?) - checked. Can't play a Shockwave game online during a break - checked. But wait! I can watch my HD H.264 DVB-T TV broadcasts and trailers - NOT.

                      I always considered Linux to be the kind of OS that is smart about resources. You know - doing things the way they should be done. If you've got specialized hardware to do some kind of work - use it, optimize, optimize, optimize. Come on... i810 integrated graphics has working XvMC. If it is such a hassle to program working MPEG2 acceleration on a modern graphics processor (and it seems it is), then what's the point? Where is that magical area that Linux excels at - compiling stuff? Watching DVDs on a console framebuffer? Showing some other friend wobbly windows?

                      I know what the answer is - why don't you do something yourself? Yup, criticism is hard to take when you're doing all the work. I respect that. And I promise I won't give you a "But,...". Just a thought.



                      • #86
                        " I just want them kube rotatin'n'shit
                        "

                        ROTFL



                        • #87
                          [...] then what's the point? Where is that magical area that Linux excels at [...]
                          Choice.

                          This is a system that can scale from paltry router hardware (a 100MHz processor and 4MB of flash) all the way to clusters with thousands of processors. You can choose exactly how you want your system to work, what software to run, and what the software will look like.

                          I'm not saying this power comes for free - you've seen the price. I'm also not saying this is something most people want or need - but it's there if you need it.

                          However, Linux is steadily getting better, bit by bit. 2008 brought major improvements to wireless and video drivers, the two major pains. Most companies have now released, or have promised to release, specs for their hardware. The UI is improving (compare KDE 4 to 3, or GNOME 2.24 to 2.16, if you want proof). More programs are seeing ports or can be adapted to run under Wine. 2009 will hopefully bring massive improvements to video support - the pieces are already set.

                          We won't see masses of people switching overnight (people never like change - XP vs Vista, anyone?), but as Linux becomes more and more viable, the more adopters it will gain. In fact, I was quite surprised at how many people in my immediate environment switched, said they liked the change and never went back.

                          I agree it gets frustrating at times, and I certainly share your pain about video playback, Flash support and certain development tools. However, I look forward to the (hypothetical) day when I'll be able to go to a shop and buy a computer with Linux installed by default ("Do you want Windows? It's only 80€ more.").



                          • #88
                            As has been stated before, it's a resource issue. Windows has 90% of the desktop market; Linux is at maybe 1-2%. I'd love to support video decode and 3D and advanced power management, etc., but these tasks are non-trivial (especially considering how complex gfx hardware is nowadays) and there are only a handful of active developers.



                            • #89
                              Originally posted by bridgman View Post
                              The ideal solution is to have access to the dedicated hardware we all put in our chips to support BluRay playback. Unfortunately that hardware is also wrapped up in bad scary DRM stuff so making it available on Linux is a slow and painful process.
                              It seems as though the graphics hardware manufacturers should spend more time lobbying US legislators, both to repeal the DMCA and preferably to ban DRM outright. It's a boat anchor on the tech industry, it hobbles the pace of technical innovation, and everyone knows it.

                              Seriously... looking at the resources involved in driver development (for all platforms) to deal with DRM in one fashion or another, the lobbying efforts might be time and money better spent... if DRM is taken out of the picture, then anyone could use your fancy dedicated hardware on any system without any lawyers having a hemorrhoid over the whole thing. Which, frankly, would raise the intrinsic value and usefulness of your hardware product.



                              • #90
                                Originally posted by Porter View Post
                                It seems as though the graphics hardware manufacturers should spend more time lobbying US legislators, both to repeal the DMCA and preferably to ban DRM outright. It's a boat anchor on the tech industry, it hobbles the pace of technical innovation, and everyone knows it.

                                Seriously... looking at the resources involved in driver development (for all platforms) to deal with DRM in one fashion or another, the lobbying efforts might be time and money better spent... if DRM is taken out of the picture, then anyone could use your fancy dedicated hardware on any system without any lawyers having a hemorrhoid over the whole thing. Which, frankly, would raise the intrinsic value and usefulness of your hardware product.
                                I am not a lawyer (oh GOD no I am not a lawyer!), but if my interpretation of 17 USC section 117 (the "fair use" section of the U.S. copyright laws) is anywhere near correct, you could argue that DRM is already illegal as it prevents users from backing up media or programs, moving the software to a new machine, or copying for educational and research purposes, all of which are rights granted to the users of the copyrighted works.

