MythTV Adds Support For NVIDIA VDPAU


  • #51
    Originally posted by npcomplete View Post
    I don't understand the whole attitude of pointing fingers towards AMD/ATI. It's like everyone's expecting a complete FOSS driver to be delivered by them, when in fact they've only promised to release documentation, which they've done well on. They never said anything about implementing the open source drivers. fglrx is already fully featured minus the video acceleration part.
    I think I'm the guilty party with that finger-pointing. I don't expect a complete F/OSS driver, but I do want the F/OSS community to have the ability (i.e. documentation, etc.) to create one. Hell, I'd even like to help. My frustration comes from (a) the information being released too slowly, (b) vital parts of the information being held back because of DRM and/or patent concerns, (c) the "community" failing to create a video acceleration framework for drivers to be built around, and (d) having to use Windows, against my wishes, just to display non-DRM H.264 content on my HTPC.

    Whether any of that is AMD's fault, or even within AMD's power to correct, I don't know.

    But whatever, good job nVidia for taking the initiative.

    Now as far as HW video decode acceleration goes, it's great that nvidia has a working solution. I don't see too much of a problem with a vendor-specific interface, because if and when the "community" decides on a non-proprietary API, they'll probably just provide a wrapper around VDPAU.
    That wasn't me. I welcome any working solution, even a vendor-specific blob.

    But HW decode though has a drawback in general -- you can't do software driven post-decode processing, at least not yet on any platform AFAIK.
    Yep, I've seen that. Apparently DirectX (10.1?) now has a way of getting video out of the GPU into CPU space, because this is a bit of a general problem. I don't know about Linux, but if OpenGL and other windows can all be composited by e.g. Compiz, then surely it's possible? It's a very interesting point; I'm going to have to check it out.
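
    For what it's worth, VDPAU itself does expose a readback path, so a player could in principle pull decoded frames back into CPU memory for software post-processing (at the cost of bus bandwidth). Here is a minimal sketch of that path, assuming libvdpau's X11 entry point and a planar YV12 copy-out; the decode step and all error handling are omitted, and this is purely an illustration, not MythTV's actual code:

    Code:
    /* Hedged sketch, not MythTV code: copy a decoded VDPAU video surface back
     * into CPU memory so software filters could run on it. Assumes libvdpau's
     * X11 entry point; decode step and error handling omitted.
     * Build (roughly): gcc vdpau_readback.c -lvdpau -lX11 */
    #include <stdint.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>
    #include <vdpau/vdpau.h>
    #include <vdpau/vdpau_x11.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        VdpDevice dev;
        VdpGetProcAddress *get_proc;

        /* Create the VDPAU device and obtain the proc-address hook. */
        vdp_device_create_x11(dpy, DefaultScreen(dpy), &dev, &get_proc);

        /* All other entry points are looked up through get_proc_address. */
        VdpVideoSurfaceCreate *surface_create;
        VdpVideoSurfaceGetBitsYCbCr *surface_get_bits;
        get_proc(dev, VDP_FUNC_ID_VIDEO_SURFACE_CREATE,
                 (void **)&surface_create);
        get_proc(dev, VDP_FUNC_ID_VIDEO_SURFACE_GET_BITS_Y_CB_CR,
                 (void **)&surface_get_bits);

        /* A 1920x1080 4:2:0 surface; VdpDecoderRender() would fill this. */
        const uint32_t w = 1920, h = 1080;
        VdpVideoSurface surf;
        surface_create(dev, VDP_CHROMA_TYPE_420, w, h, &surf);

        /* Copy the surface out as planar YV12 (plane order: Y, V, U). */
        uint8_t *y = malloc(w * h);
        uint8_t *v = malloc((w / 2) * (h / 2));
        uint8_t *u = malloc((w / 2) * (h / 2));
        void *planes[3] = { y, v, u };
        uint32_t pitches[3] = { w, w / 2, w / 2 };
        surface_get_bits(surf, VDP_YCBCR_FORMAT_YV12, planes, pitches);

        /* y/u/v now hold the frame for CPU-side post-processing. */
        free(y); free(v); free(u);
        XCloseDisplay(dpy);
        return 0;
    }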



    • #52
      Originally posted by npcomplete View Post
      But HW decode though has a drawback in general -- you can't do software driven post-decode processing, at least not yet on any platform AFAIK.
      It's still useful, though. Content such as Blu-ray is (almost?) always progressive, so there's no need for deinterlacing there. I will soon be buying a new TV for use with MythTV through this machine's second DVI output, and I will let the TV do the deinterlacing. I will be using this machine for other things at the same time, so hardware acceleration would be appreciated.



      • #53
        Originally posted by bridgman View Post
        There doesn't really seem to be much interest in XvMC... but I don't think it has been a priority for any of the devs. Xv... saves most of the CPU time during video playback ... Decode acceleration only really seems to be useful when dealing with HD resolutions and formats, but XvMC as defined only handles MPEG2. There are some discussions about coming up with a standard extension to XvMC to support H.264 and VC-1 but so far I think each vendor is going its own way.
        The way I see it (and this is going to be broad and simplistic), the market for HD video (broadcast and cable, and camcorder-recorded HD) is a superset of the market for gamers. The implication from your posting above is that it is easier to implement HD/MPEG/XvMC acceleration than 3D/gaming acceleration. And yet, the only really usable XvMC acceleration (for add-in cards, not onboard) for years has been from the Nvidia blob. As for the newest-generation cards, VDPAU (according to postings here, at nvnews and avsforum) is actually a working product, as opposed to XvBA.

        So instead of waiting for an open-source XvMC extension, and given that NVidia has something out there which is a glass of water for thirsty travelers in the Linux desert, why not just go and implement it in the damn driver? I still shudder to think of my days with the HD2600XT (never mind XvMC, the thing wouldn't even start up correctly).
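
        To be fair to the XvMC spec's limits here: the codec really is baked into the API's surface-type query, so anything beyond MPEG-2 needs either a vendor extension or a different interface entirely. A rough, purely illustrative sketch of that probe, assuming libXvMC and an Xv port already obtained via XvQueryAdaptors() (not taken from any real driver or player):

        Code:
        /* Hedged sketch, purely illustrative: list the XvMC surface types an
         * Xv port exposes, to show how the codec is baked into mc_type.
         * Assumes an XvPortID already found via XvQueryAdaptors(); error
         * handling kept minimal. Build (roughly): gcc -lXvMC -lXv -lX11 */
        #include <stdio.h>
        #include <X11/Xlib.h>
        #include <X11/extensions/Xvlib.h>
        #include <X11/extensions/XvMClib.h>

        static void probe_port(Display *dpy, XvPortID port)
        {
            int ev, err, num = 0;

            if (!XvMCQueryExtension(dpy, &ev, &err)) {
                puts("No XvMC extension on this display");
                return;
            }

            /* One entry per hardware surface type the driver advertises. */
            XvMCSurfaceInfo *info = XvMCListSurfaceTypes(dpy, port, &num);
            for (int i = 0; i < num; i++) {
                /* The low bits of mc_type carry the codec ID (XVMC_MPEG_2,
                 * etc.), the high bits the acceleration level (XVMC_MOCOMP /
                 * XVMC_IDCT). The stock header only knows MPEG-era codec IDs,
                 * which is why H.264/VC-1 need a vendor extension or a new
                 * API such as VDPAU. */
                int codec = info[i].mc_type & 0xffff;
                printf("surface 0x%x  max %ux%u  %s%s\n",
                       (unsigned)info[i].surface_type_id,
                       (unsigned)info[i].max_width,
                       (unsigned)info[i].max_height,
                       (codec == XVMC_MPEG_2) ? "MPEG-2" : "other",
                       (info[i].mc_type & XVMC_IDCT) ? " + IDCT"
                                                     : " (mocomp only)");
            }
            XFree(info);
        }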
        Last edited by Nexus7; 02 December 2008, 05:43 PM.



        • #54
          Originally posted by deanjo View Post
          100% unproven and without basis.
          Sure, but:
          There are millions and millions of lines of ugly code in the foss world. There is also pressure on closed source devs, maybe even more, to make good code. A project lead is only going to accept so much crap code before he tells the programmer to hit the road looking for another job. The assumption that companies hire morons for closed code is completely unjustified in the real world.
          I can tell you the same about your statement... I base my assumption on my experience. I know a few programmers around here who work for quite big companies or the government, and well... I have heard scary stories about their programming practices... Also, my contact with software has usually shown that open-source apps were more stable/reliable than the closed ones.
          I guess you also base your statement on your experience... so we should agree that our experiences differ here.


          lol, seriously, so Nvidia is responsible for FOSS development laziness and other companies' inept attempts at bringing a working solution? The "we suck because NV is so good" line is a really weak attempt at justifying the poor state of X.
          I didn't say nvidia was the main problem, but their attitude certainly didn't help and has had some negative effect on the state of X.


          Heck, bridgman has even given examples where, even with all the resources needed for XvMC support being made public, there is still no interest from FOSS devs in picking it up and implementing it.
          There are far more important things than that to implement right now in the open-source drivers. If XvMC only supports MPEG2, then it is in fact a waste of time to implement... Really, who needs GPU help to decode MPEG2?

          So what if the 4200 is in the legacy tree? Its blobs are still regularly updated. Legacy does not mean forgotten or unsupported. Hell, even the original TNT, which is older than your...
          Last time I checked (which was a long time ago, really), I couldn't get Compiz+AIGLX working on a Riva TNT2... I'm sorry, but if the legacy drivers do not get needed new features, they are somewhat "forgotten".
          Last edited by val-gaav; 05 December 2008, 09:45 AM.



          • #55
            Originally posted by val-gaav View Post
            Sure, but:

            I can tell you the same about your statement... I base my assumption on my experience. I know a few programmers around here who work for quite big companies or the government, and well... I have heard scary stories about their programming practices... Also, my contact with software has usually shown that open-source apps were more stable/reliable than the closed ones.
            I guess you also base your statement on your experience... so we should agree that our experiences differ here.
            I've also worked for large companies, government, and open source. I'm talking from first-hand experience, not hearsay.

            I didn't say nvidia was the main problem, but their attitude certainly didn't help and has had some negative effect on the state of X.
            The ones that impact the state of X are the X developers. I would argue that had it not been for nvidia and their blobs, video would be in a far poorer state than it currently is, slowing the adoption of OSes such as Linux.

            Last time I checked (which was a long time ago, really), I couldn't get Compiz+AIGLX working on a Riva TNT2... I'm sorry, but if the legacy drivers do not get needed new features, they are somewhat "forgotten".
            Not surprising, since your TNT2 does not meet the minimum hardware requirements for Compiz. Can't make a silk purse out of a sow's ear.



            • #56
              Originally posted by bridgman View Post
              The Wikipedia UVD page seems to be just plain wrong -- it says on the page that we use UVD+ in 780 (I have never heard of UVD+) but the link it references for that statement says that 780 uses UVD2.
              I hope it's all right with you that I updated the Wikipedia entry for those IGPs to UVD2, using your post as a reference?



              • #57
                Sure. Thanks!

