MythTV Adds Support For NVIDIA VDPAU

  • #31
    Originally posted by Jorophose View Post
    I think it's still too early to call this one.

    It's great nVidia has opened up VDPAU; I'd love to see a GeForce+VIA Nano platform already! Something based on the GeForce 9 series chipset (aren't those in the works already?) would be badass. Even 8200 or 8300 would be nice, assuming it can do high-bitrate 1080p h.264 (I know it's not needed, but it's good future-proofing). At this point, I don't really care about binary vs source, if only because it's not like IGPs with open drivers are any good, and it seems only nVidia is making low-power miniITX-quality IGPs (from what I understand, the 780G is a bit harder to tame; plus, it would be tied to a socketed design).

    I'm hoping AMD can get its shit together, and at least get us XvBA for Christmas on the HD3200 and the HD4000 cards, plus open 3D accel like they've been claiming. There's really nothing from nVidia to compete with the HD4000 series... What, the 9600GT or 9800GT? That old GPU?
    4830 = 8800GT/9800GT
    4870 = GF216

    Nvidia holds its own quite fine.
    http://www.hexus.net/content/item.php?item=16385&page=8

    Comment


    • #32
      Originally posted by val-gaav View Post
      Well, some people misunderstood my comment. I really am not surprised that the average GNU/Linux user may choose nvidia. Sure, their driver mostly works, gives the best performance, etc.

      ... but I'm surprised that developers pick this up too. They are the ones that should know best that blobs are harmful to Linux.

      Really, nvidia is the last big player that doesn't provide the specs, the specs devs have been crying out for over the years. Nvidia should be pressured to change their stance.
      In the field of video acceleration, I think OSS and Linux have performed very badly, and it's frustrating because it's an area where Microsoft haven't done all that well either, and it could be something where Linux could really shine.

      I mean, you can read the CoreCodecs site for their opinion of DXVA, and if you've built an HTPC based on Windows, you'll know that it's pretty buggy, in inconsistent and seemingly random ways.

      But on Linux, we keep having aborted attempts at replacing or updating XvMC, and we keep waiting for the arrival of "the next big thing" which doesn't seem to ever happen: vaapi hasn't moved in months and Gallium is crawling. Are there others?

      I think the hardware vendors have given up waiting for "the F/OSS community" to get it together and produce a standard. Intel with vaapi obviously, AMD with their "sooper-secret" UVD2 noises (and my 780G will never be supported, thanks AMD...) and now nVidia with VDPAU. The first one with working software will win and the others will have to play catch-up. Seems to be a fairly clear-cut race at the moment.

      Then we look to hardware vendors to open their documentation so that open drivers can be produced:
      - AMD's "opening" has been nothing but pathetic. One single document describing register names but no functional details!?
      - AMD drivers are a nightmare. fglrx is buggy and lacking functionality, and RadeonHD and Radeon are struggling to provide even baseline functionality, let alone usable acceleration.
      - Intel's documentation has been better, if you don't mind that they've left out their newest chipsets (which are the only interesting ones).
      - The Intel X driver is open-source but it's a closed group of developers and they don't seem to have a public roadmap so I can't see how any outsiders can get involved.
      - Via have released some source code and documentation, but they've stated that they won't document the media functionality due to patent concerns.

      It seems that nVidia is the only vendor with either the cojones or the money to provide a working solution for Linux now. They're obviously committed to their closed-source path, and that's their choice.

      We don't have a moral choice here, it's much more basic than that: working video acceleration on Linux (=nVidia) versus non-working drivers and waiting for that next point release...

      I have 3 motherboards with chipset graphics: Intel G45, AMD 780G, nVidia 8200. Guess which is the only one with a stable X driver, let alone video acceleration? Realistically, will that situation change in the next 6 months?

      Comment


      • #33
        Originally posted by RobBrownNZ View Post
        In the field of video acceleration, I think OSS and Linux have performed very badly, and it's frustrating because it's an area where Microsoft haven't done all that well either, and it could be something where Linux could really shine.
        Video is probably the hardest area for Linux to shine, because of the conflict between DRM-mandated secrecy and Linux's open source kernel. All of the hardware vendors are taking a year or two longer to ship video support on Linux than on other OSes.

        Originally posted by RobBrownNZ View Post
        and my 780G will never be supported, thanks AMD...)
        Why do you say this? I'm not going to talk about specific future plans, but if we did release any video decode acceleration the 780's UVD2 core would probably be the first to be supported.

        Originally posted by RobBrownNZ View Post
        AMD's "opening" has been nothing but pathetic. One single document describing register names but no functional details!?
        Are you talking about the AMD/ATI documentation? At last count we had 7 documents, including a few hundred pages of "how to program it" information:

        http://www.x.org/docs/AMD/

        Not sure, but I'm guessing that the "single document" you mentioned is for the older Geode system-on-a-chip. Where are you looking?

        Originally posted by RobBrownNZ View Post
        AMD drivers are a nightmare. fglrx is buggy and lacking functionality, and RadeonHD and Radeon are struggling to provide even baseline functionality, let alone usable acceleration.
        Just curious, other than acceleration code for the new 6xx/7xx 3d engine (which is next to come out), what functionality do you think is missing from radeon/radeonhd compared to the other open-source drivers?
        Last edited by bridgman; 11-30-2008, 10:27 PM.

        Comment


        • #34
          Originally posted by deanjo View Post
          Linux's stable nature does not come from its open-source nature. That comes from its great design (Linux, that is; not X, nor other things like audio, which is stuck in the dark ages). Its closed-source *nix brethren also enjoy great stability.
          Sure, stability doesn't come from just one thing... but you see, people don't run just Linux. They run X, KDE, GNOME, CUPS and a bunch of other things they need. Distros package all those things and make sure it all works. If there's a blob in the system, they cannot fix it. Oh, and one fairly good nvidia blob doesn't mean all blobs are coded as they should be.

          Think about Windows. It hasn't really been an unstable system since the NT kernel. What brings problems there? Third-party drivers and applications. All those blobs just sometimes don't work well with each other.

          Originally posted by RobBrownNZ View Post
          We don't have a moral choice here, it's much more basic than that: working video acceleration on Linux (=nVidia) versus non-working drivers and waiting for that next point release...
          I really don't have anything against people choosing nvidia if they need features that other drivers don't provide. Right now I'm using the radeon driver on an rs690 ATI IGP and I am happy with it. Earlier I used fglrx, and it also wasn't bad. So you are exaggerating a bit with "non-working drivers".
          Last edited by val-gaav; 11-30-2008, 10:13 PM.

          Comment


          • #35
            Recursive quoting doesn't seem to be working for me, so I'll paraphrase from bridgman and val-gaav. Also, I don't think that lots of links make for nice reading on a forum, but if you want me to provide links to sources of my (mis)information then let me know!

            Bridgman-
            The 780G point comes from this very site, which said that the AMD UVD stuff would work with UVD2 only. From what I can divine, 780G is UVD(1) and so won't be supported.

            I didn't know about some of those documents, nor did I know about the x.org document repository. Why isn't that stuff visible on developer.amd.com? And does any of that cover the 780G rather than just the older products? (This last question isn't necessarily relevant to AMD; I'm frustrated with Intel because their documentation doesn't cover G45 and from reading the X and fb drivers, there are significant differences between G45 and the available documentation)

            I'm not really comparing radeon/radeonhd with "the other open-source drivers", but with the nVidia driver. As for missing functionality, it's just the classics - GLX_EXT_texture_from_pixmap, OpenGL, even XvMC. All have been missing, buggy, or performance-challenged for a long time in my testing of open-source drivers. (To be fair, I haven't had XvMC since I bought a GeForce 8 series card, nVidia has never supported that, but I guess they were directing their work towards VDPAU instead)

            val-gaav:
            I run KDE on my desktop + development machines, and when *any* vendor makes an H.264, 1080p hardware-accelerated solution viable for Linux, I'll probably run XFce or something like that under MythTV. I totally believe that Linux beats the pants off Windows for stability, even with nVidia's blob (it burns me when I have to explain to my wife that the recording of her soap opera is lost because Windows had a random application error).

            I believe your point is that we should strongly discourage the blob mentality for Linux, and I agree totally, but I've been waiting for over a year now for a workable H.264 video solution that doesn't need a 3GHz processor and a collection of noisy fans!

            You'd have to admit that nVidia have done a pretty good job of keeping their blob working with the kernel as it's evolved; I once had to wait 3 months for nVidia to update their thunk layer when the kernel stack size limit changed, but apart from that they've been very responsive.

            Also, I was playing Doom 3, Quake 4, etc. on Linux back when fglrx was waaay too buggy for it and the open drivers didn't really do 3D at all. I know that historical points like that don't count for much, times have changed, etc.; but overall I'll take nVidia's demonstrated commitment to the OS, plus the fact that their drivers work, over waiting for open-source drivers.

            It embarrasses me to say it, but I bought the 780G and G45 boards specifically so that I could contribute to the driver development. I hadn't seen any documentation that would allow me to do that, for either product, although maybe bridgman's link might change the situation. Then again, now that I see evidence of a concrete, working solution from nVidia, my motivation is reduced markedly.

            Comment


            • #36
              Originally posted by val-gaav View Post
              They run X, KDE, GNOME, CUPS and a bunch of other things they need. Distros package all those things and make sure it all works. If there's a blob in the system, they cannot fix it. Oh, and one fairly good nvidia blob doesn't mean all blobs are coded as they should be.
              I'm sorry, but there are plenty of bugs in OSS, both old and new, and they have yet to prove that they will be fixed in any more timely a manner than their closed-source cousins. Their track record is just as shaky as closed source's. The quality of code really has very little to do with OSS/closed ideals and more with the competence of the coders doing it. Nvidia has constantly proven that they are up to the task.

              Nvidia enjoys the success it has in Linux because of their ability to control the code. If something breaks, they are the first to tell you that they don't expect someone else to attempt to fix the code for them. We have all seen examples in FOSS, especially in video, where it takes an agonizingly long time to fix things, let alone get basic functionality.

              You say Nvidia should be pressured to change their stance; why, so it can enjoy the constant hell of X development, lose functionality and put faith in a crew that constantly has to go back to the drawing board to come up with alternative solutions on a monthly basis? Did it occur to you that the nvidia blobs (even fglrx for that matter) enjoy their feature and performance dominance because they didn't have to deal with a lot of the pitfalls and limitations of X?

              There are also long-outstanding bugs on older hardware (thinking of S3 and SiS chips from the P2/P3 days, bugs at least 3 years old), but because nobody really uses them anymore those bugs are left unfixed, probably because of their low priority, so you can't argue that FOSS guarantees ongoing support either.

              In theory, FOSS would work and life would be a bed of roses; in real life, though, its track record has been just as spotty as closed source's. Sometimes better, sometimes worse.

              PS. I can't really say I've had a stability problem on Windows for years either.

              Comment


              • #37
                Originally posted by RobBrownNZ View Post
                The 780G point comes from this very site, which said that the AMD UVD stuff would work with UVD2 only. From what I can divine, 780G is UVD(1) and so won't be supported.
                Hi Rob;

                First, the 780 definitely uses UVD2, not UVD1. That's how it earns the "7xx" number - the display and 3d blocks are more like the 6xx generation.

                There have been a couple of references to UVD1 vs UVD2 here. The first was a comment I made about releasing open programming info for UVD. We have not committed to releasing any UVD info (because of the embedded DRM functionality) but I have committed to seeing if we can come up with a way to allow open drivers to use UVD. In that context, I did say that there was "a better chance" of opening up UVD2 than UVD1 because of some internal design differences but that only refers to open documentation and is only my best guess right now.

                Separately, folks have noticed some new files and log messages appearing in the fglrx driver which mention UVD and which probably imply that only UVD2 will be supported. We have not made any announcements in this area, so do keep in mind that all this is just informed speculation so far.

                Originally posted by RobBrownNZ View Post
                I didn't know about some of those documents, nor did I know about the x.org document repository. Why isn't that stuff visible on developer.amd.com? And does any of that cover the 780G rather than just the older products? (This last question isn't necessarily relevant to AMD; I'm frustrated with Intel because their documentation doesn't cover G45 and from reading the X and fb drivers, there are significant differences between G45 and the available documentation)
                All of the same docs are available on developer.amd.com, although even I have a tough time finding them.

                http://developer.amd.com/documentati....aspx#open_gpu

                The only things missing from developer.amd.com are the earlier versions of the 5xx 3D doc (we discovered some missing stuff during initial driver development) and the new "kgrids" atombios parser code.

                The 780 programming is very similar to RV610/630/M7x other than the UVD and memory controller blocks. There weren't enough differences to justify a separate document. The IGP parts also have a couple of unique output blocks ("DDIA", I think) which are not yet covered by the public docs, but we have provided that info under NDA to the driver devs and support has already been added to the drivers and released with our approval. When time permits we will bundle up those "doc-lets" and take them through IP review for public release, but pretty much all of that info already appears in the driver code and right now 6xx/7xx 3d engine support is top priority.

                I have some sympathy for the Intel folks working on G45 docs. Preparing and releasing "internals" documentation on new chips is difficult and stressful.

                Originally posted by RobBrownNZ View Post
                I'm not really comparing radeon/radeonhd with "the other open-source drivers", but with the nVidia driver. As for missing functionality, it's just the classics - GLX_EXT_texture_from_pixmap, OpenGL, even XvMC. All have been missing, buggy, or performance-challenged for a long time in my testing of open-source drivers. (To be fair, I haven't had XvMC since I bought a GeForce 8 series card, nVidia has never supported that, but I guess they were directing their work towards VDPAU instead)
                Ahh, if you compare any open source driver to the NVidia blob you're going to see some bigger differences. NVidia chose to bypass the DRI framework while the open source drivers follow it completely and fglrx follows it in most respects. There are some deficiencies in that framework which are being addressed now - look up "DRI2", "TTM", "GEM", and "Redirected Direct Rendering" for more information.

                One of the things we realized after restarting our open source driver support was that it was almost impossible for the X framework to make progress unless there were open drivers available for a majority of GPUs, since the only practical way to move forward was for developers to modify the framework and the drivers at the same time -- which wasn't really practical with closed drivers. That little chicken-and-egg problem is gone now, and the X/DRI framework is starting to progress faster than I have seen in a lot of years.

                GLX_EXT_texture_from_pixmap should be working fine these days -- I thought it was a prerequisite for running Compiz, but I could be wrong. Without memory management in the kernel (TTM/GEM) it's a bit slow, of course, but that should change soon as the Intel and AMD drivers converge on a common kernel memory management API.

                OpenGL is also limited by the lack of kernel memory management, so right now most of the open drivers are also stuck around GL 1.3 support. Again, that should change fairly quickly as memory management improves. I think the Intel drivers are ahead in that regard. The closed drivers (both NVidia and AMD) have had kernel memory management for a while now so both are already at GL 2.1 or so.

                There doesn't really seem to be much interest in XvMC. We have had enough info in public to write an XvMC driver for at least 6 months now, at least for any GPU up to 5xx or RS690, but I don't think it has been a priority for any of the devs. Xv, or render acceleration, is a different story -- that's what saves most of the CPU time during video playback -- but it has been implemented and working in the open drivers for months. Decode acceleration only really seems to be useful when dealing with HD resolutions and formats, but XvMC as defined only handles MPEG2. There are some discussions about coming up with a standard extension to XvMC to support H.264 and VC-1 but so far I think each vendor is going its own way.

                If you wanted to write an XvMC driver that would probably be an interesting place to get started with driver development. It's simple in principle -- use the motion vectors to set up texture coordinates, draw a textured quad for each affected macroblock (or, I guess, three quads if you treat Y, U and V separately), then pass the results down to the existing Xv code -- but it would take a fair amount of time to come up to speed on XvMC and X programming. It might be cleaner to integrate the motion comp processing into the Xv driver rather than having a separate processing layer for MC -- haven't really looked at it closely. You won't be able to write much code until we release the rest of the 6xx/7xx 3d engine docco, but it would take a while for you to come up to speed on XvMC internals and I think we are getting pretty close to being able to release the next round of code and docs.

                EDIT - I just noticed this is an NVidia thread. How did we end up talking about AMD drivers here?
                Last edited by bridgman; 12-01-2008, 12:18 AM.

                Comment


                • #38
                  EDIT - I just noticed this is an NVidia thread. How did we end up talking about AMD drivers here?
                  Because I made a reply to say "thanks nVidia, I think it's great that you've enabled hardware-accelerated video processing that's actually usable" but I suffer from terrible wordiness.

                  Thanks for taking the time to reply, bridgman. I realise that you have corporate issues preventing you from releasing documentation, or discussing plans, but to me as an end user these are symptoms of a dysfunctional process, not justifications. Ultimately I don't want to hear about documents under NDA, or drivers waiting for kernel memory management, or what's coming up "soon".

                  I've followed the blogs and mailing lists, I've tried to get GEM/KMS/DRI2 (or any combination of the above) to work, it's just a mess. I waited for 2.6.28 for GEM to be merged, now it's 2.6.29 for KMS etc, and I'm sure it will be 2.6.30 for the next component... until what next? And whether it's Eric Anholt's or Keith Packard's or Jesse Barnes' or Egbert Eich's or Kristian Hoegsberg's git repository I look at, I find parts of the puzzle but no overall plan.

                  I'll keep using AMD's processors, but until there's an AMD driver that gives video acceleration right now, I'll be sticking to nVidia graphics.

                  Couple of other points -
                  - the Wikipedia page (yeah, I know...) on UVD describes 780G having "UVD+" but not "UVD2". I'll take your word for it!
                  - Waiting for AMD to release "6xx/7xx 3d engine docco" is exactly what I'm objecting to. When there are repeated delays and what's delivered is repeatedly not what was promised, credibility suffers.

                  Comment


                  • #39
                    Originally posted by RobBrownNZ View Post
                    Ultimately I don't want to hear about documents under NDA, or drivers waiting for kernel memory management, or what's coming up "soon".
                    Yeah, that's a problem. Some people want to know, other people just get annoyed. I haven't found a good solution for that yet.

                    Originally posted by RobBrownNZ View Post
                    I've followed the blogs and mailing lists, I've tried to get GEM/KMS/DRI2 (or any combination of the above) to work, it's just a mess. I waited for 2.6.28 for GEM to be merged, now it's 2.6.29 for KMS etc, and I'm sure it will be 2.6.30 for the next component... until what next? And whether it's Eric Anholt's or Keith Packard's or Jesse Barnes' or Egbert Eich's or Kristian Hoegsberg's git repository I look at, I find parts of the puzzle but no overall plan.
                    Welcome to community development.

                    There really is an overall plan (in the sense that the devs involved have a relatively common view of how everything should fit together at the end) but since this is essentially a community of volunteers there aren't the kind of engineering management practices you would see in most proprietary development. That's a good thing in the sense that it makes room for some extremely talented people who wouldn't be happy in a traditional "managed" development shop, but it also means that both schedules and deliverables are "uncertain" at the best of times.

                    Originally posted by RobBrownNZ View Post
                    Waiting for AMD to release "6xx/7xx 3d engine docco" is exactly what I'm objecting to. When there are repeated delays and what's delivered is repeatedly not what was promised, credibility suffers.
                    The commitment was "here is the *sequence* of deliverables, I'll keep you informed about progress, and here is our best guess for the next deliverable based on what we know today". We have *never* promised delivery on any specific schedule.

                    Where do you think you are seeing these promises?
                    Last edited by bridgman; 12-01-2008, 02:10 AM.

                    Comment


                    • #40
                      There really is an overall plan (in the sense that the devs involved have a relatively common view of how everything should fit together at the end) but since this is essentially a community of volunteers there aren't the kind of engineering management practices you would see in most proprietary development. That's a good thing in the sense that it makes room for some extremely talented people who wouldn't be happy in a traditional "managed" development shop, but it also means that both schedules and deliverables are "uncertain" at the best of times.
                      Now here's where I get confused. You are an AMD employee, correct? Eich works for Novell, Hoegsberg for Red Hat, Anholt, Packard, and Barnes for Intel. Yet you are all working on this on a voluntary basis? With no corporate commitment behind the work to support their products on Linux? That speaks volumes in itself.

                      The commitment was "here is the *sequence* of deliverables, I'll keep you informed about progress, and here is our best guess for the next deliverable based on what we know today". We have *never* promised delivery on any specific schedule.

                      Where do you think you are seeing these promises?
                      Damn, I knew you'd pull me up on the word "promise"! I admit that there is a certain amount of expectation and wishful thinking involved in the translation of an announcement to a predicted arrival date, so I won't argue the toss on "promises".

                      I'm definitely flogging a very sick horse here though; you've been very clear and I've said what I wanted to say, so it's probably time to drop it. Thanks again for your time.

                      Comment


                      • #41
                        Yep, I think you have all the companies right. Some of the volunteering is individual (more than you might think), and some is corporate, but no one company runs the show.

                        I guess the point I'm trying to make is that everyone pitches in where they can and we all try to coordinate along the way, in contrast to a normal "managed" development effort where deliverables are identified, effort is estimated, schedules are set, resources are allocated, tasks are doled out, and one or more people oversee the execution against a detailed, published plan.

                        That kind of management stuff doesn't seem to go over so well in the open source world. You'd think we were killing kittens or something.

                        Anyways, nice talking to you.
                        Last edited by bridgman; 12-01-2008, 03:53 AM.

                        Comment


                        • #42
                          Originally posted by deanjo View Post
                          I'm sorry, but there are plenty of bugs in OSS, both old and new, and they have yet to prove that they will be fixed in any more timely a manner than their closed-source cousins. Their track record is just as shaky as closed source's. The quality of code really has very little to do with OSS/closed ideals and more with the competence of the coders doing it. Nvidia has constantly proven that they are up to the task.
                          Sure, you can find bugs in both worlds. The thing is, ultimately I believe you can find more unfixed bugs in closed-source drivers/applications. Sure, you can find examples of good closed-source apps and bad open-source ones, but generally the trend is the other way around. You know the "Many Eyes Make Bugs Shallow" mantra, but I think it's not just that. When you make your code public, you generally want to make it as good as you can so as not to be embarrassed about it later on. Coding as closed source allows some sloppy programming, as no one besides you and your buds from the office will see the code.

                          You say Nvidia should be pressured to change their stance; why, so it can enjoy the constant hell of X development, lose functionality and put faith in a crew that constantly has to go back to the drawing board to come up with alternative solutions on a monthly basis?
                          I think the current state of X is partially nvidia's fault. Releasing a good closed driver that bypasses many X mechanisms made many people not care about things like DRI2, for example. Without the good nvidia driver, X might have gained many programmers who right now do not care about it because nvidia is working just fine for them.
                          A few years ago, nvidia was really the only choice for a Linux user. That, or some old radeon 9200. I think many companies at some point followed nvidia's example and provided blobs, because it was working so well for nvidia... but you know, so far those companies have failed to provide as good a blob as nvidia does. If not for this bad example, maybe we would have better open-source drivers right now.

                          There are also long-outstanding bugs on older hardware (thinking of S3 and SiS chips from the P2/P3 days, bugs at least 3 years old), but because nobody really uses them anymore those bugs are left unfixed, probably because of their low priority, so you can't argue that FOSS guarantees ongoing support either.
                          Well, those cards are ancient, so no surprise. How many years has it been since the closed drivers for those were last updated?
                          On the other hand, the radeon 9200 or even r100 cards still have nice support and get fixes and new stuff, while my geforce 4200 is in the legacy nvidia tree. You know, that geforce is really fine for everything you can do on that box and I do not need to upgrade it, but I get almost no support for it.

                          Comment


                          • #43
                            Originally posted by val-gaav View Post
                            Sure, you can find bugs in both worlds. The thing is, ultimately I believe you can find more unfixed bugs in closed-source drivers/applications. Sure, you can find examples of good closed-source apps and bad open-source ones, but generally the trend is the other way around.
                            100% unproven and without basis.

                            You know the "Many Eyes Make Bugs Shallow" mantra, but I think it's not just that. When you make your code public, you generally want to make it as good as you can so as not to be embarrassed about it later on. Coding as closed source allows some sloppy programming, as no one besides you and your buds from the office will see the code.
                            There are millions and millions of lines of ugly code in the foss world. There is also pressure on closed source devs, maybe even more, to make good code. A project lead is only going to accept so much crap code before he tells the programmer to hit the road looking for another job. The assumption that companies hire morons for closed code is completely unjustified in the real world.

                            I think the current state of X is partially nvidia's fault. Releasing a good closed driver that bypasses many X mechanisms led many people to stop caring about things like DRI2, for example. Without the good nvidia driver, X might have gained many programmers who right now don't care about it because nvidia works just fine for them.
                            A few years ago, nvidia was really the only choice for a Linux user. That, or some old Radeon 9200. I think many companies at some point followed nvidia's example and provided blobs, because it was working so well for nvidia... but so far those companies have failed to provide as good a blob as nvidia does. If not for this bad example, maybe we would have better open source drivers right now.
                            lol, seriously, so Nvidia is responsible for FOSS development laziness and other companies' inept attempts at bringing a working solution? The "we suck because NV is so good" line is a really weak attempt at justifying the poor state of X. Heck, bridgman has even given examples where, even with all the resources needed for XvMC support made public, there is still no interest from FOSS devs in picking it up and implementing it.


                            Well, those cards are ancient, so no surprise. How many years have passed since the closed drivers for those were last updated?
                            On the other hand, Radeon 9200 or even R100 cards still have nice support and get fixes and new features, while my GeForce 4200 is in the legacy nvidia tree. That GeForce is really fine for everything I do on that box and I don't need to upgrade it, but I get almost no support for it.
                            So what if the 4200 is in the legacy tree? Its blobs are still regularly updated. Legacy does not mean forgotten or unsupported. Hell, even the original TNT, which is older than your Radeons, is still being updated, which is more than can be said of other cards from its era. The last driver available for them was put out a month ago. The point is that the argument that FOSS drivers ensure ongoing support again "works in theory", but in real life the story is very different.

                            Comment


                            • #44
                              Originally posted by RobBrownNZ View Post
                              Bridgman-
                              The 780G point comes from this very site, which said that the AMD UVD stuff would work with UVD2 only. From what I can divine, 780G is UVD(1) and so won't be supported.
                              I was also under the impression that it is UVD1 only; Wikipedia (yes, yes, yes) says so as well, and I searched amd.com and could not find it clearly stated.
                              That might just be me, but it nearly had me buying an nvidia board. Again, not sure what I am going to do...

                              Comment


                              • #45
                                If we get into the details we just end up confusing everyone, so the amd.com blurb tends to talk about what the chip can do rather than which specific version of UVD (or 3D engine, or display controller, or...) is included.

                                The Wikipedia UVD page seems to be just plain wrong -- it says that we use "UVD+" in the 780 (I have never heard of UVD+), but the link it references for that statement says that the 780 uses UVD2.

                                If you follow the links you get:

                                "AMD ATi UVD2 的顯示卡有 3450 3470 3650 3670
                                AMD ATi UVD 的顯示卡有 2400 2600 3850 3870
                                AMD ATi UVD2 的 IGP MB 有 780G 780GX"

                                AFAIK this is wrong as well, but less wrong than the Wikipedia page. My understanding was that:

                                - 2300 (rv550), 2400 (rv610), 2600 (rv630), 34xx (rv620), 36xx (rv635), 38xx (rv670) all have UVD1
                                - 2900 does not have UVD
                                - 3100-3300 (all the variants in 780/790GX family) have UVD2
                                - 4xxx have UVD2

                                There were incremental improvements along the way in both UVD1 and UVD2 so there are actually more than 2 versions of UVD, but the UVD1/UVD2 split covers the main architectural changes.
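                                bridgman's breakdown above can be summarized as a simple lookup table. The sketch below is purely illustrative: the chip codenames for the IGP and 4xxx families (rs780, rv770) are my additions based on common knowledge of the lineup, and the helper function is hypothetical, not any real driver API.

```python
# Illustrative mapping of ATI chip codename -> UVD generation,
# following bridgman's list above. None means the chip has no UVD.
UVD_GENERATION = {
    # UVD1 parts
    "rv550": 1,  # HD 2300
    "rv610": 1,  # HD 2400
    "rv630": 1,  # HD 2600
    "rv620": 1,  # HD 34xx
    "rv635": 1,  # HD 36xx
    "rv670": 1,  # HD 38xx
    # No UVD at all
    "r600": None,  # HD 2900
    # UVD2 parts
    "rs780": 2,  # HD 3100-3300 IGPs (780/790GX family)
    "rv770": 2,  # HD 4xxx
}

def uvd_generation(codename):
    """Return the UVD generation for a chip codename, or None if absent/unknown."""
    return UVD_GENERATION.get(codename.lower())
```

                                Note that, as bridgman says, there were incremental revisions within each generation, so a two-way split like this only captures the main architectural divide.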

                                It's possible that someone leaked early info about the 780 and talked about "UVD+" rather than "UVD2", and that the original site was updated after the info was transcribed to Wikipedia. I'm just guessing, though, based on the fact that there are two rows of "UVD2" parts, so the last row might originally have said "UVD+". We don't spend time running around correcting leaks and rumours, though; there's too much other work to do first.



                                We have not announced anything related to video support with fglrx, and we have no intention of announcing anything that is not ready to use. In the meantime, let me remind everyone that my comments about UVD2 possibly having a better chance of open source support than UVD1 (because of some internal differences) relate only to open source support, not to anything we might do in fglrx.
                                Last edited by bridgman; 12-01-2008, 03:49 PM.

                                Comment
