MythTV Adds Support For NVIDIA VDPAU


  • korpenkraxar
    replied
Originally posted by RobBrownNZ:
    bridgman, that's a great help. Thanks!

    Now, if you could just let us know when the UVD2 stuff in fglrx will be ready for use in MythTV...
Yeah, that is the million-dollar question right now. Well, perhaps not a million dollars, but about $80-$100 for an OK low/mid-end discrete jack-of-all-trades card from either maker.

Damn you bridgman for being so polite, wise and patient in these forums! I can't even read a thread about an actual working HD implementation from Nvidia without seriously considering getting an ATI card instead. After years of suffering from bad-fglrx syndrome, I am pretty sure your boss is not paying you enough.
    Last edited by korpenkraxar; 01 December 2008, 08:38 PM.



  • RobBrownNZ
    replied
    bridgman, that's a great help. Thanks!

    Now, if you could just let us know when the UVD2 stuff in fglrx will be ready for use in MythTV...



  • bridgman
    replied
    If we get into the details we just end up confusing everyone, so the amd.com blurb tends to talk about what the chip can do rather than which specific version of UVD (or 3D engine, or display controller, or...) is included.

The Wikipedia UVD page seems to be just plain wrong -- it says that we use UVD+ in the 780 (I have never heard of UVD+), but the link it references for that statement says that the 780 uses UVD2.

If you follow the links you get:

"Graphics cards with AMD ATi UVD2: 3450 3470 3650 3670
Graphics cards with AMD ATi UVD: 2400 2600 3850 3870
IGP motherboards with AMD ATi UVD2: 780G 780GX"

AFAIK this is wrong as well, but less wrong than the Wikipedia page. My understanding was that:

    - 2300 (rv550), 2400 (rv610), 2600 (rv630), 34xx (rv620), 36xx (rv635), 38xx (rv670) all have UVD1
    - 2900 does not have UVD
    - 3100-3300 (all the variants in 780/790GX family) have UVD2
    - 4xxx have UVD2

    There were incremental improvements along the way in both UVD1 and UVD2 so there are actually more than 2 versions of UVD, but the UVD1/UVD2 split covers the main architectural changes.
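The list above can be condensed into a small lookup table. This is purely an illustrative sketch of the forum post's claims -- the chip labels are informal shorthand from this thread, not official AMD naming, and per the caveat above there were incremental revisions within each generation:

```python
# Rough Radeon-family -> UVD-generation map, transcribed from the list above.
# Informal summary of a forum post, not an authoritative AMD table.
UVD_GENERATION = {
    "HD 2300 (rv550)": 1,
    "HD 2400 (rv610)": 1,
    "HD 2600 (rv630)": 1,
    "HD 34xx (rv620)": 1,
    "HD 36xx (rv635)": 1,
    "HD 38xx (rv670)": 1,
    "HD 2900": None,  # no UVD block at all
    "HD 3100-3300 (780/790GX IGP)": 2,
    "HD 4xxx": 2,
}

def uvd_generation(chip: str):
    """Return the UVD generation for a chip, or None if it has no UVD."""
    return UVD_GENERATION.get(chip)

print(uvd_generation("HD 4xxx"))  # 2
print(uvd_generation("HD 2900"))  # None
```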

It's possible that someone leaked early info about the 780 and talked about "UVD+" rather than "UVD2", and that the original site was updated after the info was transcribed to Wikipedia. I'm just guessing though, based on the fact that there are two rows of "UVD2" parts, so the last row might have originally said "UVD+". We don't spend time running around correcting leaks and rumours though -- there's too much other work to do first.



    We have not announced anything related to video support with fglrx, and we have no intention of announcing anything that is not ready to use. In the meantime, please let me remind everyone that my comments about UVD2 possibly having a better chance of open source support than UVD1 (because of some internal differences) only relate to open source support and not to anything we might do in fglrx.
    Last edited by bridgman; 01 December 2008, 04:49 PM.



  • Cuppa-Chino
    replied
Originally posted by RobBrownNZ:
    Bridgman-
    The 780G point comes from this very site, which said that the AMD UVD stuff would work with UVD2 only. From what I can divine, 780G is UVD(1) and so won't be supported.
I was also under the impression that it is UVD1 only; Wikipedia (yes, yes, yes) also says so, and I was searching amd.com and could not find it clearly stated.
Now that might be me, but it nearly had me buying an Nvidia board. Again, not sure what I am going to do...



  • deanjo
    replied
Originally posted by val-gaav:
Sure you can find bugs in both worlds. The thing is, ultimately I believe you can find more unfixed bugs in the closed source drivers/applications. Sure you can find examples of good closed source apps and bad open source ones, but generally the trend is the other way around.
    100% unproven and without basis.

You know the "Many Eyes Make Bugs Shallow" mantra, but I think it's not just that. When you make your code public you generally want to make it as good as you can so as not to be embarrassed about it later on. Coding as closed source allows some sloppy programming, as no one besides you and your buds from the office will see the code.
    There are millions and millions of lines of ugly code in the foss world. There is also pressure on closed source devs, maybe even more, to make good code. A project lead is only going to accept so much crap code before he tells the programmer to hit the road looking for another job. The assumption that companies hire morons for closed code is completely unjustified in the real world.

I think the current state of X is partially Nvidia's fault. Releasing a good closed driver that bypasses many X mechanisms made many people not care about things like DRI2, for example. Without a good Nvidia driver, X might have got many programmers who right now do not care about it because Nvidia is working just fine for them.
A few years ago the situation was that Nvidia was really the only choice for a Linux user. That, or some old Radeon 9200. I think many companies at some point followed Nvidia's example and provided blobs, because it was working so well for Nvidia... but you know, so far those companies have failed to provide as good a blob as Nvidia does. If not for this bad example, maybe we would have better open source drivers right now.
lol, seriously, so Nvidia is responsible for FOSS development laziness and other companies' inept attempts at bringing a working solution? The "we suck because NV is so good" line is a really weak attempt at justifying the poor state of X. Heck, bridgman has even given examples where, even with all the resources needed for XvMC support being made public, there is still no interest from FOSS devs in picking it up and implementing it.


Well, those cards are ancient, so no surprise. How many years has it been since the closed drivers for those were last updated?
On the other hand, Radeon 9200 or even r100 cards still have nice support and get fixes and new stuff, while my GeForce 4200 is in the legacy Nvidia tree. You know, that GeForce is really fine for all the things you can do on that box and I do not need to upgrade it, but I get almost no support for it.
So what if the 4200 is in the legacy tree? Its blobs are still regularly updated. Legacy does not mean forgotten or unsupported. Hell, even the original TNT, which is older than your Radeons, is still being updated, which is more than can be said of other cards from its era. The last driver available for them was put out a month ago. The point being, the argument that FOSS drivers ensure ongoing support again "works in theory", but in real life the story is very different.



  • val-gaav
    replied
Originally posted by deanjo:
I'm sorry, but there are plenty of bugs in OSS, both old and new, and they have yet to prove that they will be fixed in any more timely manner than their closed source cousins. Their track record is just as shaky as closed source. The quality of code really has very little to do with its preferred ideals of OSS/closed, but more with the competence of the coders doing it. Nvidia has constantly proven that they are up to the task.
Sure you can find bugs in both worlds. The thing is, ultimately I believe you can find more unfixed bugs in the closed source drivers/applications. Sure you can find examples of good closed source apps and bad open source ones, but generally the trend is the other way around. You know the "Many Eyes Make Bugs Shallow" mantra, but I think it's not just that. When you make your code public you generally want to make it as good as you can so as not to be embarrassed about it later on. Coding as closed source allows some sloppy programming, as no one besides you and your buds from the office will see the code.

You say Nvidia should be pressured to change their stance. Why, so it can enjoy the constant hell of X development, lose functionality, and put faith in a crew that constantly has to go back to the drawing board to come up with alternative solutions on a monthly basis?
I think the current state of X is partially Nvidia's fault. Releasing a good closed driver that bypasses many X mechanisms made many people not care about things like DRI2, for example. Without a good Nvidia driver, X might have got many programmers who right now do not care about it because Nvidia is working just fine for them.
A few years ago the situation was that Nvidia was really the only choice for a Linux user. That, or some old Radeon 9200. I think many companies at some point followed Nvidia's example and provided blobs, because it was working so well for Nvidia... but you know, so far those companies have failed to provide as good a blob as Nvidia does. If not for this bad example, maybe we would have better open source drivers right now.

There are also long-outstanding bugs (thinking of S3 and SiS chips from the P2/P3 days that are at least 3 years old) on older hardware as well, but because nobody really uses them anymore those bugs are left unfixed, probably because of the low priority, so you can't argue that FOSS guarantees ongoing support either.
Well, those cards are ancient, so no surprise. How many years has it been since the closed drivers for those were last updated?
On the other hand, Radeon 9200 or even r100 cards still have nice support and get fixes and new stuff, while my GeForce 4200 is in the legacy Nvidia tree. You know, that GeForce is really fine for all the things you can do on that box and I do not need to upgrade it, but I get almost no support for it.



  • bridgman
    replied
    Yep, I think you have all the companies right. Some of the volunteering is individual (more than you might think), and some is corporate, but no one company runs the show.

    I guess the point I'm trying to make is that everyone pitches in where they can and we all try to coordinate along the way, in contrast to a normal "managed" development effort where deliverables are identified, effort is estimated, schedules are set, resources are allocated, tasks are doled out, and one or more people oversee the execution against a detailed, published plan.

That kind of management stuff doesn't seem to go over so well in the open source world. You'd think we were killing kittens or something.

    Anyways, nice talking to you.
    Last edited by bridgman; 01 December 2008, 04:53 AM.



  • RobBrownNZ
    replied
    There really is an overall plan (in the sense that the devs involved have a relatively common view of how everything should fit together at the end) but since this is essentially a community of volunteers there aren't the kind of engineering management practices you would see in most proprietary development. That's a good thing in the sense that it makes room for some extremely talented people who wouldn't be happy in a traditional "managed" development shop, but it also means that both schedules and deliverables are "uncertain" at the best of times.
    Now here's where I get confused. You are an AMD employee, correct? Eich works for Novell, Hoegsberg for Red Hat, Anholt, Packard, and Barnes for Intel. Yet you are all working on this on a voluntary basis? With no corporate commitment behind the work to support their products on Linux? That speaks volumes in itself.

    The commitment was "here is the *sequence* of deliverables, I'll keep you informed about progress, and here is our best guess for the next deliverable based on what we know today". We have *never* promised delivery on any specific schedule.

    Where do you think you are seeing these promises ?
    Damn, I knew you'd pull me up on the word "promise"! I admit that there is a certain amount of expectation and wishful thinking involved in the translation of an announcement to a predicted arrival date, so I won't argue the toss on "promises".

I'm definitely flogging a very sick horse here, though; you've been very clear and I've said what I wanted to say, so it's probably time to drop it. Thanks again for your time.



  • bridgman
    replied
Originally posted by RobBrownNZ:
    Ultimately I don't want to hear about documents under NDA, or drivers waiting for kernel memory management, or what's coming up "soon".
    Yeah, that's a problem. Some people want to know, other people just get annoyed. I haven't found a good solution for that yet.

Originally posted by RobBrownNZ:
    I've followed the blogs and mailing lists, I've tried to get GEM/KMS/DRI2 (or any combination of the above) to work, it's just a mess. I waited for 2.6.28 for GEM to be merged, now it's 2.6.29 for KMS etc, and I'm sure it will be 2.6.30 for the next component... until what next? And whether it's Eric Anholt's or Keith Packard's or Jesse Barnes' or Egbert Eich's or Kristian Hoegsberg's git repository I look at, I find parts of the puzzle but no overall plan.
Welcome to community development.

    There really is an overall plan (in the sense that the devs involved have a relatively common view of how everything should fit together at the end) but since this is essentially a community of volunteers there aren't the kind of engineering management practices you would see in most proprietary development. That's a good thing in the sense that it makes room for some extremely talented people who wouldn't be happy in a traditional "managed" development shop, but it also means that both schedules and deliverables are "uncertain" at the best of times.

Originally posted by RobBrownNZ:
    Waiting for AMD to release "6xx/7xx 3d engine docco" is exactly what I'm objecting to. When there are repeated delays and what's delivered is repeatedly not what was promised, credibility suffers.
    The commitment was "here is the *sequence* of deliverables, I'll keep you informed about progress, and here is our best guess for the next deliverable based on what we know today". We have *never* promised delivery on any specific schedule.

    Where do you think you are seeing these promises ?
    Last edited by bridgman; 01 December 2008, 03:10 AM.



  • RobBrownNZ
    replied
    EDIT - I just noticed this is an NVidia thread. How did we end up talking about AMD drivers here ?
Because I made a reply to say "thanks nVidia, I think it's great that you've enabled hardware-accelerated video processing that's actually usable", but I suffer from terrible wordiness.

    Thanks for taking the time to reply, bridgman. I realise that you have corporate issues preventing you from releasing documentation, or discussing plans, but to me as an end user these are symptoms of a dysfunctional process, not justifications. Ultimately I don't want to hear about documents under NDA, or drivers waiting for kernel memory management, or what's coming up "soon".

    I've followed the blogs and mailing lists, I've tried to get GEM/KMS/DRI2 (or any combination of the above) to work, it's just a mess. I waited for 2.6.28 for GEM to be merged, now it's 2.6.29 for KMS etc, and I'm sure it will be 2.6.30 for the next component... until what next? And whether it's Eric Anholt's or Keith Packard's or Jesse Barnes' or Egbert Eich's or Kristian Hoegsberg's git repository I look at, I find parts of the puzzle but no overall plan.

    I'll keep using AMD's processors, but until there's an AMD driver that gives video acceleration right now, I'll be sticking to nVidia graphics.

    Couple of other points -
- the Wikipedia page (yeah, I know...) on UVD describes the 780G as having "UVD+" but not "UVD2". I'll take your word for it!
    - Waiting for AMD to release "6xx/7xx 3d engine docco" is exactly what I'm objecting to. When there are repeated delays and what's delivered is repeatedly not what was promised, credibility suffers.

