
Nouveau vs. NVIDIA Linux Comparison Shows Shortcomings


  • Nouveau vs. NVIDIA Linux Comparison Shows Shortcomings


    One week after delivering updated Radeon Gallium3D vs. AMD Catalyst benchmarks on Ubuntu Linux, this morning we are sharing similar results for the open-source, reverse-engineered "Nouveau" Linux graphics driver compared against the proprietary NVIDIA Linux graphics driver. While the Nouveau driver has come a long way and supports the latest Fermi and Kepler GPUs, it is not without its share of shortcomings. Eleven NVIDIA GeForce graphics cards were used in this latest Phoronix comparison.

    http://www.phoronix.com/vr.php?view=18664

  • brosis
    replied
    Originally posted by XorEaxEax View Post
    You seem very confused. Testing is where, as the name implies, 'testing' of new code is done; it's here (and of course upstream) that bugs are found and (hopefully) fixed.

    Once those bugs have been fixed and the code seems to work properly, it will be moved into 'stable'. So I seriously doubt that code from testing is 'usually' more stable than code from stable in Debian; it defies logic.

    Do you have anything to back this up with, like some Debian developers telling people that if they want stability they should use 'testing'?
    Bug #692607
    Basically, it was long fixed upstream in 3.8, but it affects all previous "stable" systems.
    So they went through patch backporting (actually a rewrite), which made it into Debian much, much later than into 3.8+.

    When you use Debian for 3-4 years, it becomes clear that the whole stable/unstable paradigm does not work.
    You can introduce more bugs when you fix bugs. True.
    .... But the ideal definition of "stable" is just to find a current, less-buggy state a few steps behind the cutting edge, and to put full effort into actually fixing bugs at the cutting edge.
    Instead, versions are frozen, and bugs are found that were already fixed by a rewrite of something that is still not in the frozen version; so the fix must be rewritten explicitly for the frozen version - *puff* - you have several versions to support and much more work to do.

    That said, I am a Debian user when it comes to binary systems. But I have always liked source-based much more, even if it's more troublesome. At least those troubles push in the direction of working on the cutting edge, not patching something that was long since patched by another patch.



  • Kano
    replied
    I would NOT use "testing" in the sources.list, because it can break immediately after the wheezy final release. It is much better to use the codename, like wheezy, in there - I have been using wheezy for about a year, since it was already known back then that it would be frozen soon. When Debian is in a (pre-)freeze state it is often already stable enough for many use cases, especially for desktop systems. The problem is always that the freeze can take very long - if you wait for the final release, you definitely wait too long. I hope jessie will move a bit faster from freeze to release, as there shouldn't be as many transitions to do this time as there were going from single arch to multiarch.
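    Kano's advice can be sketched as a sources.list entry; this is a minimal example, assuming a stock Debian mirror (the mirror URLs and components here are illustrative, not taken from the post):

    ```
    # /etc/apt/sources.list -- pin the release codename, not the moving
    # "testing" alias, so the system stays on wheezy after it goes stable
    deb http://ftp.debian.org/debian wheezy main
    deb http://security.debian.org/ wheezy/updates main
    ```

    With "testing" written in place of "wheezy", the same lines would silently start tracking jessie the moment wheezy is released.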



  • XorEaxEax
    replied
    Originally posted by brosis View Post
    Also, testing versions are usually much more stable than "Stable" ones, due to committed bugfixes.
    You seem very confused. Testing is where, as the name implies, 'testing' of new code is done; it's here (and of course upstream) that bugs are found and (hopefully) fixed.

    Once those bugs have been fixed and the code seems to work properly, it will be moved into 'stable'. So I seriously doubt that code from testing is 'usually' more stable than code from stable in Debian; it defies logic.

    Do you have anything to back this up with, like some Debian developers telling people that if they want stability they should use 'testing'?



  • curaga
    replied
    Originally posted by brent View Post
    The list is both incomplete and undocumented.
    The only missing registers, AFAIK, are for (patented) features not yet exposed, plus some they don't consider important. BTW Bridgman, I still want those performance counters.



  • brosis
    replied
    Originally posted by XorEaxEax View Post
    If you have problems with the nouveau drivers you file a bug, and if it was causing massive problems then it would indeed be disabled in mainline, you can be certain of that. Obviously it's not; only retards like you don't understand the difference between development versions and the tested versions which are what is pushed through repositories.

    I'll try to explain since you are so f***ing stupid: currently Michael is testing the unreleased kernel 3.9, which is currently being TESTED for REGRESSIONS; NO distro is using it.

    Then in a while, when 3.9 is finally released, still NO DISTRO WILL USE IT IN THEIR STABLE REPOS. It will go through both UPSTREAM and DOWNSTREAM TESTING, bugs will invariably be found, fixes will be made, and then perhaps by 3.9.1-3.9.3 or so, depending on how many bugs/regressions are found, a BLEEDING EDGE distro like Arch may switch to 3.9.X. So when Michael states that the latest 3.9 Git kernel he is using for testing has regressions, THAT DOESN'T HAPPEN ON PEOPLE'S MACHINES UNLESS THEY ARE TESTING NEW KERNELS/DEVELOPMENT VERSIONS OF NOUVEAU/MESA/INSERT COMPONENT HERE. Do you UNDERSTAND?


    WTF are 'manpower calculations'? And of course they have very limited manpower given what they are trying to achieve, which is to reverse engineer the functionality of a large range of graphics cards. Where the hell do idiots like you come from? Is there an online troll course where the prerequisite is that you have to be really stupid to apply?
    I tested nouveau under Debian Stable.
    Also, testing versions are usually much more stable than "Stable" ones, due to committed bugfixes.

    And I already cleared things up with calim, so rant off. I am not against nouveau; I am against developers behaving like buddies next door and expecting driver quality to be something different.



  • bridgman
    replied
    Originally posted by brent View Post
    For some very old GPUs. There is no documentation for R700 and up and it's hardly useful for contemporary GPUs.
    Actually the main docs are for "r6xx/r7xx" and they are still fairly useful today. The major changes from one generation to the next were in the shader ISA, and new documents have been released for each new generation. Registers do move around (different offsets) but the source code headers cover that fairly well.



  • brent
    replied
    Originally posted by curaga View Post
    @brent
    Curious - I thought I had all these register headers in my kernel source, saying which registers are which.
    That's not documentation. The list is both incomplete and undocumented. Source code almost never qualifies as documentation, and the radeon driver is no exception.

    And a set of PDFs documenting most of them in more detail.
    For some very old GPUs. There is no documentation for R700 and up and it's hardly useful for contemporary GPUs.



  • curaga
    replied
    @brent

    Curious - I thought I had all these register headers in my kernel source, saying which registers are which. And a set of PDFs documenting most of them in more detail.
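    For reference, the kind of register header being discussed looks roughly like this - a minimal sketch in the style of the kernel's radeon headers (drivers/gpu/drm/radeon/), where GRBM_STATUS and its busy bit are hypothetical values invented for illustration, not real hardware offsets:

    ```c
    #include <stdio.h>

    /* Sketch of a kernel-style register header: a name, an offset, and
     * named bits. Values below are made-up examples, not authoritative. */
    #define GRBM_STATUS             0x8010u       /* hypothetical status register */
    #define GRBM_STATUS__GUI_ACTIVE (1u << 31)    /* hypothetical "engine busy" bit */

    int main(void)
    {
        unsigned int status = GRBM_STATUS__GUI_ACTIVE; /* pretend register read */

        /* The header tells you where the register lives and what its bits
         * are called... */
        printf("GRBM_STATUS @ 0x%04X, busy=%u\n",
               GRBM_STATUS, (status & GRBM_STATUS__GUI_ACTIVE) ? 1u : 0u);
        /* ...but not when to poll it, ordering constraints, or side
         * effects - which is the gap brent is pointing at when he says
         * headers alone are not documentation. */
        return 0;
    }
    ```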



  • brent
    replied
    Originally posted by DeepDayze View Post
    Agreed but good luck trying to get something as simple as the register command documentation out of nVidia...they'd say they cannot release that due to IP issues.
    Even AMD doesn't release anything like that.

