Linux Developers Still Reject NVIDIA Using DMA-BUF

  • Hey guys. Nvidia is not losing that much by being unable to use DMA_BUF. The users, who at best get to choose between decent power management and decent performance, may be losing a bit. Just saying! Don't eat me. D:

    Personally I love the open source movement, and I've been using Linux as my main OS for some time now. In an ideal world everything would be open and we wouldn't have to compromise anything to have a choice. BUT we're not in that ideal world.
    And I dare say that the model of "open everything first, then get good support and get everything to work" hasn't really worked so far, not very well, and for many years now. xP
    And where it does, it's an exception.
    Instead, MAYBE, just MAYBE, we could settle for good support first and open everything later? Demanding that everything be open is cool and all, but will it work? Or work as well? Don't be narrow-minded; think both long and short term.

    And obviously 30-60 fps on a high-end card is playable! But come on, you don't buy a high-end card to play at those numbers; you want everything at high settings, and it probably won't run so well there. And what about the not-so-awesome high-end cards?
    And Intel GPUs (at least the ones made so far), which have good support (no personal experience from me, so take it with a grain of salt), weren't meant for gaming. Not because of developers or drivers or anything, but because of Intel's hardware design: they are low end. I'm certain Crysis runs awesomely on your Intel!
    They might have lower performance than on Windows, but games tend to be unplayable at either 8 or 5 fps, so it doesn't even matter much here. xD
    So if you want to play games (as many people do), and specifically high-end games (which, again, many people do, and not necessarily the very latest), you'll need the binary blobs. End of story. That's how it is, and it won't change any time soon.
    If you want to support open drivers, ranting and whining and hating on Nvidia won't help much. There are many other ways, though.

    And yes, nouveau and radeon work. So does an OS I copy-pasted from a YouTube video that is a few lines of asm. It can boot!! Would you use it? I doubt it.
    I'd probably use radeon instead of Catalyst on my current laptop, but power management is a showstopper. I know it's AMD not letting already-written code be published, but I still won't fry my laptop. Or buy a new one now. Let's be a bit realistic in the end.
    Of course there are cards where the open drivers perform very well, comparable to the closed ones (a bit worse, or even better in certain cases!), but those cards aren't really up to the latest games.

    If the people who took the time to write 20+ pages of hate comments about NVIDIA here wrote some code instead, or used that time for something more constructive (I don't know what; even clicking ads on sites to raise money to fund open source development?), we'd live in a somewhat better place. With better (and open?) drivers!
    In the end, NVIDIA offers decent support. My experience has been much better than with AMD. When radeon gets power management working fine (if not perfectly), I'd very likely switch to it. Desktop performance seemed much better than Catalyst last time, and I don't care that much about games (which sometimes perform better on radeon than on Catalyst under Wine 0_0), not to mention Wayland!! Until then, it's kind of a no-go for me. :/



    • Originally posted by christian_frank View Post
      That's sad and doesn't make the decision any less dumb. Thanks to the devs for NOT caring about end users! If their intention was
      to keep people with Optimus laptops away from Linux, well done guys! We surely will not see the "year of the Linux desktop" anytime soon with this attitude.
      Direct that attitude towards NVidia instead; they are the ones who are not providing drivers, nor the information necessary to support the hardware through open source drivers, despite actually being the ones who get money for this _hardware_.

      Also, there is NOTHING preventing them from implementing the necessary Optimus functionality right in their own proprietary driver, which is what they've been doing all along for all the other functionality marked EXPORT_SYMBOL_GPL. The bottom line is that they are not really interested in supporting Optimus on Linux, because they are not really interested in the Linux end-user desktop. If they were, we'd already have the driver, as is the case with the other proprietary drivers they already supply for their corporate Linux customers (the ones they actually give a shit about).

      This is nothing but NVidia again 'testing the waters', and perhaps trying to cause some controversy because they are a bit pissed about the 'Fuck you, NVidia!' statement from Linus which went viral. It's not as if they really expected a positive reaction; several copyright holders in the targeted areas already expressed their objections before, as this is the second time around.

      I'm so glad NVidia's discrete GPUs are a dying breed on the desktop. This goes beyond Linux, since there are other operating systems out there which I find very interesting, and they don't have a prayer of ever being deemed worthy of NVidia's proprietary efforts. Proprietary drivers are nothing but a goddamn pain in the ass which prevents you from using the hardware you've bought in the settings you choose.

      Thankfully, due to the kernel devs' hard stance and the great work of developers doing reverse engineering, the proprietary drivers are nearly obsolete on Linux, meaning we have a ton of hardware which runs straight out of the box, not only on x86 but on lots of other architectures that NVidia themselves would never see fit to support.



      • Originally posted by RealNC View Post
        Again: the GPL does not know what a "kernel" is. There's no distinction between a proprietary userspace program using a glibc API and a proprietary driver using a kernel API.
        Once again, you are extremely confused. The GPL has no exception for linking; it is illegal to link proprietary software against GPL code, period.

        Glibc is not GPL. It is LGPL, which is a different license and which allows linking.

        The kernel is under the GPL, but with an extra clause that explicitly allows userspace software to use the standardised kernel API calls exposed to userspace.

        This does not cover DMA_BUF, which is an internal in-kernel interface, so Nvidia can't use it. That's what the license says, and the license can't be changed now, 20 years and many thousands of developers later.
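
        For background, the mechanism being argued about looks roughly like this in kernel code. This is a hedged sketch with a made-up function name (`my_internal_api`), not the actual dma-buf source:

```c
/* Illustrative sketch of GPL-only symbol exports in the Linux kernel.
 * The function name is hypothetical; this is not the dma-buf code. */
#include <linux/module.h>

int my_internal_api(void)
{
        return 0;
}
/* A GPL-only export: at build time, modpost refuses to link this
 * symbol into any module whose MODULE_LICENSE() declaration is not
 * GPL-compatible. A plain EXPORT_SYMBOL() has no such restriction. */
EXPORT_SYMBOL_GPL(my_internal_api);

MODULE_LICENSE("GPL");
```

        A proprietary module declaring something like `MODULE_LICENSE("Proprietary")` taints the kernel and cannot resolve GPL-only symbols, which is why the marking of the dma-buf exports matters for the Nvidia blob.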



        • I think that some of you should re-read the LKML discussion.

          The concept of "derivative work" as used in the GPL has never really been well defined or tested in court.

          Many kernel developers believe that using DMA_BUF would amount to derivative work, and thus be illegal.

          Renaming "EXPORT_SYMBOL_GPL" to "EXPORT_SYMBOL" doesn't change this. These are murky waters. As long as the Linux licence stays the way it is (i.e. forever), Nvidia using DMA_BUF is probably illegal, and Nvidia can be taken to court by anybody who has contributed to the Linux kernel.

          Nobody really wants this. This whole discussion about renaming symbols (this is not the first time Nvidia has tried this) is just to give them some legal cover, because then they can claim that the kernel devs knew about the infringement but did nothing about it. It doesn't make it any more legal.

          In the opinion of many kernel developers, a binary blob cannot use such an internal interface, period. As long as the kernel is GPL-ed, this cannot change.

          I recommend this as background reading: https://lwn.net/Articles/154602/
          Last edited by pingufunkybeat; 14 October 2012, 01:59 PM.



          • Originally posted by Rigaldo View Post
            And obviously 30-60 fps on a high-end card is playable! But come on, you don't buy a high-end card to play at those numbers; you want everything at high settings,
            Do you know what fps is? If you can play a game with great visual quality at 30 or 60 fps (depending on what the game targets), then it doesn't matter whether it could maximally run at 110 or 190 fps; unless you want to boast, you're not going to play the game at those speeds anyway.

            Originally posted by Rigaldo View Post
            That's how it is, and it won't change very soon ..
            I see...

            Originally posted by Rigaldo View Post
            And yes, nouveau and radeon work. So does an OS I copy-pasted from a YouTube video that is a few lines of asm. It can boot!! Would you use it? I doubt it.
            I am using Nouveau: I use it when I'm playing games on Linux, and when I'm doing 3D sculpting in Blender. The performance is perfectly adequate even now, and given how far they've already gotten, having had to start from scratch and learn through painful reverse engineering, I have no doubt they will continue to make great progress.

            Originally posted by Rigaldo View Post
            If the people who took the time to write 20+ pages here wrote some code instead of hate comments about NVIDIA
            Meanwhile, those who complain about the Linux devs not wanting to support proprietary drivers in the kernel are spending their time in a very fruitful manner; is that what you are saying?



            • Originally posted by XorEaxEax View Post
              Do you know what fps is? If you can play a game with great visual quality at 30 or 60 fps (depending on what the game targets), then it doesn't matter whether it could maximally run at 110 or 190 fps; unless you want to boast, you're not going to play the game at those speeds anyway.


              I see...


              I am using Nouveau: I use it when I'm playing games on Linux, and when I'm doing 3D sculpting in Blender. The performance is perfectly adequate even now, and given how far they've already gotten, having had to start from scratch and learn through painful reverse engineering, I have no doubt they will continue to make great progress.


              Meanwhile, those who complain about the Linux devs not wanting to support proprietary drivers in the kernel are spending their time in a very fruitful manner; is that what you are saying?

              I know exactly what fps is. You missed the point: when you get 30-60 fps in cases where Windows would get 200, what will you get in the cases where Windows got 30-60? I'll let you think about it for a while.
              So would I get those 30-60 fps with nouveau on Linux on my old laptop in a game where I'd get them on Windows?
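
              The scaling argument above can be put into simple numbers. This is an illustrative sketch only; the fixed performance ratio is a made-up assumption for demonstration, not a benchmark of any driver:

```python
# Sketch of the fps-scaling argument: if a driver reaches only a
# fixed fraction of the Windows frame rate, the scenes that were
# already marginal on Windows are the ones that become unplayable.
# The 25% ratio is a made-up assumption, not a measured benchmark.

def linux_fps(windows_fps: float, ratio: float = 0.25) -> float:
    """Estimate the Linux frame rate assuming the driver reaches a
    fixed fraction of the Windows frame rate."""
    return windows_fps * ratio

# A light scene: 200 fps on Windows is still a playable 50 fps.
print(linux_fps(200))  # 50.0
# A heavy scene: 40 fps on Windows drops to an unplayable 10 fps.
print(linux_fps(40))   # 10.0
```

              The point being made is that a constant ratio hurts most in exactly the scenes that were already borderline.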

              I agree that the nouveau guys are beasts, and kudos to them. They provide fairly good support for older cards too. Seriously, I have a very good opinion of them. But still, I'd probably use the nvidia driver if I had a 6xx card, for example, without thinking much about it. Unless I were developing for nouveau myself at the time. ^_^

              Btw, I want the open source drivers to get better, and I believe they not only will, but that they'll also become very competitive with, if not better than, the closed ones. Open is likely the future (I hope so too) for both hardware and software. But we're at the present now, and that future, I don't think, is very near.
              And from my experience on the laptop I have now and the previous one, I found the open drivers pretty much unusable due to temperature issues which I didn't have with the closed ones, so I couldn't care less about nvidia's stance and so on at that point, as far as my choice of drivers goes, and I think that's logical.
              So I'd like my other choice to kind of work too. Open or closed. Or to have one choice that works fine everywhere, not functionality split between two; and whether it's open or closed I find less important than whether I can have my damned functionality and use it, since the damned hardware I bought has it and I paid for it. I could see it another way: I paid, and the company tries to implement it, but the Linux devs prevent them? :P
              Get some perspective and don't be narrow-minded.

              And I'll say it again: it DOESN'T MATTER if the performance is adequate when it's significantly less than what the hardware has been proven able to give. Because if it's merely adequate when it should actually fly, you'll have MANY cases of it not being adequate when it should be. So you "might be tempted" to use the software that makes that performance adequate, and it's perfectly logical to use the right tool for the job.

              Also, about the last one: that's not what I'm saying, but who wrote most of this thread? I think it's mostly hate comments, by a significant margin.

              I'll go back to learning some C programming now...
              (No, seriously. :P)


              **Btw, I should add: I don't know what you define as great visual quality, but my experience says you need pretty high-end hardware to get it with open drivers.
              Last edited by Rigaldo; 14 October 2012, 02:58 PM.



              • Originally posted by bridgman View Post
                Just so I understand: you're saying that "years ago" we said we would get rid of Catalyst Linux for the HD8000 series? I've been involved with the open source drivers since the HD2000 days and I really don't remember anything like that.

                I said that you said that, by the time of the HD8000, the open driver would be on par with Catalyst in development. Whether you meant "in time" or "in status", the outcome is the same: near-equal speed plus near-equal support. You didn't say that they would only start at the same time. And also, when you said that the HD3000 has twice the GPGPU performance of the HD2000, in what language was that? Let me guess: hmm, C++? No, Java? Maybe @#$%.



                • Originally posted by artivision View Post
                  I said that you said that, by the time of the HD8000, the open driver would be on par with Catalyst in development. Whether you meant "in time" or "in status", the outcome is the same: near-equal speed plus near-equal support. You didn't say that they would only start at the same time. And also, when you said that the HD3000 has twice the GPGPU performance of the HD2000, in what language was that? Let me guess: hmm, C++? No, Java? Maybe @#$%.
                  Do you have a source for that statement?

                  I remember him saying that he expects launch-time support for the HD8000, but he never said it would have near-equal speed or near-equal support. For speed, he said he expects a maximum of about 70-80%; for support, he said it depends on the legal and technical review.



                  • Originally posted by artivision View Post
                    I said that you said that, by the time of the HD8000, the open driver would be on par with Catalyst in development.
                    I really don't think anyone from AMD said this, even "years ago".

                    What we did say was that (a) it would be the first generation where open source driver development started at about the same time as Catalyst development, and (b) it would be the first new GPU generation where we planned to deliver open source driver support by launch time.

                    Originally posted by artivision View Post
                    Whether you meant "in time" or "in status", the outcome is the same: near-equal speed plus near-equal support. You didn't say that they would only start at the same time.
                    No... if I said "in time" then I meant "in time". If I had meant something different I would have said something different.

                    Originally posted by artivision View Post
                    And also, when you said that the HD3000 has twice the GPGPU performance of the HD2000, in what language was that? Let me guess: hmm, C++? No, Java? Maybe @#$%.
                    Again, some kind of link here would really help. The big deal with rv670 (HD3850) was the addition of double-precision support. The new process did support higher clocks, but I don't remember any claim of 2x performance against the HD2xxx. Maybe against the r5xx?
                    Last edited by bridgman; 14 October 2012, 04:18 PM.
                    Test signature



                    • Originally posted by pingufunkybeat View Post
                      Do you have a source for that statement?

                      I remember him saying that he is expecting launch-time support for HD8000, but he has never said that it would have near equal speed or near equal support. For speed, he said he expects a maximum of about 70-80%, for support, he said it depends on the legal and technical review.

                      I said exactly the same. They said that they want to shorten the development gap between Catalyst and the open driver from today's difference of a year or more to less than a year. As I understand it, that means one generation behind, and at most a 20-30% performance difference. We are not on OGL 3.3, and we don't have 70-80% of the performance, because they give us bad optimizers, on purpose. The opposite of Intel. I'm ashamed to say that back with the X1900 I trusted their GPGPU promises (there would be many GPGPU apps, from encoders to everything), and I bought 40 of them for an Internet cafe in which I had a 25% share. Then, because it was my past job, I sold at least 100 different Radeons (from the 8000 TruForm model onwards) and fewer than 20 GeForces. I didn't believe the benchmarks of the time (that an NV7800 with 24 pixel shaders could beat an X1900 with 48 pixel shaders), and now you are doing the same (pretending that an HD7000 at 3.8 Tflops @ 32-bit is on par with a Kepler at 3.2 Tflops @ 64-bit, or 6.4 Tflops @ 32-bit). Liars and thieves. Is there anyone from AMD to answer all of the above? I will wait for an answer.

