Linux Developers Still Reject NVIDIA Using DMA-BUF


  • Originally posted by Serafean View Post
    This is one of the few multi-page threads I read completely before responding, and I haven't once seen the following question asked: "Who is the target audience of Linux?" I don't have the answer, I only have my answer, so all that follows is my personal view.

    There is none. The point of Linux is creating a great product. When someone wants to use it for something, he is welcome to modify it as he sees fit, with the GPL safety that those changes will go public. This has happened on more than one occasion; Linux on servers/supercomputers comes first to mind. The changes went back to the main branch, and independent devs got interested and contributed too. Now we're seeing the same in the embedded space: DMA-BUF was created to address a specific need for ARM vendors. In the end it was designed more generically than was originally needed, and the enablement of dual-graphics solutions is AFAIK more of a side effect. Thanks to the GPL (again), the work went back into the main branch.

    Now NVidia dances in, wants to hook in, but oh surprise, it's an internal interface, and you can't link against it from non-GPL code. (Admittedly, this has never been tried in court, but who would want that?) Why is this? Because when communicating with the kernel via system calls (exempted from the GPL derived-work clause), the kernel can (and will) assure some type of security. With internal interfaces, this protection is virtually impossible. The best-described case on the mailing list is a bug that appears somewhere but traces back into the black box that is the blob; this creates impossible-to-debug issues, and for the people hit by them, an impression that Linux is unstable (and that no one is willing to look into their issues). In the end, they are making a decision which should be good for the product.

    To the users (of which I am a subset), well, sorry guys, the quality of the product should be and is the primary concern; jeopardizing this for a vendor (granted, a popular one) that doesn't play nice doesn't seem like a fair tradeoff.

    This was a very personal analysis of the situation, and one more thing:

    I bought my GPU, and I want (and should be able) to use it how I want. Imagine you bought a dishwasher in which you weren't legally allowed to use Calgon tablets, and had to use only Sun tablets. The notion of "owner" is slowly fading; all we are is a "licensed user". I say NO! I own it, I use it how I see fit.

    Third time repeated : my view only.

    Agreed, the way things are becoming more tightly integrated, controlling them from dozens of different places (vendors) is unsustainable.

    Serafean


    And I have to add my view: Linux will dominate in less than 2 years. If, by the end of 2014, Linux is not the most used OS installed on all devices (including phones, tablets, googles), then I will not write again on any technology site.


    And I'm still waiting for an answer from AMD!!!



    • Originally posted by asdx
      Most people want to play games, not brag about how many FPS they can get.

      The same is true for people who want to watch movies; I don't care how many FPS I can get when watching a movie, I want to watch the goddamn movie.

      Linux with open source graphics is more than adequate for 3D games, and performance will only keep getting better with time.

      Nvidia/fglrx blobs have no future in Linux.
      0_o
      Ok, so you saw the word FPS and threw out an answer before reading anything else?
      I'll write the same thing AGAIN; please read it this time:

      "If in cases where on other OS you'd get 100 fps or whatever, you get 30-60 on Linux, then in cases you'd get 30-60 on that other OS you'd get significantly less on Linux."

      Hope you understood this part. Besides, it's not like most cards would give you 100 fps on Windows; probably just 30-60 there too. And when you have hardware that can fly, are you fine with being adequate or barely making it? Can you say for sure you'll get the SAME framerate, which is not always over the top, often just normal or even less than the 30-60, on the open drivers? No. Not in most cases (maybe there are exceptions of course, but they're still exceptions; when they're the rule we'll talk about it again).
      That's why they're looking for optimizations on compilation level and other things.

      You're sure you don't care about your movie's fps? Have you seen what can happen when you get fewer frames than the video? Oh, I've seen it. And it can (or could) happen with blobs too. I wouldn't keep my fingers crossed that you'll watch any movie with no hiccups. Btw, the CPU seems to be important too on my laptop; I don't wanna burn it.

      Yes, performance will keep getting better, but it's not that good now. Maybe if more people step up to help, it can go faster.

      Btw, I take it that with games you take such stuff into account: Crysis, Diablo III, The Elder Scrolls V: Skyrim, GTA IV, etc. ..
      Ya know, many people tend to like those ..
      And they might not have an easy time getting them to 100 fps under Windows/closed drivers on a common system, so would you even get 30 on open ones? If they're good enough on performance, let's forget about improving it for now and work on all those other areas that require work, but I doubt it's that way. And I speak as a person who doesn't need maximum performance. But then again, if I paid for some well-performing hardware, why would I want sub-par performance?


      Btw, Linux the most used OS two years from now .. Too optimistic ... I wish you were right, though!!



      • Originally posted by Serafean View Post
        Now NVidia dances in, wants to hook in but oh surprise, it's an internal interface, and you can't link against it from non-GPL code. (Admittedly, this has never been tried in court, but who would want that?). Why is this? because when communicating with the kernel via system calls (exempted from the GPL derived work clause), the kernel can (and will) assure some type of security. With internal interfaces, this protection is irtually impossible. The best described case in the mailing list is a bug that appears somewhere, but traces back into the black box that is the blob, this creates impossible to debug issues, and for people hit by them, an impression that linux is unstable (and that no one is willing to look into their issues). In the end, they are making a decision which should be good for the product.
        Then you know what? You, as a user, can decide not to use said companies' products anymore. On Windows, I get no end of complaints about Creative and their crap soundcard drivers, so I recommend everyone towards ASUS/Auzentech/HT Omega instead.

        On Linux, you instead get a crappier driver with reduced features. Then people get the impression Linux isn't as good as Windows, and avoid the OS. How is THAT good for Linux?



        • Originally posted by RealNC View Post
          Why is it OK that I can run a GPL app on Windows, which links against Microsoft's C library
          You can run and distribute the app because MSFT's C lib license allows you to do so.
          but I can't have a proprietary driver making use of an interface of a GPL kernel?
          You can have an infinite number of such drivers. However, you can't distribute them because the kernel authors' license for the relevant interfaces does not allow you to do so.
          In my eyes, what the kernel devs are doing is plain bigotry. The kernel is a required component and you cannot work around it. If you're not allowing proprietary vendors to use it so that they can offer support and stay competitive under the licensing policy of their choice, then you're being a bigot.
          No; the licensing terms offered by parties A(1) through A(n) to party B are not dictated by party B's subsequent business needs, absent some prior contractual arrangement between A(1-n) and B.

          In more MBA-friendly terms: "Waaa, my previous short-term, time-saving, thus bonus-boosting decisions are now biting me in the ass" is not an argument for overriding copyright.

          Next time, consider deferred gratification and/or sound engineering principles such as "reinventing wheels is bad".
          If NVidia isn't allowed to interface with the kernel, then Google shouldn't be allowed either. Much of the Android stack is proprietary, yet no one sees a problem with the Linux kernel sitting at the center of it. And NVidia isn't actually even modifying the kernel, let alone distributing it. It's not like they have changed the kernel and refuse to GPL their changesets. The only thing they're trying to do is use an interface.
          Depends, but probably false unless you can point to a GPL-marked kernel interface that Android uses. Go ahead, we'll wait.
          Hypocrites.
          Crybaby/troll.



          • Originally posted by gamerk2 View Post
            Then you know what? You, as a user, can decide not to use said companies' products anymore. On Windows, I get no end of complaints about Creative and their crap soundcard drivers, so I recommend everyone towards ASUS/Auzentech/HT Omega instead.

            On Linux, you instead get a crappier driver with reduced features. Then people get the impression Linux isn't as good as Windows, and avoid the OS. How is THAT good for Linux?
            You know what? I don't use said companies' products... I try my best to put my money where my mouth is, and mostly succeed.
            Did you read/understand what I said in my post? I never said it was good for Linux's desktop market share; I said it was IMO the right decision to keep code quality, stability and security at a very high level (or at least to let everyone see the mistakes).
            You say yourself that you recommend another vendor if one doesn't meet your requirements; do you recommend AMD/Intel (who really support Linux and OSS) to everyone who wants to use Linux? Or do you recommend nvidia for its pseudo-support of Linux (no OSS there)? If the second, then you only want a system free as in "free beer", and don't have a clue what free as in "free speech" means (or you simply don't care).
            I'm not judging; all I'm saying is that I understand your point of view, but for me (and others) free as in "free speech" is more important, and I'd like you to understand that.

            Serafean



            • Originally posted by artivision View Post
              OK agent, I'm this close to $#@! you. The entire page of comments is irrelevant and wrong because it is based on your comment, which is also wrong. The reference implementation of OpenGL drivers IS OPEN. Anybody is welcome to look at the reference code (there is no actual code, just some precode = how it works + standards for binary compatibility), without the need to open or close anything. It's Nvidia's fault, and AMD's also: with the HD5000, AMD itself said that they will try with the HD8000 to have only open drivers. Fuck both. If only I wasn't a gamer!
              Ok, if you are right, why doesn't AMD publish its Catalyst driver directly as open source? Why not directly port the Windows/Mac drivers? Only because they are dorks?
              Why doesn't Intel just do a direct port of their Windows or Mac drivers? Why should they spend money paying programmers who are re-inventing something that Intel already has?
              Why, in general, Mesa and not open-source OpenGL drivers?

              You said it yourself: "there is no actual code, just some precode = how it works + standards for binary compatibility" ---> THE REAL HOT STUFF IS NOT THERE. Probably the PATENTED stuff...

              Go and $#@! somebody else...



              • ... More on OpenGL...

                The public, open part of OpenGL is how to use a driver that already exists, not how to program an OpenGL driver.
                They give you documentation about how to display a cube with OpenGL, which OpenGL calls you need, for example. They do not tell you how the OpenGL driver actually draws the cube for you; they do not tell you how the OpenGL call works inside. That is the problem. That is the reason behind the bugs and the sluggishness of open source drivers...
                Whoever programs the Mesa drivers is in the same position as the Wine programmers: they know what each library call is supposed to do, but not HOW the result is achieved in the original thing.

                This is what I understood...



                • Originally posted by Gigetto96 View Post
                  Ok, if you are right, why doesn't AMD publish its Catalyst driver directly as open source? Why not directly port the Windows/Mac drivers? Only because they are dorks?
                  Why doesn't Intel just do a direct port of their Windows or Mac drivers? Why should they spend money paying programmers who are re-inventing something that Intel already has?
                  Why, in general, Mesa and not open-source OpenGL drivers?

                  You said it yourself: "there is no actual code, just some precode = how it works + standards for binary compatibility" ---> THE REAL HOT STUFF IS NOT THERE. Probably the PATENTED stuff...

                  Go and $#@! somebody else...
                  Get some information before you start spreading misinformation.
                  I won't comment on a reference OpenGL specification, as I have no knowledge in that area. I do know that some mandatory parts of OpenGL are covered by patents (namely S3TC and the floating-point extension). These two extensions are implemented for Mesa, but not merged in master.

                  Binary graphics drivers contain a lot more than just the OpenGL implementation. One thing that comes to mind is video decoding. These are patented technologies, and most likely not even completely designed internally -> meaning they licensed them from a third party. This is where the shitstorm is brewing, and why the blobs won't be open-sourced. Another reason is paranoid guarding of some magic. But that's a non-reason.
                  Porting a driver to another platform is almost a complete rewrite (if you don't have the infrastructure ready beforehand) -> this is why Intel doesn't port, and why nvidia and AMD do (their drivers have a shared core). Plus nothing guarantees that Intel already has an OpenGL implementation; they surely have a DX implementation, and maybe OpenGL runs on top of it... (this is only speculation)

                  Bottom line is:
                  - open-sourcing the blobs is not possible due to everything they ship (do you really think that an OpenGL implementation could be > 100MB in size, after compilation?).
                  - with the current state of software patents, Mesa will not be able to safely implement all OpenGL features.

                  Serafean

                  Edit:
                  Whoever programs the Mesa drivers is in the same position as the Wine programmers: they know what each library call is supposed to do, but not HOW the result is achieved in the original thing.
                  Mesa devs at least don't need to have bug-compatibility. And IMO your reason for sluggishness is incorrect: you know what you're supposed to do, and you have perfect knowledge of the hardware, so you do it in the optimal way. IMO the true reason is that Mesa and its drivers are still catching up on features, and in a development process optimisations come after feature-completeness...
                  Last edited by Serafean; 10-16-2012, 08:54 AM.



                  • The point is that, (also) due to the patents, those who develop open source drivers cannot have read the code of the blobs. So they are required to re-invent the wheel.
                    (Intel for sure has OpenGL drivers, because the Macs use OpenGL. And if porting the driver to another platform were almost a complete rewrite, it would surely be a much faster rewrite than what we are experiencing with the open source driver...)

                    Originally posted by Serafean View Post
                    Get some information before you start spreading misinformation.
                    I won't comment on a reference OpenGL specification, as I have no knowledge in that area. I do know that some mandatory parts of OpenGL are covered by patents (namely S3TC and the floating-point extension). These two extensions are implemented for Mesa, but not merged in master.

                    Binary graphics drivers contain a lot more than just the OpenGL implementation. One thing that comes to mind is video decoding. These are patented technologies, and most likely not even completely designed internally -> meaning they licensed them from a third party. This is where the shitstorm is brewing, and why the blobs won't be open-sourced. Another reason is paranoid guarding of some magic. But that's a non-reason.
                    Porting a driver to another platform is almost a complete rewrite (if you don't have the infrastructure ready beforehand) -> this is why Intel doesn't port, and why nvidia and AMD do (their drivers have a shared core). Plus nothing guarantees that Intel already has an OpenGL implementation; they surely have a DX implementation, and maybe OpenGL runs on top of it... (this is only speculation)

                    Bottom line is:
                    - open-sourcing the blobs is not possible due to everything they ship (do you really think that an OpenGL implementation could be > 100MB in size, after compilation?).
                    - with the current state of software patents, Mesa will not be able to safely implement all OpenGL features.

                    Serafean

                    Edit: Mesa devs at least don't need to have bug-compatibility. And IMO your reason for sluggishness is incorrect: you know what you're supposed to do, and you have perfect knowledge of the hardware, so you do it in the optimal way. IMO the true reason is that Mesa and its drivers are still catching up on features, and in a development process optimisations come after feature-completeness...



                    • Originally posted by entropy View Post
                      What I don't get is how that would allow implementing Optimus(TM) between an Intel IGP and an Nvidia GPU.
                      So, why couldn't NVidia just re-implement the DMA-BUF interface in their driver, and then recompile the Intel and AMD open source drivers against that interface (and distribute the Intel and AMD drivers in their own packages, with source)?

                      Wouldn't that resolve the legal licence concerns, provide Optimus, and avoid violating the Intel/AMD licence? [It should be easy to keep the drivers current, and assuming they could show a bug exists on the regular Intel/AMD driver, they could just pass Intel/AMD bug reports through to those projects.]

                      [No expert here, don't flame, just inform me. Thx.]

