Rust-Written Apple DRM Linux Kernel Driver Renders First Cube


  • #91
    Originally posted by Dukenukemx View Post

Yet Nvidia did open source their drivers. How much monetary benefit does Nvidia get from open sourcing their drivers? Nvidia did it because they're often called out for it, including by Linus Torvalds. Peer pressure works.
    AHAHAHAHAHAHAHAH

Omg, I almost fell out of my chair. Great example of the cluelessness I was talking about (and also of the fact that you don't actually know what you are talking about).

Firstly, NVidia did not open source their driver. They "opened" a small portion of it (which is not even really open, hence the quotes: as I said, the actual driver logic is embedded in closed firmware running on the GSP chip, and what was opened is just an interface to that). Their actual driver, i.e. CUDA/OpenGL and all of that stuff, is still closed source and inside their blob.

Sorry to say, but NVidia really doesn't give a s**t about Linus or anyone else in the open source community. The only pressure they care about is $$$, and because they aren't losing any money from having a closed-source driver, they don't care. If anything, they arguably make more money from their closed-source blob, because their customers care more about having the latest stable driver across multiple kernel versions, which is not possible with an in-tree driver. (Again, that hasn't changed at all: the part NVidia open sourced is just buffer/memory management; CUDA, which is what NVidia's paying customers care about, is still closed and will likely always remain closed source.)

Seriously, get a reality check. You can delude yourself that this "peer pressure" or "Linus giving his middle finger" works, but clearly you have never actually worked with NVidia or know how the company operates. Heck, I would be surprised if the management there (who are the ones that call the shots) even know who Linus is.

The reason they "opened" that very tiny part of their driver is so that it works better with the direction the Linux desktop is heading towards (i.e. Wayland/GBM), and to make it easier to debug problems, which for technical reasons only really works when certain parts of the driver are in the tree (which is what NVidia did).
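For anyone who wants to see that split on their own machine, here is a rough sketch; the /lib/firmware/nvidia layout and the gsp*.bin naming are assumptions that vary by distro and driver version. It just lists the GSP firmware blobs that the "open" kernel modules hand most of the work to:

```rust
use std::fs;
use std::path::Path;

/// List GSP firmware blobs under /lib/firmware/nvidia. The "open" kernel
/// modules offload most of the driver logic to these signed, closed binaries
/// running on the GPU's GSP (RISC-V) core.
fn main() -> std::io::Result<()> {
    let root = Path::new("/lib/firmware/nvidia");
    if !root.exists() {
        println!("no NVIDIA firmware directory at {}", root.display());
        return Ok(());
    }
    // Typically one subdirectory per driver version, e.g. 535.54.03/.
    for dir in fs::read_dir(root)? {
        let dir = dir?.path();
        if !dir.is_dir() {
            continue;
        }
        for entry in fs::read_dir(&dir)? {
            let path = entry?.path();
            let name = path.file_name().and_then(|n| n.to_str()).unwrap_or("");
            // GSP blobs are conventionally named gsp_*.bin (e.g. gsp_ga10x.bin).
            if name.starts_with("gsp") && name.ends_with(".bin") {
                let size_mib = fs::metadata(&path)?.len() / (1024 * 1024);
                println!("{} ({} MiB)", path.display(), size_mib);
            }
        }
    }
    Ok(())
}
```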
    Last edited by mdedetrich; 28 September 2022, 01:45 PM.



    • #92
      Originally posted by mdedetrich View Post
      AHAHAHAHAHAHAHAH
Omg, I almost fell out of my chair. Great example of the cluelessness I was talking about (and also of the fact that you don't actually know what you are talking about).
Firstly, NVidia did not open source their driver. They "opened" a small portion of it (which is not even really open, hence the quotes: as I said, the actual driver logic is embedded in closed firmware running on the GSP chip, and what was opened is just an interface to that). Their actual driver, i.e. CUDA/OpenGL and all of that stuff, is still closed source and inside their blob.
Sorry to say, but NVidia really doesn't give a s**t about Linus or anyone else in the open source community. The only pressure they care about is $$$, and because they aren't losing any money from having a closed-source driver, they don't care. If anything, they arguably make more money from their closed-source blob, because their customers care more about having the latest stable driver across multiple kernel versions, which is not possible with an in-tree driver. (Again, that hasn't changed at all: the part NVidia open sourced is just buffer/memory management; CUDA, which is what NVidia's paying customers care about, is still closed and will likely always remain closed source.)
Seriously, get a reality check. You can delude yourself that this "peer pressure" or "Linus giving his middle finger" works, but clearly you have never actually worked with NVidia or know how the company operates. Heck, I would be surprised if the management there (who are the ones that call the shots) even know who Linus is.
The reason they "opened" that very tiny part of their driver is so that it works better with the direction the Linux desktop is heading towards (i.e. Wayland/GBM), and to make it easier to debug problems, which for technical reasons only really works when certain parts of the driver are in the tree (which is what NVidia did).
Right, absolutely right... but you know what I have observed about Dukenukemx: even if you tell him the facts as they really are, wait a week for another Apple M2 forum topic and he pops up again and tells lies again...

How often do you think people have already told him that Nvidia did not open source their drivers, that they only opened up a kernel module and moved a large part of the driver into closed-source firmware running on a RISC-V core on the GPU, and that all they opened up is an API to this? People have told him like 1000 times, and a week later he tells lies again.

Same with his opinion that Apple should open source the driver and write the Linux driver directly: people explain to him why this is not possible because of patent law and copyright law, and a week later he tells the same lies again.

He has his opinion and he sticks to it no matter what we tell him.



      • #93
        Originally posted by Dukenukemx View Post
Yet Nvidia did open source their drivers. How much monetary benefit does Nvidia get from open sourcing their drivers? Nvidia did it because they're often called out for it, including by Linus Torvalds. Peer pressure works.
        https://www.phoronix.com/news/LPC-20...-Source-NVIDIA
        "Open source" but there's still a binary blob IIRC.

It's a cheesy attempt to check the box, and as I recall Linus has had harsh words for their treatment of the kernel devs.

        Nouveau team are paying attention to Nvidia's open source drivers
Haven't used NVidia on Linux in years, but I have seen the increasingly dire reports of Nouveau's status. Here's an article from 2021, noting.... It's not exactly what you'd call a model of "pressure on NVidia working". I would call this unusable unless FOSS principles are above every other consideration including, you know, doing productive work.
[attached screenshots]
        Last edited by ll1025; 28 September 2022, 02:24 PM.



        • #94
          Originally posted by ll1025 View Post
          This thread has become a mess and I think there are a lot of people who actually agree yelling at each other.
          As to the hardware-- yesterday when I made my laptop comparison chart I did get to dig into benchmarks around the M2. I think everyone can agree it's an impressive piece of kit on its own, competing with current-gen Intel / AMD chips. This is something that China has sought to do for years and has never pulled off, so the fact that Apple over a few years has leapfrogged other established SOC manufacturers like Samsung and Qualcomm is impressive.
And no one can say that the battery life is not impressive either. But the contention made here originally was that the cost for a high-performance laptop makes Asahi and an M2 laptop a no-brainer. What I've seen in benchmarks says that you'd be getting substantially less performance in most price brackets with an Apple M2 vs a PC laptop. It is always hard to do a direct comparison because Apple usually brings an interesting twist to the comparison: high-quality chassis, mid/high-quality screen, interesting connectivity options, and pretty good battery/performance.
          But if you pick a particular 2-3 target metrics (connectivity, performance, screen), you can usually get a PC laptop that focuses more tightly on those with something like a 12900H / NVidia 3070 TI at the same pricepoint. There are some people who really do like the exact combination Apple brings to the table, and you might even be able to argue that under a reasonable scoring system the Apple would walk away with a higher aggregate score across all categories, but it's certainly not a clear win unless battery is your first objective and performance is your second.
          All of that to say that I think most of the M2 "naysayers" here are not denying the efficiency of the chip or its respectable performance, only that it is not the ultimate answer to computing that it is frequently touted as. Intel and AMD have laptop offerings that absolutely demolish the M2 in multicore benchmarks, and for use-cases that can use the compute arguing that you get double the battery life but take 4 times as long to do the task isn't going to be persuasive.
          The person you're responding to is sort of agreeing with you: because Apple is not disclosing their secrets, Asahi devs are having to reverse engineer things which means it will pretty much never have 100% support for the latest Apple chips.
          Patent law does not stop Apple from disclosing their own patents. It does not even prevent Apple from licensing some of their tech to Linux. There are probably a lot of reasons Apple does not help Linux out, possibly because they're afraid that other patent trolls will come sniffing around their open-sourced secrets, possibly because there's very little actual return to Apple. They don't necessarily "want" people to get Macbooks to run Linux, because it either suggests that their own OS is not that great or it endorses an inferior experience (depending on perspective): neither are really great for Apple, whose OS+Hardware+Services full stack is what supposedly makes the brand valuable.
          Sometimes the vendors (AMD, NVidia as you mention) come up with PR reasons they "can't" do it but it generally boils down to "we cannot make a good business case to the executives on why we should spend money and risk on open-sourcing this". At the end of the day that's what businesses care about. Intel has long made the decision that working very closely with FOSS devs is a good idea. NVidia, AMD, and Apple historically have not (though AMD has gotten much better).
          So you both kind of agree: Asahi is never going to provide the full experience of Apple hardware because they will forever be chasing the tails of the newest Mac hardware. This isn't really unique to Apple though, Linux devs often have to write drivers for vendors who don't care.
          In my hardware comparison above I compared Geekbench Metal for Apple to Geekbench Vulkan for PC. Metal is apparently higher performance (AMD hardware gets ~30-50% higher scores in it), but current NVidia / AMD hardware like the NVidia 3060 laptop GPU still double M2's scores even with that slanted comparison.
          It's more, in my opinion, that Apple isn't really served by enabling these benchmarks. They make a product whose selling point is not raw specs but the integration, the status, and the experience. Any time you encounter a vendor selling that kind of product, benchmarks are usually not going to help their cachet. The fact that Metal makes direct comparisons with PC hardware difficult is probably very convenient for Apple.
He does not agree with me and I do not agree with him.

He claims battery-life tests based on watching YouTube do not matter because they use the media-engine ASIC for H.264/H.265, and that this has no relevance in the future because YouTube and Netflix are moving to VP9 and AV1... Well, I agree that AV1 is the future, but I also think that future versions of Apple M, like the Apple M3, will have VP9 and AV1 decode and encode support.

Same for his narrative that Apple M is bad for gaming: I told him multiple times that if you use a 64-bit ARM binary, use native APIs like Metal/WebGPU, and make use of FSR 2.1, Apple hardware could be good for gaming. I also told him that there is a possibility that, because of FSR 3.0 upscaling ASIC hardware and ray-tracing hardware acceleration, Apple could license RDNA3+ from AMD, and this would also fix the GPU driver problem because with RDNA3 they could use the open-source Mesa driver from AMD.
Companies like Samsung and also Qualcomm already have ATI/AMD licenses; Samsung, for example, has an RDNA2 license.

The only reason why Apple would not license RDNA3 is that nearly all games are x86 binaries using Vulkan/DirectX 12, and even if they put RDNA3 in their SoC they would still be bad at gaming...






          • #95
            Originally posted by Dukenukemx View Post
All I've been saying is that Apple should be involved and held responsible, because they aren't. Open sourcing their drivers would be a great move. Everything you disagree with me on consists of words you put in my mouth, not things said directly by me.
Yet Nvidia did open source their drivers. How much monetary benefit does Nvidia get from open sourcing their drivers? Nvidia did it because they're often called out for it, including by Linus Torvalds. Peer pressure works.
So wealthy companies like Apple and Nvidia don't do it because it doesn't make them money. AMD does it because they don't have money. So where does Intel stand in all this, given that their drivers are also open source? Broken clock indeed.
Nvidia are still jerks, but that's an upgrade from before. The Nouveau team are paying attention to Nvidia's open source drivers and will learn from them. This is what we want from Apple, so that anyone working on this has an easier time and can make faster progress. Apple are jerks too, so the most we could hope for is a move from Apple similar to Nvidia's.
            https://www.phoronix.com/news/LPC-20...-Source-NVIDIA
I think you don't get how companies like Apple think. Apple already has no GPU IP of their own; they already licensed it... If they wanted open source drivers they would just go and license RDNA3 to put in Apple M3 SoCs... Samsung and Qualcomm already have AMD/ATI licenses...

But maybe there is a reason for not doing it: all the games are x86 closed-source binaries, and all the games use Vulkan and DirectX 12... and Apple wants to promote Metal and WebGPU...

I really don't get why you are interested in this Apple M topic at all, because you could just buy a notebook with an Intel 13900K and an Nvidia RTX 4000 with DLSS 3.0... and be happy about 25% lower power consumption because of the fake frames of DLSS 3.0...

Apple M is not interesting for you, not even if Apple puts in a VP9/AV1 ASIC and licenses RDNA3...




            • #96
              Originally posted by ll1025 View Post
              "Open source" but there's still a binary blob IIRC.
It's a cheesy attempt to check the box, and as I recall Linus has had harsh words for their treatment of the kernel devs.
Haven't used NVidia on Linux in years, but I have seen the increasingly dire reports of Nouveau's status. Here's an article from 2021, noting.... It's not exactly what you'd call a model of "pressure on NVidia working". I would call this unusable unless FOSS principles are above every other consideration including, you know, doing productive work.
[attached screenshots]
I do not even get why he posts here in an Apple M topic... He will buy an RTX 4000 notebook with DLSS 3.0, be happy about fake frames, and as a gamer he is happy.

He is not an Apple customer, not even if Apple puts in a VP9/AV1 ASIC, not even if Apple delivers perfect Linux driver support, and not even if Apple licenses an RDNA3/FSR 3.0 ASIC...




              • #97
                Originally posted by qarium View Post
I told him multiple times that if you use a 64-bit ARM binary, use native APIs like Metal/WebGPU, and make use of FSR 2.1, Apple hardware could be good for gaming.
                There is no way in which an M2 laptop is "good for gaming":
• Despite its very good performance, the M2 is a laptop with a laptop GPU and not a very competitive one: it gets destroyed by even something like the Nvidia 3060 laptop GPU, which is on par with a 1080 desktop card from years ago
• It's an ARM chip, so you've got to deal with compatibility / performance issues there (some things simply won't work)
                • It's MacOS, so you've got to deal with Wine for many games (which may not work)
                • The battery life advantage doesn't exist, because no one games on a laptop, on battery, on the road. You have a desk, with a mouse and a charger at minimum.
                For the price you'd pay for an M2 you could get an actual gaming laptop with an actual dedicated GPU, with an operating system and CPU arch that the games were actually designed to run on.

You seem to suggest that in some hypothetical world where developers leveraged Apple's fantastic technology they could make great M2 games. Sure, maybe; that's not our world, though, so the point is moot.
                Last edited by ll1025; 29 September 2022, 07:11 AM.



                • #98
                  Originally posted by ll1025 View Post
                  There is no way in which an M2 laptop is "good for gaming":
• Despite its very good performance, the M2 is a laptop with a laptop GPU and not a very competitive one: it gets destroyed by even something like the Nvidia 3060 laptop GPU, which is on par with a 1080 desktop card from years ago
• It's an ARM chip, so you've got to deal with compatibility / performance issues there (some things simply won't work)
                  • It's MacOS, so you've got to deal with Wine for many games (which may not work)
                  • The battery life advantage doesn't exist, because no one games on a laptop, on battery, on the road. You have a desk, with a mouse and a charger at minimum.
                  For the price you'd pay for an M2 you could get an actual gaming laptop with an actual dedicated GPU, with an operating system and CPU arch that the games were actually designed to run on.
You seem to suggest that in some hypothetical world where developers leveraged Apple's fantastic technology they could make great M2 games. Sure, maybe; that's not our world, though, so the point is moot.
Right now the Apple M2 is complete shit for gaming, that's a fact.

But it is not impossible that in some "hypothetical world" the situation could be much better.
For example, if you have an open-source game, or a game with an open-source engine like Godot 4.0, then you use Metal/WebGPU and a native 64-bit ARM binary plus FSR 2.1, and the result would show that the Apple M2 is better for gaming than most people think it is.
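Small aside on the "native 64-bit ARM binary" part: here is a trivial sketch (nothing Apple-specific, just Rust standard-library constants) showing how a program can report whether it was built as a native arm64 binary rather than an x86-64 one that would have to go through Rosetta 2:

```rust
fn main() {
    // Compiled-in target architecture: a native Apple Silicon build prints
    // "aarch64"; an x86-64 build still prints "x86_64" even when it runs
    // under Rosetta 2, because translation happens below the process level.
    println!("compiled for: {} / {}", std::env::consts::ARCH, std::env::consts::OS);
}
```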

                  "It's MacOS"

My argument was never focused on MacOS... My argument is "Linux" focused; I don't give a fuck about MacOS...

                  "it gets destroyed by even something like the Nvidia 3060 laptop GPU"

You just don't get it: Apple does not focus on destroying a 3060. Instead, Apple wants a more efficient design, meaning more "FPS per watt", and it could be possible that a native 64-bit ARM game with native Metal/WebGPU and FSR 2.1 ends up better than the 3060 in terms of FPS per watt... Right now, if you see what the RTX 4000 does with DLSS 3.0, it is clear that the Apple M2 cannot compete against an RTX 4000 with fake frames calculated by the DLSS 3.0 AI engine...
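To make the "FPS per watt" framing concrete, here is a tiny sketch; the FPS and wattage numbers in it are placeholder assumptions for the sake of the arithmetic, not measurements:

```rust
/// Illustrative perf-per-watt comparison. The FPS and wattage values are
/// placeholder assumptions used only to show the metric, not benchmark results.
struct Gpu {
    name: &'static str,
    fps: f64,           // assumed average FPS in some fixed workload
    board_power_w: f64, // assumed sustained GPU power draw in watts
}

fn main() {
    let gpus = [
        Gpu { name: "Apple M2 (10-core GPU)", fps: 45.0, board_power_w: 15.0 },
        Gpu { name: "RTX 3060 Laptop", fps: 110.0, board_power_w: 95.0 },
    ];
    for g in &gpus {
        println!(
            "{:<24} {:>6.1} FPS / {:>5.1} W = {:.2} FPS per watt",
            g.name,
            g.fps,
            g.board_power_w,
            g.fps / g.board_power_w
        );
    }
    // With these assumed numbers the 3060 wins on raw FPS while the M2 wins
    // on FPS per watt, which is exactly the trade-off being argued about here.
}
```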

Apple could improve their Apple M products very much by licensing RDNA3, because the FSR 3.0 acceleration hardware no longer calculates FSR in shaders but instead uses an AI/matrix-core ASIC; this alone could improve performance by 8% and lower the energy consumption.

An RDNA license would also fix the open-source Linux driver situation for Apple M products.

You say the hypothetical world will not come, but a company like Apple could spend a lot of money to make a hypothetical world a reality.




                  • #99
                    Originally posted by qarium View Post

You just don't get it: Apple does not focus on destroying a 3060. Instead, Apple wants a more efficient design, meaning more "FPS per watt", and it could be possible that a native 64-bit ARM game with native Metal/WebGPU and FSR 2.1 ends up better than the 3060 in terms of FPS per watt.
                    When you compare Vulkan benchmarks on e.g. a 3060 laptop to an M2 running the same benchmark under Metal, the 3060 gets 2-3x its score. And that's with Metal generally producing higher scores on any particular GPU than the equivalent Vulkan benchmark.

The M2 is rated for ~3.6 TFLOPS, and the 3060 is rated for 10.9 TFLOPS. There's no getting around that disparity, and the fact is that people who are gaming are going to plug their laptops in to do so. The casual crowd that is fine with fewer FPS in order to run on battery will never be enough to convince publishers to make major games for the Mac, so as long as "watts per FPS" is what you're shooting for, you're living in a dreamworld that will never support the business model.
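Back-of-the-envelope, using only the TFLOPS figures quoted above (everything else is just the division):

```rust
fn main() {
    // Peak FP32 throughput figures quoted above, in TFLOPS.
    let m2_tflops = 3.6_f64;
    let rtx3060_laptop_tflops = 10.9_f64;

    // Roughly a 3x gap in raw shader throughput, which lines up with the
    // 2-3x difference seen in the Metal-vs-Vulkan benchmark comparison above.
    let ratio = rtx3060_laptop_tflops / m2_tflops;
    println!("3060 laptop vs M2: {:.1}x the peak FP32 throughput", ratio);
}
```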

                    they could spend a lot of money to make a hypothetical world a reality.
                    They're not interested in it. They target the creatives, because that's a market they can access. The gaming market is so culturally and technologically bound to Windows and PC gamers that there's not an amount of money that could be thrown at the problem to fix it.



                    • Originally posted by ll1025 View Post
                      When you compare Vulkan benchmarks on e.g. a 3060 laptop to an M2 running the same benchmark under Metal, the 3060 gets 2-3x its score. And that's with Metal generally producing higher scores on any particular GPU than the equivalent Vulkan benchmark.
The M2 is rated for ~3.6 TFLOPS, and the 3060 is rated for 10.9 TFLOPS. There's no getting around that disparity, and the fact is that people who are gaming are going to plug their laptops in to do so. The casual crowd that is fine with fewer FPS in order to run on battery will never be enough to convince publishers to make major games for the Mac, so as long as "watts per FPS" is what you're shooting for, you're living in a dreamworld that will never support the business model.

The 3060 aims for more TFLOPS and more FPS...
Apple is not interested in that in the gaming area.
From my point of view, Apple is only interested in "watts per FPS", to max out the battery run time.

                      "The casual crowd that is fine with fewer FPS in order to run on battery will never be enough to convince publishers to make major games for the Mac"

This could be true... but who cares? It looks like Apple doesn't care, and the customers who buy the Apple M2 also don't care.
Also, if you read here in the Phoronix forum, most people who buy an Apple M2 are people who want longer battery life.
And they buy it even though the Apple M2 is horrible for gamers.

You can call it a dreamworld, but yes, Apple's world is a dreamworld...

                      Originally posted by ll1025 View Post
                      They're not interested in it. They target the creatives, because that's a market they can access. The gaming market is so culturally and technologically bound to Windows and PC gamers that there's not an amount of money that could be thrown at the problem to fix it.
They could, for example, license RDNA3+ for the Apple M3... This would improve many areas.
But as you say, they are not interested, and I cannot say I disagree with you, because, well, yes, they don't show interest...

But other companies like Samsung and Qualcomm also make good products by licensing AMD IP: RDNA2 in the case of Samsung and ATI IP in the case of Qualcomm, and Qualcomm also wants an RDNA2+ license.

I can only speak for myself: if, for example, the Apple M3 supported AV1 decode and encode and had an RDNA3 GPU, that would be a reason for me to really consider buying an Apple product.

I am not a fan of the Imagination Technologies PowerVR GPU tech Apple is using right now.

But I also admit that historically it was the best "watts per FPS" technology. Today, though, all GPU designs use tile-based rendering similar to the original PowerVR GPUs... so that advantage is historical, and today RDNA3 is the better solution.

