Reverse Engineering, Open-Source Driver Writing Continues For Apple's M1 GPU


  • abott
    replied
    Originally posted by lucrus View Post
    Have a nice day.
    Don't tell others what to do. It's rude.



  • lucrus
    replied
    Originally posted by abott View Post

    And I'll never be "well behaved" when all I do is call out completely moronic posts for how stupid they are usually.
    Being well behaved towards me or others is not a favor you do me or others, but yourself. Anyway, if you don't understand even those basic rules of human relations, there's nothing I can do for you.

    Have a nice day.
    Last edited by lucrus; 02 October 2021, 05:14 PM.



  • abott
    replied
    Originally posted by lucrus View Post

    And you have no idea about how to be well behaved.
    I doubt there's anybody more anti-Apple than me, even on this forum. I have never owned a piece of their hardware and very likely never will. So that's an idiotic statement, and so is the "fanboy" bit.

    And I'll never be "well behaved" when all I do is call out completely moronic posts for how stupid they are usually.



  • bridgman
    replied
    Originally posted by OneTimeShot View Post
    No.... but they decided to require signed firmware, and I don't think that was targeted at any other project. It's not like Nvidia charge for their closed source drivers.
    One tweak - the issue as I understand it is the combination of (a) signed firmware which prevents substitution of RE'ed FW and (b) not licensing the firmware in a way that allows redistribution.

    We also use signed firmware but we provide it under a custom license specifically written to allow redistribution in the Linux ecosystem.



  • lucrus
    replied
    Originally posted by kieffer View Post

    I am a user of both Apple and Nvidia hardware and a fanboy of neither of them ... what I can tell is that the CPU part of the M1 behaves very well: it is the first time (in a couple of decades) that a processor is able to exploit its 2 memory channels *at their full capacity* with a single core/thread! Caches are pretty large in those chips, especially the L1. To me it is an achievement which justifies the single-thread programming style of the Apple ecosystem.
    So you are a fanboy after all.



  • OneTimeShot
    replied
    Originally posted by Charlie68 View Post
    I don't think Nvidia threw Molotov cocktails at Nouveau developers.
    No.... but they decided to require signed firmware, and I don't think that was targeted at any other project. It's not like Nvidia charge for their closed source drivers.

    Apple probably don't care if Linux supports their stuff or not. In the same way as they don't care if Windows supports their stuff. The progress that will be made by the Apple GPU driver (a fairly niche GPU choice for Linux) will only serve to show just how much Nvidia (the dominant GPU choice for users generally) is hindering Nouveau.



  • Charlie68
    replied
    Originally posted by iskra32 View Post

    Obviously yes, but that's not my point. My point is that in 2021 the primary thing limiting Nouveau is Nvidia deliberately stonewalling the project rather than just investing zero effort in FOSS like most companies, which puts Nvidia a league above Broadcom, Mali, Vivante and, from what we know, Apple in terms of being scum. It also means that the AGX driver has the chance to suck less than Nouveau.
    Forgive me if I insist ... either there is documentation and then there are no problems, or there is no documentation and you have to make do with reverse engineering; there are no alternatives.
    I don't think Nvidia threw Molotov cocktails at Nouveau developers.



  • kieffer
    replied
    Originally posted by Danny3 View Post
    So happy and proud that I have never been an Apple customer !
    As I am now by not being a Nvidia customer anymore.
    IMO, these companies are not to blame, but their users who enable them by giving them money to sit in a cage.
    I am a user of both Apple and Nvidia hardware and a fanboy of neither of them ... what I can tell is that the CPU part of the M1 behaves very well: it is the first time (in a couple of decades) that a processor is able to exploit its 2 memory channels *at their full capacity* with a single core/thread! Caches are pretty large in those chips, especially the L1. To me it is an achievement which justifies the single-thread programming style of the Apple ecosystem.



  • mdedetrich
    replied
    Originally posted by computerquip View Post

    Vulkan is (at its core) lower-level and also more extensible than Metal. You can probably implement Metal with Vulkan. However, since Apple doesn't support Vulkan, what's ended up occurring is a scenario where we need Vulkan over Metal. Sooo, we ended up with an initiative for a particular subset of Vulkan called "Vulkan Portability" that's designed to be implemented over things like Metal. We don't get the full Vulkan API, and we don't even get a very conformant API either. The core extension provided by the Vulkan Portability initiative is VK_KHR_portability_subset which allows you to identify differences in the implementation caused by being forced to implement over something like Metal (or D3D12 for that matter).

    The implementation of Vulkan which includes support for VK_KHR_portability_subset and is based on Metal is called MoltenVK.
    There is already a translation layer called MoltenVK which converts directly from Vulkan to Metal, and according to their README at https://github.com/KhronosGroup/Molt...kan-compliance this theoretically shouldn't be too hard.

    Note that MoltenVK is just an API translation layer, so just because MoltenVK cannot implement some functionality on Metal (because there simply isn't an equivalent API call in Metal) doesn't mean that at the hardware level Apple's integrated GPU cannot implement such functionality.

    In any case, from what I have heard, Vulkan is so similar to Metal that a lot of game developers consider it a trivial translation. So I can understand not implementing Vulkan on grounds of familiarity with one API or the other, but the argument that Vulkan is too different from Metal doesn't hold much water.

    Also, a good/performant implementation is likely to be much easier in Vulkan than in OpenGL. Vulkan being low-level is a real advantage here, because you don't have to implement an overly complex high-level API that does a huge amount of magic/trickery behind the scenes.
    Last edited by mdedetrich; 16 September 2021, 11:02 AM.



  • Yoshi
    replied
    For those who are interested in all the reverse engineering work for M1:

