Apple M1 Open-Source GPU Bring-Up Sees An Early Triangle

  • Apple M1 Open-Source GPU Bring-Up Sees An Early Triangle

    Phoronix: Apple M1 Open-Source GPU Bring-Up Sees An Early Triangle

    The open-source/Linux Apple M1 work continues to be quite busy this week... The latest is from Alyssa Rosenzweig, who has been reverse-engineering the M1 graphics processor and has now managed to write some early, primitive code for rendering a triangle...
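As an aside for readers unfamiliar with GPU bring-up: the "triangle" milestone usually starts from hand-built vertex data in normalized device coordinates, packed into the exact byte layout the hardware expects. The sketch below is purely illustrative — the struct layout and names are assumptions for demonstration, not the actual reverse-engineered M1 vertex format.

```python
import struct

# A triangle in normalized device coordinates (x, y) with an RGBA colour per
# vertex -- the classic first test for any GPU bring-up. This layout is
# hypothetical; the real M1 format had to be reverse-engineered.
vertices = [
    # x,    y,    r,   g,   b,   a
    ( 0.0,  0.5,  1.0, 0.0, 0.0, 1.0),
    (-0.5, -0.5,  0.0, 1.0, 0.0, 1.0),
    ( 0.5, -0.5,  0.0, 0.0, 1.0, 1.0),
]

def pack_vertex_buffer(verts):
    """Pack vertices as little-endian 32-bit floats, 6 floats per vertex."""
    return b"".join(struct.pack("<6f", *v) for v in verts)

buf = pack_vertex_buffer(vertices)
# 3 vertices * 6 floats * 4 bytes = 72 bytes
```

Getting even this far on undocumented hardware means the buffer layout, command submission, and shader binary format all had to be figured out first.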


  • #2
    Good luck because that will be even more undocumented than Nvidia hardware.

    Comment


    • #3
      Originally posted by rmfx View Post
      Good luck because that will be even more undocumented than Nvidia hardware.
      Considering how long these have been available, I'd say this is already pretty good progress. I'm sure Apple's GPU isn't as complex as Nvidia's, too.

      Comment


      • #4
        Originally posted by schmidtbag View Post
        I'm sure Apple's GPU isn't as complex as Nvidia's, too.
        I am surprised any developer is spending their time on either of these two shites, but I do understand it helps keep Linux viable on any re-purposed hardware in the future, and for that both are just as useful as each other.

        Comment


        • #5
          Originally posted by schmidtbag View Post
          Considering how long these have been available, I'd say this is already pretty good progress.
          Yes, while a triangle seems trite, it is an important first step, as seen from previous efforts for GPU bring-up from other vendors over the decades. Big congratulations to Alyssa!

          I'm sure Apple's GPU isn't as complex as Nvidia's, too.
          Given that the GPU is targeted at implementing Metal 2, which is Vulkan-like, the capabilities/functionality are (sort of) more exposed to applications, so being able to look at how macOS generates those requests, as Alyssa mentioned doing and comparing against, is going to be a plus for getting things working.
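For context, "looking at how macOS generates those requests" is typically done by interposing on the system graphics library and logging each call before forwarding it. The sketch below shows that wrap-and-log idea in miniature; `TracingProxy` and `FakeEncoder` are invented names for illustration, not the actual tooling used for the M1 work.

```python
class TracingProxy:
    """Wrap any object and record its method calls before forwarding them --
    the same idea as interposing a graphics library to capture the command
    stream an application emits."""
    def __init__(self, target, log):
        self._target = target
        self._log = log
    def __getattr__(self, name):
        # Only called for attributes not found on the proxy itself.
        attr = getattr(self._target, name)
        if callable(attr):
            def wrapper(*args, **kwargs):
                self._log.append((name, args))
                return attr(*args, **kwargs)
            return wrapper
        return attr

class FakeEncoder:
    """Stand-in for a real command encoder object."""
    def draw_primitives(self, kind, start, count):
        return f"draw {count} {kind}"

log = []
enc = TracingProxy(FakeEncoder(), log)
enc.draw_primitives("triangle", 0, 3)
# log now holds [('draw_primitives', ('triangle', 0, 3))]
```

Comparing such a log against what the hardware actually receives is how the undocumented command format gets mapped out, call by call.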
          Last edited by CommunityMember; 22 January 2021, 04:52 PM.

          Comment


          • #6
            Originally posted by kpedersen View Post
            I am surprised any developer is spending their time ...
            And some people climb mountains just because they can. More (GPU?) power to them if they want, and can, do so.

            Comment


            • #7
              It's a rather simple tile-based GPU, very restrictive, but OK for the also very restrictive, backwards-thinking Metal API. Most likely it's based on the SGI GPUs they licensed for their SoCs.
              The only real benefit would be if that would jumpstart the PowerVR efforts.

              Comment


              • #8
                All this razzle dazzle over the M1... let's see what happens when AMD gets their hands on 5nm EUV; then we can judge just how much is down to the process and how much is an Apple reality distortion field. Good luck to their effort, but I wouldn't piss on Apple to help put it out if it were on fire.

                Comment


                • #9
                  Originally posted by Slartifartblast View Post
                  All this razzle dazzle over the M1... let's see what happens when AMD gets their hands on 5nm EUV; then we can judge just how much is down to the process and how much is an Apple reality distortion field. Good luck to their effort, but I wouldn't piss on Apple to help put it out if it were on fire.
                  It performs almost exactly as you'd expect a modern low-power x86 chip to perform on 5nm. Not worse, not better.

                  Comment


                  • #10
                    Originally posted by Alexmitter View Post

                    It performs almost exactly as you'd expect a modern low-power x86 chip to perform on 5nm. Not worse, not better.
                    As there isn't a 5nm EUV TSMC x86 chip available, that's just a weather prediction 30 days in advance; take it with a pinch of salt.

                    Comment
