Apple Announces The M4 Chip With Up To 10 CPU Cores


  • #31
    Originally posted by dragon321 View Post
    While M-based Macs are indeed very good machines for desktops and certain workloads, the lack of external GPU support or upgradeable RAM (not to mention the lack of ECC) is a no-go for many tasks. Compare that to the Ampere Altra Developer Platform, which comes with external GPU support and DDR4 RAM slots that support up to 768 GB of RAM.
    An external GPU is pointless when the built-in GPU and NPU cover everything. Upgradeable RAM would absolutely obliterate the memory bandwidth and halve the performance.

    ECC would be nice. Really, it ought to be standard for every computer, big and small. Blame Intel for that.



    • #32
      Originally posted by Sonadow View Post

      Alternatives mean crap.

      The amount of polish and intuitiveness of a software tool means a lot more than just having the same functions or features in an alternative.

      Premiere and Vegas look, feel and handle jobs like a clumsy baboon trying to fuck its mate compared to Final Cut Pro. And in turn, the likes of DaVinci, OpenShot, Kdenlive, Cinelerra, etc. look, feel, function, perform and handle jobs like a clumsy baboon trying to fuck its mate compared to the likes of Premiere.
      DaVinci Resolve is the industry standard, and it generally performs better on a Mac even when compared to Final Cut. DaVinci Resolve even supports ProRes on Windows and Linux. The alternative to ProRes is DNxHR, which matches ProRes in both quality and file size. None of Apple's exclusive features are of any interest to Linux users. Apple isn't exactly great at contributing to the open source community, though it is good at taking from it.
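
      For anyone who wants to test that claim on Linux, here is a minimal sketch using ffmpeg's prores_ks and dnxhd encoders to produce ProRes HQ and DNxHR HQ from the same master for a size/quality comparison. It assumes ffmpeg is on your PATH with both encoders built in, and "master.mov" is just a placeholder for your own source file.

      import subprocess

      master = "master.mov"  # placeholder input file

      # ProRes 422 HQ (profile 3 in ffmpeg's prores_ks encoder)
      subprocess.run([
          "ffmpeg", "-i", master,
          "-c:v", "prores_ks", "-profile:v", "3",
          "-c:a", "copy", "prores_hq.mov",
      ], check=True)

      # DNxHR HQ via ffmpeg's dnxhd encoder (needs a 4:2:2 pixel format)
      subprocess.run([
          "ffmpeg", "-i", master,
          "-c:v", "dnxhd", "-profile:v", "dnxhr_hq", "-pix_fmt", "yuv422p",
          "-c:a", "copy", "dnxhr_hq.mov",
      ], check=True)

      Comparing the two output files is the quickest way to check the quality/size claim yourself.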



      • #33
        Originally posted by Developer12 View Post
        An external GPU is pointless when the built-in GPU and NPU cover everything. Upgradeable RAM would absolutely obliterate the memory bandwidth and halve the performance.
        Nvidia is arguably unbeatable in GPU performance, even when it comes to productivity. And the fastest computers use slotted DDR5 memory, not LPDDR.



        • #34
          Originally posted by Dukenukemx View Post
          DaVinci Resolve is the industry standard, and it generally performs better on a Mac even when compared to Final Cut. DaVinci Resolve even supports ProRes on Windows and Linux. The alternative to ProRes is DNxHR, which matches ProRes in both quality and file size. None of Apple's exclusive features are of any interest to Linux users. Apple isn't exactly great at contributing to the open source community, though it is good at taking from it.
          "Industry standard" baboon-level shit is still shit. And nobody cares for Linux users, much less Mac users.

          Apple gives its users what they expect. Commercial software vendors treat Windows and macOS as tier-one targets, with the appropriate support and development effort, and deem Linux tier-garbage, with the appropriate level of resource allocation. And that's all that matters.



          • #35
            Originally posted by Dukenukemx View Post
            Nvidia is arguably unbeatable in GPU performance, even when it comes to productivity. And the fastest computers use slotted DDR5 memory, not LPDDR.
            Do you have ANY idea how many channels of DDR5 it would take to match the bandwidth of an M1, let alone an M3? Kiss your laptops and tablets goodbye.
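
            Back-of-the-envelope, using commonly quoted theoretical peak figures (ballpark numbers, not benchmarks): a single 64-bit DDR5-4800 channel moves about 38.4 GB/s.

            DDR5_4800_CHANNEL = 4800e6 * 8 / 1e9   # 64-bit channel ≈ 38.4 GB/s

            # Commonly quoted peak bandwidths in GB/s (treat as ballpark)
            apple = {
                "M1": 68.25,
                "M1 Max": 400.0,
                "M1 Ultra": 800.0,
                "M3 Max (top config)": 400.0,
            }

            for chip, bw in apple.items():
                print(f"{chip}: {bw:.0f} GB/s ≈ {bw / DDR5_4800_CHANNEL:.1f} DDR5-4800 channels")

            # An M1 Max alone needs ~10 channels; a typical desktop board has 2,
            # and a laptop has no room for slotted channels at that count.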

            Nvidia is unbeatable if you just want to throw more electricity at the problem. Might as well strap eight 4090s to your M1 so you can train your own 120-billion-parameter LLM. /s

            With how powerful the integrated GPUs and NPUs are in an Mx chip, there really is no application that warrants an external GPU. All of them will be bottlenecked by something else first.



            • #36
              Originally posted by avis View Post
              "Cores" for the AI engine and the GPU make zero sense as they are inherently parallel. I don't understand why Apple's marketing insists on them. They could and should use teraflops/iops/whatever instead.
              In both cases, “core” is the unit of low-latency sharing. For a GPU this is obvious: you can share data via registers, via L1, or via scratchpad, all “per-core” concepts.
              The same is conceptually true of the ANE, though the details differ a lot.
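
              A rough illustration of the scratchpad point, sketched in Python/Numba for a CUDA GPU (my assumption for the example; the ANE works differently, as noted): every thread in a block shares a buffer that physically lives on the one SM ("core") running the block, so the synchronization never has to leave that core.

              import numpy as np
              from numba import cuda, float32

              TPB = 128  # threads per block; must be a power of two here

              @cuda.jit
              def block_sum(x, out):
                  # Per-block scratchpad, resident on the SM running this block
                  buf = cuda.shared.array(TPB, float32)
                  tid = cuda.threadIdx.x
                  i = cuda.blockIdx.x * cuda.blockDim.x + tid
                  buf[tid] = x[i] if i < x.size else 0.0
                  cuda.syncthreads()  # cheap: synchronizes only within the SM

                  stride = TPB // 2
                  while stride > 0:
                      if tid < stride:
                          buf[tid] += buf[tid + stride]
                      cuda.syncthreads()
                      stride //= 2

                  if tid == 0:
                      out[cuda.blockIdx.x] = buf[0]

              x = np.random.rand(1 << 20).astype(np.float32)
              blocks = (x.size + TPB - 1) // TPB
              partial = np.zeros(blocks, dtype=np.float32)
              block_sum[blocks, TPB](x, partial)  # numba handles host/device copies
              print(partial.sum(), x.sum())       # should agree within float error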



              • #37
                Originally posted by Developer12 View Post
                Do you have ANY idea how many channels of DDR5 it would take to match the bandwidth of an M1, let alone an M3? Kiss your laptops and tablets goodbye.
                CPUs and GPUs have different performance requirements: CPUs prefer lower latency over insane amounts of bandwidth, while GPUs prefer bandwidth over low latency. Why do you think AMD and Intel can outperform Apple's M series while using DDR5? LPDDR5 memory is a cheaper alternative to DDR5 and GDDR6; it has neither the low latency of the former nor the high bandwidth of the latter. Why do you think the RTX 4090 makes Apple's GPUs look slow? Sadly, Nvidia's RTX 4090 also makes Apple's hardware look expensive. You know how many 4090s you can buy instead of a $10K Mac Pro? You know how many times faster a much cheaper PC is compared to Apple's? In Linus's test, it was 6x faster.
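
                To put rough numbers on that trade-off (published theoretical peaks, ballpark only; latency is a separate axis these figures don't show):

                # Theoretical peak bandwidths in GB/s, as commonly published
                peaks = {
                    "Dual-channel DDR5-5600 (desktop CPU)": 2 * 5600e6 * 8 / 1e9,  # ~89.6
                    "M3 Max LPDDR5 (unified memory)":       400.0,
                    "RTX 4090 GDDR6X (384-bit @ 21 Gbps)":  384 / 8 * 21,          # 1008.0
                }
                for name, bw in peaks.items():
                    print(f"{name}: {bw:.1f} GB/s")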

                Nvidia is unbeatable if you just want to throw more electricity at the problem. Might as well strap eight 4090s to your M1 so you can train your own 120-billion-parameter LLM. /s
                A single RTX 4090 is several times faster than anything Apple has right now, which justifies the power demand. Nvidia is so much the gold standard that everyone is training their AI on Nvidia hardware. Nvidia's stonks went up while Apple's dropped, which prompted Apple to perform a historic buyback.

                With how powerful the integrated GPUs and NPUs are in an Mx chip, there really is no application that warrants an external GPU. All of them will be bottlenecked by something else first.
                The NPUs are not faster than a GPU, at least on Intel Meteor Lake chips. They are, however, much more energy efficient than using a GPU. Linus shows this near the end of his video.
                Last edited by Dukenukemx; 08 May 2024, 02:45 AM.



                • #38
                  Originally posted by skeevy420 View Post
                  It makes perfect sense when you think about their target audience: people with slightly above-average income and average-to-low intelligence, like teens and trendy people. Well-to-do people aren't necessarily smarter, and core count has been a common way to distinguish between better and worse product lines in PC CPUs for decades now.
                  This reminds me of the cliché of "smarter" Linux users: "But the system doesn't run smoothly on my 15-year-old Celeron anymore..." On the other hand, I don't always understand why so many Linux users run such old systems. Using hardware for a long time is a positive thing; that's what the second-hand market is for. Or do Linux users have no money at all? Or do they not want Linux on their new or main PC? My Ryzen 7840 mini-PC has its own M.2 SSD for Windows and Linux and feels at home in both worlds.



                  • #39
                    Originally posted by Developer12 View Post

                    An external GPU is pointless when the built-in GPU and NPU cover everything. Upgradeable RAM would absolutely obliterate the memory bandwidth and halve the performance.

                    ECC would be nice. Really, it ought to be standard for every computer, big and small. Blame Intel for that.
                    The GPU in the M3 Pro is laughably bad. In native apps it is comparable to an RTX 4060; in non-native ones (most games and older apps) it trades blows with an Intel UHD 730. The CPU is only strong in benchmarks: Rust and JS development is 1.5-3x slower than on a 13700K, and Docker is not native and runs 2-4x slower. There is only one use case I found where Macs can be better: running AI with 64-128 GB of unified memory (but at the cost of 2-3 RTX 4090s).



                    • #40
                      Originally posted by Drep View Post

                      The GPU in the M3 Pro is laughably bad. In native apps it is comparable to an RTX 4060; in non-native ones (most games and older apps) it trades blows with an Intel UHD 730. The CPU is only strong in benchmarks: Rust and JS development is 1.5-3x slower than on a 13700K, and Docker is not native and runs 2-4x slower. There is only one use case I found where Macs can be better: running AI with 64-128 GB of unified memory (but at the cost of 2-3 RTX 4090s).
                      Are you even making a distinction between CPU and GPU performance?
