Apple Announces The M4 Chip With Up To 10 CPU Cores


  • #61
    Originally posted by Jumbotron View Post

    I have a MacBook Pro 2023 model with the exact specs: 18 GB of RAM and a 512 GB SSD. It's my first Mac ever, at the tender age of 60, after computing on IBM hardware even before the original IBM PC and DOS 1.0. And I can unequivocally state that if you paid the amount of money I did, and I know you did because you said so, just to game on a MacBook Pro in Wine or emulation mode, not even to mention native ports, you sir are a COSMICALLY ignorant moron. I would drop the cosmic but keep the ignorant moron if you paid more than the price of an Xbox or a PlayStation 5 for any hardware, Windows or Linux, just to game. LOL….
    And where did I say that I bought the MacBook just to game? I bought it to check that I wasn't missing anything, because the last time I used macOS it was 10.8.5 on an iMac.
    I don't think it is too much of a stretch to expect a computer at this price, with this much hype, to excel at least at something I do: gaming, Rust programming, JS programming, running AI, modeling in Fusion 360, reading and research. Slow hardware, a glossy screen, an atrocious keyboard, and a buggy, uncustomizable OS made sure it would be bad at everything I tried to do with it. But it still did its job of showing me that I wasn't missing anything, and I can sell it without a second thought.

    Comment


    • #62
      Originally posted by Developer12 View Post

      Then that isn't an argument for getting an eGPU.
      My arguments for getting an eGPU are upgradability and cost. You need to pay for a Mac Studio or an M3 Max MacBook just to have a passable GPU and be able to connect 3-4 monitors. You also need to buy overpriced configurations again if you want to upgrade RAM or CPU without losing your video outputs and GPU power.

      Intel mini PCs and motherboards already come with 3-4 video outputs, but I also have the freedom to use the eGPUs and discrete GPUs that I already have.

      Comment


      • #63
        Originally posted by Dukenukemx View Post
        It's likely that Apple's GPU is a derivative of Imagination's PowerVR. The Asahi Linux team figured this out. You can also see that Apple still pays patent licensing fees to Imagination for PowerVR. Their GPUs aren't exactly catching up to Nvidia when the M3 only just got hardware-accelerated mesh shading and ray tracing, features Nvidia had in 2018 and AMD had in 2020.

        Apple doesn't support ARMv9 with SVE2. I don't think it supports SVE1. That means Apple doesn't have anything equivalent to AVX-512 and therefore AMD's Zen 4.

        AMD already matched Apple with Dragon Range last year. There's a reason why Mac sales are low, and yes, lower than the rest of the industry. Why do you think Apple is sticking M2s and now M4s into tablets? I can't see why the $3,500 Vision Pro has an M2 instead of an M3, other than that Apple has too many of them. They can't sell these chips, and TSMC probably requires them to order a certain volume or lose their access to 3nm. So naturally Apple is putting M-series chips into devices where they have no business, like tablets, especially with iPadOS instead of macOS. Dare I say Linux, but apparently nobody here who owns Apple devices actually uses Linux as a daily driver.

        You may not like it, but it will happen. When Apple released the M1 I said that Apple had a 3-4 generation lead over AMD and Intel before they would catch up in power efficiency and battery life. AMD did it with Dragon Range, and Intel is doing it with Meteor Lake. Meteor Lake, which is technically inferior to Apple's M3, is selling so well that Intel can't keep up with demand. You can buy a Meteor Lake based laptop and do everything on it, plus play games. You can even run Linux on it, which is something people here don't seem to like to do. You get something like 5 hours of mild use on these laptops, compared to the longer runtimes from Apple and AMD, but that's plenty for most people. Meteor Lake is built on a 7nm-class Intel process, while the GPU tile is on TSMC's 5nm. Eventually Intel will get their manufacturing to match TSMC's and then eventually beat it. Don't hold your breath on anything better than 3nm from TSMC for a while. As it is, Apple doesn't even support AV1 encoding, while AMD, Intel, and Nvidia all do; as far as I know, the M4 doesn't have AV1 encoding either. The M4 doesn't have Wi-Fi 7 either, and support for 2 external screens seems exclusive to the M3 MacBook Air. It must be costing Apple a fortune to develop chips that make up a small fraction of the laptop market, while also seeing a massive decline in MacBook sales. At some point it just makes sense for Apple to let someone else make chips for them. Don't be surprised if M-series chips end up in iPhones next.
        If you had been following Apple's ARM-based chip development you would know that Apple at one time owned a substantial minority stake in Imagination itself, and that they straight up welded Imagination GPUs onto their early A-series SoCs in the early iPhone models. As your link correctly states, Apple still licenses Imagination's PowerVR tech, but the GPU internals are a little murky now. What I have read is that Apple's GPUs are a clean-sheet design that uses Imagination's PowerVR ISA, in much the same way that Apple's CPU is a clean-sheet design that just happens to use the ARM ISA, like every other ARM SoC derivative that isn't straight-up ARM itself.

        As far as the claim that Apple is behind in rolling out features like ray tracing and AV1 encode (although the M3 and M4 GPUs do have AV1 decode)… sure. But that misses the point of how Apple is more sophisticated and advanced than Intel, AMD or Nvidia: all three are incapable of producing a monolithic, fanless SoC with an onboard desktop-performant GPU and, in the case of the M4, a class-leading NPU capable of 38 TOPS (which Intel won't have until later this year, if not next year), all tied together with heterogeneous zero-copy memory. The only company that ever came close (well, never the fanless part, even in a desktop, much less a laptop) was AMD with Kaveri over 10 years ago, and they abandoned that Fusion project when Lisa Su came in. Now Intel and AMD are forced to produce chiplet packages or SoP (System on Package) designs because they can't make a monolithic die that meets the thermal and performance requirements seen in Apple's designs.

        Because of this, Microsoft was forced to engage with Qualcomm when it decided to do what it has done since the original Macintosh and Mac OS: simply copy whatever Apple is doing and come up with a Surface-branded MacBook Pro clone running a version of Windows re-engineered for the ARM ISA. Only Qualcomm, with Nuvia's engineering, is capable of producing a fanless, desktop-performant, monolithic SoC that comes close to the performance of Apple's M-series chips. Nvidia, the only one of the Big Three that makes ARM-based CPUs and SoCs, might be capable as well, but they won't bother: they make their money on servers, HPC and AI supercomputers, and I really doubt they could cut an RTX GPU down to meet the thermal specs while still matching or exceeding Apple's GPU performance. Nvidia has never been known as a power-efficiency leader.

        As far as Apple SoCs not using ARM's SVE… so what. No other company puts SVE vector units in their consumer SoCs, as far as I know, because of die space and thermal concerns. The Fugaku supercomputer and Nvidia's AI SoCs, sure, but nobody there cares about size or thermals. Apple SoCs still use ARM's NEON vector units in all their cores, both Performance and Efficiency. In addition to these CPU vector units there is the GPU for vector math, as with all GPUs, and each Apple SoC also has a dedicated matrix-math coprocessor called AMX. Intel has one as well, but only their Xeons get it, not their consumer chips. Likewise, Intel restricts AVX-512 to Xeons, and for the same die space and thermal reasons is cutting AVX-512 down to AVX10, which combines 256-bit AVX and AVX-512 instructions with a scheme that executes some AVX-512 instructions in an interleaved fashion so they fit a 256-bit pipeline. I couldn't be bothered to invest much brain time in it, as I have completely given up on Intel and their shenanigans and incompetence. Be that as it may, the fact that Apple SoCs don't have ARM's SVE units is a nothing burger. However, to end this conversation, there are rumors out in reverse-engineering hacker land that Apple's new M4 may have ARM's SME in it, as people have captured flags and calls for SME instructions. If that turns out to be true, it would indicate that Apple's M4 has moved into ARMv9 territory, since SME builds on SVE and SVE2. But since there is no documentation from Apple so far, it's just conjecture at this point.
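        On the SME rumor: a quick way to check this sort of thing yourself is to query macOS's per-feature sysctl keys from C. This is only a minimal sketch; the hw.optional.arm.FEAT_SME key name is an assumption based on Apple's documented hw.optional.arm.FEAT_* naming scheme, and on chips or OS versions without SME the lookup will simply report the key as absent.

            /* Sketch: probe Apple Silicon CPU feature flags via sysctl on macOS.
               Assumption: SME, if present, is exposed as hw.optional.arm.FEAT_SME. */
            #include <stdio.h>
            #include <sys/sysctl.h>

            static void check(const char *key) {
                int val = 0;
                size_t len = sizeof(val);
                if (sysctlbyname(key, &val, &len, NULL, 0) == 0)
                    printf("%-30s = %d\n", key, val);
                else
                    printf("%-30s : not present\n", key);
            }

            int main(void) {
                check("hw.optional.arm.FEAT_SME");  /* rumored on M4 (assumed key name) */
                check("hw.optional.arm.FEAT_SVE");  /* absent on M1-M3 */
                check("hw.optional.AdvSIMD");       /* NEON, present on all M-series */
                return 0;
            }

        A 1 from that first key would still only show that the OS advertises the feature, not how the hardware implements it, so it remains conjecture until Apple documents it.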

        Comment


        • #64
          Originally posted by Jumbotron View Post
          As far as the claim that Apple is behind in rolling out features like ray tracing and AV1 encode (although the M3 and M4 GPUs do have AV1 decode)… sure.
          I said encode, not decode. If you're using these devices for productivity then anything running AMD/Intel/Nvidia will be better than even Apple's yet-to-be-released M4.
          But that misses the point of how Apple is more sophisticated and advanced than Intel, AMD or Nvidia: all three are incapable of producing a monolithic, fanless SoC with an onboard desktop-performant GPU
          What part of Apple's MacBook Air M3 running at 113°C did you miss? It works fanless if you don't mind it thermal throttling. Also, a monolithic chip is not a good thing. If you know anything about manufacturing, you know that large chips are more likely to contain defects; you then either disable parts of the chip or lower the clock speed, whereas with chiplets you can pick the best dies and glue them together. Meanwhile AMD, Intel, and even Nvidia are going chiplet. It's obvious who has the more advanced hardware here. Not only that, but Apple is closed source while the industry is moving towards open source. How much of an advantage does Apple have with closed source?
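          To put a rough number on the defect argument, here is a back-of-the-envelope sketch using the textbook Poisson yield model (yield = exp(-area x defect density)). The defect density and die areas below are illustrative assumptions, not figures for any real Apple, AMD or Intel product.

              /* Sketch: why several small chiplets yield better than one big die.
                 Model and numbers are illustrative assumptions only. */
              #include <stdio.h>
              #include <math.h>

              static double yield(double area_mm2, double defects_per_cm2) {
                  return exp(-(area_mm2 / 100.0) * defects_per_cm2);  /* mm^2 -> cm^2 */
              }

              int main(void) {
                  double d = 0.2;  /* assumed defects per cm^2 */
                  printf("600 mm^2 monolithic die yield: %.0f%%\n", 100.0 * yield(600.0, d));
                  printf(" 75 mm^2 chiplet yield       : %.0f%%\n", 100.0 * yield(75.0, d));
                  return 0;
              }

          With those assumed numbers only about 30% of the big dies come back clean versus about 86% of the small chiplets, which is the yield advantage being described here.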
          and in the case of the M4 a class-leading NPU capable of 38 TOPS (which Intel won't have until later this year, if not next year), all tied together with heterogeneous zero-copy memory.
          I'd wait for benchmarks. The A17 Pro already does 34 TOPS, and AMD's next mobile CPU is supposed to do 76 TOPS. Not all TOPS are the same. I also don't know what use these NPUs have for computers today.
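          As a rough illustration of why quoted TOPS figures aren't directly comparable: the headline number is essentially MAC count x 2 operations x clock, and quoting it at a narrower data type (say INT8 instead of FP16) typically doubles it on the same silicon. The figures below are made-up assumptions, not specs of any shipping NPU.

              /* Sketch: peak "TOPS" depends on assumed MAC count, clock and precision. */
              #include <stdio.h>

              int main(void) {
                  double macs = 16384;       /* assumed MAC units at the quoted precision */
                  double clock_hz = 1.2e9;   /* assumed NPU clock */
                  double tops = macs * 2.0 * clock_hz / 1e12;  /* one MAC = 2 ops */

                  printf("peak at quoted precision           : %.1f TOPS\n", tops);
                  printf("same hardware, half-width data type: %.1f TOPS\n", tops * 2.0);
                  return 0;
              }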
          Now Intel and AMD are forced to produce chiplet packages or SoP (System on Package) designs because they can't make a monolithic die that meets the thermal and performance requirements seen in Apple's designs.
          Chiplets are an advantage, not a disadvantage. Again, bigger chips mean more defects. AMD does have a problem with power consumption due to chiplets, but they say they have already found a solution. Intel's tiles sit so close together that these problems aren't a real issue for them, and we will likely see AMD go with tiles instead of chiplets in the future.
          Because of this, Microsoft was forced to engage with Qualcomm when it decided to do what it has done since the original Macintosh and Mac OS: simply copy whatever Apple is doing and come up with a Surface-branded MacBook Pro clone running a version of Windows re-engineered for the ARM ISA. Only Qualcomm, with Nuvia's engineering, is capable of producing a fanless, desktop-performant, monolithic SoC that comes close to the performance of Apple's M-series chips. Nvidia, the only one of the Big Three that makes ARM-based CPUs and SoCs, might be capable as well, but they won't bother: they make their money on servers, HPC and AI supercomputers, and I really doubt they could cut an RTX GPU down to meet the thermal specs while still matching or exceeding Apple's GPU performance. Nvidia has never been known as a power-efficiency leader.
          If you know anything about Microsoft and ARM then you know how many times they have failed. Windows RT was a failure because Microsoft required software to come from the app store and there was no x86 backwards compatibility. Microsoft tried again with their Surface devices, which sold terribly. Microsoft will of course try again, because they want a closed ecosystem like Apple's. ARM gives companies an opportunity to close up their ecosystem and become like Apple.
          As far as Apple SoCs not using ARM's SVE… so what. No other company puts SVE vector units in their consumer SoCs, as far as I know, because of die space and thermal concerns.
          You don't see this problem with AVX-512 and AMD SoCs. The Zen 4-based Ryzen 7000 series all come with AVX-512 now, and you don't hear about problems with thermals.
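          For what it's worth, AVX-512 code on a consumer Zen 4 part is nothing exotic; below is a minimal sketch summing an array with 512-bit intrinsics. Zen 4 is widely reported to execute these instructions on 256-bit datapaths internally, which is part of why it stays within its thermal budget. Compile with something like gcc -O2 -mavx512f on an AVX-512-capable machine.

              /* Sketch: sum a float array with AVX-512 intrinsics (16 floats per op). */
              #include <stdio.h>
              #include <immintrin.h>

              static float sum_avx512(const float *x, size_t n) {
                  __m512 acc = _mm512_setzero_ps();
                  size_t i = 0;
                  for (; i + 16 <= n; i += 16)
                      acc = _mm512_add_ps(acc, _mm512_loadu_ps(x + i));
                  float total = _mm512_reduce_add_ps(acc);  /* horizontal sum */
                  for (; i < n; i++)                        /* scalar tail */
                      total += x[i];
                  return total;
              }

              int main(void) {
                  float data[100];
                  for (int i = 0; i < 100; i++) data[i] = 1.0f;
                  printf("sum = %.1f\n", sum_avx512(data, 100));  /* prints 100.0 */
                  return 0;
              }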

          Comment


          • #65
            Originally posted by Drep View Post

            My arguments for getting an eGPU are upgradability and cost. You need to pay for a Mac Studio or an M3 Max MacBook just to have a passable GPU and be able to connect 3-4 monitors. You also need to buy overpriced configurations again if you want to upgrade RAM or CPU without losing your video outputs and GPU power.

            Intel mini PCs and motherboards already come with 3-4 video outputs, but I also have the freedom to use the eGPUs and discrete GPUs that I already have.
            Welcome to Apple. Upgradability and low cost are never on the menu.

            Comment


            • #66
              Originally posted by ezst036 View Post

              Link?


              This was in Oct 2023; I have to believe that things are progressing nicely.

              Comment


              • #67
                Originally posted by Dukenukemx View Post
                DaVinci Resolve is an industry standard, and generally performs better on Mac, even when compared to Final Cut. DaVinci Resolve even supports ProRes on Windows and Linux. The alternative to ProRes is DNxHR, which matches ProRes in quality and file size. None of Apple's exclusive features are of any interest to Linux users. Apple isn't exactly great at contributing to the open source community, though they are good at taking from it.
                I beg to differ. I am a Linux user and use it more often than Windows, but I would love to have hardware-accelerated ProRes encode and decode, a Mac-exclusive feature.

                Comment


                • #68
                  Originally posted by timofonic View Post
                  Because Linux isn't just an OS; it's a statement.
                  That right there is the biggest problem with many Linux users: it's a statement, not a tool to get the job done.

                  I personally don't make statements with the tools I use; I use whatever I feel is the right tool for the job.

                  Originally posted by timofonic View Post
                  In the end, choosing between Linux and macOS is like choosing between a DIY rocket kit and a first-class ticket to the moon. Both will get you to the stars, but only one lets you tinker with the thrusters.
                  Minor problem: both will not get you to the stars. The first-class professional trip has a great chance of getting you to the stars; with the DIY kit you will end up as stardust when it blows up.

                  Comment


                  • #69
                    Originally posted by Dukenukemx View Post
                    ProRes is Apple's software and they never ported it to anything but Apple devices. You have alternatives on Windows, though I don't know about Linux. Anything Apple can do, you can do more of on a PC, and better, at 1/4 the price.
                    ProRes is supported on Windows and Linux, and I use it all the time, but I have heard of people having issues with ProRes produced by ffmpeg, though I have not had any myself; I have had issues with DNxHR produced by ffmpeg on Linux.
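                    If you want to check what your own build offers, ffmpeg's libavcodec can be queried from C for its software ProRes and DNxHD/DNxHR encoders. Below is a minimal sketch; the encoder names prores_ks, prores_aw and dnxhd are the ones shipped in typical ffmpeg builds, but whether they are present depends on how your copy was configured.

                        /* Sketch: list which ProRes/DNx encoders this libavcodec build provides. */
                        #include <stdio.h>
                        #include <libavcodec/avcodec.h>

                        int main(void) {
                            const char *names[] = { "prores_ks", "prores_aw", "dnxhd", NULL };
                            for (int i = 0; names[i]; i++) {
                                const AVCodec *c = avcodec_find_encoder_by_name(names[i]);
                                printf("%-10s : %s\n", names[i],
                                       c ? c->long_name : "not available in this build");
                            }
                            return 0;
                        }

                    These are the software encoders; the hardware-accelerated ProRes path mentioned above is a separate encoder exposed only through Apple's VideoToolbox on Macs.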

                    Here's the thing with ProRes: many high-end digital cameras record both RAW and ProRes at the same time, and there are good reasons for doing so. The codec guide linked here puts it this way: "Codecs don't need to be hard. Really. This mega-post explains everything you need to know in order to choose the right codec for every project."


                    If you plan on working in movie or TV production professionally, you cannot escape ProRes, so you might as well have at least one Mac in your production pipeline.

                    Once you make the decision to have at least one Mac, you may as well simplify your life and do everything with Macs.

                    Comment


                    • #70
                      Originally posted by Dukenukemx View Post
                      Videos
                      Did you know Linus claims he was offered $100M for his channel and he turned it down?

                      If someone offered me $100M they wouldn't have a chance to finish their sentence before I had signed the agreement.

                      Comment
