
Apple Announces The M1 Pro / M1 Max, Asahi Linux Starts Eyeing Their Bring-Up


  • #41
    Originally posted by lucrus View Post

    So how much is it making you save on your electricity bills? Let's say 5 bucks a month? Hardly more than that. But a full M1 based Apple computer costs you some 700-800 bucks more than a Ryzen based one (optimistically speaking), allowing a return on investment in just... 12 years? By that time your shining M1 system will be programmatically obsoleted by Apple for sure.

    I don't think watts per dollar is a selling point for this kind of hardware. It's good for many other reasons, it's good because it's Apple (just in case you like status symbols), but it's not something to compare to any Ryzen out there when you take into account money. Apples to oranges (wow, it fits, Ryzen logo is orange and the M1 is ... well... Apple! )
    I just facepalmed. You do know we are talking about a laptop, right? The more energy a CPU uses, the more heat it creates (this is basic physics/electrical engineering), and that heat has to be removed somehow; in a laptop that is a very big problem.

    It's even starting to become a problem on the desktop. Ian Cutress just did a video at https://www.youtube.com/watch?v=016CcStnsUw showing that CPUs/GPUs are already hitting the limit of how effectively consumer cooling can handle them, due to the skyrocketing TDPs lately (thank Intel/Nvidia for that one).

    Comment


    • #42
      Originally posted by GruenSein View Post

      Sorry to say it but this kind of perspective is purely ignorant. While you might never recoup the extra price of an Apple device by saving on your electricity bill, there are plenty of good reasons for the power consumption to be important beyond that. First and foremost, a device like the new MBPs would simply not be possible with a much higher power draw. It basically gives you workstation performance on the go while still offering a long battery life, relatively silent operation and a lightweight device since it does not need huge chunks of copper to keep it cool.
      Please, it's not that I don't understand all these (obvious) points, but I'm talking about something else: what's the point of comparing an M1 to a Ryzen 5700?

      Please read the previous posts and get a clue about the subject before calling someone ignorant, thanks.

      Comment


      • #43
        Originally posted by mdedetrich View Post

        I just facepalmed. You do know we are talking about a laptop, right? The more energy a CPU uses, the more heat it creates (this is basic physics/electrical engineering), and that heat has to be removed somehow; in a laptop that is a very big problem.

        It's even starting to become a problem on the desktop. Ian Cutress just did a video at https://www.youtube.com/watch?v=016CcStnsUw showing that CPUs/GPUs are already hitting the limit of how effectively consumer cooling can handle them, due to the skyrocketing TDPs lately (thank Intel/Nvidia for that one).
        Please read the previous posts, and the one just above too.

        Comment


        • #44
          Saw this comment on Anandtech

          apple just announced the fastest windows laptop, m1 max variant with vm windows 11 😂😂

          Comment


          • #45
            Originally posted by spykes View Post
            I really hope Intel will manage to catch up to TSMC at some point, otherwise x86 PCs will never catch up to Apple ARM.
            IMO x86 is done. Too bloated and too hot. Intel has been done ever since its chip flaws became a weekly spectacle. I welcome our future ARM overlords.

            Comment


            • #46
              Originally posted by uid313 View Post
              I agree. I think even Apple's previous-generation SoC will beat the competition.



              Yeah, sure, that plays a part in it, but I've heard that the ARMv8 (AArch64) ISA is a fantastic one. I've heard ARM really nailed it with that ISA: that it is very well designed, better than anything else; that it is so effective, with such high instructions per clock cycle (IPC), that x86, being an old legacy architecture, can never reach that high an IPC.

              While the ARMv8 (AArch64) ISA is indeed well designed, and also has enough unallocated opcode space to add features that are still missing, the main reason it allows a higher IPC than the Intel/AMD ISA is very simple, and it does not have much to do with how well the ISA is designed.

              AArch64 instructions are fixed-length 32-bit instructions, while Intel/AMD instructions are variable-length, and it is not easy to determine the length of an instruction before decoding it.

              Because of this single difference, decoding 8 AArch64 instructions simultaneously, like Apple does, is quite simple, while Intel/AMD CPUs have been limited for decades to decoding at most 4 instructions per cycle. Intel's Alder Lake, launching about now, will be the first to surpass that threshold, but it is unlikely that an Intel/AMD CPU will ever manage to decode as many instructions per cycle as an AArch64 CPU.
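
              To make the decode difference concrete, here is a toy Python sketch. The encodings are hypothetical (the variable-length rule below is made up, not real x86); the point is only that fixed-length boundaries are known up front, while variable-length boundaries form a serial dependency chain:

                  # Toy model of instruction-boundary finding; hypothetical
                  # encodings, not real AArch64 or x86.

                  def boundaries_fixed(code: bytes, width: int = 4) -> list[int]:
                      # Fixed-length ISA: every instruction starts at a known
                      # offset, so 8 (or more) decoders can start in parallel.
                      return list(range(0, len(code), width))

                  def boundaries_variable(code: bytes) -> list[int]:
                      # Variable-length ISA: instruction N+1's start is known
                      # only after instruction N's length is determined, so
                      # boundary finding is inherently serial (real hardware
                      # speculates on lengths or pre-marks boundaries in the
                      # cache to work around this).
                      offsets, pos = [], 0
                      while pos < len(code):
                          offsets.append(pos)
                          length = (code[pos] & 0x03) + 1  # made-up length rule
                          pos += length
                      return offsets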

              This disadvantage is partially compensated for in Intel/AMD CPUs by the micro-operation cache of predecoded instructions, but that adds cost and it only helps for loops and frequently invoked functions.


              The fact that the ARM AArch64 ISA is well designed has a different impact: it ensures that the number of instructions required to implement a given task is not too large, so the program size is not too large in comparison with an ISA that has variable-length instructions.

              Because the Intel/AMD ISA started 50 years ago (with the Datapoint 2200) and has since accumulated thousands of changes that had to remain mostly compatible with older variants, the current encoding of Intel/AMD instructions is quite inefficient (with many bits lost to instruction prefixes). So despite one having variable-length instructions and the other fixed-length ones, the program size is not very different in most cases.


              Besides allowing an ARM program to approximately match the size of an equivalent Intel/AMD program, the good design of the AArch64 ISA ensures that it needs significantly fewer instructions for a task than other ISAs that also have fixed-length instructions, and thus the same easy simultaneous decoding, but which are poorly designed, e.g. RISC-V.

              So a poorly designed fixed-length ISA can reach the same IPC as ARM AArch64, but the total work done per cycle is less, because it needs more instructions for the same task.
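
              As a back-of-the-envelope illustration of that last point, in Python (the instruction counts are made-up numbers, not measurements):

                  def cycles_for_task(instructions: int, ipc: float) -> float:
                      # Simple performance model: time in cycles = instructions / IPC.
                      return instructions / ipc

                  dense_isa = cycles_for_task(1_000_000, ipc=8.0)   # 125000.0 cycles
                  sparse_isa = cycles_for_task(1_300_000, ipc=8.0)  # 162500.0 cycles
                  print(sparse_isa / dense_isa)  # 1.3 -> 30% more cycles at the same IPC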





              Comment


              • #47
                It's almost poetic that we are reading this announcement, because so many supported the Intel monopoly even after they stopped designing CPUs for a decade. Now that people see how good things can be when companies actually redesign CPUs and people are not throwing money at the same design over and over... everyone's just like: Yes! Time to support the next monopoly!!! LOL The wheel turns again.

                Originally posted by coder View Post
                Volks-CPU?

                Apple doesn't make anything for "everyone". They only make products for people with money.
                True, but it's also not just the volk with money; it's the ignorant volk with money.

                I used a (company-owned) MacBook Pro as a daily driver for a few years. There are a lot of good things I could say about that experience. Instead, I'll try to remind people of what they do not seem to be talking about.
                • Apple knowingly used child labour, and took 3 years to cut ties with the supplier that used it.
                • Developers are forced to program the devices the way Apple decides, which imposes inherent development costs.
                • Influencers, both the technical ones and the cultural hipsters, are free marketing puppets. The technical ones are not very accurate.
                • Apple puts self-destruct traps in its devices that intentionally damage them when a third party repairs them.
                • Customers are forced to use the devices the way Apple decides they should be used.
                • Apple blocks software that keeps its own telemetry from getting through, e.g. Little Snitch v4 (v5+ uses a watered-down kernel interface).
                • CSAM scanning... like, really... WHY WOULD GOVERNMENTS OR THE POWERFUL NOT FORCE APPLE TO ABUSE THIS?
                • Apple supports companies, like RocketReach, that data-mine your profiles across the internet for money.
                • Worst of all, they are ignorant because they don't know how much damage they do by rewarding monopolistic behaviour.
                https://www.businessinsider.com/appl...-costs-2020-12
                I’ve always been impressed by how clever Apple can get when trying to protect its repair revenue. A new report from MacRumors doesn’t disappoint.

                Governments were already discussing how to misuse CSAM scanning technology even before Apple announced its plans, say security researchers ...



                Just try to imagine if Google Chrome were the only browser, without any competition. We would have seen **** like this much earlier... Chrome trying to become an antivirus without letting any of its users know: https://www.reddit.com/r/privacy/com...your_computer/

                I'm recommending that people consider what they are doing by supporting Apple or any other company that breaks international human rights laws. Supporting monopolistic behaviour is obviously also bad, but if you're getting awesome hardware and you're more productive on Apple products, I'll leave the more challenging ethical weighing to you. If all Apple users properly considered these things and were still happy to support Apple, then that's fine.

                Comment


                • #48
                  Originally posted by nranger View Post
                  AMD, Intel, and Nvidia are chipmakers. For them, wafer starts in the fab directly correlate to revenue and profits based on how much performance and how much profit they can squeeze from each square millimeter of silicon. That means they have different forces on their design criteria than Apple. Their designs push right to the tippy-top of the volt-frequency efficiency curve for each manufacturing node to get the highest clock speeds. They add pipeline stages and use just enough cache to feed logic units at those high clock speeds. They leave much platform engineering to ODMs and OEMs, and as a result don't push SoC integration as hard with things like accelerators and super-wide on-package DRAM.
                  Sorry, I don't agree that they're simply not incentivized to try as hard. They do face more legacy and a broader set of requirements (such as maximum memory capacity), and that could be what's keeping them from doing things like in-package DRAM.

                  Originally posted by nranger View Post
                  Apple meanwhile is looking at the whole system to turn a profit. For them, they can jump to the latest manufacturing node, expend a larger transistor budget, and settle down in the efficiency curve and maximize perf/watt.
                  Apple has been maximizing perf/watt because, until now, they've been exclusively targeting mobile. So it was a much higher priority for them than for the others.

                  However, I think you're correct that they're behaving more like games console makers, who can afford to partially subsidize the hardware (or specifically the CPUs, in this case) and make it back elsewhere.

                  Comment


                  • #49
                    Originally posted by Jabberwocky View Post
                    I'm recommending people consider what they are doing by supporting Apple
                    All good points.

                    I look at these separately from appreciating their technical achievements, but you cannot separate the two when deciding whether to actually give Apple your financial support!

                    Comment


                    • #50
                      Originally posted by sedsearch View Post
                      Apple M1 related posts invite a long trail of retarded comments
                      "Judge not, lest ye be judged."

                      Originally posted by sedsearch View Post
                      Why are people so massively surprised at M1/Max/Pro and why are they even comparing it to Intel/AMD/Arm CPUs, which by definition are different devices? For every configuration of RAM/CPU, Apple have to fabricate it from scratch. All three M1 are different in size. Makes for an even greater pile of electronic waste.

                      One could debate whether building CPU+GPU+RAM+Encode/Decoder on a single chip is the right computing device, but it is not the same device as a lone CPU. AMD has been building better integrated GPU as APU which already performs quite well. Perhaps for smartphones such an architecture will be shortly seen in market from Qualcomm/Samsung/Huawei. But such a device makes no sense for anything configurable.
                      What Apple built is an APU. It's functionally equivalent to mainstream Intel CPUs, AMD APUs, and console chips. If you think it's wrong to put the GPU & video blocks on the same die as the CPU cores, then you should level the same complaints at all of them.

                      And I don't know about the latest AMD APUs, but Intel typically uses at least 2 different die sizes for its mainstream CPU product range. For instance, in Comet Lake, they had a 10-core die and a 6-core die.

                      As for DRAM, that's not on the same die! It's merely in-package. DRAM is way too big to put on die; that's why they literally have to stack multiple dies (usually 8) to fit it in the package. Did it ever occur to you that HBM uses basically the same DRAM cell design as the one on modern DDR4 DIMMs? When you put memory in the package that would otherwise occupy a couple of DIMMs, it doesn't magically shrink. That's where the stacking comes in.
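
                      The rough arithmetic behind the stacking, as a small Python sketch (the 16 Gbit die capacity is an assumption, typical of recent DDR4/LPDDR generations; actual parts vary):

                          DIE_CAPACITY_GBIT = 16  # assumed capacity of one DRAM die
                          DIES_PER_STACK = 8

                          gb_per_die = DIE_CAPACITY_GBIT / 8           # 2.0 GB from one die
                          gb_per_stack = gb_per_die * DIES_PER_STACK   # 16.0 GB per 8-die stack
                          print(gb_per_die, gb_per_stack)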

                      Originally posted by sedsearch View Post
                      A logical extension of such a device is building everything on a single chip - including storage, Wifi controller, Bluetooth controller, Audio/Video stuff, DSP, and all that which can be made on a single chip(I don't know the details of all electronics that is needed in a motherboard). It will be a truly single chip computer,
                      Ever heard the term "SoC"? It means System-on-(a)-Chip. This is exactly what phones, tablets and most Laptops use.

                      Originally posted by sedsearch View Post
                      with storage read/write at probably 100 Gb/s.
                      With storage, you run into another density problem, and the benefit of putting it in-package isn't there: NVMe already has ample performance headroom.

                      Plus, NAND tends to fail faster than CPUs, and people will want to upgrade it without replacing the entire device. So, laptops would always want to have it separate.

                      Finally, NAND flash doesn't like high temperatures. NVMe drives will throttle if you get them too hot; that's why you see some performance models with big heatsinks.

                      Comment
