Apple Announces The M1 Pro / M1 Max, Asahi Linux Starts Eyeing Their Bring-Up


  • Developer12
    replied
    Originally posted by coder View Post
    Huh??? Apple has been designing their own ARM cores for over a decade! You don't just come out of nowhere and make a chip like the M1 on your first try! Is that what you thought happened?
    I don't have to "think" it happened. It's a known part of the M1 microarchitecture. It is very well documented at this point, along with other things like the size of the uop cache, the number of pipeline stages in the functional units, and a bunch of other crap. I read technical articles on it over a year ago.

    All you have to do is google "apple m1 8-way decoding"

    Originally posted by coder View Post
    It's true that ARM's own cores are always a balance of performance and other factors. They traditionally haven't been able to target them as narrowly as Apple has. However, ARM's new V-series server/workstation cores look set to change that. The V1's are X1-derived, IIRC. However, I think the V2 might be a more purpose-built speed demon.
    No disagreements there. One can only hope.



  • nranger
    replied
    Originally posted by coder View Post

    However, I think you're correct that they're behaving more like games console makers, who can afford to partially subsidize the hardware (or specifically the CPUs, in this case) and make it back elsewhere.
    That's a very good point. Apple supports older iPhones with updates longer, largely because their 30% cut of app-store revenue makes it profitable to do so. It makes sense to make the "ownership marketing" experience consistent across all their products. The more their walled garden keeps consumers in the ecosystem, the more opportunity for Apple to make money selling them a watch, a display, an iPad, a copy of Final Cut, etc, etc.



  • coder
    replied
    Originally posted by tildearrow View Post
    Currently (two machines, one is a dinosaur desktop turned into server and another is a more powerful server): ~300W = 7200Wh/day = 216kWh/month = $37.80/month
    M1: ~15W = 360Wh/day
    That's a pretty heavy workload. Did you measure at the wall? I currently have 2 PCs + 2 monitors + 8-port KVM switch + wifi router + other accessories idling at only 144 W.



  • coder
    replied
    Originally posted by ermo View Post
    I am guessing that AMD is watching the ARM space closely these days. If I'm not mistaken, they sold their erstwhile mobile division to Qualcomm back in the day?
    So long ago (January 2009) that it's merely a historical footnote. The part they sold wasn't their core GPU team, but rather the Bitboys Oy acquisition they made in 2006.

    Originally posted by ermo View Post
    I also just confirmed that AMD has licensed its Radeon IP to not just Qualcomm but also MediaTek and Samsung in recent years, all of whom have non-trivial ARM portfolios.
    Source?

    I never heard about Qualcomm licensing anything, and their Adreno GPUs remain quite competitive.

    MediaTek is in bed with Nvidia, having recently announced they'd be releasing SoCs containing Nvidia GPU IP - even promising PC-level gaming capabilities in future iterations. It'd be a major shift in direction for them to jump ship and move to AMD.

    I've not heard of anyone in the mobile space licensing AMD GPU IP other than Samsung, which was initially described as more of a partnership (i.e. Samsung making its own customizations to RDNA, rather than taking AMD's IP as-is or paying AMD to do it like Sony and MS do).

    Originally posted by ermo View Post
    For Intel, I wonder if they can afford not to?
    Not once ARM hits a tipping point. Intel will be ready with their own ARM CPUs, but we won't hear a peep about it before then. The last thing they want to do is create any doubt among their existing customers about their long-term commitment to x86-64. If you knew that even the mighty Intel no longer believed in the future of x86, maybe you'd switch to an existing ARM solution rather than wait until Intel gets into the game.



  • coder
    replied
    Originally posted by sdack View Post
    Apple just seems to throw existing technologies together into single chips and call it a day, but does not innovate any actual new technologies. This they leave to other companies.
    Are you aware they designed these cores from scratch? Their CPU cores are far and away the most advanced in the world, as evidenced by their massive IPC advantage, even over other ARM cores. This isn't new, either. They've been leading the ARM core race pretty much since the start, and for several generations they have even surpassed x86 cores in sophistication.

    I think their GPUs are still inheriting a lot from Imagination IP they've licensed, although we know they've been customizing the GPU cores, as well.

    Originally posted by sdack View Post
    So it is interesting to see that there is no word on persistent memory technologies in the design presentations, while Intel is pushing them and will no longer put its faith in DDR alone, but will support HBM as well.
    I don't follow how it is you're trying to tie together persistent memory, DDR, and HBM in the same sentence.

    Intel is adding support for DDR5 in Alder Lake and Sapphire Rapids. Yes, the latter will also have an HBM option, but it won't be in the majority of Sapphire Rapids CPUs. We don't even know when the HBM versions will launch. Also, Knights Landing already had 16 GB of HMC in-package some 5-6 years ago, so that aspect is more of a return to form than blazing a completely new direction for them.

    Originally posted by sdack View Post
    This just tells me that Intel understands the need for faster memory, but also that data needs to be stored somewhere, and that it knows how to avoid bottlenecks further down in the coming architectures.
    Exactly what tells you that? Did you know Micron even sold off the fab where Optane memory is produced? It's not clear how Intel will continue to make the stuff. I'm sure they'll find a way if they want to, but the future of Optane is murky.

    Originally posted by sdack View Post
    Or ask yourself: what is the point of 200 or 400 GB/s memory transfer rates when you only have 64 GB of RAM and need half a minute to load and save your work?
    If I were asking myself that, I'd first ask myself why I'm still using a slow mechanical HDD!

    Seriously, why do you think it takes anyone 30 seconds to load/save anything? Even loading a multi-GB video clip from an NVMe drive would take just a couple of seconds. And if it doesn't, that's because the program is doing some processing on it - nothing that faster storage would help with.
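
    To put rough numbers on that, here's a quick Python sketch (the drive speeds are assumed typical figures, not measurements of any particular hardware):

    # Back-of-the-envelope load times for a multi-GB file.
    # Throughput figures below are illustrative assumptions.
    clip_size_gb = 5.0       # hypothetical multi-GB video clip
    nvme_gb_s = 3.5          # typical sequential read for a PCIe 3.0 x4 NVMe drive
    hdd_gb_s = 0.15          # ~150 MB/s mechanical HDD, for contrast
    print(f"NVMe: {clip_size_gb / nvme_gb_s:.1f} s")   # ~1.4 s
    print(f"HDD:  {clip_size_gb / hdd_gb_s:.1f} s")    # ~33 s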

    But, getting back to the 200-400 GB/sec: you know that's entirely for the benefit of the GPU, right? The CPU cores would be quite happy at 50 GB/sec. Maybe there's a marginal benefit in going to 100 GB/sec, but when Threadripper Pro came out, people looked at the gain from going 4-channel to 8-channel DDR4 on a 64-core CPU, and it was surprisingly small in most cases!
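
    For context, a quick Python sketch of the theoretical peak numbers involved (assuming DDR4-3200, the speed grade Threadripper Pro officially supports):

    # Peak DDR4 bandwidth: transfer rate (MT/s) x 8 bytes per transfer, per channel.
    per_channel_gb_s = 3200 * 8 / 1000                      # 25.6 GB/s per channel
    print(f"4-channel: {4 * per_channel_gb_s:.0f} GB/s")    # ~102 GB/s
    print(f"8-channel: {8 * per_channel_gb_s:.0f} GB/s")    # ~205 GB/s
    # Apple quotes 200 GB/s (M1 Pro) and 400 GB/s (M1 Max) for the whole SoC,
    # so the top end is well beyond what the CPU cores alone would need.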

    Also, the 64 GB is shared with the GPU, OS, other apps, etc. A single program can't realistically use much more than ~32-48 GB without swapping. They could probably go about 2-4x higher if they tacked on some external DDR and played a shell game, faulting pages in/out of the HBM, sort of like using it as an L4 cache.

    Originally posted by sdack View Post
    These new M1 Apples are certainly interesting and a hot topic, but I am not much impressed by them. After following 40 years of changes in computer architecture, this is nothing more than a modern SoC design to me. I am still more often looking towards Intel (and AMD) to see what technologies are coming next.
    It's basically a console-style SoC, with HBM and an even bigger GPU than the consoles have!

    What's impressive about it, as with any Apple SoC, are the CPU cores themselves. What's more, having this much compute power in a thin-and-light laptop form factor will enable laptops in a class entirely of their own.



  • ermo
    replied
    Originally posted by coder View Post
    x86 will never catch the best ARM cores on perf/W or perf/mm^2 (and thereby perf/$). I doubt even IPC, since most ARM cores target lower clocks, which enables a longer critical-path and therefore higher IPC (and further perf/W advantage). That x86 cores can stay competitive is mostly by virtue of higher power-budgets and higher clock targets.

    I'm quite certain that both Intel and AMD have post-x86 cores in development. Jim Keller's last project at AMD was the K12 -- a custom, in-house ARM core that I think they wisely chose not to bring to market, due to financial constraints and the ARM server market being too immature at the time.
    Jim Keller also had nice things to say about the AArch64/ARMv8 ISA, IIRC. According to "rumours on the internet", he was none too pleased by AMD scrapping his K12 design in favour of Zen, and this might have been a factor in him leaving.

    That said, I am guessing that AMD is watching the ARM space closely these days. If I'm not mistaken, they sold their erstwhile mobile division to Qualcomm back in the day? If nothing else, Adreno is an anagram of Radeon. I also just confirmed that AMD has licensed its Radeon IP to not just Qualcomm but also MediaTek and Samsung in recent years, all of whom have non-trivial ARM portfolios.

    For Intel, I wonder if they can afford not to? Their grip on x86_64 (sometimes referred to by its proper name, AMD64) these days seems mostly to do with having the financial resources to do R&D on new instruction set extensions (along with the software that takes advantage of said extensions) and with selling a massive number of chips to OEMs in the consumer, business and server markets.



  • tildearrow
    replied
    Originally posted by lucrus View Post

    So how much is it making you save on your electricity bills? Let's say 5 bucks a month? Hardly more than that. But a full M1 based Apple computer costs you some 700-800 bucks more than a Ryzen based one (optimistically speaking), allowing a return on investment in just... 12 years? By that time your shining M1 system will be programmatically obsoleted by Apple for sure.

    I don't think watts per dollar is a selling point for this kind of hardware. It's good for many other reasons, it's good because it's Apple (just in case you like status symbols), but it's not something to compare to any Ryzen out there when you take into account money. Apples to oranges (wow, it fits, Ryzen logo is orange and the M1 is ... well... Apple! )
    Currently (two machines, one is a dinosaur desktop turned into server and another is a more powerful server): ~300W = 7200Wh/day = 216kWh/month = $37.80/month
    M1: ~15W = 360Wh/day = 10.8kWh/month = $1.89/month

    I would save $35.91 per month...

    The minimum cost of a good server that matches the M1 (while using more power) is around $1200.
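
    Spelling out the arithmetic above as a small Python sketch (the ~$0.175/kWh rate is back-calculated from the dollar figures quoted and will obviously vary by region):

    # Monthly energy cost from average power draw.
    def monthly_cost(avg_watts, rate_per_kwh=0.175, days=30):
        kwh = avg_watts * 24 * days / 1000
        return kwh, kwh * rate_per_kwh

    for name, watts in [("servers", 300), ("M1", 15)]:
        kwh, cost = monthly_cost(watts)
        print(f"{name}: {kwh:.1f} kWh/month -> ${cost:.2f}/month")
    # servers: 216.0 kWh/month -> $37.80/month
    # M1: 10.8 kWh/month -> $1.89/month
    # At ~$35.91/month saved, even the $700-800 premium lucrus mentions pays back
    # in roughly 20-22 months, not 12 years.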



  • sdack
    replied
    Apple just seems to throw existing technologies together into single chips and call it a day, but does not innovate any actual new technologies. This they leave to other companies.

    So it is interesting to see that there is no word on persistent memory technologies in the design presentations, while Intel is pushing them and will no longer put its faith in DDR alone, but will support HBM as well.

    This just tells me that Intel understands the need for faster memory, but also that data needs to be stored somewhere, and that it knows how to avoid bottlenecks further down in the coming architectures. Or ask yourself: what is the point of 200 or 400 GB/s memory transfer rates when you only have 64 GB of RAM and need half a minute to load and save your work?

    These new M1 Apples are certainly interesting and a hot topic, but I am not much impressed by them. After following 40 years of changes in computer architecture, this is nothing more than a modern SoC design to me. I am still more often looking towards Intel (and AMD) to see what technologies are coming next.
    Last edited by sdack; 19 October 2021, 01:04 PM.



  • coder
    replied
    Originally posted by lucrus View Post
    Please, it's not that I don't understand all these (obvious) points, but I'm talking about something else: what's the point of comparing an M1 to a Ryzen 5700?
    If the code you run on them is the same, then it's a relevant point of comparison.

    Even when Apple's CPUs were limited to just their phones, it was still interesting to see the progression of their sophistication, if purely from a technical perspective.



  • coder
    replied
    Originally posted by sedsearch View Post
    Apple M1 related posts invite a long trail of retarded comments
    "Judge not, lest ye be judged."

    Originally posted by sedsearch View Post
    Why are people so massively surprised at M1/Max/Pro and why are they even comparing it to Intel/AMD/Arm CPUs, which by definition are different devices? For every configuration of RAM/CPU, Apple have to fabricate it from scratch. All three M1s are different in size. Makes for an even greater pile of electronic waste.

    One could debate whether building CPU+GPU+RAM+encoder/decoder on a single chip is the right computing device, but it is not the same device as a lone CPU. AMD has been building better integrated GPUs as APUs, which already perform quite well. Perhaps for smartphones such an architecture will shortly be seen in the market from Qualcomm/Samsung/Huawei. But such a device makes no sense for anything configurable.
    What Apple built is an APU. It's functionally equivalent to mainstream Intel CPUs, AMD APUs, and console chips. If you think it's wrong to put the GPU & video blocks on the same die as the CPU cores, then you should level the same complaints at all of them.

    And I don't know about the latest AMD APUs, but Intel typically uses at least 2 different die sizes for its mainstream CPU product range. For instance, in Comet Lake, they had a 10-core die and a 6-core die.

    As for DRAM, that's not on the same die! It's merely in-package. DRAM is way too big to put on die. That's why they literally have to stack multiple dies (usually 8) to fit it in package. Did it ever occur to you that HBM is using basically the same DRAM cell design as you see on modern DDR4 DIMMs? When you put the equivalent memory in package that otherwise occupies a couple of DIMMs, it doesn't magically shrink. That's where the stacking comes in.
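
    To put rough numbers on the stacking point, a short Python sketch (using typical DRAM die densities as illustrative figures - these are assumptions, not Apple's actual parts):

    # Rough capacity math for stacked in-package DRAM.
    die_gbit = 16                                 # a common DRAM die density (16 Gbit = 2 GB)
    dies_per_stack = 8                            # the "usually 8" stacking mentioned above
    stack_gb = (die_gbit / 8) * dies_per_stack    # 16 GB per stack
    print(f"{stack_gb:.0f} GB per stack -> {64 / stack_gb:.0f} stacks for 64 GB")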

    Originally posted by sedsearch View Post
    A logical extension of such a device is building everything on a single chip - including storage, WiFi controller, Bluetooth controller, audio/video stuff, DSP, and all that can be made on a single chip (I don't know the details of all the electronics needed in a motherboard). It will be a truly single-chip computer,
    Ever heard the term "SoC"? It means System-on-(a)-Chip. This is exactly what phones, tablets and most laptops use.

    Originally posted by sedsearch View Post
    with storage read/write at probably 100 Gb/s.
    With storage, you run into another density problem. Also, the benefit of putting it in-package isn't there: NVMe already has ample performance headroom, so moving it in-package gains you nothing.

    Plus, NAND tends to fail faster than CPUs, and people will want to upgrade it without replacing the entire device. So, laptops would always want to have it separate.

    Finally, NAND flash doesn't like high temperatures. NVMe drives will throttle if you get them too hot. That's why you see some performance models with big heatsinks.

