
Apple Confirms Their Future Desktops + Laptops Will Use In-House CPUs


  • #81
    So Mac gaming was just totally screwed again


    • #82
      Originally posted by shmerl View Post

      Nope, they have no future on the desktop. They want to dummify everything to become like their mobile koolaid. Good riddance I'd say. There is no reason to use them for any desktop user, which is good for us - more Apple refugees will come to Linux.
      If only you were right, but you'll see you won't be. I really wish you were, though.
      Take the average Joe. Does your average Joe care about any of this? Not at all. They might even consider it a benefit: "Look, my Mac can play iOS games and run Facebook/Instagram now!". They don't even know or care what Linux is or what its benefits are. And developers will have to support Apple's move if they want market share, whether they like it or not.
      There will be refugees who move to other OSes, but do you really think they will come to Linux? Not a chance. Take Windows XP and 7: how many people migrated to Linux, and how many migrated to Windows 10? Even if some refugees come to us, our market share won't go past 3% until we get our shit together with GUIs, graphics and productivity, because that's where the money is.


      • #83
        Originally posted by FireBurn View Post
        So Mac gaming was just totally screwed again
        No. Didn't you hear? It will support iOS apps.

        So you will have such classics as Angry Birds, Candy Crush and the (almost criminally) monetised Dungeon Keeper.

        This might provide the push needed to get Wine on aarch64 to support x86 via some qemu-user-static translation technology. There just wasn't quite enough incentive for the Raspberry Pi. However, it runs faster than you would think, because the kernel and display stack don't need emulating, just the program instructions.
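A minimal sketch of what that setup check could look like on a Linux/aarch64 box, assuming the qemu-user-static + binfmt_misc route the post alludes to (the `x86_emulation_ready` helper and the parameterised directory are illustrative, chosen here so the logic can be exercised without root; the real registration lives under /proc/sys/fs/binfmt_misc):

```shell
#!/bin/sh
# Hypothetical sketch: detect whether binfmt_misc is set up to hand x86_64
# ELF binaries to a qemu-x86_64 interpreter (as qemu-user-static does).
# Only the program instructions are emulated; syscalls hit the native kernel.
x86_emulation_ready() {
    # $1: binfmt_misc directory (normally /proc/sys/fs/binfmt_misc);
    #     taken as a parameter purely so this can be tested without root.
    reg="$1/qemu-x86_64"
    [ -f "$reg" ] && grep -q '^enabled' "$reg"
}

# Demo against a fake binfmt_misc tree (a real registration file starts
# with an "enabled" line followed by the interpreter path):
demo=$(mktemp -d)
printf 'enabled\ninterpreter /usr/bin/qemu-x86_64-static\n' > "$demo/qemu-x86_64"
x86_emulation_ready "$demo" && echo "x86_64 emulation registered"
```

On a real system you would call it as `x86_emulation_ready /proc/sys/fs/binfmt_misc` after installing the distribution's qemu-user-static and binfmt packages (names vary by distro).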
        Last edited by kpedersen; 06-23-2020, 05:32 AM.


        • #84
          Originally posted by Snaipersky View Post
          To pile on about concerns with ARM, I've yet to see an ARM chip hit 3GHz. They're more than happy to keep slapping more cores in a chip, but there's little attention paid to single core performance.
          @ 10:32am ~ > sudo cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
          @ 10:32am ~ > sudo cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq
          @ 10:32am ~ > sudo cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
          @ 10:33am ~ > ls /sys/devices/system/cpu/
          total 0
          0 cpu0/   0 cpu8/    0 cpu16/   0 cpu24/   0 cpufreq/      0 possible
          0 cpu1/   0 cpu9/    0 cpu17/   0 cpu25/   0 cpuidle/      0 power/
          0 cpu2/   0 cpu10/   0 cpu18/   0 cpu26/   0 hotplug/      0 present
          0 cpu3/   0 cpu11/   0 cpu19/   0 cpu27/   0 isolated      0 uevent
          0 cpu4/   0 cpu12/   0 cpu20/   0 cpu28/   0 kernel_max    0 vulnerabilities/
          0 cpu5/   0 cpu13/   0 cpu21/   0 cpu29/   0 modalias
          0 cpu6/   0 cpu14/   0 cpu22/   0 cpu30/   0 offline
          0 cpu7/   0 cpu15/   0 cpu23/   0 cpu31/   0 online
          @ 10:33am ~ > cat /proc/cpuinfo | grep Feat | head -1
          Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
          And every core is the same...
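For reference when reading those files: the cpufreq `cpuinfo_*_freq` entries report kHz, so a tiny helper makes the values human-readable (a trivial sketch; the 1500000 sample below is illustrative, not output from the machine above):

```shell
#!/bin/sh
# Sketch: cpufreq sysfs files such as cpuinfo_max_freq report kHz.
# Render a kHz figure as GHz with one decimal place.
khz_to_ghz() {
    awk -v khz="$1" 'BEGIN { printf "%.1f GHz\n", khz / 1000000 }'
}

# Illustrative value only (1.5 GHz, a common Cortex-A72 max on the RPi 4):
khz_to_ghz 1500000   # prints "1.5 GHz"
```

On a live system you would feed it the sysfs value directly, e.g. `khz_to_ghz "$(cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq)"`.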


          • #85
            Originally posted by andre30correia View Post
            only in benchmarks, in real life they suck. A RISC cpu will never work like a CISC one.
            You are clueless about CPUs and have no idea what you are talking about. Look not only at RISC from 20 years ago, SGI MIPS64 and Sun UltraSPARC, but also at modern x86 chips, which are all RISC internally anyway. The x86 CISC instruction set is pure insane garbage.


            • #86
              Originally posted by vladpetric View Post
              I'm also comparing RPi4 with a 5-year old Haswell, which is built on 22nm FWIW.
              This is not apples to apples. The RPi 4's A72 cores are optimised for silicon area and power.

              The four A72 cores of the Raspberry Pi 4 take up less silicon area than a single Haswell core; in fact, the complete Raspberry Pi 4 SoC can fit inside the area of one Haswell core. For an equal silicon spend, a 2-core Haswell would have to go head to head with an 8-core A72 plus a 4-core A53 on the side. So how well would a single Haswell core do against the complete RPi 4 A72 CPU in a multi-threaded workload?

              A72 cores are not optimised for fast single-threaded performance; they are optimised for high-density, low-power, multi-threaded performance.

              The just-released Cortex-X1 is more optimised for single-threaded performance than the just-released A78. The A7x line is all optimised for silicon area and power usage.

              There is also joint work between Marvell and Arm that has not yet been released as a stock design. And yes, 384 threads from a 96-core CPU is a correct count: that is hyper-threading on overdrive.

              A ThunderX3-based CPU would make quite a nice workstation/desktop; they did a ThunderX2 in the past.

              There are quite a few Arm server chip designs that could be quite useful on the desktop, and Arm is working on releasing stock versions of these designs. At equal silicon area, x86 IPC is always worse than the Arm chip's. It is all too easy to compare a larger x86 chip against a smaller Arm chip and draw the wrong conclusion that Arm IPC is horrible; there are Arm chips whose cores match an x86 core in silicon area, and those kick x86 into the ground on IPC.


              • #87
                Originally posted by pal666 View Post
                it was jobs who gave apple a leg up with reality distortion field, since their chips are not highest performing and not most power efficient arms
                all top supercomputers are gpu based, they have tiny amount of cpus for servicing gpus
                That is not true at the moment. Supercomputer Fugaku, the one that just took number 1 on the TOP500, has not had its GPUs plugged in yet; that run was a shakedown benchmark. With the Fujitsu A64FX en masse, it beat all the top supercomputers in performance per watt, and yes, that includes the GPU-based ones with a tiny number of CPUs servicing the GPUs.

                What makes the Fujitsu A64FX taking number one interesting is that it kicked the ass of everything following the GPU-based model with a tiny number of CPUs.

                With Trump being a pain in the butt on exports, a lot of parties wanting supercomputers are reconsidering whether they need GPUs at all, as they may run into the likes of Trump blocking the supply of GPU units.


                • #88
                  Originally posted by kravemir View Post

                  True, the few W difference in idle consumption ain't gonna save it,... Anyway, it's interesting to have low idle consumption for long on-battery time. However, ARM doesn't have any BIOS/EFI, so it's lock-in product.
                  My 32-core Arm workstation, which runs Ubuntu and boots via UEFI, complete with a whole BIOS-style "hit Del to enter" menu, says otherwise... My previous 64-core (256-thread) Arm workstation also did this (though the "hit Del to enter" prompt was on serial only; the new one shows it on the video card too...).
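For anyone who wants to verify the same thing on their own Arm (or x86) Linux box: a kernel booted via UEFI exposes /sys/firmware/efi. A small sketch follows; the `booted_via_uefi` helper and its sysfs-root parameter are illustrative, added only so the check can be exercised against a fake tree:

```shell
#!/bin/sh
# Sketch: a Linux kernel booted via UEFI exposes /sys/firmware/efi,
# on Arm exactly as on x86.
booted_via_uefi() {
    # $1: sysfs root (defaults to /sys; a parameter only for testability)
    [ -d "${1:-/sys}/firmware/efi" ] && echo yes || echo no
}

# Demo against a fake sysfs tree mimicking a UEFI boot:
fake=$(mktemp -d)
mkdir -p "$fake/firmware/efi"
booted_via_uefi "$fake"   # prints "yes"
```

On a real machine, plain `booted_via_uefi` (no argument) answers for the running system.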


                  • #89
                    Originally posted by starshipeleven View Post
                    To be fair, that's where most of their revenue is. It makes sense to migrate what is possible to a "mobile-like" laptop and dump the rest in the river.
                    But to be fair, they have their revenue where they want it to be: a locked, services/subscription-based business called iOS. Now they are dropping the Mac (as a computer) not because it is irrelevant in revenue terms, but because they want it to have the same business model as iOS.


                    • #90
                      So, I know people are going to go ballistic on me again, but I thought I might offer a long-time Mac user's perspective. Just to recap my situation: I work in academia, frequently use HPC systems for my work, and develop my own code to do so. Why I am currently using a Mac is quite simple: I can develop my code on my Mac without any issues. It is easy to get all the compilers, libraries and build systems to work that I will later use on HPC machines. At the same time, it offers the "proprietary stuff" which I simply need for work, be it MS Office, Adobe apps and so on. And I know this is going to be unpopular, but I like the hardware and the UI.

                      That being said, I am uncertain about the future of the Mac. In terms of Apple-made CPUs, I have relatively few worries. Their iPad Pro processors seem quite capable. I remember reading comparisons to mobile i7 CPUs a few years ago and those were close (but only based on Geekbench). Also, I really do not think that recompiling for ARM should pose a big problem for actively developed software. And then there is the binary translation, which apparently runs at install time, implying no runtime overhead. How well this translation will work remains to be seen.

                      What worries me a lot more than the switch to ARM CPUs is the direction of macOS. I can see benefits, of course, like the potential convergence of phone and desktop uses. However, my guess is that OpenGL will finally be removed on these new ARM devices (as in: no driver, not even a bad one). That would hinder my work, since I rely on ParaView and, to my knowledge, there is no Vulkan-on-Metal or Metal version yet. And even though I personally do not use it, the OpenCL situation is similar. Another cornerstone of compatibility for scientific software is toolkit support. With the focus on Catalyst and the redesign, it remains to be seen whether that support will remain intact. From my point of view, those are the real issues.