Ubuntu Developers Get it Up And Running On Apple's M1 With Early Parallels Desktop Build


  • #21
    Originally posted by duby229 View Post
    I'm wondering if AMD dropping Arm was maybe a bad move after all? I mean don't get me wrong, Zen has been an astounding success, but Arm seems like the wave of the future.
    AMD really had no choice. ARM as an architecture was not really ready to take over x86 in the lucrative world of big-dollar server, HPC and supercomputing contracts that make Intel and Nvidia the billions they do. Lisa Su was quite literally tasked with saving AMD from either bankruptcy or being bought out by some other firm, stripped of its IP and then broken up.

    And she has succeeded beyond everyone's wildest imagination, by going full tilt, all in, on x86.

    Now everything is turning on a dime. I have said this thousands of times and NO ONE has been able to prove me wrong: it is MUCH easier to add powerful features to the ARM ISA and architecture while still keeping power draw at acceptable levels for consumer computing than it is for x86 to shed features, and with them compute capability, to meet the stringent needs of mobile and portable personal computing.

    The same is true for the hyperscalers and the cloud guys. They are getting eaten alive on power bills for all their big-iron x86 racks chock-full of Intel Xeons, AMD EPYCs and Nvidia GPU boxes.

    There is a reason the #1 supercomputer on the Top500 list in terms of compute power is now ARM based, and a reason that very same machine is ALSO #1 on the Green500 list.

    x86 is now legacy. Intel and AMD will never match the raw performance gains, nor the performance-per-watt gains, that the ARM world can deliver. Nor can Intel or AMD match the heterogeneity of ARM's SoCs. AMD tried in the mid 2000s and gave up after Bristol Ridge in 2016. Intel has NEVER had one. Nvidia has done it... by... you guessed it... becoming an ARM licensee and now the prospective owner of ARM.

    The Age of ARM is here: from the smallest embedded IoT device and wearable, to every automobile, industrial component, smartphone and smart device... and soon every PC, laptop and Chromebook, all the way up to servers and supercomputers.

    Nvidia did what AMD should have done but couldn't, because AMD was nearly bankrupt at the time: buy ARM. It's too late now. Intel is now a GPU manufacturer like AMD. Nvidia is now a CPU manufacturer like Intel and AMD. AMD is on a tear with EPYC and Zen, but just as with the early Athlons and Opterons during Intel's DISASTROUS Pentium 4 and Itanium days, AMD's days are numbered. Yes, Intel is out of the server market for at least two years without a viable product because they STILL don't know how to make a 10nm CPU at scale, much less 7nm or below.

    But an Intel with its back against the wall, facing existential threats from ARM, Nvidia AND AMD, is still a VERY DANGEROUS Intel. With their move to a big.LITTLE-style heterogeneous architecture like ARM's, a decent GPU architecture at last after 50 years of being only a CPU company, and a very competent compute API in oneAPI, AMD probably has about three years left to keep the pedal to the metal before Intel pulls its head out of the sand and hits back. Then AMD falls back to just making CPUs that are simply cheaper than Intel's, without the extreme performance-per-dollar advantage it enjoys today.

    By then AMD will be mightily squeezed between a resurgent Intel on the x86 side and Nvidia, the leader of the Age of ARM, on the other. By 2025 the consumer world will be firmly ensconced in ARM, particularly on the budget end that AMD used to rule, while Intel keeps using its marketing-kickback scheme with manufacturers to defray costs and make sure most non-ARM consumer gear has the "Intel Inside" sticker firmly super-glued to the case or laptop.

    Which means AMD is going to be in a world of hurt by the end of the decade.

    Comment


    • #22
      Originally posted by bug77 View Post

      I'd be really, really surprised if Apple doesn't pull a stunt and make running anything other than macOS impossible. At some point.
      Apple has stated repeatedly that this is not their goal, and has given instructions for various levels of “foreign OS” support and booting.
      Believe what you want, but at least acknowledge that it is based on a fantasy version of Apple rather than on actual facts.

      I have a more technical question: if Metal access is provided to the hypervisor environment, why can't whatever GUI acceleration Linux wants be built on that? Why is it considered essential to write to the real GPU rather than to a virtual GPU provided via Metal?
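
      If Metal really is reachable from the host side, one plausible layering (purely my own sketch, not anything Apple or Parallels has announced) is to forward the guest's 3D work to a host process that replays it through a Vulkan-to-Metal translation layer such as MoltenVK. The probe below is ordinary Vulkan C; on macOS it only runs because MoltenVK maps every call onto Metal underneath, which is exactly the kind of "virtual GPU on top of Metal" I'm asking about. (Build against the Vulkan SDK, e.g. clang probe.c -lvulkan; newer loaders may also want the VK_KHR_portability_enumeration extension enabled to list the MoltenVK device.)

      // Enumerate the GPU(s) that a Metal-backed Vulkan layer exposes on macOS.
      // Everything below is translated to Metal by MoltenVK at runtime.
      #include <stdio.h>
      #include <vulkan/vulkan.h>

      int main(void) {
          VkApplicationInfo app = {
              .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
              .pApplicationName = "virtual-gpu-probe",
              .apiVersion = VK_API_VERSION_1_1,
          };
          VkInstanceCreateInfo ici = {
              .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
              .pApplicationInfo = &app,
          };
          VkInstance inst;
          if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) {
              fprintf(stderr, "no Vulkan (MoltenVK) driver available\n");
              return 1;
          }

          uint32_t count = 0;
          vkEnumeratePhysicalDevices(inst, &count, NULL);     // how many GPUs?
          if (count > 8) count = 8;
          VkPhysicalDevice devs[8];
          vkEnumeratePhysicalDevices(inst, &count, devs);

          for (uint32_t i = 0; i < count; i++) {
              VkPhysicalDeviceProperties props;
              vkGetPhysicalDeviceProperties(devs[i], &props);
              printf("GPU %u: %s\n", i, props.deviceName);    // e.g. the Apple GPU, via Metal
          }

          vkDestroyInstance(inst, NULL);
          return 0;
      }

      Whether forwarding a guest's command stream into something like this would be fast enough for a desktop is the real question, but nothing about Metal forbids layering another GPU API on top of it.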

      Comment


      • #23
        Originally posted by coder View Post
        I realize that people use Apple as the gold standard for phone performance, but is anyone really competing with them? I get the impression that people buy iPhones more for cultural reasons than anything else. As long as Android phones can deliver a good experience, does it really matter if an iPhone has a bit more raw performance?

        Apple has always led on performance and I don't expect anyone will catch them in the foreseeable future, but it hasn't really seemed to matter or hurt Android sales that much.
        And sometimes it is not just about raw performance. I will not get an iPhone - nothing personal against it - because in both my leisure and my work I am notoriously hard on phones, and I do not want to deal with the added irritation and bulk of a protective case. I buy military/construction-spec phones that can survive underwater and three-meter drops. In addition, I only use phones for calls, video chats and authentication apps. I do not need high performance, I need ruggedness.

        As far as the cultural reasons go - there may be some truth to that - IIRC iPhone text messaging shows messages in a different color when they come from anything other than an iPhone. Maybe that has changed, though.
        GOD is REAL unless declared as an INTEGER.

        Comment


        • #24
          Originally posted by Jumbotron View Post

          Everyone competes with and copies Apple because they have set the design and usability standards for personal computing for over 40 years. Apple quite literally has been and still is Microsoft's R&D center. Microsoft is the most imitative American computing company in history: they have NEVER innovated, only copied Apple. And when it wasn't Apple they were copying, they were simply buying up smaller, more innovative companies just to extinguish their tech.

          As far as Android goes, the only reason it is so successful is that Android is the Microsoft of mobile OSes, in the sense that it can go on any cheap piece of crap hardware and still make a buck. Even more so since Google gives Android away for free. But let's be clear: as far as Android phones go worldwide, the vast majority of them are equipped with SoCs that are roughly the performance equivalent of an iPhone 6, perhaps even as far back as the 5 series. Sure, for the vast majority of the world's Android users that's just fine for what they use their phone for. Only geeks are concerned with paper benchmarks and the "speeds and feeds", as the Brits say.

          But Apple leads the way. And the industry MUST follow. That's called competition. There is MORE than a dose of pride involved.
          It goes both ways! Many of the most interesting ideas in the iPhone (e.g. software distribution through an App Store, or the early security work) were ideas from Microsoft Research that were never productized. I used to have a reference for this but I can no longer find it.
          That's a fairly extreme example (Xerox PARC being the other, of course) of how being FIRST!!! is overrated, while execution is dramatically underrated....

          Comment


          • #25
            Originally posted by coder View Post
            ATI bought BitBoys and AMD later sold it to Qualcomm. That's where the Adreno GPUs in Snapdragon SoCs came from, and fun fact: Adreno is an anagram of Radeon!

            Oh, and the Chinese now own & control Imagination: https://semiaccurate.com/2020/04/04/...-technologies/

            I'll bet Apple is sad they didn't just buy them. I mean, they were pocket change for a company with a couple $Trillion market cap!
            I agree with everything you said except the part about Apple buying Imagination. There was absolutely no need. Apple renegotiated a lucrative IP licensing deal with Imagination while at the same time poaching its engineers, back when it looked like the company was going under before the Chinese bought it. Apple doesn't really need Imagination's IP anymore: as with ARM, they start with a basic licensed core and customize it so heavily that it hardly looks like anything Imagination engineered in the first place.

            And soon, when the Qualcomm contract for cellular modems ends, Apple will slip in its own home-brewed modems, built on the unit it bought from Intel after Intel finally admitted that, just as with making 10nm CPUs, it could not make a competent 4G or 5G modem, sold the business to Apple for a cool billion dollars, and promptly left the modem market in shame.

            When that happens, Apple will kick Qualcomm to the curb and have a complete Apple-only silicon stack: CPU, GPU, NPU, DSP, image processor and modem. In other words, Apple WILL be Qualcomm, but just for itself.

            Comment


            • #26
              By late next year the woes of the x86 consumer world will get ever deeper, and Microsoft will have to double or triple whatever effort it is already putting into its in-house ARM chips for Microsoft's servers and its own Surface line.



              " For future versions of the MacBook Pro and iMac, Apple is working on a chip with as many as 16 high-performance cores and four high-efficiency cores. In contrast, the current M1 possesses the same number of efficiency cores but only four performance cores. Given that the M1 offers a huge step up over its Intel predecessors, this upcoming chip could be even more impressive.

              It does not stop there. According to Bloomberg, the chip in the next Mac Pro could contain as many as 32 high-performance cores, in addition to its high-efficiency cores. Such a level of power would be unprecedented in a Mac. The current Mac Pro can be configured to come with a 28-core Intel Xeon W processor, but even that would likely come nowhere near the chip Apple has in its workshop.

              Unlike the standard CPU and GPU combination found in most computers, where the processor and graphics chip are kept separate, Apple’s M1 chip contains both in a single package known as a system on a chip (SoC). Bloomberg reports that Apple is also working to ramp up graphics performance in future Apple-designed SoCs. Where the M1 comes in variants with either seven- or eight-core graphics processors, Apple’s road map contains both 16-core and 32-core graphics processors. Finally, in late 2021 or early 2022, the company is hoping to release 64-core and 128-core graphics chips in its high-end machines, marking a phenomenal leap in performance. "

              https://www.digitaltrends.com/comput...p-future-macs/


              Let that sink in, all you folks who were poo-pooing the M1 as "just a phone SoC." If Apple's first attempt at a laptop SoC can outperform almost every Intel or AMD chip from inside a fanless case, what do you think Apple could do with fewer thermal and packaging constraints in an iMac, much less a huge honking desktop case with plenty of ventilation?

              What's more, over at SemiAccurate, Charlie Demerjian left a cryptic post about what he has learned about these upcoming Apple SoCs. The rest of his observations are behind a paywall, but he did say that what Apple is doing with these new SoCs is something he only envisioned happening a few years down the road. Suffice to say, my speculation is that there will be a LOT of die stacking of EVERYTHING on these new SoCs, including embedding the RAM directly on the die instead of on the package next to it, and, on top of the massive amounts of L1, L2, L3 and system-level cache Apple already employs on the M1 (the most in the entire industry, ARM or x86), perhaps extra SRAM on the die as well.

              Or perhaps we see the first consumer use of MRAM. If that is the case, not only could we see the first 20-hour-plus laptop, but since MRAM is only slightly slower than SRAM, we could also see the fastest consumer computing products in the history of the industry, and by a 5-10 year margin at that.

              That's just my speculation on top of Charlie's. Here's his sort of cryptic take....

              " We all know about the Apple M1 CPU, but what can we expect from next year’s M2 CPU? SemiAccurate has talked to a bunch of insiders and found out some interesting details about the next fruit-themed laptop part.

              The current M1 is a slightly tarted-up A14, or about what you would expect it to be. This isn’t to say it is a bad chip, far from it; it is a very good CPU, but it doesn’t break any ground. The next Apple laptop CPU, likely called the M2, and its A15 ‘device’ counterpart are said to be the ‘real deal’. When questioned on that comment, SemiAccurate’s sources said that it would significantly outperform Intel’s current high-end Skylake parts and do so at a small fraction of the power.

              How this would be achieved is the interesting part, especially considering the disparate requirements of a phone and a full laptop. For starters, the A15/M2 will be more oriented toward the laptop side of the equation rather than just a slightly tarted-up version of the current phone chip, but it won’t deviate much from the current architectural paradigms. It will also be based on a TSMC 5nm variant, basically an updated PDK, and packaged using InFO.

              All of this is pretty well known, some blindingly obvious, some only rumored, but all of it confirmed by our sources. Where things get interesting is how Apple is using these technologies to step out of the proverbial box and do something SemiAccurate didn’t expect to see for another few years. "

              https://semiaccurate.com/2020/12/07/...upcoming-cpus/
              Last edited by Jumbotron; 19 December 2020, 03:31 AM.

              Comment


              • #27
                Originally posted by name99 View Post

                It goes both ways! Many of the most interesting ideas in the iPhone (e.g. software distribution through an App Store, or the early security work) were ideas from Microsoft Research that were never productized. I used to have a reference for this but I can no longer find it.
                That's a fairly extreme example (Xerox PARC being the other, of course) of how being FIRST!!! is overrated, while execution is dramatically underrated....
                Well, you still made my point about Apple: even when they DON'T invent something, they are still FIRST and LEAD the way in IMPLEMENTATION.

                Comment


                • #28
                  Should Parallels be any better than Wine, VirtualBox, QEMU, etc.?
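
                  For the VM half of that list, probably not fundamentally. Wine isn't a VM at all, it's an API compatibility layer, while on Apple Silicon Parallels, QEMU (whose HVF acceleration for arm64 was still being upstreamed at the time) and any eventual VirtualBox port all have to sit on Apple's Hypervisor.framework, since third-party kernel extensions are effectively off the table on the M1. So the differences come down to device models, integration and polish rather than the underlying virtualization. A minimal sketch of that common foundation, assuming macOS 11 on arm64 and a binary signed with the com.apple.security.hypervisor entitlement (treat the details as illustrative; Intel Macs use a flags argument instead of a config object):

                  // Create and destroy an (empty) VM through Hypervisor.framework,
                  // the one kernel interface every VM app on Apple Silicon must use.
                  // Build: clang vmprobe.c -framework Hypervisor -o vmprobe
                  #include <stdio.h>
                  #include <Hypervisor/Hypervisor.h>

                  int main(void) {
                      // arm64 variant: hv_vm_create takes an optional config; NULL = defaults.
                      hv_return_t ret = hv_vm_create(NULL);
                      if (ret != HV_SUCCESS) {
                          fprintf(stderr, "hv_vm_create failed: 0x%x (missing entitlement?)\n",
                                  (unsigned)ret);
                          return 1;
                      }
                      printf("VM created; guest RAM mapping and vCPU setup would follow here.\n");
                      hv_vm_destroy();   // tear the empty VM back down
                      return 0;
                  }

                  Everything a product like Parallels adds (paravirtual devices, shared folders, installer polish) lives on top of that same foundation.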

                  Comment


                  • #29
                    Originally posted by Jumbotron View Post

                    AMD really had no choice. ARM as an architecture was not really ready to take over x86 in the lucrative world of big-dollar server, HPC and supercomputing contracts that make Intel and Nvidia the billions they do. [...]

                    Which means AMD is going to be in a world of hurt by the end of the decade.
                    If Lisa Su is smart, and we have seen that she just might be, she may be eyeing an exit strategy from x86. And if you look at the available options, guess what? There is something far more compelling than ARM: RISC-V! Of course there is no buyout or anything to be had, but AMD can essentially start designing its own RISC-V cores, just like Apple is doing with ARM. And from the looks of it, ARM has peaked as a company, while RISC-V is going to take about five years to become a competitive architecture. Just enough time for a design to be made... with no licensing fees...

                    Comment


                    • #30
                      Apple's move to create ARM-based chips on its own was really smart. The same can be said for NVIDIA's deal to buy ARM. x86 will not be dead soon; there is too much software out there for it, and the emulation layer is still not in optimal shape (at least on Windows 10). Linux systems do not need this emulation layer for basic use, if you are fine with native apps. I don't think there is much room for major developments besides x86 and ARM at the moment, at least not for the main players. But other countries that want to become more independent might invest in RISC-V; it might just take a bit longer to mature. Other ARM developers seem to be challenged by Apple's M1 chip, and I hope alternative chips provide similar performance soon.
                      Last edited by Kano; 19 December 2020, 06:47 AM.

                      Comment
