Intel's Abandoned "Many Integrated Core" Architecture Being Removed With Linux 5.10

  • Intel's Abandoned "Many Integrated Core" Architecture Being Removed With Linux 5.10

    Phoronix: Intel's Abandoned "Many Integrated Core" Architecture Being Removed With Linux 5.10

    While Linux 5.10-rc2 is coming later today and a week past the merge window, a notable late pull request sent in this morning by Greg Kroah-Hartman is removing the Intel MIC (Many Integrated Core) architecture drivers, a.k.a. Xeon Phi...

    http://www.phoronix.com/scan.php?pag...ping-Intel-MIC

  • #2
    LOL at "misconception" number 5 on this page:
    https://www.pugetsystems.com/labs/hp...nceptions-508/

    I remember buying one of these Phi turds back in 2014. Intel made these so hard to use. It was like developing for CUDA, except more difficult to program for, less useful, and with so-so performance. They talked a big talk about it being x86 cores under the hood, but ultimately that didn't matter because you could only use them with Intel's proprietary compiler. A waste of time; I ended up selling my fire-sale $189 Phi card to someone on eBay for around $400, LOL. Good riddance.
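
    For context, this is roughly what the Phi "offload" model looked like: a hedged sketch using standard OpenMP 4.0 `target` directives, which Intel's compiler could map onto the MIC card. On a compiler without offload support the pragmas are simply ignored and the loop runs on the host:

    ```c
    #include <stdio.h>

    #define N 1000

    int main(void) {
        double a[N], sum = 0.0;
        for (int i = 0; i < N; i++)
            a[i] = (double)i;

        /* On a Phi system this region would be shipped to the card;
           the data movement over PCIe was a big part of why real
           workloads ran slower than the marketing suggested. */
        #pragma omp target map(to: a) map(tofrom: sum)
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("%.0f\n", sum); /* 0+1+...+999 = 499500 */
        return 0;
    }
    ```

    In practice Intel also pushed their own non-standard offload pragmas, which locked you into their compiler even harder.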
    Last edited by torsionbar28; 01 November 2020, 10:21 AM.

    Comment


    • #3
      At this point, why bother with a 1 TFLOP card and 8 GB of RAM?

      You'd have better luck with a GPU and CUDA/OpenCL, all supported out of the box.

      Comment


      • #4
        So you have to remember, Larrabee/Knights Landing/Xeon Phi was Intel's desperate attempt to have a narrative to market to the HPC segment, in the face of Nvidia successfully selling the notion that GPUs could actually compute more than game graphics and that you didn't really need all those very expensive and hard-to-program FPGAs. Intel was also panicking because it saw in AMD's GCN Radeon cards an upcoming credible contender to Nvidia, which would do two things to Intel that it could not abide.

        1: Intel would start getting squeezed out of a VERY lucrative market.
        2: As it got squeezed out of that market, profit margins would also take a big hit in the price war between Nvidia and AMD.

        So the suits at Intel looked around and exclaimed...."BLOODY HELL!! WE DON'T HAVE A GPU TO SPEAK OF!! WHY IS THAT??"

        < answer from engineers >....because for 50 years you've told us we're a CPU company and you ordered us to make CPUs

        < SUITS >....ahem....right. WELL....WE HAVE LOTS OF CASH. LET'S GO POACH HALF OF AMD'S GPU DIVISION, PRONTO!!

        And the rest is history, as they say.

        The Xeon Phi is an interesting bit of kit. It's basically dozens of simple x86 cores lashed together on a card (Pentium-derived in Knights Corner, Atom-derived in Knights Landing). Think of the cores in a GPU: thousands of units now. Intel was trying to take low-powered Atom-class CPUs and replicate that as a CPU-derived compute card, in a way similar to GPUs. Because....well....Intel and GPUs just weren't a thing. They tried to sell the concept that....HEY, IT'S THE SAME OL' INTEL CPUS YOU KNOW AND LOVE, AND WE HAVE ALL THE BEST CPU TOOLS....YADA YADA YADA.

        But it wasn't like programming for all those iSomethingmeaningless general-purpose CPUs. China's Tianhe-2 topped the Top500 for a while using Xeon Phis, but once people got a hold of them they realized....."UMMMM, YEAH...WE'D RATHER HAVE NVIDIA AND CUDA, THANK YOU VERY MUCH."

        It also tickles me to no end to find out now that all the data-fobbing shenanigans Intel had to pull off to get data into and around inside that thing led to security nightmares. HMMMmmmm....where have we heard that before? ** COUGH COUGH ** <Meltdown, Spectre> ** COUGH COUGH **



        Comment


        • #5
          Why can't we have a 1000 core RISC-V processor?

          Why is Intel so late with having a mix of faster and slower cores when ARM introduced the big.LITTLE architecture 9 years ago?

          Comment


          • #6
            Originally posted by torsionbar28 View Post
            LOL at "misconception" number 5 on this page:
            https://www.pugetsystems.com/labs/hp...nceptions-508/

            I remember buying one of these Phi turds back in 2014. Intel made these so hard to use. It was like developing for CUDA, except more difficult to program for, less useful, and with so-so performance. They talked a big talk about it being x86 cores under the hood, but ultimately that didn't matter because you could only use them with Intel's proprietary compiler. A waste of time,
            Thanks for sharing. I had wondered about that.

            Comment


            • #7
              Originally posted by Jumbotron View Post
              So you have to remember, Larrabee/Knights Landing/Xeon Phi was Intel's desperate attempt to have a narrative to market to the HPC segment in the face of Nvidia successfully selling the notion that GPUs could actually compute more than game graphics
              I don't disagree with the broader narrative of your post, but a former employee of Intel's graphics group told me they were actually very close to basing Phi on a real GPU architecture!

              They apparently held several rounds of competitions between the CPU group and the group building their iGPUs, in order to decide which to go with. At a CPU company, the CPU team had an undeniable advantage, and I heard mention of several Dilbert-esque moments in the competition. I wonder just how long it took for them to start regretting the final decision.

              BTW, don't forget that Intel's iGPUs existed as chipset graphics, even before they started getting integrated in the CPUs!

              Comment


              • #8
                Originally posted by uid313 View Post
                Why can't we have a 1000 core RISC-V processor?
                I'm sure I heard of someone working on a 4096(?) core RISC-V for AI or HPC or something. I'm not sure that makes a lot of sense - especially for AI - but that's a separate discussion.

                Originally posted by uid313 View Post
                Why is Intel so late with having a mix of faster and slower cores when AMD had the big.LITTLE architecture 9 years ago?
                Probably because they are only recently starting to face a credible threat from ARM in the ultralight laptop market, where weight and battery life are supremely important. As you probably know, Lakefield is a big.LITTLE CPU, but what does any of that have to do with this?

                Comment


                • #9
                  Phi failed to deliver competitive TFLOPS from the beginning, but it seems Intel didn't learn its lesson: Xe still fails to deliver competitive TFLOPS, from what I can see.

                  Comment


                  • #10
                    Originally posted by coder View Post
                    I'm sure I heard of someone working on a 4096(?) core RISC-V for AI or HPC or something. I'm not sure that makes a lot of sense - especially for AI - but that's a separate discussion.
                    Ah, yes, the Manticore. I hope something comes of it, and that it becomes a success.

                    Originally posted by coder View Post
                    Probably because they are only recently starting to face a credible threat from ARM in the ultralight laptop market, where weight and battery life are supremely important. As you probably know, Lakefield is a big.LITTLE CPU, but what does any of that have to do with this?
                    Well, this article was about "many integrated cores", so it made me think about mixing strong and weak cores. Either way, Lakefield sucks. I think it has only 5 cores, 4 weak and 1 strong, as opposed to ARM processors with 8 cores, 4 weak and 4 strong. Also, the Lakefield cores have different instruction sets on the weak and strong cores, so it is difficult to program for, because certain instructions are only available on the strong cores.
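
                    That mixed-ISA concern is why software targeting such chips has to gate fancy instructions behind a runtime check. A rough sketch using the GCC/Clang x86 builtin `__builtin_cpu_supports` (the feature string here is just an example):

                    ```c
                    #include <stdio.h>

                    int main(void) {
                        /* GCC/Clang x86 builtin; initialize before querying. */
                        __builtin_cpu_init();
                        int has_avx2 = __builtin_cpu_supports("avx2");

                        /* Caveat: a one-time check like this is only sound if every
                           core reports the same features; per-core ISA differences
                           break it as soon as the scheduler migrates the thread. */
                        if (has_avx2)
                            puts("dispatching to AVX2 code path");
                        else
                            puts("dispatching to scalar fallback");
                        return 0;
                    }
                    ```

                    That migration caveat is part of why hybrid designs tend to present one common ISA across all cores rather than expose the differences to software.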

                    Comment
