Intel's Abandoned "Many Integrated Core" Architecture Being Removed With Linux 5.10


  • #11
    There is a strange karma in the air when within 1 week AMD buys out Xilinx and Intel removes Phi support.

    • #12
      Originally posted by uid313 View Post
      Either way, the Lakefield sucks, I think it has only 5 cores, 4 weak and 1 strong, as opposed to ARM processors with 8 cores, 4 weak and 4 strong. Also the Lakefield cores have different instruction sets on the weak and strong cores, so it is difficult to program for, because certain instructions are only available on the strong cores.
      Not getting into a debate over Lakefield as a whole (I think it's interesting, especially in light of the Gen11 graphics and eDRAM), but this actually claims that AVX is disabled on the big core, so that all threads *can* be seamlessly migrated:
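
      To see why a uniform ISA matters there, here is a minimal, hypothetical sketch (plain C using the GCC/Clang __builtin_cpu_supports() builtin and a toy sum kernel, none of it anything Intel ships) of the usual probe-once runtime dispatch pattern. The feature check runs on whichever core the thread happens to start on, so if the big core advertised AVX and the thread later migrated to a little core without it, the AVX path would fault.

      #include <stdio.h>
      #include <stddef.h>
      #include <immintrin.h>

      /* Portable scalar fallback. */
      static float sum_scalar(const float *a, size_t n)
      {
          float s = 0.0f;
          for (size_t i = 0; i < n; i++)
              s += a[i];
          return s;
      }

      /* AVX path; the target attribute lets this compile without -mavx globally. */
      __attribute__((target("avx")))
      static float sum_avx(const float *a, size_t n)
      {
          __m256 acc = _mm256_setzero_ps();
          size_t i = 0;
          for (; i + 8 <= n; i += 8)
              acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));

          float lanes[8], s = 0.0f;
          _mm256_storeu_ps(lanes, acc);
          for (int k = 0; k < 8; k++)
              s += lanes[k];
          for (; i < n; i++)          /* remainder */
              s += a[i];
          return s;
      }

      typedef float (*sum_fn)(const float *, size_t);

      /* Probe once, then commit to an implementation. This is only safe if every
       * core the thread may migrate to reports the same feature bits. */
      static sum_fn pick_sum(void)
      {
          __builtin_cpu_init();
          return __builtin_cpu_supports("avx") ? sum_avx : sum_scalar;
      }

      int main(void)
      {
          float data[32];
          for (int i = 0; i < 32; i++)
              data[i] = 1.0f;

          sum_fn sum = pick_sum();
          printf("sum = %.1f via %s path\n", sum(data, 32),
                 sum == sum_avx ? "AVX" : "scalar");
          return 0;
      }

      With AVX fused off on the big core, every core reports the same feature bits, so the probe result stays valid no matter where the scheduler later places the thread.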

      • #13
        Originally posted by edwaleni View Post
        There is a strange karma in the air when within 1 week AMD buys out Xilinx and Intel removes Phi support.
        Phi was already dead and discontinued for years. And all they're removing is support for the PCIe cards - the second-gen, socketed version is presumably still supported.

        In other news, Intel is already shipping its first-gen Xe GPUs to laptop makers, so you can add that to your karma brew.

        • #14
          Originally posted by coder View Post
          I don't disagree with the broader narrative of your post, but a former employee of Intel's graphics group told me they were actually very close to basing Phi on a real GPU architecture!

          They apparently had several rounds of competitions between their CPU group and the group building their iGPUs, in order to decide which to go with. Being a CPU company, that team had an undeniable advantage, and I heard mention of several Dilbert-esque moments in the competition. I wonder just how long it took for them to start regretting their final decision.

          BTW, don't forget that Intel's iGPUs existed as chipset graphics, even before they started getting integrated in the CPUs!
          Whatcha bet that the iGPU / chipset GPU they were trying to base a GPU-oriented Xeon Phi around was the Iris Pro from Broadwell?

          • #15
            I’m not surprised that Phi failed; from my perspective it had little to do with performance and more to do with the stupidity of making such a chip a slave to the system processor. AMD has taken the right approach to multi-core processors with their Ryzen and Threadripper series. In a nutshell, let the customer decide how many cores they need in combination with their favorite OS.

            • #16
              Originally posted by wizard69 View Post
              I’m not surprised that Phi failed; from my perspective it had little to do with performance and more to do with the stupidity of making such a chip a slave to the system processor.
              Well, the second generation of Xeon Phi was primarily sold as a standalone, socketed processor that ran the host OS. I think that blows a slightly gaping hole in your theory there.

              Originally posted by wizard69 View Post
              AMD has taken the right approach to multi-core processors with their Ryzen and Threadripper series. In a nutshell, let the customer decide how many cores they need in combination with their favorite OS.
              AMD doesn't pretend these are a viable alternative to GPUs. That's where Intel went wrong -- trying to compete with GPUs using x86. Their new Xe product line shows Intel finally learned that lesson, but at tremendous cost.

              • #17
                I remember using these for my Master's thesis at DTU, applying the Full Approximation Scheme on a parallel architecture. I developed a polynomial smoother to make the method work with non-uniform grids. Specifically, it was an Intel® Xeon Phi Coprocessor 5110P (8 GB, 1.053 GHz, 60 cores).

                The method was pretty fast, solving fairly stiff differential equations with 65 million nodes in less than a minute if I recall.

                Unfortunately, the Xeon Phi ran into huge problems once I went beyond about 35 cores: memory congestion. So I couldn't use all the physical cores. My method had a complexity of O(n), where n was the number of grid points, yet the more cores I added the worse the result got, despite using Intel's very well optimised BLAS libraries. When I added hyperthreading, the whole thing actually got even slower than without it; the threads were tripping over one another waiting to sync up with memory. As a result it was barely faster than a regular desktop CPU from the Haswell series, and in some cases slower.
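
                Just to illustrate what that congestion looks like, here is a minimal sketch of my own (a plain 1-D Jacobi-style sweep with OpenMP, not the actual polynomial smoother or Intel's BLAS): time a memory-bound kernel at increasing thread counts and the speedup flattens, or reverses, once the memory controllers saturate.

                #include <stdio.h>
                #include <stdlib.h>
                #include <omp.h>

                #define N (1L << 26)   /* ~67 million points, roughly the size mentioned above */

                /* One memory-bound smoothing sweep: ~3 reads + 1 write per point and almost
                 * no arithmetic, so throughput is set by memory bandwidth, not core count. */
                static void sweep(double *dst, const double *src, long n)
                {
                    #pragma omp parallel for schedule(static)
                    for (long i = 1; i < n - 1; i++)
                        dst[i] = 0.5 * src[i] + 0.25 * (src[i - 1] + src[i + 1]);
                }

                int main(void)
                {
                    double *a = malloc(N * sizeof *a);
                    double *b = malloc(N * sizeof *b);
                    if (!a || !b)
                        return 1;
                    for (long i = 0; i < N; i++) {
                        a[i] = 1.0;
                        b[i] = 0.0;
                    }

                    /* Double the thread count each step; past the bandwidth knee the
                     * wall-clock time stops improving (or gets worse). */
                    for (int t = 1; t <= omp_get_max_threads(); t *= 2) {
                        omp_set_num_threads(t);
                        double t0 = omp_get_wtime();
                        for (int it = 0; it < 20; it++) {
                            sweep(b, a, N);
                            sweep(a, b, N);
                        }
                        printf("%3d threads: %.3f s\n", t, omp_get_wtime() - t0);
                    }

                    free(a);
                    free(b);
                    return 0;
                }

                Compiled with something like gcc -O2 -fopenmp, the timings make the knee obvious; on the 5110P it showed up somewhere past 35 cores, well before all 60 were in use.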

                So yeah, I'm not going to miss it.

                • #18
                  I still wish I could see, just once, a Larrabee card actually displaying a picture.
                  The whole idea was very similar to the CUs in a GCN/Navi card, but kind of CISC instead of purposefully graphics-focused RISC. Some very cool things could have been done with it, but it is the equivalent of using an atomic bomb as a hammer.

                  • #19
                    As someone who has actually tested and gone over the Xeon Phi MIC codebase in the kernel on real hardware, I don't have kind words to say. Except: thank goodness.
                    The MIC module is nasty, and if someone actually wanted to use it they could almost more easily a) adapt Intel's own code to the latest kernel, or b) rewrite it as a DKMS module from scratch.
                    The kernel code, while objectively cleaner than the open-source modules written by Intel, is what can best be summarized as an open-source 'PR piece'.
                    It's in an utterly useless state and would need a large amount of work to actually drive the cards.
                    You could barely load firmware and halfway get an OS image onto the card.
                    But actually using it wasn't possible; for that you need the actual Intel MIC stack.

                    • #20
                      Originally posted by coder View Post
                      I'm sure I heard of someone working on a 4096(?)-core RISC-V for AI or HPC or something. I'm not sure that makes a lot of sense - especially for AI - but that's a separate discussion.
                      I guess you mean Esperanto, https://www.esperanto.ai/
