Intel Announces Gaudi 3 AI Accelerator, Intel Xeon 6 Brand


  • Intel Announces Gaudi 3 AI Accelerator, Intel Xeon 6 Brand

    Phoronix: Intel Announces Gaudi 3 AI Accelerator, Intel Xeon 6 Brand

    Intel is using its Vision 2024 conference in Arizona today to announce the Gaudi 3 AI accelerator. With Gaudi 3 come some rather bold AI claims from Intel: on average 50% better inference and 40% better power efficiency than the NVIDIA H100, all while costing "a fraction" of it. Gaudi 3 sounds quite promising, and it will be interesting to see how it's adopted in the marketplace. Intel is also disclosing the new Xeon 6 branding for its upcoming server processors formerly codenamed Sierra Forest and Granite Rapids.


  • #2
    Handy for Intel that AI is a thing, so the Xeon line can be released with a whisper.



    • #3
      Much needed competition.



      • #4
        Originally posted by milkylainen View Post
        Much needed competition.
        While it lasts. Word from inside Intel is that Gaudi 3 is the last pure non-GPU AI accelerator from Intel, and that sometime in 2025 or early 2026 they are going to bake some of Gaudi's IP into a new server-class GPU called Falcon Shores. Now, Falcon Shores was supposed to be Intel's XPU: in other words, Intel's version of AMD's MI300 or NVIDIA's H100 or the newer Blackwell. But that's getting pushed back to late 2026 or early 2027. Rumor has it that that's to give Intel time to perfect Falcon Shores with Gaudi goodness before integrating it with the next-generation Xeon/Core Ultra architecture to form this now-delayed XPU.

        It all just seems like the Xeon Phi (Knights Landing / Knights Corner) debacle, where Intel tried to market what looked like a GPU but was a card made up of dozens of Atom-derived x86 cores with wide vector units, and tried to convince the market to stop using GPUs for compute because, well, Intel didn't make GPUs. Obviously it flopped, so Intel decided to buy away half of AMD's GPU department to finally make the very "meh" Xe / Arc GPUs. Then they dropped their own in-house AI accelerator to buy up the Gaudi tech, only to dump it into what eventually will be a super-duper APU. But they can't even get that out on time.



        • #5
          Originally posted by geerge View Post
          Handy for Intel that AI is a thing, so the Xeon line can be released with a whisper.
          Never in the last 40 years has Intel been so beaten at its own ISA-war game... AMD's AVX-512 implementation, which double-pumps its 256-bit AVX units, beats the shit out of Intel's.
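
          What matters for software is that Zen 4 exposes the full 512-bit ISA even though it executes it on 256-bit units. Here's a minimal sketch of checking for and using AVX-512F at runtime (assuming GCC on x86-64 Linux; the file name and build line are illustrative):

          #include <immintrin.h>
          #include <stdio.h>

          /* Build (assumption): gcc -O2 -mavx512f avx512_check.c */
          int main(void) {
              /* Runtime check: Zen 4 reports AVX-512F here just like Intel's
               * server cores do; double-pumping is invisible at this level. */
              if (!__builtin_cpu_supports("avx512f")) {
                  puts("AVX-512F not supported on this CPU");
                  return 1;
              }

              float a[16] __attribute__((aligned(64)));
              float b[16] __attribute__((aligned(64)));
              float c[16] __attribute__((aligned(64)));
              for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

              __m512 va = _mm512_load_ps(a);             /* one 512-bit load */
              __m512 vb = _mm512_load_ps(b);
              _mm512_store_ps(c, _mm512_add_ps(va, vb)); /* 16 floats per op */

              printf("c[15] = %.1f\n", c[15]);           /* expect 45.0 */
              return 0;
          }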

          And right now AVX10, with its AVX10/256 and AVX10/512 levels, looks like a flop, and Intel's big.LITTLE design flops against AMD's Zen 4 + Zen 4c and Zen 5 + Zen 5c.

          The Intel Arc GPUs were honestly only competitive against the Radeon RX 6400/6500, which lack an AV1 decode unit; otherwise the Intel Arc PCIe cards only had value for buyers as the cheapest option, given the very bad quality of the drivers.
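
          That AV1 decode support is easy to check from userspace. Here's a minimal sketch using VA-API (assuming libva and libva-drm are installed; the render-node path is an assumption, and a complete check would also query the decode entrypoint with vaQueryConfigEntrypoints):

          #include <fcntl.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <unistd.h>
          #include <va/va.h>
          #include <va/va_drm.h>

          /* Build (assumption): gcc av1_check.c -lva -lva-drm */
          int main(void) {
              /* The render-node path is an assumption; it may differ per system. */
              int fd = open("/dev/dri/renderD128", O_RDWR);
              if (fd < 0) { perror("open"); return 1; }

              VADisplay dpy = vaGetDisplayDRM(fd);
              int major, minor;
              if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
                  fprintf(stderr, "vaInitialize failed\n");
                  return 1;
              }

              int n = vaMaxNumProfiles(dpy);
              VAProfile *profiles = malloc(n * sizeof(*profiles));
              vaQueryConfigProfiles(dpy, profiles, &n);

              int has_av1 = 0;
              for (int i = 0; i < n; i++)
                  if (profiles[i] == VAProfileAV1Profile0) has_av1 = 1;

              printf("AV1 decode profile exposed: %s\n", has_av1 ? "yes" : "no");

              free(profiles);
              vaTerminate(dpy);
              close(fd);
              return 0;
          }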

          The only success they can really point to is the Core Ultra 7 155H, where Intel's iGPU can beat AMD's Radeon 780M per watt, but only in GPU workloads; on the CPU side it is still poor against AMD's 4 nm and 5 nm chips...

          And as Jumbotron writes:

          Originally posted by Jumbotron View Post
          While it lasts. Word from inside Intel is that Gaudi 3 is the last pure non-GPU AI accelerator from Intel, and that sometime in 2025 or early 2026 they are going to bake some of Gaudi's IP into a new server-class GPU called Falcon Shores. Now, Falcon Shores was supposed to be Intel's XPU: in other words, Intel's version of AMD's MI300 or NVIDIA's H100 or the newer Blackwell. But that's getting pushed back to late 2026 or early 2027. Rumor has it that that's to give Intel time to perfect Falcon Shores with Gaudi goodness before integrating it with the next-generation Xeon/Core Ultra architecture to form this now-delayed XPU.

          It all just seems like the Xeon Phi (Knights Landing / Knights Corner) debacle, where Intel tried to market what looked like a GPU but was a card made up of dozens of Atom-derived x86 cores with wide vector units, and tried to convince the market to stop using GPUs for compute because, well, Intel didn't make GPUs. Obviously it flopped, so Intel decided to buy away half of AMD's GPU department to finally make the very "meh" Xe / Arc GPUs. Then they dropped their own in-house AI accelerator to buy up the Gaudi tech, only to dump it into what eventually will be a super-duper APU. But they can't even get that out on time.


          To me this looks like Intel knows it failed with the Intel Arc GPUs in the compute space, so its only option was to drop its own shitty 10 nm node, go with TSMC 5 nm, and make an AI-only chip without any other functionality, which limits its usefulness a lot.

          All the other big successful players like NVIDIA and AMD have a much wider feature set, and many customers may worry about losing functionality they might need later if they go with this AI-only solution.

          And the biggest downside I can see is that it targets only the HPC/compute AI space; the Gaudi PCIe cards for customers outside that space come much later. So tinycorp is out of luck there.



          • #6
            Originally posted by qarium View Post

            Never in the last 40 years has Intel been so beaten at its own ISA-war game... AMD's AVX-512 implementation, which double-pumps its 256-bit AVX units, beats the shit out of Intel's.

            And right now AVX10, with its AVX10/256 and AVX10/512 levels, looks like a flop, and Intel's big.LITTLE design flops against AMD's Zen 4 + Zen 4c and Zen 5 + Zen 5c.

            The Intel Arc GPUs were honestly only competitive against the Radeon RX 6400/6500, which lack an AV1 decode unit; otherwise the Intel Arc PCIe cards only had value for buyers as the cheapest option, given the very bad quality of the drivers.

            The only success they can really point to is the Core Ultra 7 155H, where Intel's iGPU can beat AMD's Radeon 780M per watt, but only in GPU workloads; on the CPU side it is still poor against AMD's 4 nm and 5 nm chips...

            And as Jumbotron writes:

            To me this looks like Intel knows it failed with the Intel Arc GPUs in the compute space, so its only option was to drop its own shitty 10 nm node, go with TSMC 5 nm, and make an AI-only chip without any other functionality, which limits its usefulness a lot.

            All the other big successful players like NVIDIA and AMD have a much wider feature set, and many customers may worry about losing functionality they might need later if they go with this AI-only solution.

            And the biggest downside I can see is that it targets only the HPC/compute AI space; the Gaudi PCIe cards for customers outside that space come much later. So tinycorp is out of luck there.
            There is a reason why Intel decided to become a fab for everyone else (not to mention successfully lobbying the US government for billions in subsidies to pay for it). When capitalists ask for socialism, that's OK, but when Bernie Sanders asks for a national minimum wage of 15 dollars per hour and a living wage for all workers, everyone loses their fucking minds. But hey... I'm going to get an ever smaller process on my ever shittier Intel core.



            • #7
              Originally posted by Jumbotron View Post
              There is a reason why Intel decided to become a fab for everyone else (not to mention successfully lobbying the US government for billions in subsidies to pay for it). When capitalists ask for socialism, that's OK, but when Bernie Sanders asks for a national minimum wage of 15 dollars per hour and a living wage for all workers, everyone loses their fucking minds. But hey... I'm going to get an ever smaller process on my ever shittier Intel core.
              They also pulled this subsidy fraud in Germany: they got over 10 billion dollars in subsidies toward a roughly 30 billion dollar fab there.

              What many people do not know is that 20 years ago Intel did the same thing in Germany: after they got the taxpayer subsidies, they let the fab there rot...

              see here:

              https://de.wikipedia.org/wiki/Chipfa...rankfurt_(Oder)

              "When Capitalists ask for Socialism that’s ok but Bernie Sanders asks for a national minimum wage of 15 dollars per hour and a living wage for all workers and everyone loses their fucking minds."

              You are absolutely right: this Keynesian-economics capitalism always asks for socialism for the big money players, but is against the idea that small people get subsidies too.

              But honestly, the effective minimum wage will always be zero, because if you set the minimum wage to 15 dollars, they will replace the job with a robot, and the result is that your personal minimum wage is again 0 dollars.



              • #8
                Originally posted by qarium View Post
                ...

                The Intel Arc GPUs were honestly only competitive against the Radeon RX 6400/6500, which lack an AV1 decode unit; otherwise the Intel Arc PCIe cards only had value for buyers as the cheapest option, given the very bad quality of the drivers.

                The only success they can really point to is the Core Ultra 7 155H, where Intel's iGPU can beat AMD's Radeon 780M per watt, but only in GPU workloads; on the CPU side it is still poor against AMD's 4 nm and 5 nm chips...

                ...
                1st gen Arc was never going to be the tits, but it was a necessary step. As long as they don't ditch or nerf it for AI, the 2nd and 3rd gens should be more competitive and further up the stack. The way the 1st gen Arc drivers have matured is insane. Intel has failed in many areas, but Arc isn't one of them.



                • #9
                  Originally posted by geerge View Post
                  1st gen Arc was never going to be the tits, but it was a necessary step. As long as they don't ditch or nerf it for AI, the 2nd and 3rd gens should be more competitive and further up the stack. The way the 1st gen Arc drivers have matured is insane. Intel has failed in many areas, but Arc isn't one of them.
                  Right. Believe it or not, I have an Intel Arc A380, and for the first 6 months, on Fedora 35/36 I think, it could only do 1024x800 pixels...

                  OK, that's old history. Soon I will run more game tests, but it looks like the current problem is that Intel Arc still lags behind on Vulkan extensions, so some games may not run with Wine/Proton...
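
                  Checking which extensions a driver actually exposes is straightforward. Here's a minimal sketch using the Vulkan C API (assuming the Vulkan loader and headers are installed; the extension queried is just one example that DXVK can take advantage of when present):

                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <string.h>
                  #include <vulkan/vulkan.h>

                  /* Build (assumption): gcc vk_ext_check.c -lvulkan */
                  int main(void) {
                      VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
                      VkInstance inst;
                      if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) return 1;

                      uint32_t ndev = 1;
                      VkPhysicalDevice dev;
                      vkEnumeratePhysicalDevices(inst, &ndev, &dev); /* first GPU only */
                      if (ndev == 0) return 1;

                      uint32_t next = 0;
                      vkEnumerateDeviceExtensionProperties(dev, NULL, &next, NULL);
                      VkExtensionProperties *exts = malloc(next * sizeof(*exts));
                      vkEnumerateDeviceExtensionProperties(dev, NULL, &next, exts);

                      /* One extension DXVK can use when present; just an example. */
                      int found = 0;
                      for (uint32_t i = 0; i < next; i++)
                          if (!strcmp(exts[i].extensionName, "VK_EXT_graphics_pipeline_library"))
                              found = 1;

                      printf("%u device extensions; VK_EXT_graphics_pipeline_library: %s\n",
                             next, found ? "yes" : "no");

                      free(exts);
                      vkDestroyInstance(inst, NULL);
                      return 0;
                  }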

                  Right now Intel Arc is not competing with AMD, and also not with NVIDIA, because they all occupy different market segments:

                  NVIDIA mostly holds the premium segment: the highest prices and the most features.
                  AMD has the bang-for-the-buck segment, meaning the best price-performance ratio.
                  And Intel has the segment for people who just want the cheapest hardware they can buy.

                  Why do I have an Intel Arc A380? Clearly only because it was the cheapest card around...

