Understanding Skylake's Compute Architecture

  • Understanding Skylake's Compute Architecture

    Phoronix: Understanding Skylake's Compute Architecture

    Intel has published some documentation concerning the compute architecture for the Intel Skylake "Gen9" hardware...


  • #2
    That was a pretty good read. There are a few minor things Intel decided to gloss over, but it seems to me that it has most of the capability that nVidia and AMD architectures implement. I don't think many people will have problems with it; it should be able to run just about any code that can be written, though I do see some bottlenecks.

    Overall I'd say it's pretty good.



    • #3
      What about AMD FreeSync support (aka VESA Adaptive-Sync)?



      • #4
        Originally posted by duby229
        That was a pretty good read. There are a few minor things Intel decided to gloss over, but it seems to me that it has most of the capability that nVidia and AMD architectures implement. I don't think many people will have problems with it; it should be able to run just about any code that can be written, though I do see some bottlenecks.

        Overall I'd say it's pretty good.

        You said nothing, really.

        The (CPU) part they call "IMT" IS what AMD is making now. I think it's a bad idea.



        • #5
          I don't understand what you are trying to say. The PDF was about Intel's GPU architecture. I'm not saying it's better, I'm just saying it's good. With Vulkan and DX12 coming along, that GPU should be able to execute just about anything thrown at it.

          EDIT: A quick Google search didn't find anything about IMT; if you have a link I can read, that would help me understand better.

          EDIT2: It seems it's an acronym for Interleaved Multi-Threading, which in this case is a hardware feature implemented at the instruction scheduler, in front of the execution units. It sounds interesting, because the name itself implies it can scale workloads across execution units, which is something SMT is not capable of doing, with the trade-off that it needs additional hardware and a more complex scheduler.

          Interleaving is not an "end-all, be-all" solution, though. It will add quite a bit of additional latency to the pipeline. (Either way, it will definitely be a whole lot more effective on a GPU than it could possibly be on a CPU; GPU code is highly parallel and CPU code isn't.)
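
          To make the interleaving idea concrete, here's a toy model I sketched up. It's not Intel's or AMD's actual scheduler, and the thread count and load latency are numbers I made up. Each cycle it issues one instruction from whichever hardware thread isn't stalled, so a long-latency load in one thread doesn't leave the execution unit sitting idle:

          Code:
          /* Toy model of interleaved multi-threading (IMT). Not any real
           * scheduler: each cycle, issue one instruction from the next
           * non-stalled hardware thread, round-robin, so a thread waiting
           * on a long-latency "load" doesn't leave the execution unit idle. */
          #include <stdio.h>

          #define NTHREADS     4   /* made-up: number of hardware threads */
          #define LOAD_LATENCY 8   /* made-up: stall cycles for a "load"  */
          #define CYCLES       64

          int main(void)
          {
              int stall[NTHREADS] = {0}; /* remaining stall cycles per thread */
              int done[NTHREADS]  = {0}; /* instructions retired per thread   */
              int issued = 0, idle = 0;

              for (int cycle = 0; cycle < CYCLES; cycle++) {
                  int found = 0;
                  /* Rotating priority: first non-stalled thread issues. */
                  for (int i = 0; i < NTHREADS && !found; i++) {
                      int t = (cycle + i) % NTHREADS;
                      if (stall[t] == 0) {
                          done[t]++;
                          issued++;
                          if (done[t] % 4 == 0)   /* every 4th op is a "load" */
                              stall[t] = LOAD_LATENCY;
                          found = 1;
                      }
                  }
                  if (!found)
                      idle++;                     /* no thread ready: bubble */
                  for (int t = 0; t < NTHREADS; t++)
                      if (stall[t] > 0)
                          stall[t]--;
              }
              printf("%d instructions issued, %d idle cycles out of %d\n",
                     issued, idle, CYCLES);
              return 0;
          }

          Drop NTHREADS to 1 and the idle-cycle count shoots up, which is the latency-hiding argument in a nutshell. The flip side is that any single thread now waits longer between its own instructions, which is the extra pipeline latency I mentioned above.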
          Last edited by duby229; 17 August 2015, 11:37 AM.



          • #6
            Yes, my bad, I misread something.

            AMD and Nvidia GPUs don't quite work like that, though.



            • #7
              I wonder if some of this is the first part of an evolutionary move on Intel's side to bring some generic forms of the Xeon Phi architecture over to Skylake. I understand that the Xeon Phi is not a GPU but a many-core compute accelerator; however, schedulers are schedulers in a certain generic sense, whether they are for the CPU, the GPU, or both, and you certainly need a "manly" scheduler for the 72-core Xeon Phi. And as even Intel beefs up the GPU portion of its APUs (where AMD certainly has a head start, particularly on the GPU and integration side of things), my thought is that Intel could be taking some of the more generic architectural advances born from Xeon Phi and slowly but surely implementing them in Skylake, vis-à-vis the GPU.



              • #8
                Xeon Phi owes its legacy to a project called Larrabee, which was supposed to be a GPU built out of x86 pipelines. It was a total failure, and there is basically zero chance it will ever compete on equal ground with a real GPU. It's not the x86 cores that make it useful, it's those 512-bit SIMD units, and that's it. (IMO it was the dumbest thing I had heard at the time, and the second-worst failure they ever had, right behind IA64. Of course Intel is free to waste billions of dollars every few years if they really want to, but....)
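
                To show what I mean by it being all about the SIMD units, here's a minimal example of the kind of work one 512-bit vector instruction does. The intrinsics are the real AVX-512F ones headed for the next Phi, Knights Landing (the first-gen Phi used a similar but incompatible 512-bit instruction set, so take this as illustrative), and you'd need -mavx512f plus supporting hardware, or Intel's SDE emulator, to actually run it:

                Code:
                /* One AVX-512 instruction operating on sixteen floats at once.
                 * Build with: gcc -mavx512f avx512_add.c */
                #include <immintrin.h>
                #include <stdio.h>

                int main(void)
                {
                    float a[16], b[16], c[16];
                    for (int i = 0; i < 16; i++) {
                        a[i] = (float)i;
                        b[i] = 100.0f;
                    }

                    __m512 va = _mm512_loadu_ps(a);    /* load 16 floats      */
                    __m512 vb = _mm512_loadu_ps(b);
                    __m512 vc = _mm512_add_ps(va, vb); /* 16 adds, 1 instr    */
                    _mm512_storeu_ps(c, vc);           /* store 16 results    */

                    printf("c[0]=%.0f ... c[15]=%.0f\n", c[0], c[15]); /* 100 ... 115 */
                    return 0;
                }

                One instruction, sixteen single-precision adds; the x86 front end around those lanes is almost incidental to the throughput.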

                See, the thing is that from a high-level perspective, DX12 and Vulkan sit on the GPU at a level very similar to where x86 sits on the CPU. Everybody knew graphics and compute would be unified eventually. Of course it's not possible to bootstrap a Linux kernel on a Vulkan interface, so x86 will still be around for a little while longer. It's only going to be one or maybe two more leaps before x86 isn't needed at all anymore. Intel is going to have to do something, but they have a bit more time to do it.
                Last edited by duby229; 17 August 2015, 09:52 PM.
