Compute Express Link 2.0 Specification Published

  • Compute Express Link 2.0 Specification Published

    Phoronix: Compute Express Link 2.0 Specification Published

    Just a year after the Compute Express Link 1.0 and 1.1 interconnect specifications were published, CXL 2.0 is being announced this morning for this high-speed, data center minded specification built atop the PCI Express interface...


  • #2
    Shiny!

  • #3
    Quite an interconnect week: first PCIe 6.0 reached version 0.7 status and now CXL 2.0 is ready. I am sure that SC20 this week has something to do with it.

  • #4
    Any notable devices that use CXL? Does it differ from Thunderbolt or USB4 capabilities in some way?

  • #5
    In large part due to how long it took for the PCIe 4.0 standard to come out, you saw all these different interconnect protocols appear, vying for attention and to be picked as the next-gen interconnect technology.

    There was and still is CXL, as mentioned above, led by Intel as a less proprietary replacement for Omni-Path since it is a superset of PCIe 5.0; there is also CCIX, led by ARM and Xilinx along with AMD. There is OpenCAPI, led by IBM. And there is Gen-Z, led by HP and partly an outgrowth of their work on The Machine, with Dell involved and, yes... AMD as well.

    On top of that, AMD has its Infinity Fabric, a superset of HyperTransport that has since grown into an entire interconnect architecture built into CPUs and GPUs, called Infinity Architecture, so that you get zero-copy memory access and cache coherency across the entire board between different compute chips: CPU, GPU, DSP and FPGA.

    Which raises the question: now that it looks like the x86 world has chosen CXL as the standard cache-coherent interconnect for all things non-AMD, and Gen-Z will be the standard for tying racks of servers together (even though Gen-Z was designed to ALSO be a cache-coherent on-board interconnect, not just for externally tying racks together), where does this leave AMD's Infinity Fabric?

    Will AMD products simply forgo CXL altogether and stay all Infinity Fabric across the board, between compute chips and memory, before going out via Gen-Z to various racks?

    Or will AMD only use IF across their compute chips and memory and then use CXL for everything else on the board, like a better version of PCIe 5.0, before going out via Gen-Z to various racks?

    Also... what about Xilinx? They are in the consortium behind CCIX, the cache-coherent interconnect tech spearheaded by ARM. Now that AMD is acquiring Xilinx, will AMD connect Xilinx FPGAs with Infinity Fabric in their own products, while on ARM platforms they are connected via CCIX?

    Of course, another question arises now that CXL is the x86 standard: what happens to Nvidia's NVLink? Particularly now that they are buying ARM, which pushes CCIX?

    It's all a bit hazy to say the least. Or maybe it's just me.

  • #6
    Originally posted by polarathene
    Any notable devices that use CXL? Does it differ from Thunderbolt or USB4 capabilities in some way?
    CXL has nothing to do with USB4 / Thunderbolt.
    It's an interconnect extension, on top of PCIe 5.0, that enables cache-coherent communication between high-speed devices (AI accelerators, networking, GPUs, etc.), CPUs, RAM and storage.
    It is mostly designed to cater to the needs of the HPC server market (high-density servers with high-speed network interconnects).
    It's very unlikely it will be used in any laptop / desktop / workstation any time soon.
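
    As a rough sketch of what that looks like from the OS side: assuming a Linux kernel built with CXL support, which (where present) exposes enumerated CXL devices under /sys/bus/cxl/devices, something like the following would list whatever the kernel found. The sysfs path and layout here are assumptions about that kernel support, not something defined by the CXL spec itself.

        #!/usr/bin/env python3
        # Minimal sketch: list CXL devices enumerated by the Linux kernel, if any.
        # Assumes a kernel with CXL support that exposes devices under /sys/bus/cxl;
        # on a typical laptop/desktop today this directory simply won't exist.
        from pathlib import Path

        CXL_SYSFS = Path("/sys/bus/cxl/devices")

        def list_cxl_devices() -> None:
            if not CXL_SYSFS.is_dir():
                print("No CXL bus exposed by this kernel/platform.")
                return
            for dev in sorted(CXL_SYSFS.iterdir()):
                # Every sysfs device carries a uevent file; print it for a rough
                # idea of what kind of CXL object (e.g. a memory device) this is.
                uevent = dev / "uevent"
                details = uevent.read_text().strip().replace("\n", ", ") if uevent.is_file() else ""
                print(f"{dev.name}  {details}")

        if __name__ == "__main__":
            list_cxl_devices()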

    - Gilboa
    oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
    oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
    oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
    Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.
