Linux Seeing Support For The HyperBus

  • Linux Seeing Support For The HyperBus

    Phoronix: Linux Seeing Support For The HyperBus

    The Linux kernel is in the process of receiving support for the HyperBus, a high performance DDR bus interface used for connecting the processor/controller/ASIC to "HyperFlash" flash memory or "HyperRAM" DRAM...


  • #2
    whoa, a read throughput of 333 MB/s for HyperRAM.

    As a comparison, a random DDR3 RAM bank has a read throughput of around 5000 MB/s (an order of magnitude more).
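
    The 333 MB/s figure lines up with HyperBus being an 8-bit DDR interface; a quick back-of-the-envelope check (the clock rates here are assumed typical values, not from the article):

    ```python
    # Peak bandwidth estimate: bus_bytes * transfers_per_clock * MHz = MB/s
    # (clock figures are assumed typical values, not from the article)
    hyperbus_mb_s = 1 * 2 * 166   # 8-bit bus, DDR, ~166 MHz -> 332 MB/s peak
    ddr3_mb_s = 8 * 2 * 666       # 64-bit bus, DDR, 666 MHz (DDR3-1333) -> 10656 MB/s peak

    print(hyperbus_mb_s)  # 332, matching the ~333 MB/s quoted above
    print(ddr3_mb_s)      # 10656 peak; ~5000 MB/s sustained reads is plausible
    ```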



    • #3
      Originally posted by starshipeleven View Post
      whoa, a read throughput of 333 MB/s for HyperRAM.

      As a comparison, a random DDR3 RAM bank has a read throughput of around 5000 MB/s (an order of magnitude more).
      TL;DR: If you're not familiar with this bus, you probably don't need to go learn about it as you're not likely to be using it in a system. But keep this article handy in case you ever do run across one.

      These chips are for much smaller processors than you're probably used to. Think ARM Cortex-M cores and MIPS cores. Lots of these chips have 8-bit parallel interfaces with a ton of address pins, so the B/W isn't all that high and the pin count is very high. A lower pin count means smaller packages, less board space, easier routing, etc.

      It could also be useful for chips outgrowing SPI/DIO/QSPI interfaces that can do ~50MB/s on around a half dozen signals.

      It's hard to picture these chips being used in PCs or similar systems. The LPC bus is already there taking up the niche that these might occupy. The SPI FLASH for the BIOS/EFI isn't really a limitation as it's only going to load 4MB or so of data once at boot. If it can do QSPI, then that's just 80ms or so. Taking that down to 0 wouldn't change boot times measurably.
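
      The 80 ms estimate is just simple arithmetic over the numbers above (~4 MB image, ~50 MB/s QSPI):

      ```python
      firmware_mb = 4    # typical BIOS/EFI payload size mentioned above
      qspi_mb_s = 50     # approximate QSPI throughput mentioned above
      load_time_ms = firmware_mb * 1000 / qspi_mb_s

      print(load_time_ms)  # 80.0 ms -- driving this to zero wouldn't change boot time measurably
      ```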

      This bus isn't likely to replace eMMC (or similar) on low-end chips, as the Cypress flash parts probably aren't going to be cost-competitive with the wide market of eMMC chips. The only time it would make sense is if a chip is using both DRAM and flash on a HyperBus, as that would save a lot of pins--so the savings in package size and PCB area/complexity might make up for the cost difference of the flash.



      • #4
        Originally posted by willmore View Post

        TL;DR: If you're not familiar with this bus, you probably don't need to go learn about it as you're not likely to be using it in a system. But keep this article handy in case you ever do run across one.

        These chips are for much smaller processors than you're probably used to. Think ARM Cortex-M cores and MIPS cores. Lots of these chips have 8-bit parallel interfaces with a ton of address pins, so the B/W isn't all that high and the pin count is very high. A lower pin count means smaller packages, less board space, easier routing, etc.

        It could also be useful for chips outgrowing SPI/DIO/QSPI interfaces that can do ~50MB/s on around a half dozen signals.

        It's hard to picture these chips being used in PCs or similar systems. The LPC bus is already there taking up the niche that these might occupy. The SPI FLASH for the BIOS/EFI isn't really a limitation as it's only going to load 4MB or so of data once at boot. If it can do QSPI, then that's just 80ms or so. Taking that down to 0 wouldn't change boot times measurably.

        This bus isn't likely to replace eMMC (or similar) on low-end chips, as the Cypress flash parts probably aren't going to be cost-competitive with the wide market of eMMC chips. The only time it would make sense is if a chip is using both DRAM and flash on a HyperBus, as that would save a lot of pins--so the savings in package size and PCB area/complexity might make up for the cost difference of the flash.
        I totally agree, but still fail to see the niche. Most semi-complex CPUs nowadays have both PCIe and SATA on fast built-in SerDes transceivers... plus a full-width local bus, etc.
        Also, what about eSPI? Straight AHCI@SerDes? Or PCIe@SerDes? I don't really see the point of this. Just for boot? As you said, pulling the kernel from a fast SPI or a local bus won't matter much for total system boot times.

        It fills some middle ground, but still requires a hefty 12 pins to achieve max speed.
        And it's still a parallel-type DRAM bus. Sure, a bit easier to route due to the lower clock speeds... but it's still meh.



        • #5
          Originally posted by willmore View Post
          These chips are for much smaller processors than you're probably used to. Think ARM Cortex-M cores and MIPS cores. Lots of these chips have 8-bit parallel interfaces with a ton of address pins, so the B/W isn't all that high and the pin count is very high. A lower pin count means smaller packages, less board space, easier routing, etc.

          It could also be useful for chips outgrowing SPI/DIO/QSPI interfaces that can do ~50MB/s on around a half dozen signals.
          Yes, but even there they are not all that impressive. QSPI has long gone DDR and can do, IIRC, half of this - something like 160 MB/s, through half the pins (6 for QSPI).
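
          A quick per-pin comparison makes the point concrete (using the rough figures from this thread, so treat it as a sketch):

          ```python
          # Rough throughput-per-pin comparison using the numbers quoted in the thread.
          hyperbus_per_pin = 333 / 12   # MB/s per pin: 8 data + clock + CS + RWDS, ~12 signals
          qspi_ddr_per_pin = 160 / 6    # MB/s per pin: 4 data + clock + CS

          print(round(hyperbus_per_pin, 1))  # ~27.8 MB/s per pin
          print(round(qspi_ddr_per_pin, 1))  # ~26.7 MB/s per pin -- nearly the same efficiency
          ```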

          All in all, not that big of a deal. These HyperRAM-based memory components will, I suspect, always remain specialty items, especially since not many (if any) names will choose to license the bus.

          IOW, nice to have just in case, but in general, who cares...



          Last edited by Brane215; 21 February 2019, 03:32 AM.
