The Linux Kernel Begins Preparing Support For SD Express Cards


  • The Linux Kernel Begins Preparing Support For SD Express Cards

    Phoronix: The Linux Kernel Begins Preparing Support For SD Express Cards

    Announced earlier this year was the SD Express specification offering around 4x the speed of existing SD cards thanks to leveraging PCI Express 4.0 (or otherwise PCI Express 3.0 fallback) and the NVMe 1.4 protocol. The Linux kernel has begun preparing for SD Express compatibility...

    http://www.phoronix.com/scan.php?pag...-Express-Start

  • #2
    SD Express was announced in June 2018 alongside SDUC (which raises the maximum card capacity from 2 TiB to 128 TiB), as part of the Secure Digital 7.0 specification, allowing up to 985 MB/s.

    SD 8.0 allows up to quadruple the previous speed (3938 MB/s) using 2x PCIe 4.0 lanes, or double (1970 MB/s) using 1x PCIe 4.0 or 2x PCIe 3.0.

    It would be neat to see something other than NAND in these cards, so that those top speeds could actually be hit and sustained.
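    Quick sanity check on those figures — a rough sketch using the standard per-lane PCIe signaling rates (16 GT/s for Gen 4, 8 GT/s for Gen 3, both with 128b/130b encoding); the helper name is just for illustration:

```python
# Effective payload bandwidth of one PCIe 3.0/4.0 lane.
# Raw rate is in GT/s; 128b/130b encoding leaves 128/130 of the
# bits as payload, and 8 bits make a byte.

def lane_mb_s(gt_per_s: float) -> float:
    return gt_per_s * 1e9 * (128 / 130) / 8 / 1e6

print(round(lane_mb_s(8.0)))       # 985  -> SD 7.0: 1x PCIe 3.0
print(round(lane_mb_s(16.0)))      # 1969 -> SD 8.0: 1x PCIe 4.0
print(round(2 * lane_mb_s(16.0)))  # 3938 -> SD 8.0: 2x PCIe 4.0
```

    The 985, 1970, and 3938 MB/s numbers in the spec line up with these per-lane rates almost exactly.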



    • #3
      Originally posted by jaxa View Post

      It would be neat to see something other than NAND in these cards, so that those top speeds could actually be hit and sustained.
      Great. Bolting PCIe onto everything seems to be the popular way to solve every problem nowadays (USB3 etc.).
      Are there existing cards and devices that use them?
      Perhaps those new ass-kicking Canon mirrorless R5/R6?

      EDIT: Errm, no. The R5 & R6 have CFexpress, which is yet another standard that has capitulated by going the PCIe route.
      CFexpress looks almost the same as SD Express, barring physical and connector differences. At the higher brackets, both offer PCIe 4.0 x2 with 4 GB/s of bandwidth.
      Does that mean that new drivers will cover both standards?
      Last edited by Brane215; 07-25-2020, 01:06 AM.



      • #4
        This development keeps chipping away at the foundation of DRAM itself. DRAM looked fast when the only lower-level storage was spinning disks (which had abysmal latencies due to their mechanical nature), but with SSDs now hitting speeds of gigabytes per second, DRAM is coming under pressure as the main memory system sitting between massive on-die CPU caches and the SSDs.

        A 500GB M.2 SSD with speeds of 5GB/s read and 2.5GB/s write now costs as little as £100, which is about the price of 32GB of DDR4 and close to the peak transfer speed DDR2 had. Not accounting for latencies of course, but this is still an impressive gain.

        I hope it continues and lets us rework the concept of booting a computer as we currently know it: fully resident OSes and applications, with DRAM as only a cache, devices that turn off and on in a second, and the delays of bootup and shutdown a thing of the past.
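        The price comparison works out to roughly a 15x per-gigabyte gap — a back-of-the-envelope sketch using the £100 figures above (2020 example prices, not current market data):

```python
# Cost per gigabyte, using the post's example prices.
ssd_price, ssd_gb = 100.0, 500   # £100 for a 500GB M.2 NVMe SSD
ram_price, ram_gb = 100.0, 32    # £100 for 32GB of DDR4

ssd_per_gb = ssd_price / ssd_gb  # £0.20 per GB
ram_per_gb = ram_price / ram_gb  # £3.125 per GB

print(ram_per_gb / ssd_per_gb)   # 15.625 -> DRAM's per-GB premium
```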



        • #5
          Originally posted by Brane215 View Post

          Great. Bolting PCIe onto everything seems to be the popular way to solve every problem nowadays (USB3 etc.).
          Are there existing cards and devices that use them?
          Perhaps those new ass-kicking Canon mirrorless R5/R6?

          EDIT: Errm, no. The R5 & R6 have CFexpress, which is yet another standard that has capitulated by going the PCIe route.
          CFexpress looks almost the same as SD Express, barring physical and connector differences. At the higher brackets, both offer PCIe 4.0 x2 with 4 GB/s of bandwidth.
          Does that mean that new drivers will cover both standards?
          The photo and video industry has passed on SD Express. The original design put forth by the SD card organization was a panicked response to what was going on with CompactFlash; it wasn't what the industry wanted, and it was made with no input at all from the major players, who sat down and designed both CFast and CFexpress to replace SDHC/SDXC and their successor SD Express in professional and high-end consumer products. The recently updated design is more of the same, but it was at least intended to be competitive with CFexpress.

          SD Express might see some implementation in consumer devices down the line, simply because it is backwards compatible with SDXC, which is commonly found in laptops and desktops, making it useful for low-end photo and video cameras. Implementing native SD Express in laptops and desktops is a bit difficult because doing so requires dedicated PCIe lanes, but there will be external card reader designs that tunnel PCIe over USB/Thunderbolt.



          • #6
            Originally posted by sdack View Post
            This development keeps chipping away at the foundation of DRAM itself. DRAM looked fast when the only lower-level storage was spinning disks (which had abysmal latencies due to their mechanical nature), but with SSDs now hitting speeds of gigabytes per second, DRAM is coming under pressure as the main memory system sitting between massive on-die CPU caches and the SSDs.

            A 500GB M.2 SSD with speeds of 5GB/s read and 2.5GB/s write now costs as little as £100, which is about the price of 32GB of DDR4 and close to the peak transfer speed DDR2 had. Not accounting for latencies of course, but this is still an impressive gain.

            I hope it continues and lets us rework the concept of booting a computer as we currently know it: fully resident OSes and applications, with DRAM as only a cache, devices that turn off and on in a second, and the delays of bootup and shutdown a thing of the past.
            Check the access times. The whole point of DRAM is its relatively uniform access time. With NAND flash, access time depends heavily on the position relative to the previous access:
            it can be short for successive cells and MUCH longer when you need preceding data.
            There's also write endurance. A disk can take, IIRC, on the order of 10^18 writes to the same area without a problem;
            QLC flash can't take more than a few thousand.
            For a RAM replacement, something else is needed.
            Caching is already done to some extent through swap, be it on a spinning disk or an SSD, but that, as said, has its limitations.

            Besides, RAM is cheap. Usually there is no need to fully cache it. You just plop in as much as you need, and if you have an occasional spill, there is swap for that.
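            The latency gap described here spans several orders of magnitude — a rough sketch with typical published ballpark figures (illustrative numbers; actual values vary widely by device and access pattern):

```python
# Ballpark latencies in nanoseconds (illustrative, not measured).
latency_ns = {
    "DRAM access":       100,      # ~100 ns, roughly uniform
    "NAND page read":    50_000,   # tens of microseconds
    "NAND page program": 500_000,  # hundreds of microseconds
}

base = latency_ns["DRAM access"]
for name, ns in latency_ns.items():
    print(f"{name}: {ns // base}x DRAM")
```

            Even a best-case NAND read sits hundreds of times above a DRAM access, which is the 1:1000-style spread the post is getting at.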

            Last edited by Brane215; 07-25-2020, 05:45 PM.



            • #7
              Originally posted by Brane215 View Post
              Check access times. ...
              You don't seem to know what latency means, and you also didn't read the part where a 500GB SSD costs as much as 32GB of DRAM. Nor do you seem to get the benefit I was talking about.
              Last edited by sdack; 07-26-2020, 09:08 AM.



              • #8
                Originally posted by sdack View Post
                You don't seem to know what latency means, and you also didn't read the part where a 500GB SSD costs as much as 32GB of DRAM. Nor do you seem to get the benefit I was talking about.
                That in itself means nothing.
                An SSD is unusable as a RAM replacement, or for RAM caching in general.




                • #9
                  Originally posted by Brane215 View Post
                  That in itself means nothing.
                  An SSD is unusable as a RAM replacement, or for RAM caching in general.
                  For sure, even with SLC, the cells would be cycled out in no time. With the current crop of consumer grade TLC and QLC, a new SSD's life would be measured in mere days if used in place of system RAM.
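                  "Mere days" checks out arithmetically — a sketch assuming a 500GB QLC drive rated for about 1,000 program/erase cycles and a sustained DRAM-stand-in write rate of 2.5 GB/s (both assumed figures):

```python
# Endurance budget of a hypothetical 500GB consumer QLC drive
# if it were hammered like system RAM.
capacity_gb = 500
pe_cycles   = 1_000                        # assumed QLC P/E rating
write_gbps  = 2.5                          # assumed sustained write rate

total_writes_gb = capacity_gb * pe_cycles  # 500,000 GB of lifetime writes
seconds = total_writes_gb / write_gbps     # 200,000 seconds
print(seconds / 86_400)                    # ~2.3 days to wear out
```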



                  • #10
                    Originally posted by torsionbar28 View Post
                    For sure, even with SLC, the cells would be cycled out in no time. With the current crop of consumer grade TLC and QLC, a new SSD's life would be measured in mere days if used in place of system RAM.
                    It also would have:
                    - catastrophic write times (it has to write a whole page on each write access)
                    - catastrophic and poorly predictable access times (depending on the previous access), which could easily vary over a 1:1000 range, with the "1" still being much slower than a RAM access
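                    The "whole page on each write" point can be quantified — a sketch assuming a 16 KiB NAND page and cache-line-sized writes (both figures are illustrative):

```python
# Write amplification of RAM-style small writes on page-granular NAND.
page_bytes   = 16 * 1024  # assumed NAND page size
update_bytes = 64         # one CPU cache line

amplification = page_bytes / update_bytes
print(amplification)      # 256.0 bytes programmed per byte changed
```

                    Every 64-byte store would cost a full page program, which is what makes the endurance and latency numbers above so brutal.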

