The Linux Kernel Begins Preparing Support For SD Express Cards


  • #11
    Originally posted by Brane215 View Post
    That in itself means nothing.
    SSD is unusable for RAM replacement or RAM caching in general.
    So you didn't read where I wrote to use DRAM as a cache, either.

    Are you dyslexic?

    Comment


    • #12
      Originally posted by Brane215 View Post
      great. Bolting PCIe onto everything seems to be a popular way to solve every problem nowadays (USB3 etc).
      NSA needs an easy way to access DMA for attacks.

      Comment


      • #13
        Originally posted by sdack View Post
        This development keeps picking away at the foundation of DRAM itself. DRAM looked fast when the only lower-level storage was spinning disks (which had abysmal latencies due to their mechanical nature), but with SSDs now hitting speeds of gigabytes per second, DRAM is coming under pressure as the main memory system, sitting between massive on-die CPU caches and the SSDs.

        A 500GB M.2 SSD with speeds of 5GB/s read and 2.5GB/s write now costs as little as £100, which is about the price of 32GB of DDR4 and close to the peak transfer speed DDR2 had. Not accounting for latencies of course, but this is still an impressive gain.

        I hope it continues and allows us to rework the concept of booting a computer as we currently know it, and enables us to have fully resident OSes and applications, where DRAM is only a cache, and devices can be turned off and on in a second, and the delays caused by bootup and shutdown become a thing of the past.
        Yeah I for sure look forward to seeing developers rewrite core parts of their applications. I like the smell of suffering

        Comment


        • #14
          Originally posted by starshipeleven View Post
          Yeah I for sure look forward to seeing developers rewrite core parts of their applications. I like the smell of suffering
           Well, if it's all about bandwidth - i.e. how much data you can move in a second, as sdack sees it - he can have a winning combo even today: an M.2 on a vibrator.

          Comment


          • #15
            Originally posted by Brane215 View Post
             Well, if it's all about bandwidth - i.e. how much data you can move in a second, as sdack sees it -
             No, he is talking about re-engineering software to keep its executable code in non-volatile memory and use DRAM only as a cache, instead of throwing everything into DRAM and running from that alone.

            Comment


            • #16
              Originally posted by starshipeleven View Post
              No he is talking about re-engineering software to use non-volatile memory for their executable code and DRAM only for cache, instead of throwing everything into DRAM and running from that only.
               But he is using bandwidth as the only criterion. NAND flash has a critical drawback for this application (besides write endurance): the NAND cell is optimized for density, and as a consequence it can't be addressed the way a NOR cell can. You can only access cells sequentially (or reset a cell to zero, IIRC).

               Which means that your access time would depend strongly on the previously accessed location, by a ratio of about 1:1000 or so.

               In the upper levels of the cache hierarchy you usually see around a 1:10 access-time ratio, which is pretty constant.

               WRT running code from flash, I think this can already be done. I've seen it somewhere (XIP - eXecute In Place?).

               Running from NAND flash, at first glance, shouldn't be that hard. One would just need the ability to see a selected bunch of NAND flash pages somewhere in the memory map, and if access itself is paged (so that one sees only a subset of the device's pages), the VM subsystem should be able to lick its finger and flip the page as needed...
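The "VM subsystem flips the page as needed" idea can be sketched as a toy demand pager: a small "DRAM" cache holds a few pages, and any access to a page not in the cache triggers a page-in from the slower backing store. This is an illustrative simulation only (the class and names are invented for the example, not real kernel machinery):

```python
# Toy demand-paging sketch (illustrative, not real XIP): a small
# "DRAM" cache in front of a NAND-like backing store. A miss on a
# page triggers a page-in, as the VM subsystem would on a fault.
from collections import OrderedDict

PAGE_SIZE = 4096

class PagedFlashView:
    def __init__(self, backing: bytes, cache_pages: int = 4):
        self.backing = backing
        self.cache = OrderedDict()   # page number -> page bytes (LRU order)
        self.cache_pages = cache_pages
        self.faults = 0              # page-ins, i.e. reads from "flash"

    def _page_in(self, pno: int) -> bytes:
        if pno in self.cache:
            self.cache.move_to_end(pno)      # mark most recently used
            return self.cache[pno]
        self.faults += 1
        start = pno * PAGE_SIZE
        page = self.backing[start:start + PAGE_SIZE]
        if len(self.cache) >= self.cache_pages:
            self.cache.popitem(last=False)   # evict least recently used
        self.cache[pno] = page
        return page

    def read(self, offset: int, length: int) -> bytes:
        out = bytearray()
        while length > 0:
            pno, poff = divmod(offset, PAGE_SIZE)
            page = self._page_in(pno)
            chunk = page[poff:poff + length]
            out += chunk
            offset += len(chunk)
            length -= len(chunk)
        return bytes(out)

backing = bytes(range(256)) * 256            # 64 KiB of "flash"
view = PagedFlashView(backing, cache_pages=2)
view.read(0, 16); view.read(0, 16)           # second read hits the cache
view.read(5 * PAGE_SIZE, 16)                 # different page -> new fault
print(view.faults)  # 2
```

The non-uniform access time mentioned above shows up here as the gap between a cache hit (no fault) and a page-in (one full flash-page read for a 16-byte access).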


              Comment


              • #17
                Originally posted by Brane215 View Post
                But he is using bandwidth as the only criterion.
                No one is using raw NAND anywhere performance matters.

                SSDs have a controller providing an FTL, an abstraction FAR more complex than faking byte/word addressability so the device could be executed in place like NOR.
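The core trick of an FTL can be sketched in a few lines: because NAND pages can't be overwritten in place, every logical write lands on a fresh physical page and a mapping table redirects reads. This is a deliberately minimal sketch (no garbage collection or wear leveling; the class name is invented for the example):

```python
# Minimal FTL sketch (illustrative, not a real controller): logical
# block writes go out-of-place to a fresh physical page, and a
# logical-to-physical map hides NAND's no-overwrite restriction.
class TinyFTL:
    def __init__(self, n_pages: int):
        self.flash = [None] * n_pages   # physical pages; None = erased/stale
        self.l2p = {}                   # logical block addr -> physical page
        self.next_free = 0

    def write(self, lba: int, data: bytes):
        # NAND pages can't be rewritten in place: allocate a new page
        ppn = self.next_free
        self.next_free += 1
        self.flash[ppn] = data
        old = self.l2p.get(lba)
        self.l2p[lba] = ppn
        if old is not None:
            self.flash[old] = None      # old copy is stale (GC erases later)

    def read(self, lba: int) -> bytes:
        return self.flash[self.l2p[lba]]

ftl = TinyFTL(n_pages=8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")    # rewrite lands on a different physical page
print(ftl.read(0))     # b'v2'
print(ftl.l2p[0])      # 1  (remapped, not overwritten in place)
```

A real FTL adds garbage collection, wear leveling, and power-loss recovery on top of this remapping, which is where the "FAR more complex" part comes in.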

                Comment


                • #18
                  Originally posted by starshipeleven View Post
                  No one is using raw NAND anywhere performance matters.

                  SSDs have a controller providing an FTL, an abstraction FAR more complex than faking byte/word addressability so the device could be executed in place like NOR.
                  None of those tricks are applicable here, and certainly not to a level that would make this kind of approach a game changer.
                  For this, a new kind of memory would be needed. And, while at it, perhaps an architectural change could be made - like memory with limited processing onboard (page-wide vector operations, fast pattern search, etc.).
                  It all depends on what the memory cell is.

                  As I understand it, the process tweaks used for DRAM are not optimal for such logic, so DRAM is kept simple: a mass of vertical-cavity caps with little else...



                  Comment


                  • #19
                    Originally posted by Brane215 View Post
                    None of those tricks are applicable here, and certainly not to a level that would make this kind of approach a game changer.
                    Modern SSDs are orders of magnitude faster than any kind of SPI flash (the kinds of flash you can execute in place), and making them addressable by byte/word isn't harder than emulating a block device on NAND, which is what they do at the moment.

                    For this, a new kind of memory would be needed.
                    Already done years ago: SSD-based DIMMs are called NVDIMMs, and it's also a specification https://en.wikipedia.org/wiki/NVDIMM

                    Comment


                    • #20
                      Originally posted by starshipeleven View Post
                      Modern SSDs are orders of magnitude faster than any kind of SPI flash (the kinds of flash you can execute in place), and making them addressable by byte/word isn't harder than emulating a block device on NAND, which is what they do at the moment.

                      already done years ago, SSD-based DIMMs are called NVDIMM and it's also a specification https://en.wikipedia.org/wiki/NVDIMM
                      NVDIMMs were to be based on Intel's 3D XPoint, which didn't quite take off as planned, since technically it fell far short of its stated goals.

                      Also, an SSD can't effectively emulate byte addressability 1:1 without giant overhead. There is no way to change, say, 16 bytes in a 4KiB page without rewriting it, one way or another. Yes, you can do CoW, but that has painful limits, and it still doesn't solve the wear-and-tear problem, nor the write-time problem. There is also the access-time problem, which is on average far worse than 10x slower than RAM _and_ is far from uniform.
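The 16-bytes-in-a-4KiB-page point above is the classic read-modify-write amplification, which a tiny sketch can make concrete (the function is invented for the example; a real device also pays the read and an erase on top):

```python
# Sketch of read-modify-write cost on a page-granular device:
# updating 16 bytes still forces a whole 4 KiB page program,
# a ~256x write amplification for that update.
PAGE_SIZE = 4096

def patch_page(page: bytes, offset: int, new_bytes: bytes):
    """Return (updated_page, bytes_physically_written)."""
    assert offset + len(new_bytes) <= PAGE_SIZE
    updated = page[:offset] + new_bytes + page[offset + len(new_bytes):]
    # The device can only program whole pages, so the physical write
    # size is PAGE_SIZE regardless of how few bytes changed.
    return updated, PAGE_SIZE

page = bytes(PAGE_SIZE)                      # an all-zero 4 KiB page
updated, written = patch_page(page, 100, b"X" * 16)
print(written)        # 4096
print(written // 16)  # 256  -> amplification factor for a 16-byte edit
```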

                      Comment
