7.4M IOPS Achieved Per-Core With Newest Linux Patches

  • #21
    Originally posted by piorunz View Post
    Last night I was formatting my Fujitsu 0.000099 TB hard drive from 1991; I am restoring a dead retro PC. Yes, my friends, it's a 104 MB hard drive. It took 20 minutes. 😁😁😁
    I got a couple of PATA SSDs, probably about 8 years ago. Maybe you can still find them on eBay? I don't know if they'll work with any PATA controller card with an ISA or EISA interface, though. If your PC had a PCI slot, then you'd be in luck. You can even find PCI-based SATA controller cards! I think I got a free one included with an early SATA HDD, in fact.


    • #22
      Originally posted by torsionbar28 View Post
      Optane costs 1/2 what DRAM does, while offering similar performance characteristics,
      DRAM is an order of magnitude lower-latency and has several orders of magnitude better write endurance.

      Also, the SSD he used is a PCIe card, not an NVDIMM. That's what people doing big-time in-memory databases will probably be using.


      • #23
        Originally posted by nils_ View Post
        I paid 1800€ for the 800GB P5800X.
        For what purpose?

        I was looking at Optane drives, but the 800p was only PCIe 3.0 x2. I thought it would make a good boot drive, and prices weren't too bad a couple years ago. I decided to pass, hoping they'd replace it with a faster version, but instead they seem to be mostly withdrawing Optane from the consumer market.


        • #24
          Originally posted by coder View Post
          I got a couple of PATA SSDs, probably about 8 years ago. Maybe you can still find them on eBay? I don't know if they'll work with any PATA controller card with an ISA or EISA interface, though. If your PC had a PCI slot, then you'd be in luck. You can even find PCI-based SATA controller cards! I think I got a free one included with an early SATA HDD, in fact.
          Yes, I am aware I can upgrade this PC. The HDD will be tricky, though, because the BIOS supports only CHS HDDs: you have to enter the number of cylinders, heads, sectors and the landing zone by hand. That most likely means no booting from a SATA drive, unless the controller can present itself as a 2 GB CHS drive to the BIOS, which I doubt it will.
          But that's not the point; I have, or can acquire, plenty of hardware. I want to rebuild it as it was, 1991 style: 104 MB HDD, no CD, Windows 95 installed via the floppy-to-HDD trick, 4 MB of RAM, 1 MB VRAM. It will be rebuilt as it was and resold as a retro PC with an 80386 CPU.
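
          For anyone who never had to type a drive geometry into one of those BIOS setup screens: the capacity is just cylinders × heads × sectors per track × sector size (the landing zone doesn't affect capacity). A quick Python sketch, with placeholder geometry rather than the Fujitsu's actual figures:

          Code:
          # Capacity implied by a CHS geometry entered in an old BIOS setup screen.
          # The geometry below is a placeholder; use the figures from the drive label.
          cylinders, heads, sectors_per_track = 1024, 16, 63   # hypothetical values
          sector_bytes = 512
          capacity = cylinders * heads * sectors_per_track * sector_bytes
          print(f"{capacity:,} bytes = {capacity / 1000**2:.0f} MB = {capacity / 1024**2:.0f} MiB")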


          • #25
            Originally posted by coder View Post
            For what purpose?

            I was looking at Optane drives, but the 800p was only PCIe 3.0 x2. I thought it would make a good boot drive, and prices weren't too bad a couple years ago. I decided to pass, hoping they'd replace it with a faster version, but instead they seem to be mostly withdrawing Optane from the consumer market.
            The consumer drives aren't particularly useful and haven't been refreshed for PCIe 4.0. I'm using the DC edition mainly as a test platform for high-performance databases.


            • #26
              Originally posted by coder View Post
              That's a performance metric, not a use case. A use case is an example of what sort of tasks a user would perform that would noticeably benefit from high sequential, random 4k IOPS. Reboots would be one such example. That's about the only thing a normal user would do that comes to mind, where they could probably observe a performance improvement.

              Examples of things professionals might do could involve searching through GIS data or maybe volumetric medical imaging on a dataset that's too big to fit in memory.
              For the sake of argument, say you use virtual machines on a workstation as part of your $DAYJOB. Would taking regular snapshots of VM state on the fly, for a high degree of data protection, be a use case where Optane would offer a noticeable benefit over NVMe SSDs, as perceived by the user?

              BTW, Intel mentions that for a specific workload including medical imaging and analysis at an Italian university, Optane cut the necessary analysis time from 40 minutes to just 2 minutes (source). So that tracks with your assertion.
              Last edited by ermo; 15 October 2021, 09:38 AM.


              • #27
                Originally posted by coder View Post
                That's a performance metric, not a use case. A use case is an example of what sort of tasks a user would perform that would noticeably benefit from high sequential, random 4k IOPS. Reboots would be one such example. That's about the only thing a normal user would do that comes to mind, where they could probably observe a performance improvement.

                Examples of things professionals might do could involve searching through GIS data or maybe volumetric medical imaging on a dataset that's too big to fit in memory.
                All things you do on a computer need 4k random reads (or maybe writes). Most of these are linked to (but not limited to) reading (and saving) state/config.
                4k random access is not always the bottleneck, but improving 4k performance is like improving the 99th percentile for a game's fps.

                And to actually provide a use case: compiling a program is almost exclusively about 4k random access. Imagine if incremental compilation in the background suddenly became feasible. It would make writing compiled code feel almost like scripting.
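
                If you want to see what QD=1 4k random reads look like on your own machine, here's a minimal Python sketch (fio is the proper tool; the file name and duration are placeholders, and the page cache will flatter the numbers unless the file is much larger than RAM or you drop caches first):

                Code:
                # Crude QD=1 4 KiB random-read probe. Illustrative only; results include
                # page-cache hits unless the file is much larger than RAM.
                import os, random, time

                PATH = "testfile.bin"   # placeholder: any large pre-existing file
                BLOCK = 4096
                DURATION = 5.0          # seconds to run

                fd = os.open(PATH, os.O_RDONLY)
                blocks = os.fstat(fd).st_size // BLOCK
                reads = 0
                start = time.monotonic()
                while time.monotonic() - start < DURATION:
                    # one synchronous 4 KiB read at a random offset = queue depth 1
                    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
                    reads += 1
                os.close(fd)
                print(f"{reads / (time.monotonic() - start):,.0f} 4k reads/s at QD=1")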


                • #28
                  Originally posted by nils_ View Post
                  The consumer drives aren't particularly useful
                  I didn't mean for your purposes. They did have good QD=1 IOPS numbers, even if their sequential throughput wasn't above leading NAND-based competitors of their day.

                  Originally posted by nils_ View Post
                  and haven't been refreshed for PCIe 4.0.
                  Well, we can still hope, I suppose.


                  • #29
                    Originally posted by ermo View Post
                    For the sake of argument, say you use virtual machines on a workstation as part of your $DAYJOB. Would taking regular snapshots of VM state on the fly, for a high degree of data protection, be a use case where Optane would offer a noticeable benefit over NVMe SSDs, as perceived by the user?
                    Copying entire images should be a mostly sequential operation, which NAND SSDs can handle quite well. I'd just check that they can sustain write throughput for at least the size of your images. Write performance usually falls off a cliff after a certain point, once their pseudo-SLC buffers are exhausted. You can read a bit about that here:

                    https://www.storagereview.com/review...vme-ssd-review

                    You'll definitely want to steer clear of QLC drives. There aren't any consumer-grade MLC options, AFAIK. I think everything is now TLC or QLC.
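
                    To put toy numbers on the "falls off a cliff" point, here's a rough Python sketch; every figure in it is made up for illustration, not taken from any drive's spec sheet:

                    Code:
                    # Toy model of a pseudo-SLC write cache: fast until the cache fills,
                    # then direct-to-TLC speed. All numbers are hypothetical.
                    cache_gb      = 100    # assumed pSLC cache size
                    fast_gb_per_s = 5.0    # assumed speed while the cache has room
                    slow_gb_per_s = 1.5    # assumed speed once the cache is full
                    transfer_gb   = 400    # assumed size of the VM image being copied

                    seconds_fast = min(transfer_gb, cache_gb) / fast_gb_per_s
                    seconds_slow = max(transfer_gb - cache_gb, 0) / slow_gb_per_s
                    total = seconds_fast + seconds_slow
                    print(f"{total:.0f} s total, {transfer_gb / total:.2f} GB/s average")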

                    Originally posted by ermo View Post
                    BTW, Intel mentions that for a specific workload including medical imaging ... So that tracks with your assertion.
                    I'd characterize it as a mere supposition, but thanks for sharing the link.
                    Last edited by coder; 15 October 2021, 10:08 PM.


                    • #30
                      Originally posted by bug77 View Post
                      All things you do on a computer need 4k random reads (or maybe writes). Most of these are linked to (but not limited to) reading (and saving) state/config.
                      4k random access is not always the bottleneck, but improving 4k performance is like improving the 99th percentile for a game's fps.
                      For reads, caching and read-ahead work very well. For those times when they don't, the latency of regular SSDs is good enough that it almost doesn't matter.

                      As far as writes go, write buffering is pretty powerful stuff. That and caching are the main reasons PCs with modern operating systems were quite usable with HDDs.
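
                      If you want to see how much work the write buffer does, here's a small Python sketch that times the same stream of 4 KiB writes with and without an fsync() after each block (the path and the block count are placeholders):

                      Code:
                      # Buffered writes vs. writes forced to stable storage with fsync().
                      # Path and count are placeholders; run on a scratch filesystem.
                      import os, time

                      PATH = "writetest.bin"   # hypothetical scratch file
                      BLOCK = b"\0" * 4096
                      COUNT = 10_000

                      def run(sync):
                          fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
                          start = time.monotonic()
                          for _ in range(COUNT):
                              os.write(fd, BLOCK)
                              if sync:
                                  os.fsync(fd)   # force each 4 KiB block to the device
                          os.close(fd)
                          return time.monotonic() - start

                      print(f"buffered: {run(False):.2f} s, fsync per block: {run(True):.2f} s")
                      os.remove(PATH)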

                      Regarding the analogy with 99th percentile gaming framerates, it's not a bad one but also kind of pointless. For games, the reason 99th percentile matters is that it's a realtime application. When the framerate drops, it's very noticeable. Even if it happens just a couple times in a session, it's still enough to potentially cause problems for the player. Though it might not often cause them to get killed or miss a shot, it could happen at inopportune times that really spike their stress levels, and that makes it very memorable and a problem worth trying to solve.

                      However, when we're talking about run-of-the-mill computer usage, 99th percentile stuff is likely to go unnoticed and doesn't really matter, because hey it's less than 1% of the time. And a lot of that is going to be when the user isn't expecting an immediate response, anyhow.

                      Now, I think we can imagine there's some barely perceptible improvement in startup times of big apps. That would be consistent with initial reviews I read of Intel's 900p and 905p consumer Optane drives, at least. So, I'm not trying to say Optane is completely irrelevant for consumers, but it's generally not even close to justifying the price delta. I'm guessing that has something to do with why Intel has yet to release any Gen2 Optane devices for consumers.

                      Originally posted by bug77 View Post
                      And to actually provide a use case: compiling a program is almost exclusively about 4k random access.
                      This is BS. I've been messing around with parallel and distributed builds since 2005, far back into the HDD era. At that time, I was even doing builds on NFS mounts. If ever disk I/O should've been a bottleneck, that was it. Yet, only during linking did I sometimes see the disk I/O bottleneck actually bite. Again, caching and write buffering do a tremendous job of hiding the raw latency and media transfer performance of the underlying storage device.

                      Originally posted by bug77 View Post
                      Imagine if incremental compilation in the background suddenly became feasible. It would make writing compiled code feel almost like scripting.
                      I wouldn't know from experience, but I think there are some IDEs that have done that for quite a while. At least, to the degree of telling you when you have a syntax error or have referenced a nonexistent symbol.
                      Last edited by coder; 16 October 2021, 10:29 AM.
