7.4M IOPS Achieved Per-Core With Newest Linux Patches

  • #31
    Originally posted by quaz0r View Post
    When somebody re-engineers code to do something way faster and more efficient than before, that means the previous implementation was doing it wrong. The new thing can be great and brilliant and cause for celebration and all, but it still also means the previous thing was doing it wrong, and I think we do ourselves as programmers a disservice by never acknowledging that. If you one day discover a direct route to the grocery store, where before your route consisted of first driving 500 miles in the opposite direction and then driving in circles for a week, it's not so much that you engineered a brilliant new path, it's that the previous thing was doing it wrong.
    There's something else that occurred to me: you seem to be suggesting io_uring is simpler, which it definitely is not. Not in its implementation, and certainly not in its usage. That's yet another reason I don't consider legacy I/O APIs to be "wrong".
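    To illustrate the usage gap, here's a minimal sketch (mine, not from the thread; it assumes liburing is installed and uses a placeholder path /tmp/test.dat) of a single 4K read done first through the classic blocking API and then through io_uring:

    /* Build with: gcc read_demo.c -o read_demo -luring */
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BUF_SZ 4096

    int main(void)
    {
        char buf[BUF_SZ];
        int fd = open("/tmp/test.dat", O_RDONLY);   /* placeholder path */
        if (fd < 0) { perror("open"); return 1; }

        /* Legacy route: one blocking call and you have the data. */
        ssize_t n = pread(fd, buf, BUF_SZ, 0);
        if (n < 0) { perror("pread"); return 1; }
        printf("pread: %zd bytes\n", n);

        /* io_uring route: set up a ring, fill an SQE, submit, reap the CQE. */
        struct io_uring ring;
        int ret = io_uring_queue_init(8, &ring, 0);
        if (ret < 0) { fprintf(stderr, "queue_init: %s\n", strerror(-ret)); return 1; }

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, BUF_SZ, 0);
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        ret = io_uring_wait_cqe(&ring, &cqe);
        if (ret < 0) { fprintf(stderr, "wait_cqe: %s\n", strerror(-ret)); return 1; }
        printf("io_uring: %d bytes\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
    }

    Add queue-depth management, registered buffers, and completion bookkeeping on top of that and the gap only widens; the payoff of io_uring is batching and fewer syscalls, not simplicity.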



    • #32
      Originally posted by bug77 View Post
      And to actually provide a use case: compiling a program is almost exclusively about 4k random access. Imagine if incremental compiling in the background suddenly became feasible. It would make writing compiled code feel almost like scripting.
      Compiling is easily parallelizable, which hides read latency, and usually everything your compiler reads is already in the page cache. In other words, while you will probably suffer when compiling on an HDD, it would be very hard to measure the difference between Optane and any decent SSD.
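      As a rough sanity check of that claim, here's a small sketch (my illustration, not from the thread) that maps a file and asks the kernel, via mincore(), how many of its pages are already resident in the page cache; on a warm source tree the answer is usually all of them:

      /* Usage: ./incore some_source_file.cpp */
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(int argc, char **argv)
      {
          if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

          int fd = open(argv[1], O_RDONLY);
          if (fd < 0) { perror("open"); return 1; }

          struct stat st;
          if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
          if (st.st_size == 0) { printf("empty file\n"); return 0; }

          /* Map the file without touching the data, then query residency. */
          void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
          if (map == MAP_FAILED) { perror("mmap"); return 1; }

          long page = sysconf(_SC_PAGESIZE);
          size_t pages = (st.st_size + page - 1) / page;
          unsigned char *vec = malloc(pages);
          if (!vec || mincore(map, st.st_size, vec) < 0) { perror("mincore"); return 1; }

          size_t resident = 0;
          for (size_t i = 0; i < pages; i++)
              resident += vec[i] & 1;
          printf("%zu of %zu pages already in page cache\n", resident, pages);

          free(vec);
          munmap(map, st.st_size);
          close(fd);
          return 0;
      }

      If that count sits at or near 100% for the files a rebuild touches, the storage medium barely matters, which is the point above.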



      • #33
        Originally posted by coder View Post
        I'm guessing that has something to do with why Intel has yet to release any Gen2 Optane devices for consumers.
        IIRC, Intel decided to kill consumer Optane, so don't hold your breath.



        • #34
          Originally posted by pal666 View Post
          Compiling is easily parallelizable, which hides read latency.
          Even doing parallel builds on HDDs, about a decade ago, I didn't see any real need or benefit from running more jobs than hardware threads. Disk cache & write buffering seem to do a very good job of alleviating disk bottlenecks. Of course, I was nearly always building C++ code with -O2, so my experiences could differ from someone doing kernel builds, for instance.

          Again, linking could be a different story, depending on whether all of the input files were still in disk cache. This comes down to a question of how much RAM you have vs. the size of the codebase you're building.
          Last edited by coder; 17 October 2021, 02:08 PM.



          • #35
            Originally posted by nils_ View Post
            Not at all, they are quite expensive (compared to regular NVMe drives) and difficult to get due to low stock, but they don't cost as much as a (good) car. I paid 1800€ for the 800GB P5800X.
            I just ran across some "lightly used" 400GB P5800X on eBay for $625 (buy-it-now):



            Note they're 2.5" U.2 drives, so you'll need a cable kit to use them.

