7.4M IOPS Achieved Per-Core With Newest Linux Patches


  • #11
    Originally posted by quaz0r View Post
    when somebody re-engineers code to do something way faster and more efficient than before, that means the previous implementation was doing it wrong.
    It doesn't mean it was wrong to do it that way. Maybe it was a simpler solution that was good enough until someone had time to make it better. Or maybe it was designed for different hardware, and the new solution isn't better on that hardware.



    • #12
      Originally posted by onlyLinuxLuvUBack View Post
      This with newest amd(if you can get one) and a super expensive optane gen2 ? The intel drive probably costs almost a car ?
      Not at all. They are quite expensive (compared to regular NVMe drives) and difficult to get due to low stock, but they don't cost as much as a (good) car. I paid 1800€ for the 800GB P5800X.



      • #13
        In which kind of real-world applications and workloads is it generally acknowledged that Optane hardware offers a boost so significant that it is worth the added cost?



        • #14
          Originally posted by ermo View Post
          In which kind of real-world applications and workloads is it generally acknowledged that Optane hardware offers a boost so significant that it is worth the added cost?
          When you say "added cost", it sounds like you're comparing Optane to SSDs; however, the market for this product is not in displacing cheaper SSDs, but in displacing more expensive DRAM. I.e., Optane targets enterprise workloads that would otherwise be run in RAM. In-memory databases are the obvious market, but I'm sure there are others. Optane costs 1/2 what DRAM does, while offering similar performance characteristics, plus it's non-volatile. Data analytics, telecom equipment, and mobile advertising networks are big consumers of in-memory databases, so I imagine they have a keen interest in Optane, if nothing else for reducing cost vs. using DRAM.
          Last edited by torsionbar28; 13 October 2021, 06:36 AM.



          • #15
            Jealous. Some folks are benchmarking 7 million I/O operations per second, while I was formatting my Fujitsu 0.000099 TB hard drive from 1991 last night. I am restoring a dead retro PC. Yes, my friends, it's a 104 MB hard drive. It took 20 minutes. 😁😁😁



            • #16
              Originally posted by onlyLinuxLuvUBack View Post
              This with newest amd(if you can get one) and a super expensive optane gen2 ? The intel drive probably costs almost a car ?
              They are not expensive; we are just poor.



              • #17
                Originally posted by torsionbar28 View Post
                When you say "added cost", it sounds like you're comparing Optane to SSDs; however, the market for this product is not in displacing cheaper SSDs, but in displacing more expensive DRAM. I.e., Optane targets enterprise workloads that would otherwise be run in RAM. In-memory databases are the obvious market, but I'm sure there are others. Optane costs 1/2 what DRAM does, while offering similar performance characteristics, plus it's non-volatile. Data analytics, telecom equipment, and mobile advertising networks are big consumers of in-memory databases, so I imagine they have a keen interest in Optane, if nothing else for reducing cost vs. using DRAM.
                Cheers, thanks for reframing it. It makes perfect sense when viewed like that.

                No wonder consumers haven't really picked up on it as I don't necessarily see an obvious use case on the consumer end of things; but maybe that's just me being oblivious!



                • #18
                  Originally posted by ermo View Post

                  Cheers, thanks for reframing it. It makes perfect sense when viewed like that.

                  No wonder consumers haven't really picked up on it as I don't necessarily see an obvious use case on the consumer end of things; but maybe that's just me being oblivious!
                  You want an obvious use case? Optane destroys SSDs in 4k random reads at QD1.
                  But it's still too expensive for consumers.
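
                  For anyone wondering what that metric means in practice: QD1 is a single thread issuing one read at a time, so each 4K read waits for the previous one to finish and the drive's latency, not its parallelism, is what gets measured. Here's a rough sketch of my own (not from the article's benchmarks) that times such reads with O_DIRECT; the device path and sample count are placeholders, and O_DIRECT wants an aligned buffer.

                  Code:
                  /* Sketch: time 4 KiB random reads at queue depth 1 with O_DIRECT. */
                  #define _GNU_SOURCE
                  #include <fcntl.h>
                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <time.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      const size_t blk = 4096;
                      const long nreads = 10000;                      /* placeholder sample size */
                      void *buf;

                      if (posix_memalign(&buf, blk, blk))
                          return 1;

                      int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT); /* placeholder device */
                      if (fd < 0)
                          return 1;

                      off_t blocks = lseek(fd, 0, SEEK_END) / blk;
                      if (blocks <= 0)
                          return 1;

                      struct timespec t0, t1;
                      clock_gettime(CLOCK_MONOTONIC, &t0);
                      for (long i = 0; i < nreads; i++) {
                          /* QD1: the next read is not issued until this one returns. */
                          off_t off = (off_t)(rand() % blocks) * blk;
                          if (pread(fd, buf, blk, off) != (ssize_t)blk)
                              return 1;
                      }
                      clock_gettime(CLOCK_MONOTONIC, &t1);

                      double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                      printf("%.0f IOPS at QD1 (avg %.1f us per read)\n",
                             nreads / secs, 1e6 * secs / nreads);

                      close(fd);
                      free(buf);
                      return 0;
                  }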



                  • #19
                    Originally posted by quaz0r View Post
                    nobody seems to have the right mindset. when somebody re-engineers code to do something way faster and more efficient than before, that means the previous implementation was doing it wrong.
                    It wasn't wrong, per se. Granted, AIO sucked, because it had limited filesystem support and you had to use O_DIRECT, which is (usually) bad for a number of reasons we needn't go into.

                    Leaving that aside, the kernel managed to deliver good synchronous I/O performance via buffering, caching, and read-ahead optimizations. These were fine for sequential I/O, particularly when people were using HDDs capable of only a couple hundred IOPS, and even SATA SSDs with some tens of thousands of IOPS.

                    It's not until we reach NVMe drives (i.e. the NAND flash ones) capable of a couple hundred thousand IOPS that syscall overhead really starts to add up. If each syscall adds a couple of microseconds of overhead, that's the point where optimizing some of them away delivers measurable benefits. And that's effectively what io_uring does: it reduces the number of syscalls you need to make per I/O operation.
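
                    That's the batching idea in a nutshell: queue submissions in shared memory and hand a whole group to the kernel at once. As a rough illustration (my own sketch using liburing, not code from the patches; the file name, block size, and read count are arbitrary), the eight reads below are submitted with a single io_uring_enter() call, where the synchronous path would have paid one syscall per read. Build with -luring.

                    Code:
                    /* Sketch only: queue 8 reads, submit them with one syscall. */
                    #include <fcntl.h>
                    #include <liburing.h>
                    #include <stdlib.h>
                    #include <unistd.h>

                    #define NR_READS   8
                    #define BLOCK_SIZE 4096

                    int main(void)
                    {
                        struct io_uring ring;
                        int fd = open("testfile", O_RDONLY);        /* placeholder file */

                        if (fd < 0 || io_uring_queue_init(NR_READS, &ring, 0) < 0)
                            return 1;

                        /* Queue the reads in the submission ring; no syscalls yet. */
                        for (int i = 0; i < NR_READS; i++) {
                            void *buf = malloc(BLOCK_SIZE);
                            struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
                            io_uring_prep_read(sqe, fd, buf, BLOCK_SIZE,
                                               (off_t)i * BLOCK_SIZE);
                        }

                        io_uring_submit(&ring);       /* one syscall submits all 8 reads */

                        /* Reap the completions. */
                        for (int i = 0; i < NR_READS; i++) {
                            struct io_uring_cqe *cqe;
                            io_uring_wait_cqe(&ring, &cqe);
                            io_uring_cqe_seen(&ring, cqe);
                        }

                        io_uring_queue_exit(&ring);
                        close(fd);
                        return 0;
                    }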

                    Originally posted by quaz0r View Post
                    if you one day discover a direct route to the grocery store, where before your route consisted of first driving 500 miles in the opposite direction and then driving in circles for a week,
                    An apt analogy would be that when the only means of travel between continents was by boat, having to stop at the destination country's embassy and obtain a visa wasn't a major overhead. However, when you can take a direct flight on a jet plane, an embassy or consulate visit would be a significant overhead. So, the optimization of getting a visa online or even simply being able to travel with just your passport is a major win.



                    • #20
                      Originally posted by bug77 View Post
                      You want an obvious use case? Optane destroys SSDs in 4k random reads at QD1.
                      That's a performance metric, not a use case. A use case is an example of the sort of task a user would perform that noticeably benefits from high random 4K IOPS at low queue depth. Reboots would be one such example. That's about the only thing that comes to mind where a normal user could probably observe a performance improvement.

                      Examples of things professionals might do include searching through GIS data or maybe volumetric medical imaging on a dataset that's too big to fit in memory.

