7.4M IOPS Achieved Per-Core With Newest Linux Patches
-
Originally posted by onlyLinuxLuvUBack:
This with the newest AMD (if you can get one) and a super-expensive Optane Gen 2? The Intel drive probably costs almost as much as a car?
-
Originally posted by ermo:
In which kinds of real-world applications and workloads is it generally acknowledged that Optane hardware offers a boost so significant that it is worth the added cost?
Last edited by torsionbar28; 13 October 2021, 06:36 AM.
-
Jealous. Some folks are benchmarking 7 million I/O operations per second, while I was formatting my Fujitsu 0.000099 TB hard drive from 1991 last night. I am restoring a dead retro PC. Yes, my friends, it's a 104 MB hard drive. It took 20 minutes. 😁😁😁
-
Originally posted by torsionbar28:
When you say "added cost" it sounds like you're comparing Optane to SSD; however, the market for this product is not in displacing cheaper SSD, but in displacing more expensive DRAM. I.e., Optane targets enterprise workloads that would otherwise be run in RAM. In-memory databases are the obvious market, but I'm sure there are others. Optane costs half what DRAM does, while offering similar performance characteristics, plus it's non-volatile. Data analytics, telecom equipment, and mobile advertising networks are big consumers of in-memory databases, so I imagine they have a keen interest in Optane, if nothing else, for reducing cost vs. using DRAM.
Cheers, thanks for reframing it. It makes perfect sense when viewed like that. No wonder consumers haven't really picked up on it, as I don't necessarily see an obvious use case on the consumer end of things; but maybe that's just me being oblivious!
-
Originally posted by ermo:
Cheers, thanks for reframing it. It makes perfect sense when viewed like that. No wonder consumers haven't really picked up on it, as I don't necessarily see an obvious use case on the consumer end of things; but maybe that's just me being oblivious!
But it's still too expensive for consumers.
-
Originally posted by quaz0r:
Nobody seems to have the right mindset. When somebody re-engineers code to do something way faster and more efficiently than before, that means the previous implementation was doing it wrong.
Leaving that aside, the kernel has long managed to deliver good synchronous I/O performance via buffering, caching, and read-ahead optimizations. These were fine for sequential I/O, particularly when people were using HDDs capable of only a couple hundred IOPS, and even SATA SSDs with some tens of thousands of IOPS.
It's not until we reach NVMe drives (i.e. the NAND flash ones) capable of a couple hundred thousand IOPS that per-syscall overhead really starts to add up. If each syscall adds a couple microseconds of overhead, that's the point where optimizing some away is going to deliver measurable benefits. And that's effectively what io_uring does: it reduces the number of syscalls you potentially need to make per I/O operation.
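To make the "couple microseconds" argument concrete, here is a back-of-the-envelope sketch. The 2 µs per-syscall cost and the IOPS figures are illustrative assumptions chosen to match the ballpark numbers above, not measurements:

```python
SYSCALL_OVERHEAD_S = 2e-6  # assumed ~2 microseconds of syscall entry/exit cost

def syscall_cpu_fraction(iops, syscalls_per_io=1):
    """Fraction of one core spent purely on syscall overhead at a given IOPS rate."""
    return iops * syscalls_per_io * SYSCALL_OVERHEAD_S

# Illustrative device classes and IOPS figures (assumptions, not benchmarks)
for label, iops in [("HDD", 200), ("SATA SSD", 50_000), ("NVMe", 500_000)]:
    frac = syscall_cpu_fraction(iops)
    print(f"{label:>8}: {iops:>7} IOPS -> {frac:.1%} of a core in syscall overhead")
```

At HDD rates the overhead is negligible, but at a few hundred thousand IOPS the syscall entry/exit cost alone can consume a large fraction of a core, which is why batching submissions and completions (as io_uring does) pays off there.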
Originally posted by quaz0r:
If you one day discover a direct route to the grocery store, where before your route consisted of first driving 500 miles in the opposite direction and then driving in circles for a week,
-
Originally posted by bug77:
You want an obvious use case? Optane destroys SSDs in 4k random reads at QD1.
For example, professionals might be searching through GIS data or doing volumetric medical imaging on a dataset that's too big to fit in memory.
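For reference, "4k random reads at QD1" means issuing one 4 KiB read at a time and waiting for each to complete before the next. A minimal sketch of that access pattern (the temp file and read count are placeholders; against a page-cached file this measures the cache, not the device, so real benchmarking is usually done with fio and O_DIRECT):

```python
import os, random, statistics, tempfile, time

BLOCK = 4096       # 4 KiB reads
NUM_READS = 1000   # sample size; illustrative only

# Stand-in data file; point this at a file on the device under test.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * 256))
    path = f.name

fd = os.open(path, os.O_RDONLY)
size = os.fstat(fd).st_size
latencies = []
for _ in range(NUM_READS):
    # Queue depth 1: issue one read at a random aligned offset, wait, repeat.
    offset = random.randrange(size // BLOCK) * BLOCK
    t0 = time.perf_counter()
    data = os.pread(fd, BLOCK, offset)
    latencies.append(time.perf_counter() - t0)
    assert len(data) == BLOCK
os.close(fd)
os.remove(path)

print(f"median 4k QD1 read latency: {statistics.median(latencies) * 1e6:.1f} µs")
```

Because QD1 leaves no parallelism for the drive to hide latency behind, media latency dominates, which is exactly where Optane's low-latency media shines over NAND flash.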