Axboe Achieves 8M IOPS Per-Core With Newest Linux Optimization Patches
Originally posted by sdack: What you have is a case of whataboutism.
You clearly stated the following:
Originally posted by sdack: UNIX/Linux systems have always dominated the server market because of their persistency. No other OS could deliver the reliability, and thus the uptimes, that UNIX/Linux could.
Originally posted by sdack: UNIX/Linux beat the dominance of Microsoft's operating systems, because one cannot run a reliable service when every software update requires a reboot.
Originally posted by sdack: Other OSes did not manage to dominate not because they lacked persistency, but because they lacked other qualities that UNIX/Linux has in addition to its persistency.
Originally posted by sdack: As you may know, UNIX has also become unpopular, and it is now mostly only Linux.
Originally posted by coder: To put some numbers to it, I think Axboe said a single SSD could handle only 5.5M IOPS. If you put 30 of them on a single 64-core Epyc, that's just 165M IOPS worth of SSD capacity. At 8M IOPS per core, linear scaling would predict 512M IOPS. Of course, the server CPUs run at lower clock speeds and we know scaling won't be linear, but I also didn't count the SMT threads.
Of course, that's all very simplistic, but I think it's clear the CPU is still far ahead of storage, leaving plenty of cycles for the network stack and for userspace code to do interesting things with the data.
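The back-of-envelope comparison above can be checked in a few lines (a sketch using only the figures quoted in the thread; the 30-drive configuration and the assumption of linear per-core scaling are hypotheticals, not measurements):

```python
# Figures quoted in the thread; the 30-drive setup and linear core
# scaling are hypothetical assumptions for the estimate.
per_ssd_iops = 5.5e6    # single-SSD limit Axboe mentioned
num_ssds = 30           # hypothetical drives on one 64-core Epyc
per_core_iops = 8e6     # Axboe's per-core result
cores = 64

ssd_total = per_ssd_iops * num_ssds    # aggregate SSD capability
cpu_linear = per_core_iops * cores     # CPU ceiling if scaling were linear

print(f"SSDs: {ssd_total / 1e6:.0f}M IOPS")   # 165M
print(f"CPU:  {cpu_linear / 1e6:.0f}M IOPS")  # 512M
```

So even under these optimistic storage assumptions, the CPU side has roughly 3x headroom over what the drives can deliver.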
Originally posted by yump: 64 cores * 3 GHz / (165 MIOP/s) is a little over 1000 CPU cycles per I/O. That doesn't sound like much to me.
However, if they have some reason not to, then don't forget that these numbers only account for a single CPU: you could scale up to more CPUs. Looking ahead, CPUs could scale to more cores, and there's potential clock scaling, IPC improvements, DDR5, chip stacking (AMD's V-Cache, for instance), plus CPUs are continually adding tweaks like TSX or Intel's upcoming userspace interrupts, which could further optimize some otherwise-stubborn syscall overheads. So I wouldn't worry about CPUs running out of gas anytime soon.
And if that's still not enough compute, CXL's recently added support for memory devices will enable you to scale up to even more than 2 Epyc CPUs sharing a pool of nonvolatile memory.
As a matter of fact, it's really Optane that's running out of gas! Intel's 2nd-generation Optane has only managed 4 layers, while 3D NAND is now up to something like 384 layers?
According to this, Samsung is developing 5-layer DDR5 DRAM. I don't know how the areal density of DRAM compares with 3D XPoint, but it would be ironic if Optane lost even the density and GB/$ race to DDR5.
Last edited by coder; 18 October 2021, 01:35 AM.
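yump's cycles-per-I/O estimate works out as follows (a sketch of the same arithmetic; the 3 GHz server clock is the assumption from the quote, and 165M IOPS is the SSD-limited figure from earlier in the thread):

```python
cores = 64
clock_hz = 3e9       # assumed server clock speed from the quote
total_iops = 165e6   # SSD-limited aggregate figure from the thread

# Total cycle budget divided across every I/O in one second.
cycles_per_io = cores * clock_hz / total_iops
print(f"{cycles_per_io:.0f} cycles per I/O")  # ~1164, "a little over 1000"
```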
Originally posted by coder: Realistically, anyone doing anything like that amount of IOPS is probably going to use NVDIMMs and PMEMFILE.
Sure, desktops rarely need 2M IOPS, but games are often written for ease of programming rather than optimal I/O. Additionally, z-buffered 3D environments load objects as you run/fly/drive around, stream on-demand textures (in multiple resolutions), etc., generating large amounts of I/O. It might not be 10M IOPS, but having to dedicate 5% of a single core instead of 10% is a win. Doubly so if *gasp* you actually multitask while in games, maybe recording a video stream of the game or running anything else intensive. Even rather sedate games like MS Flight Sim can generate a fair bit of I/O.
On more mobile platforms running on battery, using 5-10% less power for I/O can be a noticeable savings.
Last edited by BillBroadley; 19 October 2021, 12:58 AM.
Originally posted by BillBroadley: Not really, even a semi-nice desktop these days might well have 2 SSDs.