Fresh Take On Linux Uncached Buffered I/O "RWF_UNCACHED" Nets 65~75% Improvement


  • Svyatko
    replied
    Originally posted by coder View Post
    The standard stipulates 6 Gbps, however the fastest SATA SSDs seem to top out around 550 MB/s in sequential reads.


    Third-generation SATA interfaces run with a native transfer rate of 6.0 Gbit/s; taking 8b/10b encoding into account, the maximum uncoded transfer rate is 4.8 Gbit/s (600 MB/s).

    Originally posted by coder View Post

    Eh, PCIe 5-based consumer SSDs burn quite a lot of power, though. I doubt any will sustain those speeds, either. And a mixed workload (read+write) is still going to be bottlenecked by the underlying NAND chips, which can either read or write, but not both simultaneously.
    The drive controller can read data from a NAND chip on one channel while writing data to another chip on another channel.
    Also, a 2 TB NVMe SSD can use 2-4 GiB of onboard DRAM cache, combining reads from and writes to that cache alongside the NAND chips and the SLC cache on the NAND.
    Last edited by Svyatko; 24 November 2024, 04:00 PM.



  • coder
    replied
    Originally posted by Svyatko View Post
    SATA provides 600 MB/s = 600 megabytes/s.
    The standard stipulates 6 Gbps, however the fastest SATA SSDs seem to top out around 550 MB/s in sequential reads.

    Originally posted by Svyatko View Post
    For PCIe 5 consumer SSDs we have 4 lanes * 4 GB/s = 16 GB/s in one direction; it is possible to transfer data up and down simultaneously.
    Eh, PCIe 5-based consumer SSDs burn quite a lot of power, though. I doubt any will sustain those speeds, either. And a mixed workload (read+write) is still going to be bottlenecked by the underlying NAND chips, which can either read or write, but not both simultaneously.



  • Svyatko
    replied
    Originally posted by erniv2 View Post

    These days the average user has an SSD that can deliver 500 mbits even on SATA, so you'd see an increase of about 25 mbits as a conservative estimate, not to mention NVMe drives that run at 3 gbits, or PCIe 4 drives at 5 gbits; at 5 gbits that already means a gain of roughly 250 mbits.

    Ah, that's a general assumption; of course you can't exceed the bus speeds: SATA 600 mbit, PCIe 3 x4 4 gbits, PCIe 4 x4 8 gbits.
    Bytes, not bits.
    SATA provides 600 MB/s = 600 megabytes/s.
    For PCIe 5 consumer SSDs we have 4 lanes * 4 GB/s = 16 GB/s in one direction; it is possible to transfer data up and down simultaneously.



  • coder
    replied
    Originally posted by Soul_keeper View Post
    It would be interesting to see a phoronix benchmark comparison with these patches.
    I think it requires a code modification to actually use his new method. Unless you can just 1:1 replace code using O_DIRECT with this new flag, I doubt there will be anything to benchmark until the patches land and a few intrepid developers of I/O-heavy packages decide to start using it.

    Originally posted by erniv2 View Post
    And yes, it will benefit the average user; the improvement will probably be in the 3-5% range, but it helps.
    It's highly dependent on using software that wants uncached I/O. The main use cases for this are media streaming and things like databases that do their own caching. Normal userspace code shouldn't use this, just as it doesn't currently use O_DIRECT.

    Yes, O_DIRECT is more painful and annoying, which is the point of this new flag. However, most packages that stand to benefit from uncached I/O are already using that method.

    I wonder if this new flag has a performance benefit vs. O_DIRECT, or if its main benefits mostly fall in the category of ease-of-use and fewer caveats. I think it's telling that he didn't benchmark it against O_DIRECT.
    Last edited by coder; 10 November 2024, 09:23 AM.



  • Anon'ym'
    replied
    Originally posted by User29 View Post

    You mean desktop user? Probably yes.
    But if you are a customer using some services, you'd be happy to see their speed boosted.
    It will not. Maybe you will get even more telemetry.



  • Anon'ym'
    replied
    Originally posted by erniv2 View Post
    And yes, it will benefit the average user; the improvement will probably be in the 3-5% range, but it helps.

    These days the average user has an SSD that can deliver 500 mbits even on SATA, so you'd see an increase of about 25 mbits as a conservative estimate, not to mention NVMe drives that run at 3 gbits, or PCIe 4 drives at 5 gbits; at 5 gbits that already means a gain of roughly 250 mbits.
    Only if the application uses these specific flags, which devs don't even know exist?



  • aviallon
    replied
    Originally posted by enigmaxg2 View Post

    So, probably marginal and negligible gains for the average user.
    I hit the exact same problem on my hypervisors, which is why every VM disk is configured to bypass the host's page cache (even though caching could improve performance a lot).
    This would mean a lot of free performance for me. Especially since this is self-hosting, and I do not have much money lol.



  • ptrwis
    replied
    I can imagine Postgres as the first client for it



  • User29
    replied
    Originally posted by enigmaxg2 View Post

    So, probably marginal and negligible gains for the average user.
    You mean desktop user? Probably yes.
    But if you are a customer using some services, you'd be happy to see their speed boosted.



  • erniv2
    replied
    Originally posted by enigmaxg2 View Post

    So, probably marginal and negligible gains for the average user.
    This is work for hyperscalers like Meta, where they have hundreds upon hundreds of hard drives and SSDs in crazy RAID setups; if 15 lines of code can improve your read throughput by about 65%, you will be crowned employee of the month and can go on vacation...

    And yes, it will benefit the average user; the improvement will probably be in the 3-5% range, but it helps.

    These days the average user has an SSD that can deliver 500 mbits even on SATA, so you'd see an increase of about 25 mbits as a conservative estimate, not to mention NVMe drives that run at 3 gbits, or PCIe 4 drives at 5 gbits; at 5 gbits that already means a gain of roughly 250 mbits.

    Ah, that's a general assumption; of course you can't exceed the bus speeds: SATA 600 mbit, PCIe 3 x4 4 gbits, PCIe 4 x4 8 gbits.
    Last edited by erniv2; 06 November 2024, 11:50 PM.

