Linux 4.12 I/O Scheduler Tests With A HDD & SSD
Originally posted by paolo
Another test with BFQ configured the wrong way, i.e., for maximum responsiveness; at the expense, of course, of throughput ...
-
Originally posted by GrayShade
We've both complained about this a couple of times. I think Michael isn't really into changing the default configuration of the software he's testing, although he's done that before, e.g. for video drivers. It's also possible that he doesn't see these posts, of course.
-
Originally posted by polarathene
How does Michael go about it when he does change configuration? Does he have an automated process that makes this easy for him, or is he doing it manually, which takes up his time and discourages him from doing it?
-
Originally posted by Ropid
I noticed that a Linux guest running in a qemu/kvm virtual machine boots up visibly faster when I have blk-mq enabled on the host. The host has no special hardware. The virtual machine's disk image is saved on a normal SATA SSD.

This raises a question on my part: does blk-mq use a different I/O buffering/caching/prefetch approach or memory volume (the standard block and VM layers have tunable knobs in /proc/sys/vm and elsewhere), or does it possibly disrespect barrier operations :-) i.e., do aggressive seek reordering across barriers, ignore sync(), or some such?

One thing I noticed a couple of years ago in the standard block+VM layer was that, even with the deadline scheduler, there was a definitive per-transaction timeout on writeback, which under heavier load made the elevator collapse into "first come, first served" after whatever finite timeout value you set in /proc/sys/vm. Once an I/O transaction timed out of the dirty cache, it was shoved into a FIFO queue of "transactions that have timed out and need to be written ASAP", and that was the end of any write-combining and reordering on writeback :-)
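As far as I can tell, the writeback timeout described above corresponds to the VM's dirty-page expiry knobs under /proc/sys/vm; a minimal sketch of inspecting them (the paths are the standard procfs ones; the example value is illustrative only):

```shell
# Dirty-page expiry timeout, in centiseconds: pages dirty longer than
# this are queued for immediate writeback, after which the elevator no
# longer gets a chance to combine or reorder them.
cat /proc/sys/vm/dirty_expire_centisecs

# How often the flusher threads wake up to look for expired pages:
cat /proc/sys/vm/dirty_writeback_centisecs

# Example only: as root, shorten the expiry window to 15 seconds.
# echo 1500 > /proc/sys/vm/dirty_expire_centisecs
```

A longer expiry window gives the scheduler more room to merge and reorder writes, at the cost of more data at risk on a crash.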
-
Originally posted by GrayShade
The BFQ tunable is a file under sysfs, so it's simply a matter of writing to it. For a video driver he probably needs to unload the driver and load it again with different parameters, or maybe reboot the box.
Doing that manually can be annoying to maintain and apply, right? So if some automation were used for common benchmark tunables and distros, that barrier would be removed for Michael and he might be happier to test a wider variety of configurations, with no risk of making a mistake applying configs or missing one.
I've not used PTS yet; I'd assume it doesn't mess with OS configuration like that. Images could be built with selected configs and tested with PTS in an automated way, though? Packer and Ansible, for example, might do it, or what was Poettering's new tool, casync? It was meant to be able to do something similar, wasn't it?
-
Originally posted by polarathene
Doing that manually can be annoying to maintain and apply, right? So if some automation were used for common benchmark tunables and distros, that barrier would be removed for Michael and he might be happier to test a wider variety of configurations, with no risk of making a mistake applying configs or missing one.
I'm pretty sure that automatically writing a file isn't beyond the reach of the current PTS. The test profiles are just a bunch of shell scripts: https://github.com/phoronix-test-sui...9.0/install.sh .
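As a sketch of what such a profile script could do, a hypothetical pre-test snippet might pin the scheduler and BFQ's low_latency tunable before the run. The device name and the choice to favour throughput are assumptions for illustration, not anything a real PTS profile ships:

```shell
#!/bin/sh
# Hypothetical pre-test hook (not part of any actual PTS profile):
# select BFQ on the drive under test and disable its low_latency
# heuristic so the benchmark measures throughput, not responsiveness.
DEV=sda                                   # assumed device under test
QUEUE="/sys/block/$DEV/queue"

echo bfq > "$QUEUE/scheduler"             # switch the queue to BFQ
echo 0 > "$QUEUE/iosched/low_latency"     # 1 = responsiveness (default), 0 = throughput
```

Run as root; the iosched/ directory only exists once BFQ is the active scheduler for that queue, so the ordering of the two writes matters.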
-
Originally posted by starshipeleven
Why doesn't BFQ have a sane default that isn't max responsiveness? Michael is known for testing with default settings.