Fedora Linux 40 Looks To Replace iotop With iotop-c
-
Originally posted by RealNC View Post
Is there a way to see on which storage device I/O is actually happening? I have three of them, and just seeing an overall number doesn't mean much to me. I'd like to see the numbers for each storage device.
atop gives you a full breakdown by device and by statistic.
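A quick sketch of what that looks like (stock atop; the 2-second interval is arbitrary and device names will differ):

  atop 2
  # the system section prints one DSK line per block device (sda, nvme0n1, ...);
  # per-process reads/writes appear further down (running as root may show more detail)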
-
Originally posted by geearf View Post
I never realized that, thank you!
edit: actually with this I don't know if I'll need any iotop anymore. Is there the same for network? Currently I use bandwich for that.
But I second atop as the Swiss Army knife. It also supports creating a history file, and atop -B gives fun graphs. atop with the "netatop" module can also show per-process network stats.
Last edited by Serafean; 23 January 2024, 06:13 AM.
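A minimal sketch of the history-file workflow mentioned above (the path and interval are just examples):

  atop -w /tmp/atop.raw 60   # record one sample per minute to a raw log
  atop -r /tmp/atop.raw      # replay the recording later; 't' steps forward, 'T' back
  atop -B                    # the bar-graph mode mentioned above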
-
Originally posted by S.Pam View Post
There is just the overall network tx/rx of the system. At least my htop doesn't have network meters per process.
It's too bad it looks like a DOS-era program though.
atop also seems to display a lot of stuff, actually so much that it feels overwhelming; I may have to dig into it slowly.
Originally posted by gotar View Post
My favourite one is bwm-ng.
Thanks everyone!
-
Originally posted by geearf View Post
[bwm-ng] Hmmm, I did not know that one, but after trying it, it does not seem to add anything in particular compared to those above.
I use bwm-ng on 10+ Gb/s traffic with 0.5M connections across 30+ network interfaces at a 0.066 s refresh rate.
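For anyone who wants to try something similar, a rough sketch (flag spellings per bwm-ng --help; the values are just illustrative):

  bwm-ng -t 66       # refresh every 66 ms, roughly the 0.066 s mentioned above
  bwm-ng -u bits -d  # report bits/s with dynamically scaled units

bwm-ng only polls the kernel's per-interface counters (/proc/net/dev on Linux) rather than capturing packets, which is why it stays cheap at rates like that.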
-
Originally posted by gotar View Post
Don't try to run iftop or iptraf (or anything pcap-based, really) on heavily used interfaces with zillions of connections. And iftop only handles a single interface at a time anyway.
I use bwm-ng on 10+ Gb/s traffic with 0.5M connections across 30+ network interfaces at a 0.066 s refresh rate.
Thanks!
-
Also, there's bpf-filetop from libbpf-tools if you want to see which files the I/O is touching. It doesn't show full paths, alas, but if the heavy hitters are "scaling_cur_freq" repeated 4 times on a 4-CPU machine, it's not hard to guess.
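To try it on Fedora, something like this should work (assuming the libbpf-tools package, which prefixes each tool with bpf-):

  sudo dnf install libbpf-tools
  sudo bpf-filetop   # top-like view of per-file reads/writes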