Amazing work and well-written article, Michael! Kudos
Linux 5.0 To Linux 5.9 Kernel Benchmarking Was A Bumpy Ride With New Regressions
This work is really important to spot such performance regressions! Thanks a lot!
I guess it's, as always, a tradeoff between latency and throughput: most of the time, if you decrease latency, you will decrease throughput. You can typically fix this by adding more "context" to your work; that usually happens at a layer which has more context, so you increase parallelism where it matters.
Guess this is what we are seeing with the current I/O workload performance regressions for highly parallel I/O.
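The tradeoff described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not kernel code): each submission pays a fixed overhead, so submitting items one at a time minimizes per-item latency, while batching at a layer with more context amortizes the fixed cost and raises throughput at the expense of latency for the first item in each batch.

```python
import time

FIXED_COST = 0.001   # per-submission overhead (e.g. syscall, lock, queue hop)
ITEM_COST = 0.0001   # actual work per item

def submit(items):
    """Simulate one submission: fixed overhead plus per-item work."""
    time.sleep(FIXED_COST + ITEM_COST * len(items))

def unbatched(items):
    # Lowest latency per item: each one is submitted immediately,
    # but the fixed cost is paid once per item.
    for item in items:
        submit([item])

def batched(items, batch_size):
    # Higher throughput: the fixed cost is amortized over the batch,
    # but the first item waits for the whole batch to be collected.
    for i in range(0, len(items), batch_size):
        submit(items[i:i + batch_size])

items = list(range(200))

start = time.perf_counter()
unbatched(items)
t_unbatched = time.perf_counter() - start

start = time.perf_counter()
batched(items, batch_size=50)
t_batched = time.perf_counter() - start

print(f"unbatched: {t_unbatched:.3f}s, batched: {t_batched:.3f}s")
```

With these made-up costs, batching wins heavily on total throughput; the scheduler changes discussed in the article presumably shift where on this curve the kernel sits for highly parallel I/O.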
- Likes 3
Originally posted by MadCatX View Post
@Michael: What would it take for you to take a look at the Tensorflow drop? A lot of machine learning stuff is based on TF.

Michael Larabel
https://www.michaellarabel.com/
- Likes 10
Originally posted by Michael View Post
Assuming you were the one that just PayPal tipped given the similar email/name, looking into the Tensorflow Lite regression now.
I have *not* been sniffing any of birdie's koolaid, but these results do make me wonder what's up with all those overpaid Google & co. engineers when it, again, takes Michael and his PTS to unravel these regressions. I guess we got lucky that Michael decided to run this test now, so the Apache regression was caught in the -rc stage.
- Likes 5
Originally posted by MadCatX View Post
Thanks!!! All TF users out there owe you a beer!
I have *not* been sniffing any of birdie's koolaid, but these results do make me wonder what's up with all those overpaid Google & co. engineers when it, again, takes Michael and his PTS to unravel these regressions. I guess we got lucky that Michael decided to run this test now, so the Apache regression was caught in the -rc stage.
Yep, Linus reached out and I'm currently communicating with him over the Apache regression.

Michael Larabel
https://www.michaellarabel.com/
- Likes 9
I'll take secure and error-free code over minor performance regressions. It used to be that any security fix would be rejected outright because no one wanted a regression. Now the problem is stale code in applications and libraries as APIs change. It's the same problem in any software ecosystem: apps break in odd ways all the time as the OS and libraries get updated. There's a reason Debian is slow and methodical, and even they break stupid stuff from time to time.