Docker Benchmarks: Ubuntu, Clear Linux, CentOS, Debian & Alpine
-
Originally posted by planetguy:
Clear Linux targets Haswell+ Intel CPUs exclusively, so they can choose instructions that are fast on that hardware and absent elsewhere. Kubuntu is a general-purpose distribution, so it can't do this.
Clear Linux also does profile-guided optimization - watching a running program to see where it spends its time. Profile-guided optimization would probably help Kubuntu, but again, different CPUs are faster at different tasks, so which CPU should it optimize for?
-
Originally posted by Goddard:
I would think each distribution should optimize for whichever CPU it is installed on.
-
Originally posted by arjan_intel:
This is not correct; Clear Linux runs on CPUs much older than Haswell as well, and we don't use AVX by default. We do, of course, add runtime detection, and on systems with AVX2 we will use AVX2 for the heavy math.
-
Originally posted by Goddard:
I would think each distribution should optimize for whichever CPU it is installed on.
For most users it's not really practical, though.
-
Michael,
A few notes about storage performance in Docker. The article doesn't say which Docker storage backend you were using, and other comments say you used default parameters for the tested apps.
Docker has several storage backends; for example, Ubuntu 16.10 uses the AUFS driver and CentOS 7 uses devicemapper on loopback, and both of these options have poor write performance.
See https://docs.docker.com/engine/userg...er-performance
So any write-intensive test, like Compile Bench, will be slow if it writes data to the container layer.
But in any production use case for containers you won't write to container storage; you'll use volumes for persistent data. And Docker volumes on a standalone machine should have very little performance impact on disk reads and writes.
Please, next time you test containers, mount volumes at the locations where the tested apps write. Without that, you're benchmarking how badly AUFS performs, not the distribution inside the container.
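The suggestion above can be sketched with a couple of commands. This is an illustrative example, not the article's actual setup; the image name and paths are assumptions:

```shell
# Writes to a container path go through the storage driver
# (AUFS/devicemapper), which is the slow path being criticized:
docker run --rm ubuntu:16.10 dd if=/dev/zero of=/scratch.bin bs=1M count=256

# Writes to a named volume bypass the storage driver and should be
# close to native disk speed:
docker volume create bench-data
docker run --rm -v bench-data:/scratch ubuntu:16.10 \
    dd if=/dev/zero of=/scratch/scratch.bin bs=1M count=256

# Checking which storage driver the daemon is using:
docker info --format '{{.Driver}}'
```

Mounting the benchmark's working directory as a volume this way would make the results reflect the distribution in the container rather than the host's storage driver.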
-
Originally posted by defaultUser:
It's not practical to do that. First, it would be necessary to recompile the code for every architecture and, in some instances arguably, for every CPU model. Features like function multi-versioning could be used to reduce the need for recompilation, but they increase object/executable file sizes.
-
Originally posted by defaultUser:
It's not practical to do that. First, it would be necessary to recompile the code for every architecture and, in some instances arguably, for every CPU model. Features like function multi-versioning could be used to reduce the need for recompilation, but they increase object/executable file sizes.
It is not hard to have the kernel compiled for every architecture that's widely used (e.g. Intel's Core, AMD's Bulldozer and the upcoming Zen) and install the one you actually need. Unfortunately, that doesn't work well with installers or update tools on Linux. And of course, there's not much incentive to do it, since you can compile the kernel yourself with any optimizations you want. Or at least you can after a crash course in kernel compilation and the available optimizations.
-
Originally posted by Goddard:
I understand not doing it for EVERYTHING, but Intel and AMD are the major platforms, AMD hasn't released anything new in a while, and Intel doesn't release CPUs that often. It seems like some build server in a back corner somewhere could manage this, no?
Most half-serious distros offer source packages; you can easily download them, change the CFLAGS to match your system, and recompile if you really need to.
The Clear Linux devs have implemented some trickery that detects the type of system and switches binaries for some applications, but I suspect this would be a pain in the ass done at any decent scale. Debian has something like 60k packages; even if 80% of that is obsolete shovelware that doesn't need this, it's still a ton of work, and debugging issues would get so much more fun afterwards.
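The "change the CFLAGS and recompile" route can be sketched for a Debian/Ubuntu system. This is an illustrative example, assuming deb-src entries are enabled; zlib is just a stand-in package:

```shell
# Install the build tooling and fetch a package's source
sudo apt-get install build-essential devscripts
apt-get source zlib1g
cd zlib-*/

# DEB_CFLAGS_APPEND is honored by dpkg-buildflags, so -march=native
# tunes the build for the CPU doing the compiling
DEB_CFLAGS_APPEND="-march=native -O3" dpkg-buildpackage -us -uc -b

# Install the locally optimized package
sudo dpkg -i ../zlib1g_*_amd64.deb
```

This only works package by package, which is exactly the scaling problem described above: doing it across tens of thousands of packages is a distribution-sized job, not a build-server afterthought.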