Originally posted by XorEaxEax
First:
Benchmarks have been mentioned previously; see http://www.etalabs.net/compare_libcs.html under "Performance" (if you doubt the comparisons, the benchmark program is libc-bench, so you can reproduce them yourself).
No one has done any tests with pts yet, and I'm too busy with other things to do it myself.
If you don't want benchmarks from an Atom, don't ask me. The machines I have are an Atom N270 in my current main laptop, a PIII that I occasionally boot but which is running NetBSD now, and an AMD Neo (K8) laptop whose keyboard is currently flaky, so I haven't booted it in some time.
If you're wondering what the applicable optimizations are, decode this:
Code:
$ grep flags /proc/cpuinfo
flags : fpu vme de tsc msr pae mce cx8 apic mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 xtpr pdcm movbe lahf_lm
flags : fpu vme de tsc msr pae mce cx8 apic mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 xtpr pdcm movbe lahf_lm
Originally posted by Rich Felker
Other points:
More bloat means less reliability.
glibc's libm does well under default settings, but from what I hear it misbehaves under some non-default settings (e.g. alternate rounding modes).
Here's Rich's draft "promotional material":
- Consistent quality and implementation behavior from tiny embedded systems to full servers.
- Minimal machine-specific code, meaning less chance of breakage on minority architectures and better success with "write once run everywhere" development.
- Extremely efficient static and dynamic linking support, yielding small binaries and minimal startup overhead.
- Realtime-quality robustness. No unnecessary dynamic allocation. No unrecoverable late failures. No lazy binding or lazy allocation.
- MIT license.
- Full math library with a focus on correctness. Exact and correctly rounded conversion between binary floating point and decimal strings.
- Reentrancy, thread-safety, and async-signal safety well beyond the requirements of POSIX. Even snprintf and dprintf are fully reentrant and async-signal-safe.
- Highly resource-efficient POSIX threads implementation, making multi-threaded application design viable even for memory-constrained systems.
- Simple source code and source tree layout, so it's easy to customize or track down the cause of unexpected behavior or bugs, or simply learn how the library works.