Red Hat Work On Improving Glibc Math Performance

  • Red Hat Work On Improving Glibc Math Performance

    Phoronix: Red Hat Work On Improving Glibc Math Performance

    Siddhesh Poyarekar of Red Hat has written a blog post about how he's improved the multi-precision number math performance for glibc...

    http://www.phoronix.com/vr.php?view=MTg3NzY

  • #2
    That might sound kind of far removed for some people, but I bet it can be measured in many JavaScript benchmarks, since JavaScript uses double precision and complex functions like this are handled with library calls when there isn't a direct assembler instruction.



    • #3
      Originally posted by carewolf View Post
      That might sound kind of far removed for some people, but I bet it can be measured in many JavaScript benchmarks, since JavaScript uses double precision and complex functions like this are handled with library calls when there isn't a direct assembler instruction.
      I'm sure that JS will not speed up even a little. First of all, even though JS supposedly uses double as its default number type, in fact:
      - all JS engines lower the types down to integer operations where they can, and most of the time you don't call pow functions
      - JS engines emit their own JIT machine code, so glibc's functions do not improve the generated code that is run

      But even where they are used as is, JS performance will not speed up much, as most game engines (and every sane way of programming) will cache expensive computations, so very likely the range of values will stay on the fast path with lower precision.

      Still, where it will count is very likely in C++ benchmarks, scientific computations, and maybe some codecs which use these transcendental transformations, especially compared against compilers such as the Intel Compiler. Still, it will be just a few percent, but every speedup we get for free is great to have.

      Intel boasts 25% faster floating-point code:
      https://software.intel.com/en-us/c-compilers



      • #4
        Anyone know if it can also affect stuff like Darktable or any other image-processing software (since these are all floating point)?



        • #5
          pow and transcendental math functions

          Originally posted by Redi44 View Post
          Anyone know if it can also affect stuff like Darktable or any other image-processing software (since these are all floating point)?
          In general, yes. The power or exponentiation function, pow, in particular is used for gamma correction/adjustment of images and video, and may also be used in colour-space calculations (e.g. converting from RAW to sRGB). Additionally, image filters may use the trigonometric functions sin, cos, tan, etc. for curves (e.g. a radial gradient) or cyclic effects.

          That said, depending on how it is compiled, an application may contain or use high-speed approximate math libraries. These are typically not better coded, but gain a speed advantage at the expense of lower precision and/or accuracy, or may calculate an approximation using vector instructions (SIMD) rather than x87 instructions.

          Overall, the math functions in glibc are of some of the highest quality (circa 1998) among general implementations (those focused on standards compliance over speed or size optimization). I don't know of any more recent comparison of the math portion between libc implementations. Most of the *BSDs are still largely based on Sun Microsystems' 1990s software floating-point implementation (fdlibm), so apart from a few important bug fixes little has changed there AFAIK, and musl is the only other libc implementation worth mentioning in a non-embedded environment as far as I'm aware.

