Another Potential CPU Optimization For Mesa: Quadratic Probing


  • #11
    Originally posted by funfunctor View Post
    Rust ! By todays standards in language evolution, C is pretty darn shit tbh and quaz0r is very much correct above.
    rust can't replace c. it is not compatible and it is not proper language with iso standard and several mature implementations. it is a toy



    • #12
      Originally posted by marek View Post
      Also, standard libraries are not always optimized or designed to be fast in the first place.
      While it is true that implementations can be of varying quality, C++ standard containers are designed for speed, in contrast to the Mesa hash with its function pointers. And at least on Linux, you are free to improve the implementation.
      Last edited by pal666; 08 February 2017, 08:32 PM.



      • #13
        Is the C driver code written in C99, C03 or C11? I would suggest getting it optimized for C11 before bitching.



        • #14
          Originally posted by marek View Post

          This is an interesting point, but the C language is not to blame. Programmers are, because they love re-inventing stuff. Also, standard libraries are not always optimized or designed to be fast in the first place.
          Programmers are to blame for being too lazy to learn a multitude of modern languages and think about things in new ways.

          One of the major lessons I learned from Haskell was type-directed programming: setting out my types and thinking about the mappings between them up front, a sort of "programming with types, not functions". It is hard to explain unless you have experienced it first-hand. Dependently typed languages continue to push that bar, to the point that the compiler can start inferring what you mean and writing partial functions for you. It's really amazing stuff, but by and large most programmers don't want to know and dismiss it all as a "language fad", so we are stuck with a fetishism for undefined behavior in languages with little to no type system, bashing away in the same old situation, never to change.

          I have to say I am impressed that Mozilla is taking this stuff more seriously with Rust. Such a large codebase with such high complexity will be a great role model for the rest of the ecosystem, showing that you can build powerful software in modern languages and get the benefits that come with that.



          • #15
            Originally posted by pal666 View Post
            Rust can't replace C. It is not compatible, and it is not a proper language with an ISO standard and several mature implementations. It is a toy.
            I suspect you're trolling, or you have never touched either language and thus have zero idea what you're talking about. However, I'll assume you're just joking, so as not to be <<TRIGGERED>>.



            • #16
              Originally posted by pal666 View Post
              Rust can't replace C. It is not compatible, and it is not a proper language with an ISO standard and several mature implementations. It is a toy.
              What do you mean, it's not compatible? Also, there are several languages without an ISO standard that are very viable, such as Python, PHP, and Java; C and C++ are the weird ones here. Most languages don't have an official standard published, but they may still have proper standards and specifications of the language. You sort of have to, in order to have a usable language with dependable functionality.

              C++ containers were not designed for speed. They were designed to be generic and useful for common scenarios. https://en.wikipedia.org/wiki/Standa...ibrary#History
              The standard has since added more specialized containers that are better for some scenarios, such as unordered_map (which arguably should be used more often than an ordered map), but the design goals say nothing about how fast the containers are meant to be.



              • #17

                This yields an approximate 10% reduction in shader-db runtime.
                Sounds good, but how significant is the shader-db runtime for overall performance? In any case, it surely helps.

                and so we should be able to completely fill the table.
                I suppose this doesn't mean the intention is to actually work with a full table, as that would surely diminish performance, even if it is possible with that algorithm and those table sizes.


                Regarding C: I don't see a problem with writing things like generic hash tables in C (at least as long as you are comfortable using function pointers, generally speaking). I don't really understand why that isn't happening on a larger scale on Linux (there surely are various libraries existing on a smaller scale), except that there are so many possibilities and different priorities.
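                As a concrete illustration of the function-pointer approach mentioned above, here is a minimal sketch of what a generic hash interface in C typically looks like. The names here are invented for illustration and are not Mesa's actual API; genericity comes from `void *` keys plus caller-supplied hash and equality callbacks, which is exactly the indirection being debated in this thread.

                ```c
                /* Hypothetical generic hash interface in C (not Mesa's actual API).
                 * Genericity comes from void* keys plus user-supplied callbacks;
                 * the cost is an indirect call per probe that the compiler
                 * usually cannot inline. */
                #include <stdint.h>
                #include <string.h>

                typedef uint32_t (*hash_fn)(const void *key);
                typedef int (*eq_fn)(const void *a, const void *b);

                struct hash_set {
                    hash_fn hash;    /* how to hash a key */
                    eq_fn   equals;  /* how to compare two keys */
                    /* ... bucket storage omitted in this sketch ... */
                };

                /* Example callbacks for NUL-terminated C strings. */
                static uint32_t str_hash(const void *key)
                {
                    /* FNV-1a, a common simple string hash. */
                    uint32_t h = 2166136261u;
                    for (const char *s = key; *s; s++)
                        h = (h ^ (uint8_t)*s) * 16777619u;
                    return h;
                }

                static int str_eq(const void *a, const void *b)
                {
                    return strcmp(a, b) == 0;
                }
                ```

                A table specialized for one key type can instead inline the hash and comparison directly into the probe loop, which is one of the speed arguments made above for templated C++ containers.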



                • #18
                  Originally posted by indepe View Post
                  I suppose this doesn't mean the intention is to actually work with a full table, as that would surely diminish performance, even if possible with that algo and table sizes.
                  I think this means that the algorithm can fill the table completely before insertion fails due to conflicting hashes. Overall, I think this sounds like a reasonable fix.



                  • #19
                    hjahre is spot on. It just means that the hash table can be filled to the brim without failing. That does not mean that it should be filled to the brim: currently our load factor is 70% for a big table, though slightly higher when it is tiny.
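                    The "filled to the brim without failing" property can be checked directly. With a power-of-two table size, quadratic probing with triangular-number offsets (h, h+1, h+1+2, h+1+2+3, ...) visits every slot exactly once, so an insert can always find a free slot until the table is literally full. A small self-contained sketch (illustrative only, not Mesa's code):

                    ```c
                    /* Sketch: show that triangular-number quadratic probing covers
                     * every slot of a power-of-two table, so insertion cannot fail
                     * before the table is completely full.  Not Mesa's code. */
                    #include <stdbool.h>
                    #include <stdint.h>

                    #define TABLE_SIZE 16u  /* must be a power of two */

                    /* Count the distinct slots reached in TABLE_SIZE probe steps. */
                    static unsigned slots_covered(uint32_t hash)
                    {
                        bool seen[TABLE_SIZE] = { false };
                        unsigned covered = 0;

                        for (uint32_t i = 0; i < TABLE_SIZE; i++) {
                            /* i-th probe lands at hash + i*(i+1)/2, masked to table. */
                            uint32_t slot = (hash + i * (i + 1) / 2) & (TABLE_SIZE - 1);
                            if (!seen[slot]) {
                                seen[slot] = true;
                                covered++;
                            }
                        }
                        return covered;  /* TABLE_SIZE for any starting hash */
                    }
                    ```

                    That the sequence covers all slots is a correctness guarantee, not a recommendation: probe lengths still grow as the table fills, which is why a load factor around 70% remains the practical trigger for resizing.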

                    One can argue whether rolling our own hash table is ideal, but it does give some advantages: no external dependencies, and high compatibility, which is kind of important since it has to build on Windows, Linux, the BSDs, etc. Another advantage is that we can tune it to our use case, which might matter in some situations.



                    • #20
                      Originally posted by CrystalGamma View Post
                      It's really sad that this is the kind of optimization that people have to write, just because every nontrivial C project still has to write its own data-structure implementations (which can have their own bugs and all), mostly because C is so poorly suited to generic programming. To be clear, this would not be a problem if this were a specific hash map used in one part of the code with special requirements, but this is the hash table used in all of Mesa.
                      They don't HAVE to write it. They could use any one of dozens of existing hash implementations already in libraries. This would be the same as writing in C++ and writing all your own hash classes instead of using Boost etc., or writing in any language and then re-inventing your own array lookups instead of using the language's own. Mesa COULD use many existing, very optimized hash implementations and/or contribute to them to improve them for everyone, BUT Mesa by design chooses not to rely on other libraries. This has nothing to do with C; it is that project's choice not to have such dependencies.

