
Rust Now Prefers Using The GNU Gold Linker By Default


  • Meteorhead
    replied
    Originally posted by Luke_Wolf View Post

    My patience with anti-OOP fools has run rather much threadbare...
    I did not mean to disguise myself as someone with the C/Unix mindset; I just happen to see and hear (just like you) many remarks and examples of misuse of programming paradigms, concepts, idioms, and so on. As for the number of members, Ogre was known to be a monster (nomen est omen): any node inside the node graph was a class with 100+ members. I could look up the article, but it became the de facto example of cache optimization, where rearranging the order of the members resulted in a double-digit percentage speed-up overall.
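The member-reordering effect described above can be sketched with a toy C++ struct (field names are hypothetical, not Ogre's actual layout): on common 64-bit ABIs, alignment padding alone makes the carelessly ordered version half again as large, so each node spills across more cache lines.

```cpp
// Careless member order: every double must be 8-byte aligned, so the
// compiler inserts 7 bytes of padding after each lone char.
struct NodeBad {
    char   flag;      // 1 byte + 7 bytes padding
    double weight;    // 8 bytes
    char   dirty;     // 1 byte + 7 bytes tail padding
};                    // sizeof(NodeBad) == 24 on common 64-bit ABIs

// Same members, largest-alignment-first: the padding collapses and
// each node occupies fewer cache lines.
struct NodeGood {
    double weight;    // 8 bytes
    char   flag;      // 1 byte
    char   dirty;     // 1 byte + 6 bytes tail padding
};                    // sizeof(NodeGood) == 16
```

With 100+ members the same reordering (plus grouping hot members together) is what yields the speed-ups the Ogre article reported.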

    I'm not saying Ogre is useless or anything, but it was not crafted with such things in mind. What I really wanted to say is that Ogre (and rest assured, there are a myriad of other cases) has reached a complexity where OOP starts becoming inefficient. By properly identifying roles and responsibilities (say, in an STL-like manner, where allocators are parameters of containers and iterators are responsible for access, etc.), you can reduce the number of members at the cost of composing them into entities that implement some feature as a union. At best, your class hierarchies will be balanced and every class will have roughly the same number of members. However, as you reach higher and higher levels, your objects grow in size uncontrollably, even if in a given scenario you know you will be using only a very small subset of a class's capabilities (members). In GPGPU and at a very low level, this effect shows up as AoS vs. SoA; outside data-parallel algorithms it would be more adequate to call it DOD (data-oriented design).
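The AoS-vs-SoA contrast can be sketched in C++ (member names are illustrative): with an array of fat structs, a loop that only updates positions still drags every unused member through the cache, while the SoA layout keeps the hot loop on contiguous floats.

```cpp
#include <cstddef>
#include <vector>

// AoS: a fat object. A position-only loop still streams mass, id,
// and every other member through the cache alongside x.
struct ParticleAoS {
    float x, vx;
    float mass;
    int   id;
    // ...imagine dozens more members here...
};

// SoA: one contiguous array per member. The hot loop touches only
// the arrays it actually reads and writes.
struct ParticlesSoA {
    std::vector<float> x, vx;
};

void integrate(ParticlesSoA& p, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i)
        p.x[i] += p.vx[i] * dt;   // contiguous float reads, trivially vectorizable
}
```

The same transformation is the bread and butter of GPGPU kernels, where coalesced memory access makes the SoA form almost mandatory.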

    Creating thin views that group some entities together for the lifetime of an operation, without having to touch the entire object and load it into cache, is something that is not possible, or at least very hard to achieve, with a classic OOP mindset.
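One way to read the "thin view" idea in C++ terms (a hypothetical sketch, not from any particular library): a short-lived struct of pointers that borrows exactly the members one operation needs, so the operation's code never mentions the rest of the fat object.

```cpp
// A fat node with many members (stand-in for a 100+ member class).
struct FatNode {
    float  x = 0.0f;
    float  y = 0.0f;
    double history[16] = {};   // cold data the hot path never needs
    char   name[32]    = {};
};

// Thin view: exists only for the duration of one operation and
// exposes exactly the two members that operation touches.
struct PositionView {
    float* x;
    float* y;
};

void nudge(PositionView v, float dx, float dy) {
    *v.x += dx;                // FatNode::history and name never appear here
    *v.y += dy;
}
```

Usage would be something like `nudge(PositionView{&n.x, &n.y}, 1.0f, 2.0f);` for a `FatNode n`; in a full DOD design the viewed members would themselves live in separate arrays, as in the SoA layout.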

    I did not mean to start a flame war, just wanted to make a point, which it seems I did not back up sufficiently.

    Originally posted by Luke_Wolf View Post

    Proper OOP dictates that you should use no more than 5 levels of derivation
    Do you have a good source where such guidance on "canonical" OOP design is written down? I am an HPC programmer/physicist, not a software engineer, so when it comes to programming I am predominantly self-taught and care about low-level implementation and simple things, not high-level design.



  • Luke_Wolf
    replied
    Originally posted by Ericg View Post

    Oi, Luke, cool it. Mods do exist on these forums, as infrequent as they are.
    My patience with anti-OOP fools has run rather much threadbare, to the point where I'm done being nice to people like PinkUnicorn there who brigade in with disingenuous arguments copied off of C programmers who don't understand OOP, arguments which almost always rely upon things total newbs do with the class system and don't really have anything to do with Object Oriented Programming. At least the cult-of-the-Unix-Philosophy guys, while sinking to similar levels of disingenuity, aren't quite that asinine.



  • PinkUnicorn1421543
    replied
    Originally posted by Luke_Wolf View Post

    no it doesn't you moron.
    And that, kids, is how you make your point. /s



  • Ericg
    replied
    Originally posted by Luke_Wolf View Post

    no it doesn't you moron.
    Oi, Luke, cool it. Mods do exist on these forums, as infrequent as they are.



  • Luke_Wolf
    replied
    Originally posted by PinkUnicorn1421543 View Post
    That's 128 members including all nested members. Proper OOP in this case would be 128 levels of indirection.
    no it doesn't you moron. Proper OOP dictates that you should use no more than 5 levels of derivation (not counting the implicit Object base), and no more than 3 of those should be actual classes, the preference being Interface -> (Abstract Class ->) Class -> Class -> Class. The rule can be broken, but if you're going more than 3 concrete classes deep, proper OOP considers it a code smell.
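The chain described above can be sketched in C++ (class names are hypothetical; C++ has no interface keyword, so "interface" here means a class of pure virtuals):

```cpp
// Interface level: pure virtual functions only.
struct IRenderable {
    virtual ~IRenderable() = default;
    virtual int vertexCount() const = 0;
};

// Abstract-class level: shared state, still not instantiable
// because vertexCount() remains pure.
class Node : public IRenderable {
protected:
    int id = 0;        // common bookkeeping for all nodes
};

// Three concrete levels; per the rule above, deriving any
// further than this is already considered a code smell.
class MeshNode : public Node {
public:
    int vertexCount() const override { return 3; }
};
class SkinnedMeshNode : public MeshNode {};
class CharacterNode : public SkinnedMeshNode {};
```

Note that this is a rule of thumb about derivation depth, not member count: a shallow hierarchy can still accumulate hundreds of members per class.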



  • PinkUnicorn1421543
    replied
    Originally posted by Luke_Wolf View Post

    If you've got 128 members on an object I can guarantee that whatever you're looking at isn't Object Oriented design. There aren't any real scaling problems with proper OOP.

    That's 128 members including all nested members. Proper OOP in this case would be 128 levels of indirection.



  • Ericg
    replied
    Originally posted by milkylainen View Post
    Big whoop. Everybody seems to care about the seconds spent compiling and building objects. While I commend the effort being placed in writing gold, it isn't a ld replacement (yet).
    Also, speed is usually a (somewhat) function of feature set. While LLVM/Clang and gold might be shiny, new and fast, they do not carry the same feature set as their more mature competitors do.
    People tend to forget that GCC is a thirty year old project. The amount of features and cruft collected is just astounding. I will happily benchmark LLVM/Clang 25 years down the road and compare it against a new shiny lightweight compiler.. (Oh.. the new is soo shiny and faaast. It can compile my kernel in 28 seconds instead of 30!).
    It's not just features though; it's paradigms and mentalities and the way things are done. Thirty years down the road there will be a new shiny compiler that will be benchmarked against LLVM/Clang and GCC, and it will be faster than both of them on similar hardware. And it won't just be because it has fewer features; it will be because it has 30 years of additional research backing it that LLVM, and especially GCC, didn't have, because people have learned new things and come up with better algorithms and methods, and things continually improve. To say that LLVM/Clang and gold are only faster because they support fewer features is being overly forgiving to GCC and ld, and disingenuous toward LLVM/Clang and gold.



  • Luke_Wolf
    replied
    Originally posted by Meteorhead View Post
    Much the same goes for object orientation. It seemed like a good idea at the time, but it proved not to scale. Objects with 128 members are not efficient when thrown about in memory inside nested loops. Something else needs to be done in these cases (data-oriented design?).
    If you've got 128 members on an object I can guarantee that whatever you're looking at isn't Object Oriented design. There aren't any real scaling problems with proper OOP.



  • smitty3268
    replied
    Originally posted by milkylainen View Post
    Big whoop. Everybody seems to care about the seconds spent compiling and building objects. While I commend the effort being placed in writing gold, it isn't a ld replacement (yet).
    Also, speed is usually a (somewhat) function of feature set. While LLVM/Clang and gold might be shiny, new and fast, they do not carry the same feature set as their more mature competitors do.
    People tend to forget that GCC is a thirty year old project. The amount of features and cruft collected is just astounding. I will happily benchmark LLVM/Clang 25 years down the road and compare it against a new shiny lightweight compiler.. (Oh.. the new is soo shiny and faaast. It can compile my kernel in 28 seconds instead of 30!).
    When your linker takes longer to run than the actual compiler, you have a problem. Also, the GNU devs have acknowledged how terrible the LD code is, which is why they accepted the gold project into binutils.

    I certainly agree that newer isn't always better, but in this case it is.



  • Meteorhead
    replied
    Originally posted by milkylainen View Post
    Big whoop. Everybody seems to care about the seconds spent compiling and building objects. While I commend the effort being placed in writing gold, it isn't a ld replacement (yet).
    Also, speed is usually a (somewhat) function of feature set. While LLVM/Clang and gold might be shiny, new and fast, they do not carry the same feature set as their more mature competitors do.
    People tend to forget that GCC is a thirty year old project. The amount of features and cruft collected is just astounding. I will happily benchmark LLVM/Clang 25 years down the road and compare it against a new shiny lightweight compiler.. (Oh.. the new is soo shiny and faaast. It can compile my kernel in 28 seconds instead of 30!).
    Following up on your thoughts:

    X.Org is a 30-year-old project. XServer rocks, who needs this new and shiny Wayland anyway...

    Just because something is 30 years old doesn't automatically make it better than something new. The new and shiny isn't better because it is inherently better, or simply because it is new. The new and shiny is the same as the old, but written WITH the 30 years of wisdom that accumulated in the old. It is as if the developers of the old could go back in time and start over fresh with the experience gained along the way.

    XServer has deeply rooted design choices that cannot be changed, choices which seemed like a good idea at the time but proved to be wrong. They were indeed good choices back then, but software development evolved and XServer became obsolete, or archaic at the very least. Much the same goes for object orientation. It seemed like a good idea at the time, but it proved not to scale. Objects with 128 members are not efficient when thrown about in memory inside nested loops. Something else needs to be done in these cases (data-oriented design?).

    So yes, the new and shiny respects the old, but it has every chance of becoming equally feature-rich in far less time. If it cannot reimplement the old in 5 years instead of 30, its developers need to reevaluate.

