Mir Now Depends Upon C++14


  • AlanGriffiths
    replied
    Originally posted by Azpegath View Post
    I agree, and I'd also minimize the use of pointers. They are vastly overused in most code bases: they introduce memory leaks (unless reference counted) and threading problems, and they break the cache. Sometimes you have to use pointers - they are extremely powerful, as we all know - and that's fine. Just don't use them by default.
    My rule (that I find clearer) is that a raw pointer means "this one" and never means "I own this"[1].

    That means, e.g., that iterating by pointer is fine; but anything that needs to be deleted/freed/[whatever]ed needs to be owned.

    [1] An exception obviously exists for code whose sole purpose is ownership (like std::unique_ptr<>) but that should rarely be part of user code.

    A second exception (but only when GC is used) is anything guaranteed to (transitively) only own memory.
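
    A minimal sketch of that rule, assuming C++14 and a hypothetical Widget type (not code from Mir itself): raw pointers only observe, and anything that must eventually be deleted is owned by std::unique_ptr<>.

    #include <iostream>
    #include <memory>
    #include <vector>

    struct Widget { int id; };                 // hypothetical type for illustration

    // A raw pointer here means "this one": the function observes the Widget
    // but does not own it and must never delete it.
    void print_widget(Widget const* w)
    {
        if (w) std::cout << "widget " << w->id << '\n';
    }

    int main()
    {
        // Anything that needs to be deleted is owned, here by unique_ptr.
        std::vector<std::unique_ptr<Widget>> widgets;
        widgets.push_back(std::make_unique<Widget>(Widget{1}));
        widgets.push_back(std::make_unique<Widget>(Widget{2}));

        // Iterating and handing out non-owning raw pointers is fine.
        for (auto const& w : widgets)
            print_widget(w.get());
    }                                          // ownership ends here; nothing leaks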


  • Azpegath
    replied
    Originally posted by zoomblab View Post
    I'll give you an answer from my experience with Objective-C, a language where all objects are reference counted and allocated on the heap. Because there is one way to treat objects, things are more straightforward than in, say, C++. That has also allowed standard rules to be defined that, when adhered to, guarantee code robustness and no memory leaks. It has also made clever compiler optimizations possible, like the recent ARC (automatic reference counting).

    My point is that using smart pointers by default results in fewer headaches, better productivity and more robust code.
    I agree, and I'd also minimize the use of pointers. They are vastly overused in most code bases: they introduce memory leaks (unless reference counted) and threading problems, and they break the cache. Sometimes you have to use pointers - they are extremely powerful, as we all know - and that's fine. Just don't use them by default.
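
    A small sketch of the cache point, with a hypothetical Particle type: storing values contiguously keeps iteration sequential in memory, while one heap allocation per element adds an indirection on every access and scatters the data.

    #include <memory>
    #include <vector>

    struct Particle { float x, y, z; };        // hypothetical type for illustration

    // Contiguous storage: one allocation, sequential memory access.
    float sum_x(std::vector<Particle> const& ps)
    {
        float sum = 0.f;
        for (auto const& p : ps) sum += p.x;
        return sum;
    }

    // One allocation per element: every access chases a pointer into the heap.
    float sum_x(std::vector<std::unique_ptr<Particle>> const& ps)
    {
        float sum = 0.f;
        for (auto const& p : ps) sum += p->x;
        return sum;
    }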


  • zoomblab
    replied
    Originally posted by Kemosabe View Post
    I never use smart pointers. It's against my definition of clean and elegant coding.
    Am I a dummy now?
    I'll give you an answer from my experience with Objective-C, a language where all objects are reference counted and allocated on the heap. Because there is one way to treat objects, things are more straightforward than in, say, C++. That has also allowed standard rules to be defined that, when adhered to, guarantee code robustness and no memory leaks. It has also made clever compiler optimizations possible, like the recent ARC (automatic reference counting).

    My point is that using smart pointers by default results in fewer headaches, better productivity and more robust code.
    Last edited by zoomblab; 24 February 2015, 03:57 AM.
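
    For comparison, the nearest C++ analogue to that model is probably std::shared_ptr, where the reference count lives in the library rather than the compiler; a tiny sketch with a hypothetical Session type:

    #include <iostream>
    #include <memory>

    struct Session                             // hypothetical type for illustration
    {
        ~Session() { std::cout << "session destroyed\n"; }
    };

    int main()
    {
        auto a = std::make_shared<Session>();     // count == 1
        {
            auto b = a;                           // count == 2
            std::cout << a.use_count() << '\n';   // prints 2
        }                                         // b goes away, count back to 1
        std::cout << a.use_count() << '\n';       // prints 1
    }                                             // count reaches 0, Session destroyed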


  • pal666
    replied
    The implementation language is the best feature of Mir.


  • pal666
    replied
    Originally posted by Kemosabe View Post
    I never use smart pointers. It's against my definition of clean and elegant coding.
    Am I a dummy now?
    Yes.


  • boudewijnrempt
    replied
    Originally posted by gamerk2 View Post
    Define "complex" for me?
    Over 200,000 lines of code, more than ten years of history, and more than a hundred contributors, most of whom are no longer around.

    Anything else is simple.

    If you can call what you're doing your 'current' project, it's too small to be complex.

    If "specializing your allocator classes" is the first thing you usually do, you're not working on a complex project.


  • gamerk2
    replied
    Originally posted by JS987 View Post
    Smart pointers are necessary to avoid leaks, crashes, and some security bugs when an application becomes complex.
    Define "complex" for me?


  • Azpegath
    replied
    Originally posted by curaga View Post
    It's also the same increase in RAM use. But this attitude "who cares, I have 25 EB disk and 16 TB RAM" is exactly what's wrong with modern software.
    I don't agree that it's "what's wrong with modern software", but I do agree that it's ONE of the problems with modern software. Many languages are also designed around this attitude: "Managing memory isn't really a thing anymore; it's OK to allocate and deallocate memory all the time."
    Modern CPUs are more demanding about memory layout and proper use of the cache than ever. Check out Herb Sutter's presentations on how the free lunch is over.
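
    One way to act on that in C++ (a sketch with a hypothetical per-frame function, not code from Mir): reserve once and reuse the buffer instead of allocating and deallocating it all the time.

    #include <cstddef>
    #include <vector>

    // Hypothetical per-frame work, for illustration only.
    void fill_frame(std::vector<int>& scratch, std::size_t n)
    {
        scratch.clear();                        // keeps the existing capacity
        for (std::size_t i = 0; i != n; ++i)
            scratch.push_back(static_cast<int>(i));
    }

    int main()
    {
        std::vector<int> scratch;
        scratch.reserve(1024);                  // one allocation up front

        for (int frame = 0; frame != 100; ++frame)
            fill_frame(scratch, 1024);          // no allocations inside the loop
    }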


  • SystemCrasher
    replied
    Originally posted by curaga View Post
    It's also the same increase in RAM use. But this attitude "who cares, I have 25 EB disk and 16 TB RAM" is exactly what's wrong with modern software.
    Exactly. Every developer thinks his program is the best of the best and unique, so surely it can afford an extra 1 MB, and so on. Then we have hundreds of programs and libraries all running at the same time, and it easily adds up to hundreds of MB of RAM and disk space.

    A canonical example would be the printer icon in Kubuntu/Xubuntu. For some odd reason it was written in Python, so some crappy tray icon wastes a whopping 20 MB RSS. That counts as 1% of overall system RAM on a 2 GB machine, btw. I do think wasting 1% of system RAM is a bit too much for an icon, isn't it?


  • curaga
    replied
    Originally posted by TheSoulz View Post
    Really, people still care about anything less than 1 MB of space?
    I could throw away 100 GB right now and still have more than 1 TB of free space -.-
    It's also the same increase in RAM use. But this attitude "who cares, I have 25 EB disk and 16 TB RAM" is exactly what's wrong with modern software.
