The Leading Linux Desktop Platform Issues Of 2018


  • Originally posted by Weasel View Post
    Nope, has nothing to do with it, and you linking the same thing over and over again isn't going to turn it into a fact, sorry. It's purely about symbols and your link has nothing to do with symbols, but sure, keep linking it and stay ignorant.
    http://siomsystems.com/mixing-visual-studio-versions/


    How many times do I have to link it before you pull your head out of your ass? This is a symbol problem. You have multiple symbols leading to multiple functions performing allocations. That is the problem here.

    Originally posted by Weasel View Post
    This is wrong. If two apps used the same runtime then there's no issue with DLLs either. However, if they use different runtimes, which obviously means that... well... they're different. A symbol conflict would just crash it on call. No different than with DLLs (where it crashes due to mixing them up, not due to symbol collision).
    I am not talking about two applications. I am talking about using two versions of a runtime inside a single application. With global symbols, loading an old and a new version of glibc into a single application is possible. Yes, the default symbols from both libraries auto-resolve to only one function. This means that when you use multiple CRTs on an ELF system, it does not fatally explode. Yet Windows Visual Studio runtimes in the same situation fail big time.


    Originally posted by Weasel View Post
    No because the global namespace is the DEFAULT and is even called that way. RTLD_DEFAULT looks up with global rules too. Why the fuck does this shit even exist?
    This is me answering why the heck global exists: RTLD_DEFAULT exists so dlopen-ed code can look up whatever symbol the application's global namespace has decided is the default.

    You can have a program that has loaded three libraries that each contain gcc's malloc; because the symbol is global, only one function in fact gets used. If the wrong default is picked, that is what LD_PRELOAD allows you to change: load the library whose symbols the application should use as the defaults first.
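
    A minimal sketch of that default lookup from code (my illustration, using the dlfcn API; on older glibc, build with gcc demo.c -ldl):

        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            /* Ask the dynamic linker for whatever definition of malloc the
               global scope currently resolves to -- the first global
               definition in load order, which is exactly what LD_PRELOAD
               lets you change by loading another library first. */
            void *(*global_malloc)(size_t) =
                (void *(*)(size_t))dlsym(RTLD_DEFAULT, "malloc");

            void *p = global_malloc(64);
            printf("global malloc is %p, returned %p\n",
                   (void *)global_malloc, p);
            free(p);
            return 0;
        }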

    libcapsule also duplicates this behaviour: it looks at the Steam runtime and the host runtime and chooses the newest glibc to be assigned as the default for your memory allocation, syscall cache, locking functions and so on. An application using libcapsule may not be using the glibc it loaded, but in fact the one libcapsule loaded and swapped in as the default.

    This is a mirror problem.

    The problem you are referring to is where there are two symbols in two different libraries and the loader merges them into one function; the application then breaks because the two functions had different behaviours and the program expected them to stay different, leading to a crash. This is the first form of symbol conflict.

    The problem I am referring to again starts with two symbols in two different libraries, except this time they fail to be merged into one function and both functions get used, one from each library. This causes conflicts in things like memory allocation, because you end up using two incompatible implementations where you should only have been using one. This is the second form of symbol conflict, the one known for causing data corruption and other strange issues.

    Both problems are symbol conflicts: you have two symbols, and the behaviour is not what the developer expected.

    The second one is way more evil. It goes like this: a bit of code does an allocation, which should be shareable since it was done with malloc, right? Then it explodes, because one part of the program was built with one version of the Visual Studio runtime and another part with a different one. Rebuild the program with the same Visual Studio runtime across the board and it behaves itself again.

    Notice they are exact mirrors: damned if you merge the symbols so they become one function, damned if you don't merge the symbols so they stay two functions.

    Now comes the question: how do you mark which symbols should be resolved globally and which symbols must come from a particular DLL/SO? With DLLs you don't have any way to declare functions as global. With ELF you do have the means to declare functions global, and in full ELF as Sun designed it you have the means, using symbol versioning, to say these functions must come from library X, as a filter, without using the global resolution. In fact the versioning design allows pulling functions in as filtered and hidden, so they are never entered into the global resolution table.

    DLL design only deals with the first issue, the one you are tunnel-visioned on. ELF, fully implemented, allows dealing with both cases, but it requires the developer to provide direction, such as declaring a filter with symbol versioning.
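
    And a minimal sketch of the versioned side (GNU toolchain assumed; GLIBC_2.2.5 is the x86-64 glibc baseline version, adjust for your platform). The .symver directive pins our memcpy references to one specific library version instead of letting the default global resolution pick:

        #include <stdio.h>
        #include <string.h>

        /* Bind every reference to memcpy in this file to the GLIBC_2.2.5
           version explicitly, bypassing the global default resolution. */
        __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

        int main(void)
        {
            char dst[8];
            memcpy(dst, "pinned", 7);   /* resolves to memcpy@GLIBC_2.2.5 */
            puts(dst);
            return 0;
        }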



    • Originally posted by oiaohm View Post
      http://siomsystems.com/mixing-visual-studio-versions/

      How many times do I have to link it before you pull your head out of your ass?
      https://www.logicallyfallacious.com/...-by-Repetition

      Originally posted by oiaohm View Post
      This is a symbol problem.
      PROVE IT.

      Like I said, even a retard could use Search / CTRL+F on the shit you linked and look for "symbol", no expertise needed at all. You won't find anything because it's not about symbols, so SHUT THE FUCK UP you clueless puppet.

      You only find a question in the COMMENTS by a retard like yourself WHO DOESN'T UNDERSTAND THE PROBLEM and *THINKS* it's a symbol problem when it's not. Note how his question was IGNORED because he's full of shit.

      Originally posted by oiaohm View Post
      You have multiple symbols leading to multiple functions performing allocations. That is the problem here.
      NO. Symbols don't perform anything, you incompetent retard. Symbols are just mappings of a name to an ADDRESS. They can be VARIABLES TOO, not just functions (which are really just addresses of code). The problem is mixing functions, not symbols.

      Here's another example, because you clearly don't get it. If you STATICALLY LINK two runtimes and mix/match functions, IT WILL STILL CRASH despite the fact that THERE ARE NO FUCKING SYMBOLS at runtime, since it's statically linked (and use -s to strip the symbol table, which is useless anyway). That's because it's like passing the wrong type to a function that expects something else. This doesn't even have to be static libraries; you can have object files or even the same source file.

      At this point, I'm done arguing with a moron who doesn't understand BASICS and stays ignorant like a true piece of shit, linking the same crap I proved wrong 3 months ago. Even now you LITERALLY ignore anything I say, no matter how factual (and you can verify it yourself; even a braindead person can do CTRL+F and see how full of shit you are).

      I suggest you finally understand what a SYMBOL is before wasting people's time with an endless sea of bullshit.
      Last edited by Weasel; 07 November 2018, 08:06 AM.



      • oiaohm: Ok, let me put this another way, because I'm done trying to talk sense into the ignorant. I will give proper reasoning (no insults) for anyone who is still interested in this crap.

        Given what I said above: if you statically link two runtimes, you get the same crash as with DLLs if you mix them up. Since static linking has nothing to do with runtime symbol lookup, this is neither a DLL problem nor a symbol problem. It really is as simple as that.

        Note that DLLs give you exactly what you ask for here. Just like static linking, they give you exactly what YOU (the programmer) ask for: you ask to mix the runtimes, that's what you get. If it crashes, it's YOUR fault. (Note: mixing runtimes means that you mix one runtime's malloc with another's free or the like; it is perfectly valid to load both in the same process if you make sure to malloc and free within the same runtime -- NO CONFLICTS whatsoever, just like static linking -- mixing is done DELIBERATELY.)

        Adding some band-aid like forcing one of them to default to the other is INSANE. Not only do you get something you DID NOT ask for as a developer, you also get such things "behind your back", and this is just one example out of many. That's literally the worst kind of design bug.

        I mean, if YOU didn't want to mix/match them, then fix your fucking code and don't (if a library is retarded and needs it, then you have to link with its runtime, no questions asked -- possibly link both runtimes, with .def files as I gave examples of; yeah, it's ugly, but so is such piss-poor library design).

        It's really simple: if you get consistent behavior with and without static linking, it's sane. DLLs are sane, because you can be sure that a symbol will always match a given library, just like static linking, irrespective of 1) env vars (LD_PRELOAD) or 2) other runtimes installed on the system. Again, you get what you ask for (in code), and that's totally sane. CONSISTENCY is sane.

        So ELF with global namespace is INSANE, to put it nicely.

        The end.


        EDIT: extra tip. Imagine every runtime PREFIXED the function names with the version, because that's effectively what happens -- they are DIFFERENT FUNCTIONS. So instead of "malloc" and "free" you have "crt4_malloc" and "crt4_free" and "crt6_malloc" and "crt6_free" and so on.

        Now think that a library expects you to free using "crt4_free", and imagine HOW INSANE it is to have your crt6_free call (mixing) transparently "converted" to crt4_free. You call this a feature? That's fucking disgusting and insane.

        That's just how bad ELF is. It does all this behind your back. If a library expects runtime X's free, that's what you HAVE to use. You can't use crt6 if it expects crt4. Go import both runtimes, which is perfectly possible with DLLs. Fix your damn code. Use crt4 to interface with the library that expects crt4_free and use crt6 everywhere else. Yes, it's totally possible, just use .def files as I said.
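
        A minimal, self-contained sketch of that discipline (crt4_*/crt6_* are the hypothetical prefixed names from above, stubbed here with separate counters over one malloc purely so the sketch compiles and runs):

            #include <stdio.h>
            #include <stdlib.h>

            static int crt4_live, crt6_live;   /* stand-ins for two separate heaps */

            static void *crt4_malloc(size_t n) { crt4_live++; return malloc(n); }
            static void  crt4_free(void *p)    { crt4_live--; free(p); }
            static void *crt6_malloc(size_t n) { crt6_live++; return malloc(n); }
            static void  crt6_free(void *p)    { crt6_live--; free(p); }

            int main(void)
            {
                void *for_lib = crt4_malloc(128);  /* memory a crt4-based library owns */
                void *mine    = crt6_malloc(256);  /* memory owned by "our" runtime   */

                crt4_free(for_lib);  /* freed by the same runtime that allocated it */
                crt6_free(mine);     /* likewise: no mixing, no conflict            */

                /* crt6_free(for_lib) would be the mixing bug: with real runtimes
                   the two allocators keep different heaps and bookkeeping
                   structures, so the wrong free crashes or silently corrupts. */
                printf("crt4 live=%d crt6 live=%d\n", crt4_live, crt6_live);
                return 0;
            }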

        This is what is SANE.

        Replacing crt6's free with crt4's at RUNTIME is BEYOND INSANE. It stinks of an extremely UNSTABLE, VOLATILE environment that you can't rely on when you build your app. Yuck.

        End of story.
        Last edited by Weasel; 07 November 2018, 08:27 AM.



        • Originally posted by Weasel View Post
          [USER="105978"]That's just how bad ELF is. It does all this behind your back. If a library expects runtime X's free, that's what you HAVE to use. You can't use crt6 if it expects crt4. Go import both runtimes, which is perfectly possible with DLLs. Fix your damn code. Use crt4 to interface with the library that expects crt4_free and use crt6 everywhere else. Yes, it's totally possible, just use .def files as I said.
          http://siomsystems.com/mixing-visual-studio-versions/

          Except what is documented there is that allocating with CRT4 and transferring into a CRT6 section of the program may not work out:
          • Memory Access
          • Function call crosses the CRT boundary.
          • Accessing the allocated memory may not result in an error. However, interpreting the memory content may result in exceptions, data corruption, memory access errors, or a program crash.
          How many times do I have to quote this? The document you are not reading covers the case where you never cross the free and alloc operations and it still fails, except now with a random factor.

          So I am not expecting to cross usage of malloc and free. You use malloc and free in correct pairs from the correct runtimes and you still have issues.

          The person who wrote the site I am quoting went and imported the different Visual Studio runtimes, tried the same thing you keep saying works, and found it does not. It's a horrible failure mode: it appears to work until you run proper quality control.

          Replacing current glibc's malloc, realloc and free with those from a glibc of 10 years ago works perfectly fine. Wine's implementation of the Microsoft CRT redirects to a single implementation of the allocation system. Yes, the fact that Wine redirects to a single allocator is why some programs at times run 100% stable under Wine while the same program suffers random crashes under real Windows.

          Originally posted by Weasel View Post
          Here's another example cause you clearly don't get it. If you STATICALLY LINK two runtimes and mix/match functions IT WILL STILL CRASH despite the fact that THERE ARE NO FUCKING SYMBOLS since it's statically linked (and use -s to strip debug symbols which are useless anyway). That's because it's like using the wrong type to a function that expects something else.
          Using Sun's ld to statically link a dynamic application and its libraries into a pure static binary, I don't have this problem. Why? Because the global symbol resolution is in fact performed in that case even when you statically link. There are symbols at the linking stage every single time, be it static or dynamic linking.

          Yes, static linkers have been made that have a concept of global symbols. The most common linkers with global symbol support are ELF relinkers that can take a dynamic ELF binary and turn it into a static one, but there have been others.

          Originally posted by Weasel View Post
          This doesn't have to be even static libraries, you can even have object files or the same source file.
          Static linkers with a concept of global symbols will resolve globally even when linking plain objects, if the symbol is tagged as global, normally displaying a warning like "function X in object Y has been overridden by object Z". This is in fact useful: say you have an archive called one.a and you have found that one function in it is a dud; you create two.o with the symbol set global, put two.o in the right place on the link line, and the problem is solved.
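
          A minimal sketch of that override-by-link-order trick (one.a, two.o and broken_function are illustrative names):

              /* two.c -- our replacement for the dud function inside one.a */
              #include <stdio.h>

              void broken_function(void)   /* same symbol name as in one.a */
              {
                  puts("fixed implementation");
              }

              /* Link the replacement object before the archive:
                     cc main.o two.o one.a -o app
                 Classic Unix linkers only pull an archive member to satisfy a
                 still-undefined symbol, so with two.o already defining
                 broken_function, the member in one.a is never extracted. */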

          Basically, your static linking point is completely invalid and comes from your limited exposure. It would only be said by a person who has never handled all the different forms of static linkers and relinkers.

          Be a linker static or dynamic, whether it performs global symbol resolution is a feature of the linker.



          • https://www.logicallyfallacious.com/...-by-Repetition

            Stop posting that link: you don't understand it.

            Originally posted by oiaohm View Post
            Except what is documented there is that allocating with CRT4 and transferring into a CRT6 section of the program may not work out.
            I never said anything about sections. I said prefix the function names. I literally mean what I say, not what you think I say. I clearly typed "crt4_malloc" and "crt6_malloc" (and the free() funcs). Do you see the underscore?

            But sure, keep playing dumb.

            You're the first person to use the word "section" on this page. Stop this shit.

            What's the point in explaining to you the shit you link when I did so many months ago and you're back to being the same retard?

            I know why you do it though: you realize that if a library uses the function (not section) called "crt4_malloc" and expects you to use "crt4_free" to free it, it's absolutely INSANE to be able to call crt6_free on it, which is exactly the kind of insanity a global namespace brings.

            This isn't a feature, it's insanity.
            Last edited by Weasel; 08 November 2018, 12:27 PM.




            • https://www.logicallyfallacious.com/...-by-Repetition
              Originally posted by Weasel View Post
              Stop posting that link: you don't understand it.
              No, you don't understand it.

              Originally posted by Weasel View Post
              I know why you do it though: you realize that if a library uses the function (not section) called "crt4_malloc" and expects you to use "crt4_free" to free it, it's absolutely INSANE to be able to call crt6_free on it, which is exactly the kind of insanity a global namespace brings.
              And I clearly said Alloc, meaning allocation: how the memory is created in the page tables the application is using.

              You may think that your application lives in one address space using one set of page tables, right? In reality this is wrong. You have NUMA (https://en.wikipedia.org/wiki/Non-uniform_memory_access), and this is what bites you and makes things more complex.

              X is allocated with crt4_malloc.
              Y is allocated with crt6_malloc.
              Now you attempt to memcpy from X to Y. What happens?
              1) It works by luck.
              2) It segfaults because one of the allocations does not exist where you currently are.
              3) It copies random junk into Y (this would be in the crt6 page table area).
              4) It does nothing (this would be in the crt4 page table area).

              This error is what the site I keep referring to is describing.

              Using two mallocs in your program leads to what is called undefined behaviour. Undefined behaviour also means your program randomly crashes. Yes, it is undefined behaviour under Windows.

              The reality here is that once you have made a pointer and passed it around inside your application, you have made it possible to use the wrong free on it. Free is not your worst problem. The worst problem is that any modification operation on a pointer allocated by an allocation method incompatible with where you are in the program can and does cause crashes.

              With the global namespace correctly used for these functions, there will not be two mallocs and there will not be two frees. Instead there will be one malloc and one free.

              There are parts of the C runtime where you want only one version of a function inside your application. The ability to use two mallocs to create pointers that you are going to pass around inside your application is the path to hell. Using the wrong free is only the start of the problem. The fact that you cannot safely perform operations between pointers allocated by different allocation systems is another big problem.

              The reality, Weasel, is that it is a basic, inexperienced-programmer mistake to think you can mix different mallocs and then only care about freeing with the right one. If you need to use two different mallocs, you are safer using two different processes and the platform's IPC; this avoids the allocation systems' cat fight for control.
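
              For anyone following along, here is the disputed scenario as a concrete sketch. The crt4_/crt6_ wrappers are stand-ins over one malloc purely so it compiles; what happens with two genuinely separate runtimes is exactly what this thread is arguing about:

                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <string.h>

                  /* Stand-ins: in the real argument these would be two
                     different runtimes' allocators, not one malloc. */
                  static void *crt4_malloc(size_t n) { return malloc(n); }
                  static void *crt6_malloc(size_t n) { return malloc(n); }

                  int main(void)
                  {
                      char *x = crt4_malloc(64);   /* X from one runtime */
                      char *y = crt6_malloc(64);   /* Y from the other   */
                      memset(x, 'a', 64);
                      memcpy(y, x, 64);            /* the disputed cross-runtime copy */
                      printf("copied: %.8s\n", y);
                      free(x);
                      free(y);
                      return 0;
                  }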




              • Originally posted by oiaohm View Post
                And I clearly said Alloc, meaning allocation: how the memory is created in the page tables the application is using.
                lmfao.

                1) The symbol name has nothing to do with what the function actually does. Nothing forces it to, and it doesn't matter.
                2) Userland alloc functions usually do NOT touch pages except when they need to grow the heap. Pages are NOT allocated with malloc but with syscalls (i.e. without symbols); malloc does use syscalls when it needs to grow the heap, though. And that has zero conflicts, since the syscall is available equally to any application, even one that imports zero symbols. It simply always works.

                Originally posted by oiaohm View Post
                You may think that your application lives in one address space using one set of page tables, right? In reality this is wrong. You have NUMA (https://en.wikipedia.org/wiki/Non-uniform_memory_access), and this is what bites you and makes things more complex.
                This is simply hilarious. Now you've started using "page tables" as your new buzzword (which has nothing to do with heap allocations, but eh). And of course, as usual, you link something that is completely unrelated.

                The fact that, yet again, your link doesn't even contain the words "page table" should be proof enough for anyone.

                Right, lost cause at this point. Keep babbling.

                Originally posted by oiaohm View Post
                X is allocated with crt4_malloc.
                Y is allocated with crt6_malloc.
                Now you attempt to memcpy from X to Y. What happens?
                Works perfectly fine.

                Originally posted by oiaohm View Post
                1) It works by luck.
                2) It segfaults because one of the allocations does not exist where you currently are.
                3) It copies random junk into Y (this would be in the crt6 page table area).
                4) It does nothing (this would be in the crt4 page table area).
                I laughed out loud at your bullshit. I wonder if you realize that malloc does NOT give you raw allocated memory; it places a bookkeeping structure before the actual allocation.

                And clearly the struct can differ between crt4_malloc and crt6_malloc. That's exactly why calling crt6_free on a crt4_malloc allocation will crash or corrupt your heap: it simply uses a different struct or a different heap (it's as if you cast it to the wrong type).
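
                A minimal sketch of the point about headers (both layouts are invented for illustration; real runtimes use their own, undocumented structures):

                    #include <stdint.h>
                    #include <stdio.h>

                    /* Hypothetical: each runtime's malloc writes its own header
                       immediately before the pointer it hands back. */
                    struct crt4_hdr {
                        uint32_t size;
                        uint32_t magic;
                    };

                    struct crt6_hdr {
                        void    *next_free;        /* free-list link */
                        uint64_t size_and_flags;
                    };

                    int main(void)
                    {
                        /* A crt6-style free() steps back sizeof(struct crt6_hdr)
                           bytes and would interpret a crt4 header's size+magic as
                           a free-list pointer: crash or silent heap corruption. */
                        printf("crt4 header %zu bytes, crt6 header %zu bytes\n",
                               sizeof(struct crt4_hdr), sizeof(struct crt6_hdr));
                        return 0;
                    }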

                It has NOTHING to do with the CPU or page tables or whatever other nonsense (all syscalls would have zero conflicts).

                The reality here is that you are a beyond-clueless monkey with literally not a fucking clue what you're talking about. The amount of bullshit in your last post is hysterical.

                You truly are the definition of techno-babble.

                This fits right up your alley: https://en.wikipedia.org/wiki/Turboencabulator
                Last edited by Weasel; 10 November 2018, 01:36 PM.



                • Originally posted by Weasel View Post
                  1) The symbol name has nothing to do with what the function actually does. Nothing forces it to, and it doesn't matter.
                  2) Userland alloc functions usually do NOT touch pages except when they need to grow the heap. Pages are NOT allocated with malloc but with syscalls (i.e. without symbols); malloc does use syscalls when it needs to grow the heap, though. And that has zero conflicts, since the syscall is available equally to any application, even one that imports zero symbols. It simply always works.
                  Syscalls to grow the heap are not without problems. From the HeapCreate documentation (https://docs.microsoft.com/en-us/win...api-heapcreate): "Creates a private heap object that can be used by the calling process. The function reserves space in the virtual address space of the process and allocates physical storage for a specified initial portion of this block."

                  Sorry to say, the zero-conflicts idea is wrong. HEAP_NO_SERIALIZE: such a bright spark of an idea. It was such a bright spark of an idea for increasing performance that mbind appeared in Linux.

                  Originally posted by Weasel View Post
                  And clearly the struct can differ between crt4_malloc and crt6_malloc. That's exactly why calling crt6_free on a crt4_malloc allocation will crash or corrupt your heap: it simply uses a different struct or a different heap (it's as if you cast it to the wrong type).
                  I am not talking about that fault. There are other differences in memory management implementations; that is not the only one.

                  Consider the fun of HEAP_NO_SERIALIZE, or some of the Linux mbind syscall effects: no difference in the heap structs; the problem is failure to take the right lock before modifying the heap, leading to heap disaster when performing malloc or free or memcpy and so on.

                  Originally posted by Weasel View Post
                  It has NOTHING to do with the CPU or page tables or whatever other nonsense (all syscalls would have zero conflicts).
                  Sorry, syscalls don't have zero conflicts on Windows, or even on Linux, when it comes to the kernel allocating memory. A malloc implemented for speed on Windows can be using HEAP_NO_SERIALIZE, meaning the locking is now pushed back onto the CRT instead of being done by the kernel.

                  1) It works by luck.
                  2) It segfaults because one of the allocations does not exist where you currently are.
                  3) It copies random junk into Y (this would be in the crt6 page table area).
                  4) It does nothing (this would be in the crt4 page table area).
                  Yes, HEAP_NO_SERIALIZE on Windows causes this. Under Linux, to dig yourself into this hole, your allocator has to have been using the mbind syscall.

                  In versions 4 and 6 of the Visual Studio runtime, the allocation system is using HEAP_NO_SERIALIZE behind your mallocs.

                  There are prices to pay for performance.



                  • Originally posted by oiaohm View Post
                    Syscalls to grow the heap are not without problems. From the HeapCreate documentation (https://docs.microsoft.com/en-us/win...api-heapcreate): ...

                    Sorry to say, the zero-conflicts idea is wrong. HEAP_NO_SERIALIZE: such a bright spark of an idea. It was such a bright spark of an idea for increasing performance that mbind appeared in Linux.
                    I have no idea what you're talking about. What the hell did I just read?

                    HeapCreate isn't a syscall. It uses VirtualAlloc, which is going to eventually call a syscall to map pages into memory.

                    HEAP_NO_SERIALIZE basically means that all allocations on that heap don't use locks (mutexes). Nothing else. Nothing to do with memory page allocations or mappings, lol dude.

                    I don't know how to tell you this large secret, but a heap is a DATA STRUCTURE. When you add something to it (HeapAlloc) you need to CHANGE THE DATA STRUCTURE. Changing it must be done under a lock if other threads access it at the same time. That's why serialization is the default. HEAP_NO_SERIALIZE bypasses the locks for performance so all changes to the heap are done directly. None of this has anything to do with syscalls.

                    If you want raw pages -- with no data structure at all -- you use VirtualAlloc. That one uses a syscall though, so it's slow, much slower than HeapAlloc if you do it many times. HeapAlloc does not give you raw pages; it gives you a buffer of the specified size, surrounded by a lot of data structures.

                    Think of it like a filesystem: raw syscalls are like operating directly on the BLOCKS of the device, without mounting the filesystem. A heap is the entire filesystem: it stores metadata (data structures), not just file contents. Of course, creating a heap requires backing raw storage, i.e. device blocks.

                    And what you get with HeapAlloc is just the file contents, which are surrounded by a lot of metadata. HeapAlloc does NOT give you raw device blocks; that's VirtualAlloc's job.

                    Now think about what happens when you delete a file. The device doesn't shrink, so pages do NOT usually get unmapped (usually, it depends though). Hence no syscalls are involved in most HeapFree cases. What happens is that the metadata is updated to reflect that the file is now free and removed.

                    This metadata update is what needs to be protected behind locks, if multiple threads want to access it at the same time.

                    LASTLY: crt4_malloc and crt6_malloc operate on different heaps -- on DIFFERENT FILESYSTEMS using this analogy. Does it make sense to you to deallocate a filesystem's files using a different filesystem's functions?!?? This is EXACTLY why you can't mix heaps; they are DIFFERENT DATA STRUCTURES (filesystems in this analogy).

                    memcpy does NOT touch the data structures: you copy file contents, which are always the same with any heap. (memory is always contiguous, there's no fragmentation with file contents; the heap's metadata may become fragmented though). So you're beyond wrong.

                    Why am I even explaining basics to you, I wonder.

                    Originally posted by oiaohm View Post
                    I am not talking about that fault. There are other differences in memory management implementations; that is not the only one.

                    Consider the fun of HEAP_NO_SERIALIZE, or some of the Linux mbind syscall effects: no difference in the heap structs; the problem is failure to take the right lock before modifying the heap, leading to heap disaster when performing malloc or free or memcpy and so on.
                    No, you are CLUELESS. memcpy DOES NOT TOUCH the data structures in the heap; those data structures sit before the allocated block. malloc DOES touch them, since it has to record in the data structures that some portion of the heap is allocated or freed or whatever, so it has to change them.

                    ALL of this is done in userspace. The kernel only gets involved when you need NEW PAGES, which have *NOTHING* to do with the heap itself, which is just a data structure. The heap merely lives on pages, nothing more.

                    Originally posted by oiaohm View Post
                    Sorry, syscalls don't have zero conflicts on Windows, or even on Linux, when it comes to the kernel allocating memory. A malloc implemented for speed on Windows can be using HEAP_NO_SERIALIZE, meaning the locking is now pushed back onto the CRT instead of being done by the kernel.
                    Dude all of the heap functions (malloc included) are userspace. Just STFU already.

                    The only time they ever go into kernel space is when they need to grow the heap itself.

                    Note that some heaps are NOT GROWABLE when created; they allocate everything at the start. You still use HeapAlloc after that to get allocations, but the PAGES are reserved from the beginning, and the system calls are what is expensive. HeapAlloc is fast because it works entirely in userspace and normally doesn't require any syscalls. In a heap that reserves everything up front, the syscall is also done only once.

                    So after the initial creation, allocations are done FULLY in userspace and will NEVER touch the kernel (except via page faults to back memory as it is touched, but that's completely TRANSPARENT and a kernel feature; not a single extra instruction needs to be aware of it), not even on a HeapFree. The only time it will call the kernel is to unmap the pages when HeapDestroy is called (it will use VirtualFree).
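
                    A minimal sketch of that lifecycle (Win32 API; HEAP_NO_SERIALIZE is only safe here because a single thread uses the heap):

                        #include <windows.h>
                        #include <stdio.h>

                        int main(void)
                        {
                            /* Non-growable private heap: 1 MiB committed up front,
                               maximum equal to the initial size, no internal locks. */
                            HANDLE heap = HeapCreate(HEAP_NO_SERIALIZE, 1 << 20, 1 << 20);
                            if (heap == NULL)
                                return 1;

                            /* Pure userspace bookkeeping: no syscall on the hot path. */
                            void *p = HeapAlloc(heap, 0, 256);
                            printf("block at %p\n", p);
                            HeapFree(heap, 0, p);   /* still no syscall: metadata update */

                            HeapDestroy(heap);      /* only here do pages get unmapped */
                            return 0;
                        }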

                    And with that... I'm completely done with you. Why? Because this is a completely pointless topic that is totally out of your league (like everything else) but has nothing to do with the topic at hand and you always find a way to stray off course and waste my time.

                    Personally, I'm not here to school you on how programming and low-level stuff like this works. That's a job I don't want to do for free.

                    Have fun staying ignorant though.
                    Last edited by Weasel; 11 November 2018, 09:15 AM.



                    • Originally posted by Weasel View Post
                      HEAP_NO_SERIALIZE basically means that all allocations on that heap don't use locks (mutexes). Nothing else. Nothing to do with memory page allocations or mappings,
                      So you don't know it. You are looking at this in isolation.

                      Originally posted by Weasel View Post
                      I don't know how to tell you this large secret, but a heap is a DATA STRUCTURE. When you add something to it (HeapAlloc) you need to CHANGE THE DATA STRUCTURE. Changing it must be done under a lock if other threads access it at the same time. That's why serialization is the default. HEAP_NO_SERIALIZE bypasses the locks for performance so all changes to the heap are done directly. None of this has anything to do with syscalls.
                      Originally posted by Weasel View Post
                      Dude all of the heap functions (malloc included) are userspace. Just STFU already.
                      Those locks also put a safeguard around the VirtualAlloc call.
                      HEAP_NO_SERIALIZE allows you to bypass locks on the heap for performance. Now, if you are a developer mad enough to do this, the next thing should come as no surprise at all: when these developers go and implement malloc for performance, they do something super horrible.

                      As you said, VirtualAlloc is slow. So why not throw VirtualAlloc off into a thread? HEAP_GENERATE_EXCEPTIONS plus HEAP_NO_SERIALIZE has done exactly that, with no safety net, right?


                      The CRTs under Windows implement segfault handlers, and how these are to be implemented is not clearly documented by Microsoft. Your program comes along and attempts to use a section of memory that is still waiting for VirtualAlloc to assign it (as you said, VirtualAlloc is slow, so for performance the runtime has not waited around for it), so it segfaults and needs to get to the right handler.

                      Because you don't have a globally agreed-on memory system, each CRT has set up its segfault handlers on the modules where it is being used, so it can ignore errors outside its domain. That kind of doesn't work when it gets a segfault from a different domain.

                      http://siomsystems.com/mixing-visual-studio-versions/

                      X is allocated with crt4_malloc.
                      Y is allocated with crt6_malloc.
                      Now you attempt to memcpy from X to Y. What happens?
                      1) It works by luck.
                      2) It segfaults because one of the allocations does not exist where you currently are.
                      3) It copies random junk into Y (this would be in the crt6 page table area).
                      4) It does nothing (this would be in the crt4 page table area).

                      Yes, the reported effects are what you get from the Windows CRTs using VirtualAlloc horribly together with segfault handlers. The page table area I am referring to is the area the CRT has asked the kernel to allocate for its use. So running two CRTs under Windows, you end up with race conditions, particularly when someone has attempted to optimise memory allocation for performance.

                      A lot of people complain about the Linux kernel's nature of overcommitting memory. Most people are not aware that Visual Studio runtimes also overcommit memory for performance and depend on catching segfaults until the memory is correctly assigned. This horrible optimisation is not only found in Visual Studio runtimes. Under Linux it is fairly safe, because you have a global malloc, realloc and free, and you can set the segfault handler globally across the complete application.

                      This is the side of memory management for performance that is evil. This is why it's not just about when you call malloc or free; it's about whether you will have the right exception/signal handler when you use that pointer and it is not in fact allocated yet. Yes, the warped reality is that malloc/new returns a pointer when the pages are not allocated yet, and this is all about performance.

