The Leading Linux Desktop Platform Issues Of 2018

  • #91
    Originally posted by carewolf View Post
    Listen dumbass. It is one thing that you are ludicrously wrong, but don't also be offensively wrong.

    Just stop.

C has a global name space, and adding import renaming doesn't change that. Nor is it a flaw in a system if it doesn't force you to do symbol renaming; you are trying to make a virtue of a flaw in DLLs when it is just a flaw, and you can do similar renaming in any other linking framework.
You are fucking delusional. C has NOTHING to do with the binary output. Show me where in the standard it says that the C function name must match the exported function name, or that function exports exist at all. In fact, the function could just as well be "exported" by an index or ordinal, without a name at all. Names are just a convenience, since names, after all, are SOURCE CODE, not binary.

C is ONLY the SOURCE CODE and should have zero part in the binary output. C stops at static libraries with "external linkage" and says absolutely nothing about the dynamic loading of libraries or symbols. In fact, symbols need not even exist in the compiled output.

C's namespace applies to the source code ONLY. C++ name mangling isn't even defined by the C++ standard, so C++ exports with mangled names are even MORE proof that the compiler is free to do as it pleases here, just to "be compatible with C-style function exports" (since C++ was supposed to be compatible with C linkers initially).

    Just stop. Or show some fucking proof for your bullshit. You made the claim, you prove it.


ELF is pathetic because it exposes the source code's internal workings to the binary. Clearly it wasn't even designed with an open mind, just scraped together by C coders (since C was used to write Unix back in the day), like almost all shitty projects with zero design and a "code first, design never" mantra. What if there's a language which doesn't even HAVE function names? Why should the binary format be tailored to a specific language or source code? That's just fucking idiotic.

The binary format must be as generic as possible, so that you can even build it by hand in assembly without relying on exposing C-like function names or other source code internals.
    Last edited by Weasel; 09 October 2018, 08:07 AM.

    Comment


    • #92
      Originally posted by oiaohm View Post
      What you described ignores posix standard.


libc is defined as part of the POSIX standard. Under POSIX there is no other way to allocate memory than free/malloc/calloc in libc. So implementing your own malloc/free will be sitting on top of the C one.
      Show me where it specifies the exact implementation of libc's malloc/free and what data structures they are supposed to use.

      Originally posted by oiaohm View Post
This part of the standard means that when you have 2 or more link maps inside an application, the libc memory allocations have to be kept the same. This makes the libcapsule design mandatory, or you will break legacy Linux applications. Windows platform rules don't define 1 single malloc/free for applications to use; instead they define multiple.
      WRONG. HeapAlloc and other Windows functions imported from kernel32 are defined and guaranteed to be the same on all processes because the same kernel32 is loaded automatically by the loader no matter what. So you're just full of shit.

Nobody except you thinks that malloc or the standard library is "magic" and "special". It's not; only on Linux is it treated that way. On Windows it's just a library, like any other. kernel32 is special, though.

      And in Linux, you don't even need malloc since you can use mmap and kernel directly.

      Originally posted by oiaohm View Post
Free exists on POSIX because it was defined and you are meant to link against the platform libc. Most Linux distributions' platform libc is glibc.

When writing an application, yes, you have to import malloc and free. If your application is valid POSIX, you have imported malloc and free, not written your own, and with that imported malloc and free you have an expected set of behaviours, including the ability to allocate in 1 library and free in a different one.

Sorry, the Windows rule of "you must free inside the dll that allocated" does not fly on POSIX. A common runtime is a requirement of POSIX; this has made doing libcapsule a little trickier than what Windows did with each dll having its own link-map.
Literally all the bullshit you say about POSIX applies to Windows, if you consider kernel32 their "libc". Screw your malloc/free obsession: on Windows, they're called HeapAlloc and HeapFree.

      Originally posted by oiaohm View Post
This is what I call being clueless about terms.

A relocation record is the term for an entry in the dynamic link map going between two different files, be this executable to dynamic link library or dynamic link library to dynamic link library. RIP/PIE has absolutely nothing to do with this. The values for the pointers contained in dynamic link maps have to be resolved; this comes with overhead, and RIP/PIE in fact increases this overhead.

10 .so files with an individual link-map per file run into the same overhead problem as 10 dlls with individual link-maps.
Obviously you have no idea what it takes to load a DLL. It's not like I've written self-generating code or anything, so of course you know better with zero coding experience.

      Here's the deal in general: Show some proof or shut the fuck up with your bullshit.

      inb4 you link an article which you don't even understand, and that has nothing to do with it but you claim as if it does.

      Originally posted by oiaohm View Post
Yes, 10 dlls give you the flexibility of 10 different runtimes, but what about the case where you only have 1 runtime? You are paying the overhead as if you were using 10.
How are you paying the overhead of something that doesn't get loaded if you load just 1? WTF. Again, proof. It helps.

      Originally posted by oiaohm View Post
Also, Weasel, the rule about always freeing in the dll that allocated happens to be based on the presumption that every dll has its own runtime.
      No, it's because each DLL is free to import whatever standard library runtime it wants. The stupid libc or standard library is not special in Windows, it's like any other library. And any other library is free to have 1 million different versions or implementations. You can't guarantee it's the same as the one YOU are using and importing, that's why. This is FREEDOM and FLEXIBILITY offered by DLLs, where ELF would just crap out.

      If you want something forced and guaranteed to be the same across all modules just like your POSIX libc, then kernel32 fits the bill. Every process or module is forced to load kernel32. So yes you do have that alternative in Windows: use HeapAlloc and HeapFree and stop this nonsense malloc/free obsession. They're not special.

      Originally posted by oiaohm View Post
Of course this does cause an overhead problem. Say X is allocated in dll/so A using the C runtime, and dll/so B, using the same C runtime, is the point where X should be freed. Obeying your logic, you have to call an exported function on A that then calls the C runtime to free X; POSIX logic has B call the C runtime directly.
      Most of the time the destruction process is far more involved than just a simple free, so you need a library API anyway. Look at even a simple library like zlib.
      Last edited by Weasel; 09 October 2018, 08:17 AM.

      Comment


      • #93
        Originally posted by Weasel View Post
        WRONG. HeapAlloc and other Windows functions imported from kernel32 are defined and guaranteed to be the same on all processes because the same kernel32 is loaded automatically by the loader no matter what.
Those are two incorrect presumptions. The only dll that has to be loaded by a win32/64 program on an NT operating system is in fact none, if you are mad enough to use the unstable syscall interface. An application compatibility shim can in fact give a dll under your application a different kernel32.dll to your main application, so it's not 100 percent guaranteed, just expected behaviour.

        Originally posted by Weasel View Post
        And in Linux, you don't even need malloc since you can use mmap and kernel directly.
Memory you mmap by syscall is freeable by glibc under Linux. But malloc is not that simple either.
https://sploitfun.wordpress.com/2015...sed-by-malloc/
When you perform a malloc, at the kernel level it is either a brk or an mmap.

        Originally posted by Weasel View Post
        Obviously you have no idea what it takes to load a DLL.
Really? I linked to the libcapsule video, so it should have been clear that I would be using unix-world terms, not windows ones. The reality is how windows PE loading works.

Also you are making a very bad presumption that I don't know how dlls work, because guess what: my scripts are the base of what built the reactos mingw-based build system. Making self-modifying code is different from working on a multi-platform compiler. For a platform compiler you have to know the right terms; link map and relocation record are the platform-neutral terms. You would be used to "import address table" instead of "relocation table"; the problem is that term is not cross-platform. There are some formats where an import address table basically contains what you windows users would call ordinal numbers, so nothing to do with dynamic linking. So every entry in the PE import address table is generically called a relocation record. The COFF format that PE is based on has a relocation table that Microsoft renamed to import address table. Of course records in a relocation table are relocation records.

How many times does a relocation record get overwritten under windows or linux in normal usage? The answer in most cases is twice. First, to point to the handler to be used when/if the function is called; this reduces how much of the .so/.dll files you in fact load, because programs have a bad habit of declaring imports they never use or use insanely rarely. Second, after the handler is called, to point to the library function actually loaded into memory.

Next, a link map is not part of the PE or ELF format itself. Where does your dynamic linker get the value it uses in the relocation record? That's right: the exports from the imported libraries, and these go into forming the link map. Most people who do polymorphic code never touch the link map. Processing the exports multiple times to produce multiple link maps is not free.

        Originally posted by Weasel View Post
        No, it's because each DLL is free to import whatever standard library runtime it wants. The stupid libc or standard library is not special in Windows, it's like any other library. And any other library is free to have 1 million different versions or implementations. You can't guarantee it's the same as the one YOU are using and importing, that's why. This is FREEDOM and FLEXIBILITY offered by DLLs, where ELF would just crap out.
This is what the single link map prevents. There will only be 1 version of a function named exactly X, from exactly 1 .so file, used in a single link map. Importing multiple different versions does not end up with multiple different versions when you lock to a single link map; it ends with an error.

A person has already mentioned there is a flag when building a .so file to trigger the same thing under Linux, so that a .so has its own link map.

libc is part of the platform promise on Linux and Unix based operating systems. So you should not just break that.

Why ELF performs differently to PE is not down to the ELF format. It's the dynamic loader. Ever wonder why under ELF you have dlmopen?

dlmopen gives you the means to have FREEDOM and FLEXIBILITY. It has not helped that documentation on how to use this function is insanely light, or that the glibc implementation has been very badly broken. libcapsule is basically implementing the helper functions you need to keep the Linux/Unix platform promises when you use dlmopen, providing documented examples of how to use it, and providing upstream fixes to glibc so that its version of dlmopen in fact works.

Please note we are talking about a solution for the Linux Desktop, not for Windows. So the solution has to conform to the Linux/POSIX platform promises.

Originally posted by Weasel View Post
If you want something forced and guaranteed to be the same across all modules just like your POSIX libc, then kernel32 fits the bill. Every process or module is forced to load kernel32.
        You are not forced to load kernel32 in every process or module. You can have libraries using ntdll.dll __chkstk(). You can do quite a lot with a library that has only imported ntdll.dll.

It is in fact possible to have a program load 2 different versions of kernel32.dll under windows. Microsoft does a lot of file system protection to make sure pulling this off is absolutely intentional. This split brain is possible because with PE you have multiple link maps. You do sometimes see this happen with some badly bundled windows 95/3.11-era applications that bundled kernel32.dll in the application's program directory; of course these applications fail to run correctly until you delete the rogue kernel32.dll.

Multiple link maps do cause downsides.

        Originally posted by Weasel View Post
        Most of the time the destruction process is far more involved than just a simple free, so you need a library API anyway. Look at even a simple library like zlib.
I guess you did not look at zlib. zlib has runtime-definable alloc and free these days, but if you go back in history it used to be hard-locked to the runtime it was built with. It's done that way because with zlib you need to free stuff quickly.

https://www.zlib.net/manual.html
I guess you missed the settable zalloc and zfree. General operations on data allocated in one library and passed to another are not complex, and it is really a waste of processing to have to route frees back through the allocating library. Instead you either end up including a workaround like zlib's runtime-declarable alloc/free so these can be kept in sync, or you want a platform promise you can trust that they will be the same. Programs developed for POSIX platforms expect malloc and free from libc to be in sync, along with many other things from libc.
        Last edited by oiaohm; 09 October 2018, 10:06 PM.

        Comment


        • #94
          Originally posted by oiaohm View Post
Programs developed for POSIX platforms expect malloc and free from libc to be in sync, along with many other things from libc.
There are many advantages to this; one is that you can easily swap out the malloc library and it applies to the entire application, e.g. tcmalloc or jemalloc might suit your particular workload better. On Windows you end up with this type of mess: http://jemalloc.net/mailman/jemalloc...er/000928.html

          Comment


          • #95
            Originally posted by brrrrttttt View Post
There are many advantages to this; one is that you can easily swap out the malloc library and it applies to the entire application, e.g. tcmalloc or jemalloc might suit your particular workload better. On Windows you end up with this type of mess: http://jemalloc.net/mailman/jemalloc...er/000928.html
There are other, more critical differences than just being able to change the malloc. Having multiple malloc solutions inside an application also leads to less than ideal memory allocation, because malloc implementations normally request blocks of memory from the kernel to reduce the number of syscalls needed to get memory. So each malloc solution you have loaded normally holds a partly used memory block.

It makes debugging memory errors more complex as well. So there is very limited advantage to having lots and lots of link maps.

I see the need for:
1 link map for using distribution-provided libraries.
1 link map for the application's own internal usage and bundled libraries.
and maybe a few link maps for plugins.
With the same libc in them all, so that debugging file, memory and network issues stays simple.

For applications without plugins I cannot see any reason for more than 2 link maps. The number of link maps the windows design choice leads to is just insane.

The windows mess of many link maps solved one problem while incorrectly encouraging developers to go out and reinvent the wheel of memory management like a pack of idiots instead of using the universally provided memory management.

            Comment


            • #96
He's talking about having a stable platform/OS that doesn't change so dramatically; basically something like Windows or macOS, but the 'problem' lies in the fact that those operating systems are created by very large companies who make technical and business decisions for a product, then stick to those decisions. GNU/Linux is a completely different beast: it's created by many individuals or organisations who make apps, libraries, kernels, tools ad infinitum. They each make individual decisions, even if they try to stick to some guidelines, I imagine. The core issue here is that those tools, apps and libraries are free software. When a distribution packages it all together to make a usable system, they make their own decisions about what to package, out of the thousands of available pieces of software, and how to package it. That ecosystem is actually something special, as opposed to corporate products, and I agree with his sentiment that it's far from perfect, but not necessarily with his solutions. It would seem a better path to get distributions to agree between themselves on certain standards, without compromising their advantages over the competition.

I imagine Flatpak et al are useful in certain situations, but I don't want to have to download a 50MB app at every update as opposed to delta changes, and I don't want 50 apps on my system that all package the same 10MB library in different versions that are all out of date, because the app developer has standardised on a library version he doesn't really care about, since it's not a big part of his application, when the latest library is fully patched, available and installed from the repos on my system. I suppose that security argument comes down to: do I trust that the sandboxing in Flatpak will protect me from the developer's laziness in updating a 10-year-old library, versus does the latest library, installed once on my system, justify the hassle for developers of having to accommodate that library version in their application?

And as for "platforms" like Android/iOS, have you ever noticed that base platform apps, the ones that are part of the platform, are very tiny when updating, because they rely on other internal components being compatible, which they can use and therefore don't need to package into the app component that's about to be updated? Whereas third-party apps, not part of the base platform, are sometimes 50MB and upwards, and every update is the same size, so when a few new updates come out in quick succession, you've got to download the _full_ app again each time. It's not an efficient use of bandwidth or storage space.

So, it is a problem of standardisation, but Flatpak/AppImage as the One True Solution sucks, because it accommodates bad security and laziness.

              Comment


              • #97
                Originally posted by finite9 View Post
He's talking about having a stable platform/OS that doesn't change so dramatically; basically something like Windows or macOS, but the 'problem' lies in the fact that those operating systems are created by very large companies who make technical and business decisions for a product, then stick to those decisions. GNU/Linux is a completely different beast: it's created by many individuals or organisations who make apps, libraries, kernels, tools ad infinitum. They each make individual decisions, even if they try to stick to some guidelines, I imagine. The core issue here is that those tools, apps and libraries are free software. When a distribution packages it all together to make a usable system, they make their own decisions about what to package, out of the thousands of available pieces of software, and how to package it. That ecosystem is actually something special, as opposed to corporate products, and I agree with his sentiment that it's far from perfect, but not necessarily with his solutions. It would seem a better path to get distributions to agree between themselves on certain standards, without compromising their advantages over the competition.

I imagine Flatpak et al are useful in certain situations, but I don't want to have to download a 50MB app at every update as opposed to delta changes, and I don't want 50 apps on my system that all package the same 10MB library in different versions that are all out of date, because the app developer has standardised on a library version he doesn't really care about, since it's not a big part of his application, when the latest library is fully patched, available and installed from the repos on my system. I suppose that security argument comes down to: do I trust that the sandboxing in Flatpak will protect me from the developer's laziness in updating a 10-year-old library, versus does the latest library, installed once on my system, justify the hassle for developers of having to accommodate that library version in their application?

And as for "platforms" like Android/iOS, have you ever noticed that base platform apps, the ones that are part of the platform, are very tiny when updating, because they rely on other internal components being compatible, which they can use and therefore don't need to package into the app component that's about to be updated? Whereas third-party apps, not part of the base platform, are sometimes 50MB and upwards, and every update is the same size, so when a few new updates come out in quick succession, you've got to download the _full_ app again each time. It's not an efficient use of bandwidth or storage space.

So, it is a problem of standardisation, but Flatpak/AppImage as the One True Solution sucks, because it accommodates bad security and laziness.
Flatpak had to start somewhere. Flatpak's major work is figuring out what frameworks need to exist so we can in fact sandbox applications without it being painful for the user.

We are going to have to watch how Flatpak develops. If they start using libcapsule for opengl/vulkan, this may end up always using whatever is the newest glibc, be this the host glibc or the runtime glibc.

A question that we have not really fully explored is how much can be done with shims. Like: for how many old libraries that applications want can we make a shim, so the libraries pretend to exist even though applications are really using newer versions?

There are reasons you need to shim for old applications. For example, an old application's own libraries might use a function named the same as one added to a newer version of a system library.

Please remember Android is a Linux distribution. Inside a distribution you do at times see quite small application updates, but this does not last forever.


Really you have to weigh the idea of flatpak against the idea of a chroot to run odd applications.

                Comment


                • #98
                  Originally posted by brrrrttttt View Post
There are many advantages to this; one is that you can easily swap out the malloc library and it applies to the entire application, e.g. tcmalloc or jemalloc might suit your particular workload better. On Windows you end up with this type of mess: http://jemalloc.net/mailman/jemalloc...er/000928.html
That's people's problem for using malloc/free when they should be using HeapAlloc/HeapFree (which can be hotpatched, btw).

                  Comment


                  • #99
                    Originally posted by oiaohm View Post
Those are two incorrect presumptions. The only dll that has to be loaded by a win32/64 program on an NT operating system is in fact none, if you are mad enough to use the unstable syscall interface. An application compatibility shim can in fact give a dll under your application a different kernel32.dll to your main application, so it's not 100 percent guaranteed, just expected behaviour.
                    Dude just SHUT UP WITH MISLEADING BULLSHIT and provide PROOF because I know that if you did you'd realize just how fucking wrong you are.

                    Copy kernel32.dll from System32 (or SysWOW64 if you're testing with a 32-bit application) to the directory containing your EXE file Run the EXE file Process Monitor shows it doesn't even bother to ...

                    while reading the answers to Can I statically link (not import) the Windows system DLLs? I came up with another question. So: Is there a way to write a program that has no dependencies (nothing is


kernel32 gets loaded by the process loader even if your executable has literally no imports whatsoever (yes, there's malware that scans the address space for imports manually, with exception handling; you don't have to use syscalls). Obviously, when a library gets loaded it gets loaded into the same process, so kernel32 is already loaded no matter what you do.

                    Same with ntdll.

                    Once again you keep talking out of your ass and bring so much bullshit misinformation on this forum that people actually believe. Just fucking stop already.

                    An optional entry point into a dynamic-link library (DLL). When the system starts or terminates a process or thread, it calls the entry-point function for each loaded DLL using the first thread of the process.


                    Excerpt:
                    Originally posted by Microsoft
                    Because Kernel32.dll is guaranteed to be loaded in the process address space when the entry-point function is called, calling functions in Kernel32.dll does not result in the DLL being used before its initialization code has been executed. Therefore, the entry-point function can call functions in Kernel32.dll that do not load other DLLs.
                    Originally posted by oiaohm View Post
Memory you mmap by syscall is freeable by glibc under Linux. But malloc is not that simple either.
https://sploitfun.wordpress.com/2015...sed-by-malloc/
When you perform a malloc, at the kernel level it is either a brk or an mmap.
I don't know what your point is. mmap is analogous to VirtualAlloc under Windows (along with CreateFileMapping and such), which *can* be used to implement your own allocator completely in userspace; that's what HeapAlloc (and malloc) do anyway.

                    Originally posted by oiaohm View Post
Really? I linked to the libcapsule video, so it should have been clear that I would be using unix-world terms, not windows ones. The reality is how windows PE loading works.
                    I don't care about your unix explanations, I'm talking about your bullshit about Windows.

                    Originally posted by oiaohm View Post
Also you are making a very bad presumption that I don't know how dlls work, because guess what: my scripts are the base of what built the reactos mingw-based build system. Making self-modifying code is different from working on a multi-platform compiler. For a platform compiler you have to know the right terms; link map and relocation record are the platform-neutral terms. You would be used to "import address table" instead of "relocation table"; the problem is that term is not cross-platform. There are some formats where an import address table basically contains what you windows users would call ordinal numbers, so nothing to do with dynamic linking. So every entry in the PE import address table is generically called a relocation record. The COFF format that PE is based on has a relocation table that Microsoft renamed to import address table. Of course records in a relocation table are relocation records.

How many times does a relocation record get overwritten under windows or linux in normal usage? The answer in most cases is twice. First, to point to the handler to be used when/if the function is called; this reduces how much of the .so/.dll files you in fact load, because programs have a bad habit of declaring imports they never use or use insanely rarely. Second, after the handler is called, to point to the library function actually loaded into memory.

Next, a link map is not part of the PE or ELF format itself. Where does your dynamic linker get the value it uses in the relocation record? That's right: the exports from the imported libraries, and these go into forming the link map. Most people who do polymorphic code never touch the link map. Processing the exports multiple times to produce multiple link maps is not free.
                    Like I said, proof or shut up.

Clearly you haven't EVEN TESTED anything, so you're full of shit, considering you said it's possible to not have ntdll/kernel32 loaded in a process (without employing exploits, I mean with a normal CreateProcess).

tl;dr You have no idea what you're talking about and spread misinformation to the poor sods who actually believe your technobabble crap. You don't have to go further than the first two links I gave, and even Microsoft themselves say this in the DllMain docs.

                    Originally posted by oiaohm View Post
Why ELF performs differently to PE is not down to the ELF format. It's the dynamic loader. Ever wonder why under ELF you have dlmopen?

dlmopen gives you the means to have FREEDOM and FLEXIBILITY. It has not helped that documentation on how to use this function is insanely light, or that the glibc implementation has been very badly broken. libcapsule is basically implementing the helper functions you need to keep the Linux/Unix platform promises when you use dlmopen, providing documented examples of how to use it, and providing upstream fixes to glibc so that its version of dlmopen in fact works.

Please note we are talking about a solution for the Linux Desktop, not for Windows. So the solution has to conform to the Linux/POSIX platform promises.
No, what ELF does differently is that it does not FORCEFULLY ASSOCIATE a symbol with a library/module, and that is why it is garbage.


                    Originally posted by oiaohm View Post
                    You are not forced to load kernel32 in every process or module. You can have libraries using ntdll.dll __chkstk(). You can do quite a lot with a library that has only imported ntdll.dll.
But kernel32 is still loaded even if you don't import it, the same kernel32, in literally every process in existence. That's why HeapAlloc/HeapFree can be used, as long as they refer to the same heap (use GetProcessHeap() for most situations).

                    Originally posted by oiaohm View Post
It is in fact possible to have a program load 2 different versions of kernel32.dll under windows. Microsoft does a lot of file system protection to make sure pulling this off is absolutely intentional. This split brain is possible because with PE you have multiple link maps. You do sometimes see this happen with some badly bundled windows 95/3.11-era applications that bundled kernel32.dll in the application's program directory; of course these applications fail to run correctly until you delete the rogue kernel32.dll.
                    Prove it or shut it.


                    Originally posted by oiaohm View Post
I guess you did not look at zlib. zlib has runtime-definable alloc and free these days, but if you go back in history it used to be hard-locked to the runtime it was built with. It's done that way because with zlib you need to free stuff quickly.
That has nothing to do with it: those are function pointers and have nothing to do with SYMBOLS. They're also NOT used to destroy the zlib object; they're used internally, so it doesn't matter, since they'll always match.

I'm talking about the fact that you use functions like inflateInit and inflateEnd, because, like I said, construction and destruction are usually more than just freeing something. zlib NEVER asks you to free anything to "deallocate its state"; instead it gives you functions to do that (init/end), like any proper API.
                    Last edited by Weasel; 10 October 2018, 08:42 AM.

                    Comment


                    • Originally posted by Weasel View Post
                      No they aren't, that's the whole point of this thread. If they were working great, this thread wouldn't exist.
No, the point of this thread is that "GNU/Linux Desktops don't have a sizable marketshare", i.e. the perennial "Year of the Linux Desktop" problem. The reason is that Linux has poor support for proprietary apps cross-distro. There is no open standard for closed apps.

As far as actually using a Linux Desktop goes: it works better than Windows 10.

                      Comment
