
Mono 2.6 Released, Supports LLVM Generation


  • Originally posted by ciplogic View Post
    You don't give a damn for devel time? Not a problem!
    Nope, not if less development time means lower quality, worse performance and higher resource usage (though according to some of your comments it should be better; time will tell). I'm speaking for myself, of course. I'm sure there are people who think differently and who do care about development time (I care about it too sometimes, but not in this case).

    I've already said what I think about comparing GNotes and Tomboy. I even gave you a better example: Firefox. You agreed Tomboy consumes 25% more memory, and according to what you said it works like this: less development time, more memory usage. If that is correct, I'll stick to what I said above.

    What I find strange is that the Gnome devs didn't care about the things Blackstar and some fanboys scream about on their blogs or PG back when there was just Java (or C# wasn't mature) and there was no MS-Novell deal. Even if C# is better than Java it means nothing, because afaik their goals and "advantages" are similar, and if that is true they should have been using Java for quite a long time already.

    The Shared Memory numbers are shown and they say how much is shared.
    I'm really interested in measuring memory usage correctly. Given the link I gave, could you explain how you measured real memory usage, please?
    Last edited by kraftman; 21 December 2009, 04:25 PM.



    • I'm really interested in measuring memory usage correctly. Given the link I gave, could you explain how you measured real memory usage, please?
      My understanding is as follows (please correct any inaccuracies you find):
      • Virtual memory is the amount of memory requested by the program, which may or may not be in use.
      • Resident memory is the actual amount of memory in use.
      • Shared memory is the amount of memory that can be shared with other applications or between multiple instances of the same application (this accounts for shared objects, system libraries, etc).


      As a rule of thumb, low resident memory is what you are interested in. If you have two programs with similar resident memory, the one with the higher shared memory wins. Virtual memory is not a useful metric when comparing memory usage.

      Managed (GC-enabled) applications tend to have slightly inflated virtual memory values (this is done for performance reasons; the details are interesting but rather too technical for this discussion). For example, .Net/win32 always requests something like 8MB of memory on application startup, even if the application is only using 2MB of it. (The extra 6MB do not actually hurt performance, i.e. they won't cause you to run out of physical memory.)

      Mono applications also tend to have lower shared memory values (this is a bad thing), because the runtime cannot share JIT-able bytecode. This can be improved using AOT on your binaries (AOT-ed binaries are shareable).



      • Originally posted by BlackStar View Post
        My understanding is as follows (please correct any inaccuracies you find):
        • Virtual memory is the amount of memory requested by the program, which may or may not be in use.
        • Resident memory is the actual amount of memory in use.
        • Shared memory is the amount of memory that can be shared with other applications or between multiple instances of the same application (this accounts for shared objects, system libraries, etc).


        As a rule of thumb, low resident memory is what you are interested in. If you have two programs with similar resident memory, the one with the higher shared memory wins. Virtual memory is not a useful metric when comparing memory usage.

        Managed (GC-enabled) applications tend to have slightly inflated virtual memory values (this is done for performance reasons; the details are interesting but rather too technical for this discussion). For example, .Net/win32 always requests something like 8MB of memory on application startup, even if the application is only using 2MB of it. (The extra 6MB do not actually hurt performance, i.e. they won't cause you to run out of physical memory.)

        Mono applications also tend to have lower shared memory values (this is a bad thing), because the runtime cannot share JIT-able bytecode. This can be improved using AOT on your binaries (AOT-ed binaries are shareable).
        Thanks, but which tool?



        • On Linux there is a virtual filesystem named /proc, which exposes data about the processes running on your system. You can use plenty of tools to dig into it: command-line ones like cat (to display raw information about a process), or tools that do the work for you, such as top (a command-line "task manager"), gtop (similar) or gnome-system-monitor.
          If you're asking about Windows, Task Manager does the same thing, but graphically.



          • Originally posted by ciplogic View Post
            On Linux there is a virtual filesystem named /proc, which exposes data about the processes running on your system. You can use plenty of tools to dig into it: command-line ones like cat (to display raw information about a process), or tools that do the work for you, such as top (a command-line "task manager"), gtop (similar) or gnome-system-monitor.
            If you're asking about Windows, Task Manager does the same thing, but graphically.
            Thank you. However, afaik top is useless for this (I'll paste a link about it if I find one). I'm not sure what the *-system-monitors are based on, but if it's top, that could make them useless too. I know about /proc, but I'm looking for a tool that is both easy to use and accurate.



            • On Vista and newer open the "resource monitor" app (task manager -> performance -> resource monitor) and check the "commit", "working set" and "shared" numbers under "memory". Also, doesn't KDE have something in the spirit of gnome-system-monitor?



              • Originally posted by BlackStar View Post
                On Vista and newer open the "resource monitor" app (task manager -> performance -> resource monitor) and check the "commit", "working set" and "shared" numbers under "memory". Also, doesn't KDE have something in the spirit of gnome-system-monitor?
                It has, but the point is whether those tools are meaningful. Maybe they are meaningful to some extent when comparing the memory usage of applications on Linux against each other, but they're probably meaningless for comparing Linux to Windows, according to the link I gave you before. I'm looking for a link about top now, and more:



                If you go through the output, you will find that the lines with the largest Kbytes number are usually the code segments of the included shared libraries (the ones that start with "lib" are the shared libraries). What is great about that is that they are the ones that can be shared between processes. If you factor out all of the parts that are shared between processes, you end up with the "writeable/private" total, which is shown at the bottom of the output. This is what can be considered the incremental cost of this process, factoring out the shared libraries. Therefore, the cost to run this instance of KEdit (assuming that all of the shared libraries were already loaded) is around 2 megabytes. That is quite a different story from the 14 or 25 megabytes that ps reported.
                You see? Some tools may report an application is using 14 or 25MB when it's probably only using about 2MB.

                However, the *-system-monitors may be correct:

                ps aux |grep kate
                pmap -d 7962

                mapped: 331000K
                writeable/private: 10260K (this is probably what counts)
                shared: 21856K (and some part of this - the rw--- bits)

                KDE-System-Monitor shows it as 12MB.

                For ksysguard:

                top:
                RES: 28MB SHR: 16MB

                pmap -d:
                mapped: 371284K writeable/private: 18976K shared: 20396K

                Ksysguard itself:

                RES: 12.2MB SHR: 16MB

                Which one is correct? :>
                Last edited by kraftman; 22 December 2009, 08:54 AM.



                • If you're really interested in memory usage (from your actual RAM modules), look at resident memory. How much memory will each instance use (given the same setup)? Roughly its resident memory. The virtual memory discussed in your article is not allocated up front.
                  If you know malloc from C programming: when you allocate memory with it, it is not committed (and counted as part of resident memory) until you actually write data to one of its physical pages (4k in size).



                  • Originally posted by BlackStar View Post
                    Give examples of performant Mono applications? They'll give insane excuses and ignore them.
                    • Paint.Net may be fast, but it uses System.Drawing so it
                    Notice a pattern emerging?
                    Insane excuses? System.Drawing encapsulates the native, highly optimized GDI+ library. Using a program that relies very heavily on native code calls as an example of managed code speed is rather pointless. The same goes for programs where hardware-accelerated graphics do the heavy lifting. If you want to compare performance, do so with programs that actually use the language in question for their heavy lifting, for example the Language Shootout tests.



                    • Originally posted by XorEaxEax View Post
                      Insane excuses? System.Drawing encapsulates the native, highly optimized GDI+ library.
                      Anyone calling GDI+ "highly optimized" is either insane or misinformed.

                      (Edit) To clarify, GDI+ is deprecated garbage developed by Microsoft. It's not hardware accelerated, it's leaky, buggy, slow and unwieldy, with terrible font and text layout support. WinForms 1.1 used to rely on GDI+ entirely, but that was changed for WinForms 2.0 which contains a messy mix of GDI and GDI+.

                      On the Mono side, GDI+ is implemented on top of Cairo and Pango - although the latter is not enabled in default builds. Mono's WinForms is built entirely upon System.Drawing (GDI+) and Xlib, which is slightly less insane than the approach in .Net - but every bit as slow and buggy.

                      System.Drawing is a .Net wrapper over GDI+, which adds yet another layer of indirection.

                      Using a program that relies very heavily on native code calls as an example of managed code speed is rather pointless. The same goes for programs where hardware-accelerated graphics do the heavy lifting. If you want to compare performance, do so with programs that actually use the language in question for their heavy lifting, for example the Language Shootout tests.
                      *I* don't want to compare performance, I know exactly how managed code performs compared to native code. It's my job to know that.

                      The examples given here refute the claim that you cannot create performant managed applications. You seem to be interested in something else entirely, namely language microbenchmarks. Interesting topic, but not the point of the discussion you quoted. Real applications are not developed in a void.
                      Last edited by BlackStar; 22 December 2009, 01:13 PM.

