Mono 2.6 Released, Supports LLVM Generation

  • On Vista and newer, open the Resource Monitor app (Task Manager -> Performance -> Resource Monitor) and check the "commit", "working set" and "shared" numbers under "memory". Also, doesn't KDE have something in the spirit of gnome-system-monitor?

    • Originally posted by BlackStar View Post
      On Vista and newer, open the Resource Monitor app (Task Manager -> Performance -> Resource Monitor) and check the "commit", "working set" and "shared" numbers under "memory". Also, doesn't KDE have something in the spirit of gnome-system-monitor?
      It has, but the point is whether those tools are meaningful. Maybe they are meaningful to some extent when comparing memory usage between applications on Linux, but they're probably meaningless for comparing Linux to Windows, according to the link I gave you before. I was looking for a link about top, and found even more:

      http://virtualthreads.blogspot.com/2...-on-linux.html

      If you go through the output, you will find that the lines with the largest Kbytes number are usually the code segments of the included shared libraries (the ones that start with "lib" are the shared libraries). What is great about that is that they are the ones that can be shared between processes. If you factor out all of the parts that are shared between processes, you end up with the "writeable/private" total, which is shown at the bottom of the output. This is what can be considered the incremental cost of this process, factoring out the shared libraries. Therefore, the cost to run this instance of KEdit (assuming that all of the shared libraries were already loaded) is around 2 megabytes. That is quite a different story from the 14 or 25 megabytes that ps reported.
      You see? Some tools can report that an application is using 14 or 25 MB when it's probably using only about 2 MB.

      However, the *-system-monitor tools can be correct:

      ps aux | grep kate
      pmap -d 7962    (7962 being kate's PID from the ps output)

      mapped: 331000K    writeable/private: 10260K (this is probably what counts)    shared: 21856K (plus some part of this, the rw--- mappings)

      KDE System Monitor shows kate at 12 MB, which is close to the ~10 MB writeable/private figure.

      For ksysguard:

      top:        RES: 28 MB    SHR: 16 MB
      pmap -d:    mapped: 371284K    writeable/private: 18976K    shared: 20396K
      ksysguard's own display:    RES: 12.2 MB    SHR: 16 MB

      Which one is correct? :>
      Last edited by kraftman; 12-22-2009, 07:54 AM.

      • If you're really interested in memory usage (actual RAM), look at resident memory. How much memory will each instance use (with the same setup)? Roughly the same resident amount. The virtual memory discussed in your article is not allocated upfront.
        If you're familiar with C's malloc: memory allocated with it is not committed (and counted as part of resident memory) until you actually write data to a physical page of it (4 KB in size).
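
        A minimal sketch of that behavior on Linux (an illustration, not code from the thread): malloc() only reserves address space, and the process's resident size (VmRSS) grows only when the pages are actually written.

          /* lazy_commit.c: watch VmRSS before and after touching malloc'd pages */
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          static void print_rss(const char *label)
          {
              char line[256];
              FILE *f = fopen("/proc/self/status", "r");
              if (!f) return;
              while (fgets(line, sizeof line, f))
                  if (strncmp(line, "VmRSS", 5) == 0)
                      printf("%-14s %s", label, line);   /* line already ends in \n */
              fclose(f);
          }

          int main(void)
          {
              size_t size = 100 * 1024 * 1024;   /* 100 MB */
              print_rss("before malloc:");
              char *buf = malloc(size);          /* reserves address space only */
              if (!buf) return 1;
              print_rss("after malloc:");
              memset(buf, 1, size);              /* touching every page commits it */
              print_rss("after memset:");
              free(buf);
              return 0;
          }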

        • Originally posted by BlackStar View Post
          Give examples of performant Mono applications? They'll give insane excuses and ignore them.
          - Paint.Net may be fast, but it uses System.Drawing so it...
          Notice a pattern emerging?
          Insane excuses? System.Drawing encapsulates the native, highly optimized GDI+ library. Using a program that relies very heavily on native code calls as an example of managed code speed is rather pointless. The same goes for programs where hardware-accelerated graphics do the heavy lifting. If you want to compare performance, do so with programs that actually use the language in question for their heavy lifting, for example the Language Shootout tests.

          • Originally posted by XorEaxEax View Post
            Insane excuses? System.Drawing encapsulates the native, highly optimized GDI+ library.
            Anyone calling GDI+ "highly optimized" is either insane or misinformed.

            (Edit) To clarify: GDI+ is deprecated garbage developed by Microsoft. It's not hardware accelerated; it's leaky, buggy, slow and unwieldy, with terrible font and text layout support. WinForms 1.1 used to rely entirely on GDI+, but that changed in WinForms 2.0, which contains a messy mix of GDI and GDI+.

            On the Mono side, GDI+ is implemented on top of Cairo and Pango, although the latter is not enabled in default builds. Mono's WinForms is built entirely upon System.Drawing (GDI+) and Xlib, which is slightly less insane than the approach in .Net, but every bit as slow and buggy.

            System.Drawing is a .Net wrapper over GDI+, which adds yet another layer of indirection.

            Using a program that relies very heavily on native code calls as an example of managed code speed is rather pointless. The same goes for programs where hardware-accelerated graphics do the heavy lifting. If you want to compare performance, do so with programs that actually use the language in question for their heavy lifting, for example the Language Shootout tests.
            *I* don't want to compare performance; I know exactly how managed code performs compared to native code. It's my job to know that.

            The examples given here refute the claim that you cannot create performant managed applications. You seem to be interested in something else entirely, namely language microbenchmarks. Interesting topic, but not the point of the discussion you quoted. Real applications are not developed in a void.
            Last edited by BlackStar; 12-22-2009, 12:13 PM.

            • Originally posted by BlackStar View Post
              Anyone calling GDI+ "highly optimized" is either insane or misinformed.
              It doesn't matter what you think of GDI+; it's still doing the heavy lifting in Paint.Net. So if you think Paint.Net is performant, then GDI+ can't be all that bad. Granted, I've only used GDI+ from native code, so I haven't had to suffer the overhead of managed-to-unmanaged calls, but that has nothing to do with GDI+.

              Originally posted by BlackStar View Post
              The examples given here refute the claim that you cannot create performant managed applications. You seem to be interested in something else entirely, namely language microbenchmarks.
              No, I'm interested in comparing the performance of the code the languages themselves produce. Currently the closest thing to a fair comparison is the Language Shootout benchmarks, since a) the programs are written 100% in the language they represent and b) the programs solve the exact same problem.

              And no, I've never claimed that you can't create 'performant' managed applications, since what qualifies as 'performant' differs greatly depending on the application. What I am claiming is that native code will be more 'performant' than managed code, and thus if performance is of key importance to your application, native code is the choice.
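
              To see what such a test looks like, here is a sketch of a pure-computation kernel in C (my own illustration, not an actual Shootout program); porting the same loop 1:1 to C# would give exactly the apples-to-apples comparison described above.

                /* pure computation: the language/runtime does all the work,
                   no native libraries or GPU behind the scenes */
                #include <stdio.h>
                #include <time.h>

                int main(void)
                {
                    struct timespec t0, t1;
                    clock_gettime(CLOCK_MONOTONIC, &t0);

                    double sum = 0.0;
                    for (long i = 1; i <= 100000000L; i++)
                        sum += 1.0 / ((double)i * (double)i);   /* converges to pi^2/6 */

                    clock_gettime(CLOCK_MONOTONIC, &t1);
                    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
                    printf("sum = %.9f, elapsed = %.1f ms\n", sum, ms);
                    return 0;
                }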

              • Originally posted by XorEaxEax View Post
                What I am claiming is that native code will be more 'performant' than managed code, and thus if performance is of key importance to your application, native code is the choice.
                Let's assume that C++ is on average 50% faster than C#. A typical game might spend 50% of its time in OpenGL drivers, 25% in physics, 5% in OpenAL, 5% in networking, 10% in scripting (AI, game code) and 5% in other CPU tasks (frame setup, input handling, timing, etc.).

                Using C++ would grant me a 2.5% speed advantage in CPU tasks (halving that 5% slice) and maybe another 2.5% from avoiding interop overhead. The rest of the tasks are not affected by the choice of language.

                5% better framerate in C++? Big effing deal. You can get that 5% back by spending a couple of days tweaking your OpenGL shaders in the C# version. A couple of days? Yes, those are the aggregate compilation times for the C++ version.

                As I said, applications are not created in a void. Language performance is meaningless on its own for anything but pure math code. Every other task has to rely on OS components, middleware libraries and tons of other modules with their own performance characteristics. Does it matter that Python is 50% slower than C when all the program is doing is copying files on disk, manipulating XML, executing SQL queries or waiting for the OS to finish redrawing the window?
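
                A quick back-of-the-envelope check of the arithmetic above (my own sketch; it reads "50% faster" as "takes half the time", and the 2.5% interop figure is the estimate from the post):

                  /* Amdahl's law applied to the frame profile above */
                  #include <stdio.h>

                  int main(void)
                  {
                      double cpu     = 0.05;   /* fraction of frame in "other CPU tasks" */
                      double interop = 0.025;  /* assumed interop overhead, gone in C++  */
                      double speedup = 2.0;    /* C++ doing that slice in half the time  */

                      /* new frame time relative to the old one */
                      double t = (1.0 - cpu - interop) + cpu / speedup;
                      printf("frame-rate gain: %.1f%%\n", (1.0 / t - 1.0) * 100.0);
                      return 0;
                  }

                Running it gives roughly 5%, in line with the figure quoted above.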
                Last edited by BlackStar; 12-25-2009, 05:53 AM.
