Mono 2.10, Moonlight 4 Preview 1 Released


  • curaga
    replied
    Moving the mouse is not real-life usage? And that's the simplest example of how you spot a 15ms delay on such a screen.



  • ciplogic
    replied
    Originally posted by curaga View Post
    +1 RealNC.

    Have you guys used the older LCDs with a lag of 15ms? It's definitely noticeable.
    And a lightning bolt will also stay imprinted on your retina, even though its lifespan is much shorter.
    Joking aside, the context was real-life usage, where a 15 ms lag is unnoticeable. Do you perform 30 operations per second and want to jump to 60, and this 15 ms lag is what stops you? Say you select 50 files on disk and press Delete: instead of appearing instantly, the dialog appears 15 milliseconds later. Will you spot the difference? Also, since the context was the difference, not the raw 15 milliseconds: if a dialog already takes 30 milliseconds, will you notice that it takes 15 milliseconds more? What if the difference sits in a context of 80 to 100 ms?



  • curaga
    replied
    +1 RealNC.

    Have you guys used the older LCDs with a lag of 15ms? It's definitely noticeable.



  • RealNC
    replied
    Originally posted by ciplogic View Post
    A 15 ms delay is noticeable? Or did you mistype 150 ms? Your eye can notice a change at 40 ms (mostly at 24/25 FPS), but the brain registers it in about 20 ms. So you "skip a frame".
    15 ms is noticeable. The human limit is about 7 ms.

    The "24FPS" thingy is a myth that people have been telling for a long time now. You think that 24FPS was chosen for film movies because that's the minimum latency between frames where humans can't perceive any more improvement in "smoothness"? Wrong. It's the other way around. It's the *maximum* latency between frames where humans can perceive the results as motion rather than a series of still frames.

    People know when something is 24 or 40 or 60FPS. Once response times go under 10ms, stuff starts looking extremely smooth. And we can't make out any further differences in response times when stuff starts happening quicker than about 7ms.
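    (For reference, the millisecond figures in this exchange are just 1000 divided by the frame or refresh rate; a minimal sketch of that arithmetic, with 144 Hz added only for comparison since it gives an interval near the 7 ms mentioned above:)

    #include <cstdio>

    // One frame (or refresh) at R per second lasts 1000 / R milliseconds.
    int main()
    {
        const double rates[] = {24.0, 25.0, 60.0, 144.0};   // Hz
        for (unsigned i = 0; i < sizeof(rates) / sizeof(rates[0]); ++i)
            std::printf("%6.1f Hz -> %5.2f ms per frame\n", rates[i], 1000.0 / rates[i]);
        return 0;
    }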



  • ciplogic
    replied
    Originally posted by smitty3268 View Post
    Qt has its own standard collection classes, iterators, etc., so if that was really what was slowing down the C++ app you tested, then maybe you shouldn't say the same is true of Qt without also testing it.
    OpenSuse 11.4/RC2/64 bit: $ gcc --version
    gcc (SUSE Linux) 4.5.1 20101208 [gcc-4_5-branch revision 167585]

    Original C++ code:
    $ g++ -O3 -msse2 pi.cpp -o pi-cpp
    $ time ./pi-cpp 20000 out.txt
    real 0m26.016s
    user 0m25.944s
    sys 0m0.005s

    Qt timing (all STL code was replaced with its Tulip equivalent: std::ostream with QFile, std::string with QString; I don't know an equivalent of std::ldiv_t div = std::ldiv(10*a_elem + q*i, p); but looking at the timings I don't believe it would bring much benefit; out.txt is hardcoded):
    time ./ComputePiQt 20000
    real 0m26.015s
    user 0m25.948s
    sys 0m0.003s

    Just for reference, the Mono timing (mono --version: Mono JIT compiler version 2.8.2, tarball Wed Feb 23 09:31:21 UTC 2011):
    $ time mono --gc=sgen -O=all pi.exe 20000 out.txt
    real 0m20.166s
    user 0m20.078s
    sys 0m0.033s

    C timings:
    $ time ./pi-c 20000 out.txt

    real 0m12.670s
    user 0m12.637s
    sys 0m0.001s

    Java timings:
    $ java -version
    java version "1.6.0_20"
    OpenJDK Runtime Environment (IcedTea6 1.9.5) (suse-2.2-x86_64)
    OpenJDK 64-Bit Server VM (build 17.0-b16, mixed mode)
    $ time java Pi 20000 out.txt
    real 0m14.343s
    user 0m16.081s
    sys 0m8.509s
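    (For readers who have not seen this kind of substitution, a minimal sketch of what swapping the STL pieces for their Qt/Tulip equivalents might look like; the actual pi.cpp is not posted in this thread, so the function names and the digit argument below are illustrative only:)

    #include <QFile>
    #include <QString>
    #include <QTextStream>

    // Hypothetical output step of the pi benchmark, with the STL pieces swapped:
    //   std::ofstream -> QFile + QTextStream
    //   std::string   -> QString
    void writeDigits(const QString &digits)
    {
        QFile out("out.txt");                              // out.txt hardcoded, as in the post
        if (!out.open(QIODevice::WriteOnly | QIODevice::Text))
            return;
        QTextStream stream(&out);
        stream << digits << '\n';
    }

    // std::ldiv has no direct Tulip equivalent, but plain / and % give the same pair:
    void spigotStep(long a_elem, long q, long i, long p, long &quot, long &rem)
    {
        long x = 10 * a_elem + q * i;                      // the std::ldiv argument from the post
        quot = x / p;                                      // div.quot
        rem  = x % p;                                      // div.rem
    }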



  • ciplogic
    replied
    Originally posted by smitty3268 View Post
    Sure, but the DB itself isn't written in Java, just a little language parser/optimizer running on top of it.
    Yes, it is "just" the performance-critical part of a database, and it depends on how well HotSpot and the adaptive optimization of the Java JIT engine can shine. It makes no sense to rewrite the network stack in Java, as it is already well written by the OS vendor.

    Originally posted by smitty3268 View Post
    Well, with photo editing the codec isn't the limitation, it's the filter/effect that you're applying to all the pixels. Which is basically the same thing a video codec is doing. And that was my point - you don't want to do the codec in Java, it should have its hotspots done in assembly and probably the rest in an unmanaged language.
    We both agree here. I thought that a typical codec can be written in Java if you know that the optimizer will reach performance similar (in a tested environment) to what C++/asm gives. In the link someone from the F-Spot team posted, he wrote Mono.SIMD filters that run 6.5 times faster on average than the generic C-based Gtk codebase. Even if he were to write in C or assembler, he would do it just for 5% of the codebase, the "FilterImpl" code, not the full project, as that would make no sense.
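    (The Mono.SIMD filters from that F-Spot post are not reproduced in this thread; as a rough C++/SSE2 analogue of the same idea - vectorize only the small per-pixel hot loop and leave the rest of the program alone - a saturating brightness filter might look like this:)

    #include <emmintrin.h>   // SSE2 intrinsics
    #include <cstddef>

    // Adds 'delta' to every 8-bit pixel with saturation; 16 pixels per SSE2 step.
    void brighten(unsigned char *px, size_t n, unsigned char delta)
    {
        const __m128i d = _mm_set1_epi8((char)delta);
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m128i v = _mm_loadu_si128((const __m128i *)(px + i));
            v = _mm_adds_epu8(v, d);                       // saturating unsigned add
            _mm_storeu_si128((__m128i *)(px + i), v);
        }
        for (; i < n; ++i) {                               // scalar tail for the leftover pixels
            unsigned v = px[i] + delta;
            px[i] = (unsigned char)(v > 255 ? 255 : v);
        }
    }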

    Originally posted by smitty3268 View Post
    Qt has its own standard collection classes, iterators, etc., so if that was really what was slowing down the C++ app you tested, then maybe you shouldn't say the same is true of Qt without also testing it.
    Yes, and the overhead of creating iterators is mostly the same (or it should be). I will hopefully test this today (maybe tomorrow for you, as we live in different timezones), but I think (I predict) that it will be a similar overhead. I also understand that using iterators keeps you away from buffer overflows, which can outweigh the speed overhead in most applications.
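    (A minimal sketch of the kind of micro-test being predicted here, assuming Qt 4.7+ for QElapsedTimer; the container choice and element count are arbitrary, and the result is only meaningful if built with the same -O flags as the pi benchmark:)

    #include <QElapsedTimer>
    #include <QVector>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int N = 10 * 1000 * 1000;                    // arbitrary element count
        std::vector<long> sv(N, 1);
        QVector<long>     qv(N, 1);
        long sum = 0;                                      // keeps the loops observable

        QElapsedTimer timer;
        timer.start();
        for (std::vector<long>::const_iterator it = sv.begin(); it != sv.end(); ++it)
            sum += *it;
        qint64 stlMs = timer.elapsed();

        timer.restart();
        for (QVector<long>::const_iterator it = qv.constBegin(); it != qv.constEnd(); ++it)
            sum += *it;
        qint64 qtMs = timer.elapsed();

        std::printf("STL iteration: %lld ms, Qt iteration: %lld ms (sum %ld)\n",
                    (long long)stlMs, (long long)qtMs, sum);
        return 0;
    }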

    Originally posted by smitty3268 View Post
    I love .NET on Windows. I'm not sure I'm really sold on it on Linux, though. The language is great, but it doesn't have nearly as much support as other languages do. I can see how it's useful, but honestly if I were going to do lots of stuff in Mono I'd probably just switch to Windows.
    I don't think either that Mono is such a great framework on Linux, but it is a decent one. If I had to choose for a company that does development, I would pick it in almost all instances where Python will not fit for whatever reason. In runtime terms alone it is probably better than the FreePascal, Python or Ruby implementations, but worse than the GCC and Java ones (at least regarding the quality of the algorithms in their implementations). As of today, I think Mono is comparable (in base classes, though not in tooling) with .NET. I cannot think of many performance-wise or runtime-wise differences that .NET has and Mono doesn't. Extensible IDE with refactorings? Decent debugger? Visual editor integrated in an extensible IDE? Decent GC (with SGen)? Runtime profiles depending on your application (AOT, AOT_CACHE, default and LLVM)? Moonlight as a migration path (using Moonlight desktop) for migrating XAML, web capabilities (XSP and Apache's mod_mono), WinForms to some degree.



  • smitty3268
    replied
    Originally posted by ciplogic View Post
    A 15 ms delay is noticeable? Or did you mistype 150 ms? Your eye can notice a change at 40 ms (mostly at 24/25 FPS), but the brain registers it in about 20 ms. So you "skip a frame".
    OK, the traditional limit is 100 ms. So I exaggerated a bit, but not that much. My point stands that tiny changes in speed can make a big difference in certain situations.

    Server apps are rarely CPU-starved (unless there is a rendering farm involved), and are IO-bound (network/disk/database code).
    That has not always been my experience. Especially when it comes to peak times, I've found that the CPU can be an issue. However, my main point was more that server apps are expected to scale to more than a single user, which means that many can scale way up if they're a little more optimized. I agree that in many cases that doesn't matter, and you just throw more hardware at the problem, but sometimes it does.

    If you talk about databases, for example, Oracle's query optimizer is written in Java (and it has great throughput).
    Sure, but the DB itself isn't written in Java, just a little language parser/optimizer running on top of it.

    Whether they are slower or faster, in a lot of instances, if you can make your interface asynchronous, the feeling of being fast (think of the BeOS days) can be achieved on a lower-spec machine.
    That was kind of my point. I get the feeling you're trying to argue with me here, but I'm not sure why. I think we mostly agree.

    Another thing is that if a framework can distribute your algorithm across all cores more easily, it may solve your problem faster even if the runtime is supposedly 30% slower: a 2-core algorithm typically gives you a 60%-80% speedup (I am not talking about rendering cases, which are close to 100% speedup per core), and at 4 cores around 250%-300%.
    Again, the vast majority of applications are not limited by CPU usage.

    Your example with video/photo editing is also a really interesting case: if you write any of those applications, the main limitation is your codec (mostly written in assembly) and your framework (like DirectShow, GStreamer, QuickTime), and much less the part written in Mono (Diva), Java or Python (Pitivi).
    Well, with photo editing the codec isn't the limitation, it's the filter/effect that you're applying to all the pixels. Which is basically the same thing a video codec is doing. And that was my point - you don't want to do the codec in Java, it should have its hotspots done in assembly and probably the rest in an unmanaged language.

    If you noticed, that was my whole point: to measure, to leave most biases aside, and so on. I don't think that Qt is a bad technology.
    Qt has its own standard collection classes, iterators, etc., so if that was really what was slowing down the C++ app you tested, then maybe you shouldn't say the same is true of Qt without also testing it.

    I still think Mono is a key technology for some, at least for migrating applications and for companies that want to pick Linux as a future platform.
    I love .NET on Windows. I'm not sure I'm really sold on it on Linux, though. The language is great, but it doesn't have nearly as much support as other languages do. I can see how it's useful, but honestly if I were going to do lots of stuff in Mono I'd probably just switch to Windows.



  • ciplogic
    replied
    Originally posted by kraftman View Post
    It was WinForms-related afaik.
    All of these can be just pure FUD. I can say we can attack C# and other MS technologies as well. Like I mentioned before, it's mainly about supporting friends and not supporting competitors. For example, I will support ODF, but you (and the GNOME foundation...) will support MS OOXML instead.
    The talk isn't pointless, because Microsoft is a direct competitor to Linux, unlike some other companies. MS hasn't attacked Linux so far, so it looks like they're afraid of something or waiting for Mono to be more widely used.
    Did I ever say whether I support MS OOXML or not? Even if I did, it has no relation to Mono (I support JavaScript; does that mean I support Java or Mono along with it, since they are in some way related, machine-independent, and use a JIT and a GC!?).
    I said, as you probably understand, that the issue is about patents and implementation details, and much less about copyright or naming (Mono does not use the ".Net" name anywhere as far as I know, maybe C# as a name, but if that is the issue it is not hard to rename things).
    WinForms is problematic, but 95% of the code of all Mono applications does not link to it (or even if it does, that code could easily be rewritten in case of a patent threat). They more likely link to Gtk+, GIO, GStreamer and so on.
    C# is a nice language; I recommend you see why by using it. JavaScript is another nice one, or Python, or QML (which is derived from JavaScript). I do think that Qt/C++ is not a golden hammer, even for just desktop app development, but it is certainly a good starting point.
    If you want to attack Microsoft, go for it; I am not going to stop you. Please stop using Google too, as they almost monopolize the search business and their Android platform is Java-like (so it is even uglier than Mono, and Java is today owned by two ex-evil corporations: Oracle and IBM), and Apple, as they monopolize the mp3 player and phone industry (I think you get the joke).
    Originally posted by kraftman View Post
    I don't support the competition. Btw, I wanted to run some tests with Banshee and a big collection, but I don't like to mess up my system with GNOME-related libraries, and you're saying Banshee handles 2K songs well, so it's OK. My main concerns related to Mono were/are the long startup time (JIT-related, like you described) and big memory usage. Looking at some numbers (the pi benchmark etc.) it looks like Mono is slow.
    I am not asking you to take my word for it; I do encourage you to test and see for yourself. You were the one who said that Amarok and Qt Creator start faster than MonoDevelop and Banshee. Proper numbers give proper information to talk about.
    Do you understand how a register allocator using graph coloring works or not? Did you see the benchmark made by oleid? He noticed, for example, that for him Mono showed no performance improvement with LLVM. I can think of technical reasons why that happened in some cases: his benchmark ran on a machine (AMD64) with more free CPU registers than the one I benchmarked on (i386). Also, Mono does not do loop unrolling or autovectorization (which he specifically tried to enable with SSE3 optimizations). Those optimizations are powerful in complex rendering and similar code, but are useless for the desktop. This code is also deceptively useless in practice... do you compute pi to 100, 10,000 or 20,000 digits of precision often enough that the 1-3 seconds out of 10 you might gain at 10,000 digits matter? I hope you have written Qt code and know what Tulip is. Look in the Qt codebase and check whether all the code uses just C-like constructs or whether it uses Tulip. Look at the KDE codebases for the same constructs. Those constructs will most likely use Tulip (= collections like QList, QMap, and so on), and they have an overhead similar to the slowest C++ timing in the pi benchmark. Will you notice it? Unlikely! It will be as slow as Mono in typical usage (sometimes even slower, but that is beside the point). Also, Mono is not used to its full potential, as most people don't write code using Mono.SIMD, even though some do.
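    (As an aside, for readers unfamiliar with the term: a "register allocator using graph coloring" assigns physical registers to virtual registers so that no two simultaneously live values share one. A deliberately tiny, made-up illustration of just the coloring step - real allocators also coalesce, split live ranges and choose what to spill:)

    #include <cstdio>
    #include <vector>

    int main()
    {
        const int K = 3;                       // pretend the CPU has 3 registers
        const int n = 5;                       // virtual registers v0..v4
        const bool interferes[n][n] = {        // hand-made interference graph
            {0, 1, 1, 0, 0},
            {1, 0, 1, 1, 0},
            {1, 1, 0, 0, 1},
            {0, 1, 0, 0, 1},
            {0, 0, 1, 1, 0},
        };

        std::vector<int> assigned(n, -1);      // physical register per virtual one
        for (int v = 0; v < n; ++v) {
            bool used[K] = {};                 // registers already taken by neighbours
            for (int u = 0; u < n; ++u)
                if (interferes[v][u] && assigned[u] >= 0)
                    used[assigned[u]] = true;
            for (int r = 0; r < K; ++r)
                if (!used[r]) { assigned[v] = r; break; }
            if (assigned[v] < 0)
                std::printf("v%d: no register left, spill to memory\n", v);
            else
                std::printf("v%d -> r%d\n", v, assigned[v]);
        }
        return 0;
    }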
    I just said that you are moving the target. Performance-wise (http://en.wikipedia.org/wiki/Computer_performance) it is a tricky business.
    55 vs 43 MB for Tomboy vs Gnote... nice. And how much system memory do you have? I work daily on a machine with more than 4 GB (a laptop), but the netbook I bought just 2 years ago came with 1 GB of RAM by default (upgraded to 2 GB), so you are arguing that 1% of that machine's RAM, going from 4% to 5%, may make a difference. That's fine; use Gnote if that percentage bothers you that much. And if you need to edit your "start" menu (from GNOME, as I don't know all the KDE equivalents), don't use Alacarte, as it is written in Python, and don't use the clock panel, because it sometimes leaks (it happened to me on RHEL 5.1 at work, where it got to about 300 MB before it crashed); just use your wristwatch. You save memory and your system is snappier.



  • kraftman
    replied
    Originally posted by ciplogic View Post
    If you look at the Mono technology post, you will see Moonlight dissected, and the only Mono-"specific" parts that exist solely for Mono may be the bytecode (MSIL, ECMA-335) and the C# language specification (ECMA-334). The rest are, in one way or another, present in a lot of other open-source projects.
    It was WinForms-related afaik.

    So consider that Microsoft (or whatever entity would attack a Mono-specific technology existing on the Linux desktop) can attack the things that may be a threat. Vala is C#-inspired, and a patent can be asserted at the holder's discretion (as Apple attacked Nokia and HTC, but not Google), so theoretically MS could attack it just the same. Do you want a specific patent? It is the same one most of the anti-Mono crowd cite (none, if you get my point, but the talk is just in case).
    Another thing MS could attack is the bytecode, except that MS itself made changes in .NET 4 to support multiple platforms in the same way Mono did from the 1.0 release, so it could be a two-way attack. Even if it were certain that MS would win, a Java-to-Dalvik-like bytecode converter could be made to run on a modified frontend of the Mono Mini VM (this is the core VM).
    All of these can be just pure FUD. I can say we can attack C# and other MS technologies as well. Like I mentioned before, it's mainly about supporting friends and not supporting competitors. For example, I will support ODF, but you (and the GNOME foundation...) will support MS OOXML instead.

    For the rest, all the technologies are under threat: GNOME, KDE, Nokia, Cairo antialiasing, etc. There may be a patent anywhere that may one day be asserted, but those are not specific to Mono, so the talk is as pointless as saying Netscape/AOL will attack Nokia's QML because it resembles JavaScript too much, and recommending on that basis that nobody use QML.
    The talk isn't pointless, because Microsoft is a direct competitor to Linux, unlike some other companies. MS hasn't attacked Linux so far, so it looks like they're afraid of something or waiting for Mono to be more widely used.

    Do some profiling or see which queries it makes. And file a bug report (if this is the concern). My library (of 2K songs) works smoothly (on a 2.4 GHz i5 CPU, which may be fast enough for this medium-sized collection).
    I don't support the competition. Btw, I wanted to run some tests with Banshee and a big collection, but I don't like to mess up my system with GNOME-related libraries, and you're saying Banshee handles 2K songs well, so it's OK. My main concerns related to Mono were/are the long startup time (JIT-related, like you described) and big memory usage. Looking at some numbers (the pi benchmark etc.) it looks like Mono is slow.

    I previously said that I like Qt. I was a Qt programmer around 4 years ago, and a GTK (gtkmm) one around 3 years ago. Gtk applications also look ugly in KDE, and if you dislike them for this reason I fully understand you.
    Exactly. Gtk looks like crap in KDE and I try not to use it at all.

    A simple test made in a VM, which is another reason why I don't like Mono:

    *from system monitor*

    The process tomboy (with pid 5486) is using approximately 54.9 MB of memory.
    It is using 51.4 MB privately, and a further 12.2 MB that is, or could be, shared with other programs.
    Dividing up the shared memory between all the processes sharing that memory we get a reduced shared memory usage of 3.5 MB. Adding that to the private usage, we get the above mentioned total memory footprint of 54.9 MB.

    The process gnote (with pid 5431) is using approximately 44.3 MB of memory.
    It is using 41.4 MB privately, and a further 12.8 MB that is, or could be, shared with other programs.
    Dividing up the shared memory between all the processes sharing that memory we get a reduced shared memory usage of 2.9 MB. Adding that to the private usage, we get the above mentioned total memory footprint of 44.3 MB.

    About productivity, I know Gnote wasn't written from scratch, but afaik it was rewritten in a very short time, so it probably wouldn't take too long to write it from scratch (and make it use even less memory); the same goes for Shotwell.



  • ciplogic
    replied
    Originally posted by smitty3268 View Post
    Most applications are not limited by CPU throughput these days - there are some obvious exceptions, like photo/video editing, server apps, web browsers, etc., but for the most part it's true.

    What people notice is if the UI is unresponsive. A 15ms delay is quite noticeable if it happens right when you are trying to open a menu. 100 times that is completely hidden from the user if it doesn't block the interface. Traditionally, garbage collection could cause some problematic delays in higher level languages, but that has been a significant focus of research and seems to be mostly corrected now. I'd really be much more worried about the memory use than CPU, except for a few particular applications.
    A 15 ms delay is noticeable? Or did you mistype 150 ms? Your eye can notice a change at 40 ms (mostly at 24/25 FPS), but the brain registers it in about 20 ms. So you "skip a frame".
    If you talk about games, I do believe that writing all the logic in a VM may be a bad idea today (unless you do no allocations in the game loop and you compiled ahead of time beforehand). As for your menu case, most toolkits already preprocess the interface (so layouting may happen at startup time rather than at runtime, for example). Try Pinta and Krita and see which is more responsive by your own standard.
    Server apps are rarely CPU-starved (unless there is a rendering farm involved), and are IO-bound (network/disk/database code). If you talk about databases, for example, Oracle's query optimizer is written in Java (and it has great throughput).
    Whether they are slower or faster, in a lot of instances, if you can make your interface asynchronous, the feeling of being fast (think of the BeOS days) can be achieved on a lower-spec machine. Another thing is that if a framework can distribute your algorithm across all cores more easily, it may solve your problem faster even if the runtime is supposedly 30% slower: a 2-core algorithm typically gives you a 60%-80% speedup (I am not talking about rendering cases, which are close to 100% speedup per core), and at 4 cores around 250%-300%.
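    (The speedup figures above are consistent with ordinary Amdahl's-law arithmetic; a quick sketch, where the parallel fraction p = 0.85 is an assumed value for illustration, not a measurement from this thread:)

    #include <cstdio>

    // Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), p = parallel fraction.
    int main()
    {
        const double p = 0.85;                             // assumed parallel fraction
        const int cores[] = {1, 2, 4};
        for (unsigned i = 0; i < sizeof(cores) / sizeof(cores[0]); ++i) {
            double speedup = 1.0 / ((1.0 - p) + p / cores[i]);
            std::printf("%d core(s): %.2fx\n", cores[i], speedup);
        }
        return 0;
    }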
    Your example with video/photo editing is also a really interesting case: if you write any of those applications, the main limitation is your codec (mostly written in assembly) and your framework (like DirectShow, GStreamer, QuickTime), and much less the part written in Mono (Diva), Java or Python (Pitivi).
    If you noticed, that was my whole point: to measure, to leave most biases aside, and so on. I don't think that Qt is a bad technology. Neither is Gtk/Vala (I love the GObjectIntrospection idea initiated by Vala's creator), nor Etoile or XFCE. I still think Mono is a key technology for some, at least for migrating applications and for companies that want to pick Linux as a future platform.
    I know of badly written technologies on Linux, like VCL (the toolkit of Libre/OpenOffice), the confusing TreeView (MVC) implementation in Gtk, the buggy redraw APIs of Qt combined with Metacity hacks to make them work right, GCC's poor autovectorizer before 4.4, and the almost complete lack of the useful "whole program optimization" that has existed for at least 3 years in Visual Studio and at least 5 years in the Intel compiler. Even Mono, from a technological point of view, was not that "amazing" 4-5 years ago: a fairly bad JIT (rewritten at least twice; the last iteration, named LinearIL, generates decent code) and an awful garbage collector. I would love to see a "control panel" written in Moonlight using Moonlight desktop, with animations and so on, as Gtk may look a bit outdated (I know that Clutter may give some hope), or a better file manager. For certain, as I see some people whining, it will not happen fast (also Red Hat/Fedora are typically against Mono, so no hope for the next one or two years). People may disagree with me and that is fine; at one level Moonlight is risky, but not more so than the OpenOffice that exists in every Ubuntu installation.
    I don't mind using FF4 to browse Phoronix even though I know the JS code will run slower (than C++, whatever), or that there may sometimes be a 40 ms GC pause, because I do not notice its impact. Anyway, I get half-outraged when people are out of context, out of date, too biased and simply hateful toward some who have worked a long time to offer others a good technology to start with on Linux.

