Open-Source .NET On Linux Continues Maturing

  • #31
    Originally posted by gnufreex View Post
    Nicest is Ceylon. C# is crippled Java without portability, stability, speed and compatibility. Kind of an anti-Java.
    My initial impression was that D supports many of the same features as Ceylon, but after reading through the introduction, Ceylon really does have some very nice syntax, particularly around its type system. D can achieve union types using a Haskell-style Either, but that's definitely not as clean as having a dedicated operator for it.

    That said, I do have some doubts about it being JVM-based. If you're going to use a compiled language, you might as well compile it to machine code and take advantage of that opportunity to optimize it, rather than attempting to optimize it at runtime when you have significantly fewer resources and less time. Of course, the same is true of C#, but as I understand it, Java also has a lot of undefined behaviour (similar to C), and it's not clear if all JVM-based languages are subject to that.



    • #32
      Originally posted by bpetty View Post
      There is no reason to use C. It gives you nothing, no matter how low level you want to get.
      I disagree, and I say that as someone who is familiar with (Linux) kernel programming. The main thing you get from C is that you can easily see the assembly the code maps to just by reading it, without needing to look at the definitions of the types and functions involved (except where macros are used). In C++, operator overloading means you can't rely on that, and you also need to worry about method overriding, as well as the cost of vtable lookups.
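      To illustrate what I mean (a toy example of my own, not real kernel code): in C++ an innocent-looking a + b can hide an allocation and a loop, which is exactly what you don't want happening behind your back in kernel code.

      ```cpp
      #include <cassert>
      #include <vector>

      // A toy big-number-ish type: at the call site, operator+ looks like a
      // single instruction, but it actually heap-allocates and loops over
      // every element. In C, a '+' can only ever be a '+'.
      struct BigVec {
          std::vector<int> data;
          BigVec operator+(const BigVec& other) const {
              BigVec result;
              result.data.reserve(data.size());          // hidden heap allocation
              for (size_t i = 0; i < data.size(); ++i)   // hidden O(n) loop
                  result.data.push_back(data[i] + other.data[i]);
              return result;
          }
      };

      int main() {
          BigVec a{{1, 2, 3}};
          BigVec b{{10, 20, 30}};
          BigVec c = a + b;  // reads like cheap arithmetic; isn't
          assert(c.data[0] == 11 && c.data[2] == 33);
          return 0;
      }
      ```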



      • #33
        Originally posted by rdnetto View Post
        I disagree, and I say that as someone who is familiar with (Linux) kernel programming. The main thing you get from C is that you can easily see the assembly the code maps to just by reading it, without needing to look at the definitions of the types and functions involved (except where macros are used). In C++, operator overloading means you can't rely on that, and you also need to worry about method overriding, as well as the cost of vtable lookups.
        Other than for deep hand-optimization and picking out compiler bugs, what's the benefit of being able to map code to assembly?



        • #34
          Originally posted by Luke_Wolf View Post
          Other than for deep hand-optimization and picking out compiler bugs, what's the benefit of being able to map code to assembly?
          Serious question by the way, I'd like to know.



          • #35
            Originally posted by rdnetto View Post
            C# is indeed a very nice language, but D blows its pants off. Its syntax and features are basically a superset of C#'s, adding things like compile-time code generation and reflection (run-time reflection in C# can be expensive)
            That's possible in C#. In fact, I've done exactly that in a project, because, as you say, run-time reflection isn't fast enough.

            , as well as generally better performance from being natively compiled.
            You can also natively compile C# code, with no problems.



            • #36
              Originally posted by Luke_Wolf View Post
              Other than for deep hand-optimization and picking out compiler bugs, what's the benefit of being able to map code to assembly?
              Mostly it comes down to being aware of what's going on behind the scenes, and the performance implications of it. For example, the kernel's circular buffer implementation supports lockless access in single-producer/single-consumer scenarios, but you need to use memory barriers to do so, because the ordering of memory operations between processors isn't guaranteed without explicit annotation. When you're messing around with memory barriers[1] or interrupts, you want to minimize the amount of abstraction between the programmer and the CPU, because that's just additional complexity that's going to burn you sooner or later. (Most of the abstraction that is present is there to provide a platform-agnostic way of doing architecture-specific things like toggling interrupts.) Despite the size of the kernel, it's actually pretty straightforward to understand the interaction between the various subsystems, or at least it is in the parts I've seen.
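              Here's a rough userspace sketch of the single-producer/single-consumer idea, using C++ std::atomic acquire/release in place of the kernel's own barrier macros (names and details are mine, not the kernel's): without the release/acquire pairing, the compiler and CPU would be free to reorder the data write past the index update, and the consumer could see a stale slot.

              ```cpp
              #include <atomic>
              #include <cassert>
              #include <cstddef>

              // Minimal SPSC ring buffer sketch. The acquire/release pairs are
              // the "memory barriers": tail_.store(release) publishes the data
              // write, tail_.load(acquire) on the consumer side observes it.
              template <typename T, size_t N>
              class SpscQueue {
                  T buf_[N];
                  std::atomic<size_t> head_{0};  // written only by consumer
                  std::atomic<size_t> tail_{0};  // written only by producer
              public:
                  bool push(const T& v) {
                      size_t t = tail_.load(std::memory_order_relaxed);
                      size_t next = (t + 1) % N;
                      if (next == head_.load(std::memory_order_acquire))
                          return false;  // full
                      buf_[t] = v;       // must not be reordered past the store below
                      tail_.store(next, std::memory_order_release);
                      return true;
                  }
                  bool pop(T& out) {
                      size_t h = head_.load(std::memory_order_relaxed);
                      if (h == tail_.load(std::memory_order_acquire))
                          return false;  // empty
                      out = buf_[h];
                      head_.store((h + 1) % N, std::memory_order_release);
                      return true;
                  }
              };

              int main() {
                  SpscQueue<int, 4> q;
                  assert(q.push(1) && q.push(2));
                  int x = 0;
                  assert(q.pop(x) && x == 1);
                  assert(q.pop(x) && x == 2);
                  assert(!q.pop(x));  // empty again
                  return 0;
              }
              ```

              No lock anywhere, which is the whole point: one producer thread and one consumer thread can run concurrently against this.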

              Another interesting example is the new keyword - it can't be defined in a meaningful sense in kernel-space, because we use kmalloc() instead of malloc(), and it takes an argument specifying what kind of memory needs to be allocated (only specific regions can be used for DMA, etc.). You could work around this by defining some kind of template function to use instead, but that's ugly, and more importantly, the fact that fairly fundamental syntax is unusable should make you stop and think about it.
              It's also worth noting that the standard says new should throw an exception on failure, but exceptions are too expensive to be used in kernel code, and you really want the failure to be handled at the call-site (e.g. if there wasn't enough memory to initialize all 4 drives, the best outcome is to initialize the first 3 and print a warning message to the kernel log.) The idea that an exception should cascade to the top of the call stack is anathema to the idea of a robust kernel. (The kernel can crash in certain situations, but they're pretty well defined. e.g. null pointer dereferences and BUG_ON).
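              To sketch what call-site failure handling looks like without exceptions (a toy userspace example of my own, with new(std::nothrow) standing in for kmalloc() returning NULL, and a made-up init_drives() helper):

              ```cpp
              #include <cassert>
              #include <new>

              struct Drive { int id; };

              // Hypothetical helper: try to bring up `want` drives, return how
              // many actually initialized. On allocation failure we stop and
              // carry on with what we have - nothing unwinds the stack.
              int init_drives(Drive* drives[], int want) {
                  int ok = 0;
                  for (int i = 0; i < want; ++i) {
                      drives[i] = new (std::nothrow) Drive{i};  // nullptr on failure,
                      if (!drives[i])                           // no std::bad_alloc
                          break;  // real code would log a warning here
                      ++ok;
                  }
                  return ok;
              }

              int main() {
                  Drive* drives[4] = {};
                  int ok = init_drives(drives, 4);
                  assert(ok == 4);  // allocation won't realistically fail in this toy run
                  for (int i = 0; i < ok; ++i) delete drives[i];
                  return 0;
              }
              ```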

              Then there's the overhead of virtual functions - an extra memory access might not seem like much, but memory access is slow compared to registers and cache. That's especially significant when applied to every function call - lots of kernel functions are declared inline because even the overhead of setting up another stack frame matters. And while virtual functions are optional, without them inheritance isn't that useful, and then you might as well be using C.
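              Spelled out as a toy example (my own, simplified): a virtual call is essentially the same machinery as the function-pointer ops structs the kernel already uses in C (think struct file_operations) - an extra load to find the target before calling it, which also generally blocks inlining.

              ```cpp
              #include <cassert>

              // C style: an explicit "vtable" - a struct of function pointers.
              // The call site loads the pointer, then calls through it.
              struct FileOps {
                  int (*read)(int n);
              };

              static int fast_read(int n) { return n * 2; }

              // C++ style: the compiler builds the table (the vtable) for you,
              // and every object carries a hidden pointer to it.
              struct Base {
                  virtual int read(int n) { return n; }
                  virtual ~Base() = default;
              };
              struct Derived : Base {
                  int read(int n) override { return n * 2; }
              };

              int main() {
                  FileOps ops = { fast_read };
                  assert(ops.read(21) == 42);  // one pointer load, then call

                  Derived d;
                  Base* b = &d;
                  assert(b->read(21) == 42);   // vptr load, vtable slot load, then call
                  return 0;
              }
              ```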

              Realistically, you absolutely could implement a kernel in C++, given that C++ is (mostly) a superset of C, but I think you'd need to ignore most C++ functionality to do it well. The only feature that I think would be of real benefit is templates, since the kernel basically replicates them with macros.

              I think there are definitely better languages than C that could be used - Rust is pretty nice if you can get used to pointer lifetimes (in fact someone's already written a kernel in it), and even just moving the kernel to C99 from C89 would be a pretty decent improvement IMO (not being able to write for(int i = 0; ...; i++) gets old real quick), but as it stands the kernel is one of the cleanest and most well designed pieces of C code I've seen, and I don't believe C++ would be able to preserve that.


              [1] Memory barriers cause all sorts of fun, because they force you to realize that the compiler is free to re-arrange everything you wrote and change the order of various assignments. Add in a multi-core system, and you need to consider cache invalidation if you want to avoid using a spinlock. (Locks just don't scale, and kernel code needs to perform well on systems with hundreds of cores.)



              • #37
                Originally posted by rdnetto View Post
                *snip*
                Thanks for the detailed answer.



                • #39
                  killitwithfire



                  • #40
                    If you care about the FOSS GNU/Linux world, then don't use C#.
