John Carmack's Comments On C/C++


  • #11
    Originally posted by GreatEmerald View Post
    Hmm, I never liked C++ due to its weird style. I find C to be rather primitive, but at the very least it's consistent. I always felt that C++ was some sort of a hack glued onto C, as its syntax just doesn't fit with the rest... It has some really good features, but I can't stand this inconsistency, unfortunately. Hence why I prefer D, which is both consistent and provides all the powerful options of C++ and more. And yes, I have quite a bit of appreciation for immutable variables
    Can you clarify what you mean by "syntax just doesn't fit with the rest"? I'm trying to figure out what inconsistencies there might be, but I can't think of anything inconsistent in C++. In my view C++ is very (more like absolutely) consistent, but that's just my view.

    On the other hand, C lacks a lot of functionality (starting with namespaces...) while being no more efficient than C++ in terms of performance. (Well, if you use polymorphism you have to consider vtables etc., but if you want to implement something like that in C you still have to write code by hand that does the same thing, if it's even feasible.)
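    A minimal sketch of what that hand-rolled dispatch could look like (my own example, not from the thread; 'Shape' and the function names are hypothetical), kept to the plain-C subset:

    Code:
    #include <stdio.h>

    typedef struct Shape Shape;

    /* one function-pointer slot per "virtual" function */
    typedef struct {
        double (*area)(const Shape *self);
    } ShapeVTable;

    struct Shape {
        const ShapeVTable *vtable; /* every object carries a vtable pointer */
        double w, h;
    };

    static double rect_area(const Shape *self) { return self->w * self->h; }
    static const ShapeVTable rect_vtable = { rect_area };

    int main(void) {
        Shape r = { &rect_vtable, 3.0, 4.0 };
        printf("%f\n", r.vtable->area(&r)); /* the "virtual" call */
        return 0;
    }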



    • #12
      Originally posted by mark45 View Post
      Such a language is impossible. You can't get people to agree on basic things, like a typed language:

      1) One says it must be strongly typed, but others don't wanna deal with types, they'd rather sacrifice some speed and let some errors trickle past the compiler.
      2) The second one says it must be weakly typed, the first group wouldn't agree.
      3) The third one says it should have both. The first two groups say it would make the language too sophisticated and bloated. Restart the circular logic from point one.
      Not really. The language could be fully strongly typed but also support typeless parameters. The compiler would analyse the function, determine the restrictions each typeless parameter requires, and then give compile errors if code tries to pass a variable which doesn't meet those restrictions. These functions would effectively be template functions, with a new version compiled out for each unique set of parameter types used (therefore, special restrictions would be required for key shared-object functions). E.g.:

      Code:
      func foo(x, y:int) # 'x' is typeless, 'y' must be an int
      {
          return x * y
      }
      
      func main
      {
          var s = "text" # 's' is text
          var i = 0      # 'i' is an int
          var f = 0.0    # 'f' is a float
      
          var r = foo(s, i) # error: can't pass text as the first parameter
          var r = foo(f, i) # works: the compiler can multiply a 'float' by an 'int'
          var r = foo(i, i) # works: the compiler can multiply an 'int' by an 'int'
      }
      In the code above, two versions of 'foo' would be compiled out... one taking in a (float, int), and one taking in an (int, int).
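      For comparison, a rough C++ analogue of the same idea (my own sketch, not part of the proposal): a plain function template behaves like the "typeless parameter" above, with one instantiation compiled per distinct argument type, and a compile error when the body's operations don't apply.

      Code:
      #include <string>

      // 'x' is "typeless" (deduced), 'y' must be an int
      template <typename T>
      auto foo(T x, int y) {
          return x * y; // compiles only if T can be multiplied by an int
      }

      int main() {
          std::string s = "text";
          int i = 0;
          double f = 0.0;

          auto r1 = foo(f, i);    // instantiates foo<double>
          auto r2 = foo(i, i);    // instantiates foo<int>
          // auto r3 = foo(s, i); // error: no operator* for std::string and int
          (void)s; (void)r1; (void)r2;
      }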

      As far as memory safety without garbage collection goes, the language could define a distinction between a 'var' (a memory "owner", which can't be null) and a 'ref' (a memory reference, which can't "own" memory). Vars would always be deleted at the end of their scope (unless returned), or when removed from arrays, etc., whereas references to that data would simply be set to null. Example:

      Code:
      func main
      {
          ref x : int
          ref y : num
          
          scope
          {
              var z = 0
              var w = 0.0
              
              x => z
              x = 1 # sets 'z' to 1
              
              y => w
              y = 1.0 # sets 'w' to 1.0
              
              # auto cleanup code injected here
              # In this case, the code would look like:
              # 
              #    x => null
              #    Memory.delete(z)
              #    Memory.delete(w)
              #
              # Optimization: We don't need to set 'y' to null
              # because it's rebound (y => n) before it's read after this scope.
              # 
              # PS. Technically, we also don't need to delete 'z' & 'w'
              # since they would be created on the stack, but I put it
              # in to illustrate what normally happens with heap vars.
          }
          
          x = 2 # error: 'x' is null
          
          var n = 2.0
          
          y => n
          y = 3.0 # sets 'n' to 3.0
          
          # auto cleanup code:
          # 
          #    Memory.delete(n)
      }
      There's a lot more to it than that, but I think something along those lines could achieve memory safety without a GC or ref-counting. You'd also need a low-level 'ptr' type with no restrictions, requiring manual memory management, for advanced situations, etc.
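      As a very rough C++ analogue of the var/ref split (my own sketch; the proposal avoids ref-counting, so this only mimics the observable behaviour): std::shared_ptr plays the scoped "owner" and std::weak_ptr the non-owning "ref" that reads as null once the owner's scope ends.

      Code:
      #include <iostream>
      #include <memory>

      int main() {
          std::weak_ptr<int> x;                  // 'ref x : int'

          {
              auto z = std::make_shared<int>(0); // 'var z = 0'
              x = z;                             // 'x => z'
              *x.lock() = 1;                     // 'x = 1' sets 'z' to 1
          }                                      // owner destroyed at end of scope

          if (auto p = x.lock())
              *p = 2;
          else
              std::cout << "x is null\n";        // reached: the owner is gone
      }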



      • #13
        Originally posted by Grawp View Post
        Can you clarify what you mean by "syntax just doesn't fit with the rest"? I'm trying to figure out what inconsistencies there might be, but I can't think of anything inconsistent in C++. In my view C++ is very (more like absolutely) consistent, but that's just my view.
        Well, take even the very basic act of printing to the console. C is consistent in that it uses a function for that, printf(). C++ uses the "cout" notation, which looks way out of place.



        • #14
          Originally posted by GreatEmerald View Post
          Well, take even the very basic act of printing to the console. C is consistent in that it uses a function for that, printf(). C++ uses the "cout" notation, which looks way out of place.
          It's not notation, it's a bunch of function calls with your function being of the form:
          Code:
          std::ostream & operator<<(std::ostream &outputStream, const Type &value) {
             // ...
             return outputStream;
          }
          No trickery here.
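          For example (my own sketch; 'Point' is hypothetical), the same pattern filled in for a user-defined type, showing that chained '<<' is just nested calls to that function:

          Code:
          #include <iostream>

          struct Point { int x, y; };

          std::ostream & operator<<(std::ostream &outputStream, const Point &value) {
              outputStream << '(' << value.x << ", " << value.y << ')';
              return outputStream;
          }

          int main() {
              Point p{3, 4};
              std::cout << p << '\n';           // prints (3, 4)
              // the same call, spelled out explicitly:
              operator<<(std::cout, p) << '\n';
          }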



          • #15
            Yeap, that's what I mean. I don't see how using operators there is helping, all it does is make the code look inconsistent. Having a function that does all the stream piping for you simply makes more sense. Sure, you can use the operators for doing other things as well, but most of the time you don't need that. And the fact that it's an operator and not a function is also something I don't particularly like, especially given that it clashes with bitwise shifting operators.



            • #16
              I have quite a bit of appreciation for immutable variables



              • #17
                the gc from d is not suitable for AAA games.



                • #18
                  Originally posted by disgrace View Post
                  the gc from d is not suitable for AAA games.

                  http://3d.benjamin-thaut.de/?p=20
                  That's why D has an option to turn it off.



                  • #19
                    From what I've heard, actual AAA games (which a 3-month senior project is certainly not) are so far beyond the "GC vs. manual" debate that they're practically on another planet.



                    • #20
                       Overall, I believe that the discussion about which programming language is better, and in particular spending time discussing to what extent constants and the like should be used, is pointless. Most programming languages are the same, and the differences are mostly, in my opinion, a question of personal preference and a lot of ego.

                       What I think is one of the overlooked aspects of many languages is intuitiveness. If you're already designing a high-level language which is meant to be used by humans, why not do it right? Many languages are just horribly unintuitive in really bad and broken ways, and that makes them bad languages. Of course, what is intuitive to one person may be completely arbitrary to someone else, and that may be the reason why some languages have made such bad decisions, but take Java for example:

                       Why should, considering
                       String one, two;
                       // User inputs two strings, both are "number"

                       this evaluate to false?
                       one == two;

                       In which cases does it make sense to check whether two String objects are the same object? In which of the countless cases where a comparison of two strings of characters is involved would you not be interested in testing for character equality? Why should I be forced to use something as ugly as one.equals(two)? That makes absolutely no sense. Of course, if you treat Strings as non-scalars, and if you define the equality operator to work the same on all objects, you could claim you're only being consistent. But is that intuitive? Would someone who learned that 5 == 5 is true ever think that the above example with the strings should evaluate to false? What is the point of hiding away the array-of-chars / char-pointer nature of a C "string" by introducing a proper String object in Java, but then keeping the old garbage by forcing you to use an object method for comparison? How dumb is that?
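                       For contrast, a small C++ aside of my own (not from the original post): std::string overloads == to compare contents, which is exactly the reading argued for here, while comparing addresses is what Java's == on String amounts to.

                       Code:
                       #include <iostream>
                       #include <string>

                       int main() {
                           std::string one = "number";
                           std::string two = "number";           // two distinct objects, equal contents

                           std::cout << (one == two) << '\n';    // 1: value comparison (the intuitive reading)
                           std::cout << (&one == &two) << '\n';  // 0: identity comparison (what Java's == checks)
                       }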

                       There are similar examples in PHP of why intuitiveness is so important, and why the lack of it makes a language so bad.
