Google Wants To Make C++ More Fun

  • #16
    Originally posted by bioinfornatics View Post
    Just to name a few:
    - A language with good productivity
    - Performance as good as C++, or higher
    - Easier to use
    - All code in one file (no .h)
    - The standard D library (Phobos) is much better than the STL
    - Many features: metaprogramming, CTFE, ...
    - Easy to write parallel programs
    - No virtual machine, unlike C# or Java
    - Designed for modern programming

    And much more ...

    The name of this language is D, try it.

    Fedora provides the LDC compiler, which uses LLVM.
    The only real compiled languages right now for open source projects are C and C++. They're easy to build for people who download stuff. If you start using Pascal, D or whatever else, many people won't bother. They expect "./configure && make install" or "cmake && make install".

    Let's face it: C and C++ *are* the standard on open Unix systems, and it doesn't look like this is about to change. IMO other languages are more important for OSes where source code is not the primary software-distribution method.



    • #17
      D programming language - It fixes the outstanding issues of C++, adds safety and many innovative features (contracts, slices, scopes, templates, mixins, CTFE, UFCS, etc.), is easy to use (GC *optional*) with a familiar and clean syntax, and without sacrificing native power or efficiency.

      Nimrod programming language - also a very innovative and unique language with native power, great meta programming, and an awesome GC. It's as expressive as Lisp, as efficient as C, and the syntax is something similar to Python.



      • #18
        I think C# is more productive and fun than C++.

        Also, Microsoft Visual Studio together with the Resharper extension is awesome.

        I hope Linux gets something like this too...



        • #19
          Originally posted by Lattyware View Post
          Agreed. D is a great language that hasn't got the following it deserves. I'm a Python man at heart, but if I have to go low-level, D is where I want to go.
          Forced garbage collection, unstable (D2 came out very fast and breaks compatibility with D1). I'd say D is more like a modern Algol in trying to do too much. I'm not very interested, and C++11 just about removes most of the motivation for using D.

          I'd rather build on something more innovative like clay, although I have yet to fully test drive it.
          Last edited by bnolsen; 06-16-2012, 07:17 PM.



          • #20
            Originally posted by mirv View Post
            If there's a whole heap of different semantics when using libraries...that's not the fault of C++. That's the fault of the people making them, or the people trying to mesh them together. C++ won't hold your hand and won't try to force "proper programming principles" onto you - and this is on purpose. I recommend reading some of Stroustrup's FAQ for the reasons on why C++ is the way it is.
            The fact that every single library redefines fixed-width integer types indicates a fundamental issue in the design of C++. Spinning this as a "feature" flies in the face of qint32, int32_t, boost::int32, GLint, DWORD and the dozens of other custom types that are written to work around this issue.

            The "performance" argument is laughable at best. Using a native int type will not magically make your code faster when you recompile on a 16-bit architecture - it will merely break your code, because a++ will suddenly overflow at 2^16 instead of 2^32 (and if you check for overflow, your code will be slower than if you had used the correct fixed-width type from the beginning).

            In short, bollocks. Every well-written portable library (re)defines the same basic fixed-width integer types and uses them exclusively instead of "int", "short" and the like. C99 realized this problem and introduced <stdint.h>. It's high time C++ did the same.

            (As an interesting aside, the CLR handles fixed vs native integers in an even better way: integers are all fixed-width by default (int8, int16, int32, int64) and there is a special "native int" type that conforms to the bitness of the underlying platform (and has properties like Size that you can query). Best of both worlds, since you get correct code by default and you can explicitly drop down to native int when necessary (and it almost never is).)
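            (Editor's note: the wrap-at-a-fixed-width behaviour described above can be demonstrated with the unsigned types from &lt;cstdint&gt;; a minimal sketch, using unsigned types so the overflow is well-defined - the helper names are illustrative only:)

```cpp
#include <cstdint>

// Increment that wraps at exactly 2^16 on every platform,
// because the width is spelled out in the type itself.
std::uint32_t bump16(std::uint16_t x) {
    return static_cast<std::uint16_t>(x + 1u);
}

// The same increment through an explicit 32-bit type wraps at 2^32 instead.
std::uint32_t bump32(std::uint32_t x) {
    return x + 1u;
}

// A plain "unsigned int" carries no such guarantee: the standard only
// promises at least 16 bits, so the wrap point depends on the target.
```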



            • #21
              Originally posted by bnolsen View Post
              Forced garbage collection, unstable (D2 came out very fast and breaks compatibility with D1). I'd say D is more like a modern Algol in trying to do too much. I'm not very interested, and C++11 just about removes most of the motivation for using D.

              I'd rather build on something more innovative like clay, although I have yet to fully test drive it.
              There's nothing wrong with fixing inherent problems in a language, even if it breaks compatibility across major versions. D was young (arguably still is), so it could afford to do that.

              Allowing deficiencies to linger across versions for the sake of backwards compatibility is one of the reasons why C++ garners so much hate among programmers.

              -- I'm not very interested -- Does not affect the price of bread
              -- ...and c++11 just about removes most motivation for using D. --Agreed



              • #22
                I think these might be the worst slides I've seen in a while. Completely worthless on their own. Now I have to watch the whole video just to find out what they intend to do exactly, where LLVM/Clang fits in, and why the standard features every decent IDE comes with (indentation, renaming, etc.) are not enough anymore.

                And I suspect nobody in this thread watched the video, all posts up to now just center about generic C++ discussion/bashing.



                • #23
                  Originally posted by BlackStar View Post
                  The fact that every single library redefines fixed-width integer types indicates a fundamental issue in the design of C++. Spinning this as a "feature" flies in the face of qint32, int32_t, boost::int32, GLint, DWORD and the dozens of other custom types that are written to work around this issue.

                  The "performance" argument is laughable at best. Using a native int type will not magically make your code faster when you recompile on a 16-bit architecture - it will merely break your code, because a++ will suddenly overflow at 2^16 instead of 2^32 (and if you check for overflow, your code will be slower than if you had used the correct fixed-width type from the beginning).

                  In short, bollocks. Every well-written portable library (re)defines the same basic fixed-width integer types and uses them exclusively instead of "int", "short" and the like. C99 realized this problem and introduced <stdint.h>. It's high time C++ did the same.

                  (As an interesting aside, the CLR handles fixed vs native integers in an even better way: integers are all fixed-width by default (int8, int16, int32, int64) and there is a special "native int" type that conforms to the bitness of the underlying platform (and has properties like Size that you can query). Best of both worlds, since you get correct code by default and you can explicitly drop down to native int when necessary (and it almost never is).)
                  C++ does have <cstdint>. The point was that even though so many things are in the standard library, third-party libs just keep reinventing the wheel to support ancient compilers.



                  • #24
                    Originally posted by BlackStar View Post
                    The fact that every single library redefines fixed-width integer types indicates a fundamental issue in the design of C++. Spinning this as a "feature" flies in the face of qint32, int32_t, boost::int32, GLint, DWORD and the dozens of other custom types that are written to work around this issue.

                    The "performance" argument is laughable at best. Using a native int type will not magically make your code faster when you recompile on a 16-bit architecture - it will merely break your code, because a++ will suddenly overflow at 2^16 instead of 2^32 (and if you check for overflow, your code will be slower than if you had used the correct fixed-width type from the beginning).

                    In short, bollocks. Every well-written portable library (re)defines the same basic fixed-width integer types and uses them exclusively instead of "int", "short" and the like. C99 realized this problem and introduced <stdint.h>. It's high time C++ did the same.

                    (As an interesting aside, the CLR handles fixed vs native integers in an even better way: integers are all fixed-width by default (int8, int16, int32, int64) and there is a special "native int" type that conforms to the bitness of the underlying platform (and has properties like Size that you can query). Best of both worlds, since you get correct code by default and you can explicitly drop down to native int when necessary (and it almost never is).)
                    Performance of C++ is very important. That it's fairly close to C makes it suitable for devices with ARM cores, or lower-end devices (TI's MSP430 series comes to mind). I think you misread a little, because I never claimed that using "int" would magically make your program faster - it's that C++ is only a step above C, which allows you to write high-performance code with it.
                    And again, not every library has to reimplement things. That they do is not a reflection of C++ as a programming language, but more a reflection of the environments/compilers that library should work with, and the design goals of the people behind it.
                    The problem of using integer values and knowing their width will affect some interfaces, and typedefs to whichever type is appropriate can solve quite a few issues. OpenGL, for example, doesn't specify "GLuint32"; it uses "GLuint", which allows for the same interface across multiple target platforms, better code portability, etc. Forcing it to be 32-bit would make it run a good deal slower on 16-bit platforms.

                    C++ lets you do everything nicely, but won't force you to. It's a very open and free language in that regard, but does make it more difficult to work with for some people.
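                    (Editor's note: the typedef-per-platform pattern described above might look like the following sketch; "my_uint" and "TARGET_16BIT" are hypothetical names for illustration, not the real GL headers:)

```cpp
// Hypothetical portability header: the interface always says "my_uint",
// and each target maps it to whatever unsigned type is natural there.
#if defined(TARGET_16BIT)
typedef unsigned long my_uint;   // "int" is 16-bit here, so "long" supplies the range
#else
typedef unsigned int  my_uint;   // native int is already 32 bits (or wider)
#endif

// Client code is written once against my_uint and recompiles everywhere.
my_uint next_id(my_uint id) { return id + 1u; }
```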



                    • #25
                      Originally posted by mirv View Post
                      And again, not every library has to reimplement things. That they do is not a reflection of C++ as a programming language, but more a reflection of the environments/compilers that library should work with, and the design goals of the people behind it.
                      I don't see the distinction. The mere fact that every *portable* library is littered with #ifdef MSVC ... #elif WHATEVER ... #else ... #endif reflects on the (lack of) design and foresight in C++.

                      I don't know if I am making myself clear here, but my argument is that portable C++ requires either ignoring or subverting the built-in types of the language. This is a unique problem that pretty much no other widely-used language suffers from. (And, please, we've been writing code for weaker CPUs than MSP430 with higher-level languages for decades.)

                      The problem of using integer values and knowing their width will affect some interfaces, and typedefs to whichever is appropriate can solve quite a few issues. OpenGL for example doesn't specify "GLuint32", it uses "GLuint", which allows for the same interface across multiple target platforms, better code portability, etc. Forcing it to be 32bit would make it run a good deal slower on 16bit platforms.
                      And autoconverting GLuint to 16-bit on 16-bit platforms would break your code. Slower code >> broken code, any way you slice it.

                      Btw, GLuint is defined as 32-bit on all platforms - that's why it exists. (Otherwise you could have just used "unsigned" and been done with it.) I know, because I've checked.

                      C++ does have <cstdint>.
                      cstdint is not part of the language and many modern compilers don't ship with it. You can't fault library writers for working around problems such as this one. If they waited for the TR board to fix C++ and compiler writers to update their compilers, they'd probably die of old age.
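                      (Editor's note: the kind of workaround being described looks roughly like this in practice; a sketch of a compatibility shim, with "my_int32"/"my_uint32" as illustrative names:)

```cpp
// Sketch of the shim portable libraries carried while <stdint.h>/<cstdint>
// support was still patchy: detect the compiler, then typedef accordingly.
#if defined(_MSC_VER) && _MSC_VER < 1600  // MSVC before VS2010 shipped no <stdint.h>
typedef __int32          my_int32;
typedef unsigned __int32 my_uint32;
#else
#include <stdint.h>
typedef int32_t  my_int32;
typedef uint32_t my_uint32;
#endif
```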



                      • #26
                        Originally posted by BlackStar View Post
                        I don't see the distinction. The mere fact that every *portable* library is littered with #ifdef MSVC ... #elif WHATEVER ... #else ... #endif reflects on the (lack of) design and foresight in C++.

                        I don't know if I am making myself clear here, but my argument is that portable C++ requires either ignoring or subverting the built-in types of the language. This is a unique problem that pretty much no other widely-used language suffers from. (And, please, we've been writing code for weaker CPUs than MSP430 with higher-level languages for decades.)
                        Care to show me an example of higher-level languages on weaker MCUs "for decades"? Because those things are still mostly done in C or assembly; if you had worked with those devices, you'd know that. I do know of one group that made a Python interpreter for an MSP430 chip, but it was incredibly slow and definitely not for battery operation.

                        I'll also say again: C++ is dependent upon the underlying machine architecture and, if you want to include the runtimes, the operating system too. It's much closer to the metal than many other languages, by design, and that you're trying to compare it to languages that aren't reflects on the well-thought-out design and foresight given to C++.

                        And autoconverting GLuint to 16-bit on 16-bit platforms would break your code. Slower code >> broken code, any way you slice it.

                        Btw, GLuint is defined as 32-bit on all platforms - that's why it exists. (Otherwise you could have just used "unsigned" and been done with it.) I know, because I've checked.



                        cstdint is not part of the language and many modern compilers don't ship with it. You can't fault library writers for working around problems such as this one. If they waited for the TR board to fix C++ and compiler writers to update their compilers, they'd probably die of old age.
                        GLuint may be defined as 32-bit now, but that doesn't mean it has to be. What about 64-bit? Granted, I've not gone completely over the spec to see if GLuint is always meant to be a 32-bit number (it may well be) - but then, what about OpenGL in environments without stdint? You'd have to either typedef it yourself (bad for OpenGL portability) or, shock horror, use a library typedef. If no assumptions were made about GLuint, converting it between 32-bit and 16-bit wouldn't break code, btw.



                        • #27
                          How the heck can you fit anything C++ onto the common MSP430? The models we used had 512 bytes of RAM and 16kb of flash, and we had trouble even fitting simple C programs in there.

                          Even the highest models seem to have at most 16kb of RAM and about a meg of flash. That's certainly enough to fit some subset of the C++ STL in the flash, but not enough RAM to do anything C++-y.



                          • #28
                            Originally posted by curaga View Post
                            How the heck can you fit anything C++ onto the common MSP430? The models we used had 512 bytes of RAM and 16kb of flash, and we had trouble even fitting simple C programs in there.

                            Even the highest models seem to have at most 16kb of RAM and about a meg of flash. That's certainly enough to fit some subset of the C++ STL in the flash, but not enough RAM to do anything C++-y.
                            There are larger models - the 5xxx series in particular. Just don't go playing with dynamic memory, and it's quite possible. I've used most of the MSP430 series with a coupled radio (CC1101 typically), but I'll admit I enjoyed the f2252 the most - it certainly teaches you to manage memory properly.



                            • #29
                              Don't make too many assumptions.
                              And everybody makes too many.

                              The type system currently in C/C++ is a mess the way it is.
                              They should have defined default sizes.

                              I would love to use types the following way:

                              int() a = 4;

                              Notice the (): this would be used to declare the number of bits or other implementation specifics.
                              Without anything specified, a default value would be chosen.
                              Just like other systems: http://en.wikipedia.org/wiki/C_data_types.
                              This would unify declarations of types.
                              It would make things simpler and more static (as in not dynamic), but at the same time more adaptable.

                              I could then separate the behaviour of the types from implementation details (e.g. size in number of bits).

                              This could cover almost all ground with the following:
                              int()
                              float()
                              uint()

                              Some advanced things might need this too:
                              ufloat() // unsigned float
                              normfloat() // normalized float between -1 and +1 inclusive: [-1, +1]
                              unormfloat() // unsigned normalized float: [0, +1]

                              // tip for language creators:
                              // give default types as much range as possible;
                              // if you don't need the extra values, you simply don't use them, but
                              // if a programmer needs something that is not in the default types,
                              // yet another library appears - a million times over

                              This would be nice for strings too:
                              string() mystring = "blabla";

                              It could declare what encoding the string has - ANSI, UTF-8, UTF-16, UTF-32, ... - and other things, such as whether it's a null-terminated C string or something else.

                              All of this needs fully specified defaults.
                              With those explicitly defined, we can make more valid assumptions and do things like converting between types where possible and creating bindings for other languages more reliably. I'm sure many other things would become more reliable and automatic with such a system.

                              When I type "(", code completion can show me all the valid things I can use. Very handy and useful.

                              The key: make data types statically defined but adaptable.
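                              (Editor's note: something close to this idea can already be sketched with C++ templates; "Int" is a hypothetical alias, and the default width of 32 bits is an assumption, not part of the proposal:)

```cpp
#include <cstdint>
#include <type_traits>

// Rough approximation of the "int()" proposal above: the bit width is a
// template parameter, and omitting it picks a fully specified default.
template <int Bits = 32>
using Int = std::conditional_t<Bits == 8,  std::int8_t,
            std::conditional_t<Bits == 16, std::int16_t,
            std::conditional_t<Bits == 32, std::int32_t,
                                           std::int64_t>>>;

Int<>   a = 4;    // default width: 32 bits here
Int<16> b = 100;  // explicitly 16 bits
```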
                              Last edited by plonoma; 06-17-2012, 09:47 AM.



                              • #30
                                Originally posted by jayrulez View Post
                                There's nothing wrong with fixing inherent problems in a language, even if it breaks compatibility across major versions. D was young (arguably still is), so it could afford to do that.

                                Allowing deficiencies to linger across versions for the sake of backwards compatibility is one of the reasons why C++ garners so much hate among programmers.

                                -- I'm not very interested -- Does not affect the price of bread
                                -- ...and c++11 just about removes most motivation for using D. --Agreed
                                This exactly.
                                Many standards nowadays suffer from the fact that they can't change anything.
                                They have advertised compatibility and are now reaping the problems of that.
                                Working with major versions instead of blanket compatibility could solve this.

                                (It is possible to run new and old programs on new stuff without breakage, because the libraries and other components should be able to detect which version is requested and adapt accordingly.)

                                But for optimal effect it has to be present from the beginning (you can work around its absence: the old way has no version, so detect that). D understands this; the makers of C and C++ did not think about it at the beginning. (Since things were fairly new back then, I'm going to overlook that.)

                                One thing that I don't like about C/C++ is that they don't have standardized binary literals.
                                A lot of embedded code uses them, and for some things it's much easier to spot mistakes in code written with them (use of different bits as flags).
                                Their absence requires me to do a kind of simple maths that's boring and not fun!
                                (It also distracts me from what I'm actually trying to do.)
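                                (Editor's note: binary literals were eventually standardized in C++14, and GCC accepted the 0b... form as an extension before that; a sketch of the bit-flags use case described above:)

```cpp
// Flag bits written as binary literals (standard since C++14):
// the bit positions are obvious at a glance, no hex/decimal arithmetic needed.
constexpr unsigned FLAG_READ  = 0b0001;  // bit 0
constexpr unsigned FLAG_WRITE = 0b0010;  // bit 1
constexpr unsigned FLAG_EXEC  = 0b0100;  // bit 2

constexpr unsigned rw = FLAG_READ | FLAG_WRITE;  // combine flags with OR
```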
                                Last edited by plonoma; 06-17-2012, 09:52 AM.

