Google Wants To Make C++ More Fun


  • archibald
    replied
    I think both the C++ variants need to include '<< std::endl' or '<< "\n"' :-)
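    Something like this, for example (a minimal, untested sketch; only the trailing newline is added):
    Code:
    #include <iostream>

    int main()
    {
        std::cout << 42 << "\n"; // or << std::endl to also flush the stream
        return 0;
    }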



  • Drago
    replied
    DAFAQ, what did I just read?!

    Code:
    #include <iostream>

    int main()
    {
        std::cout << 42;
        return 0;
    }

    Originally posted by BlackStar View Post
    I mean something like this:
    Code:
    #include <iostream>
    #include <boost/lexical_cast.hpp>
    
    int main(int argc, char *argv[])
    {
        std::cout << boost::lexical_cast<std::string>(42);
        return 0;
    }
    versus

    Code:
    10 PRINT 42
    Which looks higher level to you?



  • BlackStar
    replied
    Originally posted by hoohoo View Post
    I really want to say you two are arguing (English) semantics. What does each of you mean by "higher level"?
    I mean something like this:
    Code:
    #include <iostream>
    #include <boost/lexical_cast.hpp>
    
    int main(int argc, char *argv[])
    {
        std::cout << boost::lexical_cast<std::string>(42);
        return 0;
    }
    versus

    Code:
    10 PRINT 42
    Which looks higher level to you?
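    (For comparison: C++11's std::to_string does the same conversion without Boost, though it still looks nothing like the BASIC one-liner. A minimal sketch:)
    Code:
    #include <iostream>
    #include <string>

    int main()
    {
        // Same conversion as the boost::lexical_cast example, via std::to_string (C++11).
        std::cout << std::to_string(42);
        return 0;
    }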



  • hoohoo
    replied
    Originally posted by 1c3d0g View Post
    He probably meant Notepad 2, or any editor based on Scintilla, as far as I'm concerned.
    Notepad, Notepad 2, whatever. Generally I mean that I like editors that do not try to think very much.

    As for the lack of line numbers in Notepad, yeah, I agree that is a failing.



  • 1c3d0g
    replied
    Originally posted by uid313 View Post
    Notepad is shit, it doesn't even include line numbering or syntax highlighting.

    ...
    He probably meant Notepad 2, or any editor based on Scintilla, as far as I'm concerned.



  • hoohoo
    replied
    Originally posted by mirv View Post
    I had a longer reply, but dodgy net connection here lost it.
    Short: you're saying BASIC is higher level than C/C++? Go try using it. Also, do please look up what, exactly, an MSP430 series chip is before you try to compare it to a TI-8x graphing calculator.
    Go try to keep code portable across different architectures using only uint8_t, uint32_t, etc. Sometimes, using plain old "unsigned int" is actually easier.
    I really want to say you two are arguing (English) semantics. What does each of you mean by "higher level"?



  • mirv
    replied
    Originally posted by BlackStar View Post
    Microsoft was formed by writing a BASIC interpreter for the Altair 8800 in 1975-1976.

    TI-BASIC runs on TI calculators with a Z80 CPU (designed in 1976).

    Just two examples.



    This doesn't even make sense. Convoluted logical acrobatics.



    GLuint64, duh.



    Table 2.2, page 16 of the 4.2 core specification.

    Those library typedefs wouldn't - shouldn't - be necessary in any sane language with a properly defined type system. They are a kludge around the deficiencies of C/C++'s legacy type system.



    Oh really? You are either trolling or you just lack experience in OpenGL. The ways your code would break when interfacing with the gfx hardware are too many to enumerate - which is why GLuint is defined as 32bit in the first place.
    I had a longer reply, but dodgy net connection here lost it.
    Short: you're saying BASIC is higher level than C/C++? Go try using it. Also, do please look up what, exactly, an MSP430 series chip is before you try to compare it to a TI-8x graphing calculator.
    Go try to keep code portable across different architectures using only uint8_t, uint32_t, etc. Sometimes, using plain old "unsigned int" is actually easier.
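    (Roughly the trade-off in question; a minimal, untested sketch, with hypothetical variable names:)
    Code:
    #include <cstdint>

    // Fixed-width types pin the representation down exactly...
    std::uint32_t wire_checksum = 0;   // must stay 32 bits for an external format

    // ...while "plain" types let the compiler pick whatever is natural for the
    // target, e.g. 16 bits on an MSP430, 32 bits on a desktop CPU.
    unsigned int loop_counter = 0;

    int main()
    {
        for (loop_counter = 0; loop_counter < 100; ++loop_counter)
            wire_checksum += loop_counter; // placeholder work
        return static_cast<int>(wire_checksum & 0xFF);
    }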



  • plonoma
    replied
    Originally posted by Pseus View Post
    I'm not sure I follow you; what you describe appears to be possible in current C++.

    Code:
    myint<32> a = 3; //32 bits, specialized class template
    myint a = 3; //default, N bits
    And this is pretty much run-of-the-mill C++, nothing special about it.
    Good, somebody actually engaging with the idea.

    You're working with templates there, which is a very different way of working than with simple value types like int or char.
    Just because it looks the same does not mean it works the same way.
    I'm trying to have something more specific, but still with some freedom and options to work with,
    while avoiding an explosion or proliferation of names and typedefs.
    (Templates are more generalized than what I'm proposing; a rough sketch of the kind of template Pseus means follows below.)
    Different instantiations won't work with each other, so you won't get the behaviour I'm trying to create.
    Read about how templates work and about the downsides of templates.
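    (For reference, a rough, untested sketch of the kind of class template Pseus's snippet presumes, picking a fixed-width storage type from the requested bit count:)
    Code:
    #include <cstdint>
    #include <type_traits>

    template <int Bits = 32>
    class myint
    {
        static_assert(Bits == 8 || Bits == 16 || Bits == 32 || Bits == 64,
                      "unsupported width");
        using storage =
            std::conditional_t<Bits == 8,  std::int8_t,
            std::conditional_t<Bits == 16, std::int16_t,
            std::conditional_t<Bits == 32, std::int32_t, std::int64_t>>>;
        storage value;
    public:
        myint(storage v = 0) : value(v) {}
        operator storage() const { return value; }
    };

    int main()
    {
        myint<32> a = 3; // 32 bits, explicit
        myint<>   b = 3; // default width; a bare "myint b = 3;" needs C++17 deduction
        return a + b;    // behaves like the underlying integer
    }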

    I'm trying to find a good system for simple value types.

    By default, C/C++ does not fix the size (number of bits) of types like int.
    One compiler might use a particular size, but that is no guarantee that all compilers, or even different versions of the same compiler, use the same size.
    Different sizes cause programs to behave differently (they overflow at different values)!
    That is a big problem for higher-level programming in general!
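    (This is the gap the <cstdint> fixed-width typedefs already paper over; a minimal sketch:)
    Code:
    #include <climits>
    #include <cstdint>
    #include <iostream>

    int main()
    {
        // sizeof(int) is implementation-defined: commonly 32 bits today, but
        // 16 bits on many embedded compilers, so overflow kicks in at different values.
        std::cout << "int is " << sizeof(int) * CHAR_BIT << " bits\n";

        // Fixed-width typedefs pin the size (and thus the overflow point) down explicitly.
        std::int32_t n = 2147483647; // INT32_MAX everywhere
        std::cout << "int32_t is " << sizeof(n) * CHAR_BIT << " bits\n";
        return 0;
    }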

    I'm trying to make things simpler by not using templates,
    but by introducing types with declaration options.

    That lets me use the same name with different options.
    It lets library writers adapt types without having to invent new type names, as in the current situation.
    The system can be extended with new types and options if the need arises.
    And because of its novel construction (the use of round parentheses), the defaults can be fully specified without interfering with backwards compatibility with existing code:
    int <- old system
    int() <- new system with clearly defined defaults

    I'm also not only targeting the number of bits here.
    You have seen how this could be used for strings.
    For strings it could make integrating new encodings and options much simpler, with fewer adaptations needed to current code.
    There won't be an explosion of names like there is now.
    Somebody with a new encoding would immediately have a way to use it in code,
    without introducing new types (or using templates).

    One of the advantages is also that it does not interfere with current code.
    If this were added to the standard and to compilers, all current code would still compile, since it is purely an addition.

    If a project wanted to use this, it could just start using it for new code,
    without having to refactor old code to make things that already work, work.
    Using it comes with zero penalty.

    Unlike the recycling of the auto keyword (they should have used autotype as the word).

    It's also a clean slate for defaults!
    You could have things like UTF-8 as the standard string encoding when nothing is specified explicitly, right from the beginning!

    Libraries and APIs can specify which options they use and expect.
    Last edited by plonoma; 19 June 2012, 12:39 PM.



  • BlackStar
    replied
    Originally posted by mirv View Post
    Care to show me an example of higher level languages for weaker MCUs "for decades"? Because those things are still mostly done in C, or assembly. If you worked with those devices, you'd know that. I do know of one group that made a python interpreter for an MSP430 chip, but it was incredibly slow and definitely not for battery operation.
    Microsoft was formed by writing a BASIC interpreter for the Altair 8800 in 1975-1976.

    TI-BASIC runs on TI calculators with a Z80 CPU (designed in 1976).

    Just two examples.

    I'll also say again: C++ is dependent upon the underlying machine architecture, and if you want to include the runtimes, then the operating system too. It's much closer to the metal than many other languages, by design; that you're trying to compare it to languages that aren't reflects on the well-thought-out design and foresight given to C++.
    This doesn't even make sense. Convoluted logical acrobatics.

    GLuint may be defined as 32bit now, but it doesn't mean it has to be. What about 64bit?
    GLuint64, duh.

    Granted, I've not gone completely over the spec to see if GLuint is always meant to be a 32bit number (it may well be) - but then, what about OpenGL on environments without stdint? You'd have to either typedef yourself (bad for OpenGL portability), or shock horror, use a library typedef.
    Table 2.2, page 16 of the 4.2 core specification.

    Those library typedefs wouldn't - shouldn't - be necessary in any sane language with a properly defined type system. They are a kludge around the deficiencies of C/C++'s legacy type system.

    If no assumptions were made about GLuint, converting it between 32bit and 16bit wouldn't break code, btw.
    Oh really? You are either trolling or you just lack experience in OpenGL. The ways your code would break when interfacing with the gfx hardware are too many to enumerate - which is why GLuint is defined as 32bit in the first place.
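    (A minimal sketch of the point, assuming a typical desktop gl.h where GLuint ends up as a 32-bit unsigned integer; the typedef and static_assert below are illustrative, not the real header:)
    Code:
    #include <climits>
    #include <cstdint>

    // Roughly how common headers declare it (the real typedef lives in gl.h / khrplatform.h):
    typedef std::uint32_t GLuint;

    // Table 2.2 gives GL's uint a 32-bit width, so code that hands GLuint buffers
    // to the driver can check the assumption outright rather than hope.
    static_assert(sizeof(GLuint) * CHAR_BIT == 32, "GLuint must be 32 bits");

    int main() { return 0; }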



  • uid313
    replied
    Originally posted by hoohoo View Post
    If you are on Windows, you have Visual Studio.

    All I really hear from the guy in the video is that how he likes to do stuff is how stuff should be done.

    Me, I like Notepad and kedit and vi, they don't try to impose policy on me.
    Notepad is shit, it doesn't even include line numbering or syntax highlighting.

    Visual Studio is absolutely awesome (well, some bugs). Together with ReSharper, it is amazing.
    You get code folding, auto-completion, a class browser, built-in documentation, refactoring, automatic indentation, debugging, code suggestions; it points out issues and helps you fix them, etc.
    It's invaluable!

    A simple text editor can work for small projects. But when you do large enterprise projects and object-oriented programming, it's good to have an IDE.

