Approved: C++0x Will Be An International Standard

  • #31
    Originally posted by AnonymousCoward View Post
    Look, this is just utterly worthless blah blah. If you can actually show me a piece of code that performs measurably better when using bit shifts etc. instead of a std::bitset, then do so. Otherwise, just admit you're wrong and STFU. And by the way, the same goes for almost all other STL containers. The abstractions chosen in the STL were chosen precisely *because* they can be implemented with minimal overhead.

    Plonoma also needs to be hit by a cluebat by the way. C++11 provides support for user-defined literals, which makes it trivial to add binary literals to the language if you need them.
    http://www.open-std.org/jtc1/sc22/wg...2008/n2765.pdf
    What is this? If you look at what I said, I first mistakenly said bitset wasn't optimal, then pointed out that this was down to my own usage of it. You've been consistently assuming I was claiming bitset was slow.

    But you really don't care much about how easy your code is for others to read, do you? I've seen several examples of statisticians who are used to R but not to C-like languages, and even Java programmers, not understanding C/C++ syntax properly, and I really don't think designing a language around only the needs of those who've mastered it makes sense. Plonoma's complaint about the lack of binary literals is a good, if not very important, example: not everyone likes to think about binary flags using hexadecimal syntax. And what about nuances like
    Code:
    uint64_t x = (0x12345678 << 32) + 0x9ABCDEF0;  // 0x12345678 is a plain (32-bit) int, so shifting it by 32 is undefined behaviour
    not doing what someone might expect, because the first literal isn't marked as a 64-bit type? I'm sure AC knows exactly why this won't work, but I bet it will confuse some others, so why not try to avoid this type of unexpected behaviour where possible? (In this case it may not be obvious how to solve the issue; my solution would be to make the type of the entire expression uint64_t, since that is the target type. That's very definitely not C, but overall I think this approach is advantageous.)
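    For completeness, a minimal sketch of how to get the value people usually intend (here using the UINT64_C macro from <cstdint>; an explicit cast or a ULL suffix would work just as well):
    Code:
    #include <cstdint>

    // Make the left operand 64 bits wide *before* shifting.
    uint64_t x = (UINT64_C(0x12345678) << 32) + 0x9ABCDEF0;               // 0x123456789ABCDEF0
    uint64_t y = (static_cast<uint64_t>(0x12345678) << 32) + 0x9ABCDEF0;  // same result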



    • #32
      Originally posted by mirv View Post
      If, however, you are quite desperate for an example, fine...

      A 16-bit architecture handling 32-bit values can optimise better by keeping the value in a pair of adjacent registers, treated as a single value instead of an array. Array data must be loaded from data memory, incurring an extra 4 cycles; furthermore, writes to the array must be pushed back, particularly when operating on volatile members, whereas the programmer can more easily copy the volatile data to a local non-volatile copy, manipulate it, and write it back later (useful when inside an interrupt context).
      That is not an example, that's just more hand-waving blah-blah. Why don't you just show some actual code and prove your assertions?

      Originally posted by mirv View Post
      Now there are some caveats here - mainly that if you're caring so much about it, you're likely to be using C instead of C++.
      That doesn't make sense. C++ was explicitly designed with the zero-overhead principle in mind. To quote Stroustrup
      - what you don't use, you don't pay for
      - what you do use, you couldn't hand code any better
      And they pretty much succeeded with that. There's nothing in C89 that you can't do as fast (or faster) in C++.

      Originally posted by mirv View Post
      You're also assuming the compiler maintainers have included STL, and that their implementation is sane.
      If a compiler doesn't include STL support, it's not a C++ compiler by definition, as the STL is part of the standard. And if their implementation sucks, well, then use another one! It's not like there's a lack of STL implementations. GNU libstdc++, LLVM libc++, STLport just to name a few.

      Originally posted by mirv View Post
      They might even check for 32bit numbering in that case, but likely only apply it with optimisations enabled.
      So what? Who cares about performance without optimization?

      Originally posted by mirv View Post
      Which leads me to the core thing I have against blindly using STL: you don't know for sure what it's doing internally! Perhaps you're designing for speed, or perhaps for less memory, but you're really better off doing it yourself in many cases because you know for certain what it's doing.
      Sorry, but this is just bullshit. Again, the abstractions that were chosen in the STL were chosen explicitly because they allow for an obvious implementation so that you pretty much know how things work internally. For example vector<T>::iterator can be defined as just a T*, because the iterator invalidation rules were chosen such that this is possible. And while in a modern STL implementation it is probably a wrapper around T* (in order to provide better type safety), that doesn't affect the code that is generated by any remotely modern compiler.
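      To make that concrete, here is a minimal sketch of the two styles under discussion (an illustration, not a benchmark); with optimisations enabled, a mainstream compiler will typically emit essentially the same instructions for both:
      Code:
      #include <bitset>
      #include <cstdint>

      // Manual bit fiddling on a plain integer.
      uint32_t set_flag_raw(uint32_t flags, unsigned bit) {
          return flags | (UINT32_C(1) << bit);
      }

      // The same operation expressed through std::bitset.
      std::bitset<32> set_flag_bitset(std::bitset<32> flags, unsigned bit) {
          flags.set(bit);
          return flags;
      }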



      • #33
        Originally posted by AnonymousCoward View Post
        That is not an example, that's just more hand-waving blah-blah. Why don't you just show some actual code and prove your assertions?
        Sorry, I stopped reading the rest of your post after that line. It shows you have zero understanding of the problem, and why bitsets are not guaranteed to be faster, or just as fast. I pointed out a case where they will be slower. If that caves your world in, then don't ever touch an embedded system.



        • #34
          Originally posted by mirv View Post
          I pointed out a case where they will be slower.
          No you didn't. You made some argument about keeping stuff in registers as opposed to loading it from RAM etc.. But this can be trivially optimized away by the compiler. This is exactly what compiler optimizations are for. The fact that you don't believe it's possible reveals more about your (lack of) knowledge about modern compilers than about C++ or std::bitset.
          Of course, you may still prove me wrong. Just provide some code that proves your point, that is, code that will run faster using manual bit-fiddling instead of a std::bitset. The fact that you still didn't do this suggests that you can't though.

          Originally posted by Cyborg16 View Post
          What is this? If you look at what I said, I first mistakenly said bitset wasn't optimal, then pointed out that this was down to my own usage of it.
          Oh well, I must have misunderstood that bit.

          Originally posted by Cyborg16 View Post
          But you really don't care much about how easy your code is for others to read, do you?
          I care about how easy it is to read for other programmers. I don't care about non-programmers, as they will by definition not be able to understand it anyway.

          Originally posted by Cyborg16 View Post
          Plonoma's complaint about lack of binary literals is a good if not very important example
          Um, actually it's not an example at all, because C++11 effectively gives you binary literals. As I said earlier, user-defined literals are there for just that.



          • #35
            Originally posted by AnonymousCoward View Post
            I care about how easy it is to read for other programmers. I don't care about non-programmers, as they will by definition not be able to understand it anyway.
            I do. I work with a bunch of bio-statisticians who sometimes need to look at my code.

            Um, actually it's not an example at all, because C++11 effectively gives you binary literals. As I said earlier, user-defined literals are there for just that.
            Yeah, sure. What's the syntax, something like "10011011_binary" assuming you have a custom type named "binary"? Not too bad...
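            Something like this would do it in C++11, I think - a rough sketch only: it turns out to be a literal operator rather than a custom type, with the digits parsed at compile time by a raw literal operator template.
            Code:
            #include <cstdint>

            // Helper that folds a pack of '0'/'1' characters into an integer.
            template <char... Digits>
            struct binary_value;

            template <>
            struct binary_value<> {
                static constexpr uint64_t value = 0;
            };

            template <char D, char... Rest>
            struct binary_value<D, Rest...> {
                static_assert(D == '0' || D == '1', "binary literal may only contain 0 and 1");
                static constexpr uint64_t value =
                    (static_cast<uint64_t>(D - '0') << sizeof...(Rest)) | binary_value<Rest...>::value;
            };

            // Raw literal operator template: the literal's characters arrive as a pack.
            template <char... Digits>
            constexpr uint64_t operator"" _binary() {
                return binary_value<Digits...>::value;
            }

            static_assert(10011011_binary == 0x9B, "self-check");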

            Sounds like we're on the same page now at least.



            • #36
              Originally posted by AnonymousCoward View Post
              No you didn't. You made some argument about keeping stuff in registers as opposed to loading it from RAM etc.. But this can be trivially optimized away by the compiler. This is exactly what compiler optimizations are for. The fact that you don't believe it's possible reveals more about your (lack of) knowledge about modern compilers than about C++ or std::bitset.
              Of course, you may still prove me wrong. Just provide some code that proves your point, that is, code that will run faster using manual bit-fiddling instead of a std::bitset. The fact that you still didn't do this suggests that you can't though.
              You didn't even read what I had written, did you? Or perhaps you've never dealt with embedded systems. If you had, you'd also know that I can't give you said code - it would be meaningless. It's compiled and run on an embedded system - not a desktop PC. So maybe it's you who has no idea about compilers, or maybe you just lack the experience to realise that not all of computing is limited to a desktop PC, or even to something that has an operating system.
              My point still stands - you can't guarantee that a bitset will be just as fast (and really, the onus is on you to prove that you can). So again, bitsets have their place in the STL (Standard Template Library, in case you missed what that means - it's not part of the language itself), but you should never rely upon them being as fast as direct bitwise operations.
              If the STL were always just as fast, it wouldn't have required any updates to improve move performance. Except it did. Internally that was hidden from the user - so again, it's a nice interface and all, and very useful for code portability, but just because you can use it doesn't mean that you should.



              • #37
                Originally posted by mirv View Post
                You didn't even read what I had written, did you? Or perhaps you've never dealt with embedded systems. If you had, you'd also know that I can't give you said code - it would be meaningless. It's compiled and run on an embedded system - not a desktop PC.
                You're making excuses. I don't care if you show me x86, 68k or z80 or ARM assembly or something else. Just show me *something* that proves your point on *any* platform.

                Originally posted by mirv View Post
                My point still stands - you can't guarantee that a bitset will be just as fast
                I didn't start this argument; somebody else claimed that using bit operators was faster than using a std::bitset, and you seem to agree, so YOU are the one who needs to prove something.

                Originally posted by mirv View Post
                If the STL were always just as fast, it wouldn't have required any updates to improve move performance. Except it did. Internally that was hidden from the user - so again, it's a nice interface and all, and very useful for code portability, but just because you can use it doesn't mean that you should.
                Again, you're just spreading bullshit. When one uses the STL, one has to understand how it works. For example, one has to understand that returning a std::vector from a function may be expensive because its copy constructor may be invoked and thus all elements in the vector may be copied. But that doesn't mean that std::vector is slow and that one should avoid it. It just means that the programmer shouldn't invoke the copy constructor when it's not necessary. In C++98, that means that you shouldn't return a std::vector from a function, but instead take another std::vector argument by reference and store the result of the function therein. Of course, in C++11 the whole issue is moot due to rvalue references.
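                To make the point concrete, here is a small sketch of the two styles described above (the function names are made up for illustration):
                Code:
                #include <cstddef>
                #include <vector>

                // C++98 style: avoid a potential element-wise copy by filling a
                // caller-provided vector instead of returning one.
                void make_squares_cpp98(std::size_t n, std::vector<int>& out) {
                    out.clear();
                    out.reserve(n);
                    for (std::size_t i = 0; i < n; ++i)
                        out.push_back(static_cast<int>(i * i));
                }

                // C++11 style: just return by value; the result is moved out
                // (or the copy elided entirely), so no element-wise copy occurs.
                std::vector<int> make_squares_cpp11(std::size_t n) {
                    std::vector<int> out;
                    out.reserve(n);
                    for (std::size_t i = 0; i < n; ++i)
                        out.push_back(static_cast<int>(i * i));
                    return out;
                }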



                • #38
                  Originally posted by AnonymousCoward View Post
                  You're making excuses. I don't care if you show me x86, 68k or z80 or ARM assembly or something else. Just show me *something* that proves your point on *any* platform.
                  I pointed out reasons why on an MSP430 series. You waved it off, apparently without reading. That was using your code example.

                  I didn't start this argument; somebody else claimed that using bit operators was faster than using a std::bitset, and you seem to agree, so YOU are the one who needs to prove something.
                  My statement was that you can't guarantee that they're just as fast. Try again.

                  Again, you're just spreading bullshit. When one uses the STL, one has to understand how it works. For example, one has to understand that returning a std::vector from a function may be expensive because its copy constructor may be invoked and thus all elements in the vector may be copied. But that doesn't mean that std::vector is slow and that one should avoid it. It just means that the programmer shouldn't invoke the copy constructor when it's not necessary. In C++98, that means that you shouldn't return a std::vector from a function, but instead take another std::vector argument by reference and store the result of the function therein. Of course, in C++11 the whole issue is moot due to rvalue references.
                  So... you have to know what the STL does internally in order to use it properly? Which you don't really know (I see you've used "may" a fair bit in there). So I see no reason to use a bitset<32> over a uint32_t, when the latter shows far more directly what is happening with your data (and it's part of the language, not a library).

                  (I could also go on quite a bit about the need to debug time-critical systems without optimisations applied, but that's starting to get into quite unusual cases.)



                  • #39
                    Originally posted by mirv View Post
                    You didn't even read what I had written did you? Or perhaps you've never dealt with embedded systems.
                    I don't think arguing about embedded systems makes much sense when discussing the latest version of the C++ standard. All the embedded platforms I ever dealt with were using old versions of gcc, sometimes ancient ones from 10 years ago. I don't know about the one you are using, but maybe that's the problem. Until a few years ago the C++ implementation in gcc may well have had some problems with STL performance. Also, if you're using some embedded operating system other than Linux, your options for changing the compiler might be limited.



                    • #40
                      Originally posted by mirv View Post
                      I pointed out reasons why on an MSP430 series. You waved it off, apparently without reading. That was using your code example.
                      Great! We're getting somewhere. Which compiler in which version with which flags did you use? Where's the assembly code it generated?


                      Originally posted by mirv View Post
                      So... you have to know what the STL does internally in order to use it properly? Which you don't really know (I see you've used "may" a fair bit in there).
                      No, this has nothing to do with STL internals. When you return a complex object from a function, the compiler is allowed, but not obligated, to call the object's copy constructor. It may also employ return value optimization and thus not call the copy constructor. So no, you don't have to know any STL internals, as you'll be on the safe side in either case if you use the technique I explained earlier. You just need to know that returning a vector from a function may invoke its copy constructor (this is part of the language spec), and you need to know that a std::vector's copy constructor runs in O(n), which is part of the STL spec. And of course, in C++11 you don't have to care at all, because the move constructor is guaranteed to be invoked if you construct an object from an rvalue.
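                      If it helps, a short sketch of the caller's side of this in C++11 (the function and names are made up for illustration):
                      Code:
                      #include <string>
                      #include <utility>
                      #include <vector>

                      // Hypothetical producer, returning by value.
                      std::vector<std::string> load_names() {
                          std::vector<std::string> v;
                          v.push_back("alice");
                          v.push_back("bob");
                          return v;                       // eligible for RVO / move
                      }

                      int main() {
                          // Constructed from an rvalue: at worst the vector's move constructor
                          // runs (O(1), it steals the internal buffer); with RVO even that call
                          // may be elided.
                          std::vector<std::string> names = load_names();

                          // Constructing from an lvalue would copy; std::move turns the source
                          // into an rvalue once it is no longer needed.
                          std::vector<std::string> stolen = std::move(names);
                          (void)stolen;
                      }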

