Approved: C++0x Will Be An International Standard

  • #16
    Originally posted by plonoma View Post
    I was not talking about bit manipulations!

    When you have a control register with each bit being a flag for something.
    Then it's much more practical to be able to write a binary literal.
    In this situation, working with hex makes it more error prone and complicated.
    If your documentation goes like '... the third bit enables <some random thing>...', especially then it's really practical to be able to use binary literals.
    My work involves quite a lot of register-flag control, and that's typically bit manipulation. Any documentation is written in plain text anyway, so you can write binary there just fine (the manuals often use a table to group bit meanings together), but actual code will almost certainly use a macro, which is far more readable than using a literal directly anyway. Even serial line output is far better displayed in hex, as it's more readable (assuming you don't care about the start/stop bits; if you do, then you're looking at timing and probably using a logic analyser anyway).
    So sorry, still don't understand why you think binary literals would be better than hex.



    • #17
      I'm not saying they are better everywhere.
      In some situations they are more practical to use.
      It's not about writing the documentation. It's that when the documentation describes things per bit, it's sometimes more practical to use a binary literal.

      You seem to think that it's a contest between the two. Lose that thought.
      Sometimes Hex is more convenient, sometimes binary is.

      Just because you're used to working in it doesn't make it more natural.
      And using a macro for something that should be a core language feature is, in my eyes, a fail!

      Your serial output example is itself a case where, depending on what you're doing, either hex or binary could be the better way to show the data.
      Last edited by plonoma; 08-15-2011, 03:18 PM.



      • #18
        Originally posted by Cyborg16 View Post
        You don't use bitset then? But truth be told bit-manipulation is most often used in performance-critical code and
        Code:
        bitset<2> bits;
        ...
        if( bits[1] ){ ... }
        isn't quite as fast as 'if( bits & FLAG ){...}'
        That's a load of bullshit; this can be trivially optimized away by the compiler.



        • #19
          Originally posted by plonoma View Post
          In some situations they are more practical to use.
          It's not about writing the documentation. It's that when the documentation describes things per bit, it's sometimes more practical to use a binary literal.
          I have to agree with mirv here. When would a binary literal ever be a better solution than hex? Can you give an example? In actual code rather than a generic description?

          0x8 = 4th bit set. Each hex digit = 4 bits, versus having to count out all those zeroes and make sure you aren't off by one.

          Maybe it just has to do with how comfortable someone is thinking in hex? I've always found it very easy.
          Last edited by smitty3268; 08-16-2011, 12:15 AM.



          • #20
            Originally posted by AnonymousCoward View Post
            That's a load of bullshit; this can be trivially optimized away by the compiler.
            Hmm, I tested bitset in the past and definitely found some performance let-down. You're probably right that the example given could be optimised; it might have been a function like
            Code:
            bool testBit(size_t n){
                return bits[n];
            }
            which prevented the optimisation.



            • #21
              Originally posted by Cyborg16 View Post
              Hmm, I tested bitset in the past and definitely found some performance let-down.
              Then you did something wrong. Take this program:
              Code:
              #include <bitset>
              bool f1(std::bitset<32> x, unsigned pos) {
                return x[pos];
              }
              bool f2(unsigned x, unsigned pos) {
                return x & (1 << pos);
              }
              With g++ 4.6.1-6 (from debian unstable, with flags -O2 -S), this is what f1 compiles to:
              Code:
              _Z2f1St6bitsetILj32EEj:
              .LFB693:
              	.cfi_startproc
              	movl	8(%esp), %ecx
              	movl	$1, %eax
              	sall	%cl, %eax
              	testl	%eax, 4(%esp)
              	setne	%al
              	ret
              	.cfi_endproc
              and this is what f2 compiles to:
              Code:
              _Z2f2jj:
              .LFB694:
              	.cfi_startproc
              	movl	8(%esp), %ecx
              	movl	$1, %eax
              	sall	%cl, %eax
              	testl	%eax, 4(%esp)
              	setne	%al
              	ret
              	.cfi_endproc
              It's exactly the same code. So please, just stop making up nonsense.



              • #22
                Now that this thread has derailed, I have to ask: what does bitset<33>, or bitset<129> give?
                In any case, I would argue against bitset when using machine-native widths (so 8, 16, 32, 64 bit), as it's then clearer what you're doing, but that's likely down to personal preference. I also prefer fopen, fread, etc., to iostream.
                Iterators on the other hand, now they can be useful, especially with the ranged for loop.



                • #23
                  Originally posted by AnonymousCoward View Post
                  Then you did something wrong. Take this program:
                  Code:
                  #include <bitset>
                  bool f1(std::bitset<32> x, unsigned pos) {
                    return x[pos];
                  }
                  bool f2(unsigned x, unsigned pos) {
                    return x & (1 << pos);
                  }
                  Don't jump to conclusions. Of course that can be optimised to the same code: they're equivalent implementations of the same thing, but then you might as well not use bitset anyway. Where bitset is useful is when you want to store a set of boolean options and not mess around explaining to non-programmers what's going on. E.g. the following code should be clear even if you don't know what the <</&/| binary operations do:
                  Code:
                  #include <bitset>
                  #include <iostream>
                  using namespace std;

                  enum SomeOptions {
                    option1=0, option2, option3, NumOptions
                  };
                  bitset<NumOptions> options;
                  options[option1] = false;
                  options[option2] = true;
                  ...
                  cout << "option 3 is: " << options[option3] << endl;
                  Why do you think the bitset class was created? Certainly not so that people familiar with binary arithmetic could point out "ooh, I have a snazzy new way of writing the exact same code as before with exactly the same performance"!

                  @mirv: one of the advantages of the above is that it works with any number of options, up to whatever limit the implementation places on the size of a static array.



                  • #24
                    Oh yes, I realise that its size is arbitrary, but I'm curious what the actual compiled code is for those lengths, and how bitset handles them internally. Only a curiosity though.



                    • #25
                      Originally posted by mirv View Post
                      Oh yes, I realise that its size is arbitrary, but I'm curious what the actual compiled code is for those lengths, and how bitset handles them internally. Only a curiosity though.
                      Well, it's handled essentially the way one would handle it in a C program. Say you want to store NUM_BITS bits in a bit field. You'll just use an array to store them:
                      Code:
                      #include <stdint.h>
                      #include <stdbool.h>
                      enum { NUM_BITS = 42 };
                      struct bitset {
                        uint32_t bits[(NUM_BITS + 31) / 32];  /* round up to whole 32-bit words */
                      };
                      bool get_bit(struct bitset *bits, unsigned n) {
                        return bits->bits[n / 32] & (1u << (n % 32));
                      }
                      std::bitset does the same thing internally.

                      Originally posted by Cyborg16 View Post
                      Don't jump to conclusions. Of course that can be optimised to the same code: it's equivalent implementations of the same thing
                      Well of course it's the same thing! There's no point in comparing two functions that do something different.


                      Originally posted by Cyborg16 View Post
                      Where bitset is useful is if you want to store a set of boolean options and not mess around explaining to non-programmers what's going on.
                      That doesn't make sense; non-programmers don't look at source code.

                      Originally posted by Cyborg16 View Post
                      Why do you think the bitset class was created?
                      They're easier to use than integer bit masks and, more importantly, they can contain an arbitrary number of bits.



                      • #26
                        Originally posted by AnonymousCoward View Post
                        Well, it's handled essentially the way one would handle it in a C program. Say you want to store NUM_BITS bits in a bit field. You'll just use an array to store them:
                        Code:
                        #include <stdint.h>
                        #include <stdbool.h>
                        enum { NUM_BITS = 42 };
                        struct bitset {
                          uint32_t bits[(NUM_BITS + 31) / 32];  /* round up to whole 32-bit words */
                        };
                        bool get_bit(struct bitset *bits, unsigned n) {
                          return bits->bits[n / 32] & (1u << (n % 32));
                        }
                        std::bitset does the same thing internally.


                        Well of course it's the same thing! There's no point in comparing two functions that do something different.



                        That doesn't make sense, non-programmers don't look at source code.


                        They're easier to use than integer bit masks and, more importantly, they can contain an arbitrary number of bits.
                        That may not be what the compiler spits out... don't make that assumption. It probably does, though it doubtless uses '>> 5' instead of '/ 32', and 'n & 0x001F' instead of 'n % 32'. It might contain bounds checking too; if I had the motivation (which I don't) I'd check it, but it's just an example.
                        No mistake, STL provides some great things, just don't be overly reliant on it for everything.



                        • #27
                          Originally posted by mirv View Post
                          That may not be what the compiler spits out... don't make that assumption. It probably does, though it doubtless uses '>> 5' instead of '/ 32', and 'n & 0x001F' instead of 'n % 32'. It might contain bounds checking too; if I had the motivation (which I don't) I'd check it, but it's just an example.
                          No mistake, STL provides some great things, just don't be overly reliant on it for everything.
                          Look, this is just utterly worthless blah blah. If you can actually show me a piece of code that performs measurably better using bit shifts etc. instead of a std::bitset, then do so. Otherwise, just admit you're wrong and STFU. And by the way, the same goes for almost all other STL containers. The STL's abstractions were chosen precisely *because* they can be implemented with minimal overhead.

                          Plonoma also needs to be hit by a cluebat by the way. C++11 provides support for user-defined literals, which makes it trivial to add binary literals to the language if you need them.
                          http://www.open-std.org/jtc1/sc22/wg...2008/n2765.pdf



                          • #28
                            Originally posted by AnonymousCoward View Post
                            Well of course it's the same thing! There's no point in comparing two functions that do something different.
                            That depends entirely on what you're trying to do: test how well the low-level implementation of a function can work in the best possible circumstances, or test the cost of simplifying your code. I sometimes get the impression coders have their head in a bucket and don't see others' points of view.



                            • #29
                              Originally posted by Cyborg16 View Post
                              That depends entirely on what you're trying to do: test how well the low-level implementation of a function can work in the best possible circumstances, or test the cost of simplifying your code. I sometimes get the impression coders have their head in a bucket and don't see others' points of view.
                              Again, this is totally useless chatter besides the point. You claimed that using a std::bitset isn't as fast using bit masks. Now, please either prove it or just shut the fuck up and admit that you were wrong.



                              • #30
                                STL containers are also generic, and though chosen and written by some clever people, there are some things which you simply can't get around by being generic. If you're going to provide an example of why bitsets are oh-so-much better, don't use the generic perfect case. Now I never said that bitsets were bad, just that they're not the answer for everything. Standard bit manipulation is often more useful (how often do you use bit flags greater than that supported by the machine architecture when using C++?) and you can mix numbers and bit flags in the one word a bit easier (pun intended).
                                If, however, you are quite desperate for an example, fine...

                                A 16-bit architecture handling 32-bit values can optimise them better by keeping the registers side by side and treating them as a single value instead of an array. Array data must be loaded from data memory, incurring an extra 4 cycles; furthermore, writes to the array must be pushed back, particularly when operating on volatile members, whereas the programmer can more easily copy the volatile data to a local non-volatile copy, manipulate it, and write it back later (useful when inside an interrupt context).

                                Now there are some caveats here - mainly that if you care that much about it, you're likely using C instead of C++. You're also assuming the compiler vendor has included the STL, and that their implementation is sane. They might even check for 32-bit values in that case, but likely only apply it with optimisations enabled.

                                Which leads me to the core thing I have against blindly using STL: you don't know for sure what it's doing internally! Perhaps you're designing for speed, or perhaps for less memory, but you're really better off doing it yourself in many cases because you know for certain what it's doing.

