Google Engineers Lift The Lid On Carbon - A Hopeful Successor To C++


  • Originally posted by Sergey Podobry View Post
    At first atomics work as a memory barrier so they stop execution of the current core to flush the store buffer.
    Whether it actually stops the core is an interesting question, but I'd bet it's not true for most modern microarchitectures.

    Originally posted by Sergey Podobry View Post
    Then they lock the cache line or the whole CPU bus (depending on the architecture and address alignment)
    That's quite a bit of wiggle room you left there. As I said, the only case where it should lock the CPU bus is for split locks, which most ISAs don't allow (except x86, unfortunately). But those are so bad they're typically considered a bug, and wouldn't be generated by modern compilers. So it's really a far corner case where what you originally said is true.

    Originally posted by Sergey Podobry View Post
    and signal CPUs on other sockets. If any other core tries to access the memory address on which an atomic operation is performed (or any address in case of the CPU bus lock) it will stall its execution.
    If you know anything about cache coherency protocols for copy-back caches, there's nothing exceptional about asserting exclusive ownership of a cacheline. Indeed, every time a core simply writes to a cacheline, it must assert exclusivity!

    Originally posted by Sergey Podobry View Post
    Atomics are 10-300 times slower than non-atomics plus affects other CPU cores.
    That's not at all bad, but I'm sure it's also highly dependent on a lot of factors. If you have multiple cores contending for the same atomic counter, it's just another form of lock contention -- which is a performance pitfall in whatever form it takes. Otherwise, it should be towards the lower end of that scale.
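
    To make the contention point concrete, here is a minimal sketch (mine, not from the thread): a few threads bumping one shared std::atomic versus each thread bumping its own cacheline-padded counter. The thread count, iteration count and 64-byte padding are illustrative assumptions; only the relative shape of the two timings matters.

```cpp
// Illustrative microbenchmark (C++17): contended vs. mostly uncontended
// atomic increments. Absolute numbers vary wildly by CPU and core count.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int  kThreads = 4;
    constexpr long kIters   = 10'000'000;

    auto bench = [&](auto &&body) {
        auto t0 = std::chrono::steady_clock::now();
        std::vector<std::thread> pool;
        for (int t = 0; t < kThreads; ++t) pool.emplace_back(body, t);
        for (auto &th : pool) th.join();
        return std::chrono::duration<double>(
                   std::chrono::steady_clock::now() - t0).count();
    };

    // All threads hammer one atomic: classic contention, like a hot lock.
    std::atomic<long> shared{0};
    double contended = bench([&](int) {
        for (long i = 0; i < kIters; ++i)
            shared.fetch_add(1, std::memory_order_relaxed);
    });

    // Each thread owns its own cacheline-padded atomic: far cheaper.
    struct alignas(64) Padded { std::atomic<long> v{0}; };
    std::vector<Padded> local(kThreads);
    double uncontended = bench([&](int t) {
        for (long i = 0; i < kIters; ++i)
            local[t].v.fetch_add(1, std::memory_order_relaxed);
    });

    std::printf("contended:   %.3fs\nuncontended: %.3fs\n",
                contended, uncontended);
}
```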

    Originally posted by Sergey Podobry View Post
    The trick with allocation-free list insertion is to store list pointers along with the data that will be stored in the list. So you need to allocate the data and then you can insert/remove it from the list without additional allocations.
    Okay, so you're just redefining the operation so that allocation is considered a prerequisite rather than part of the actual insertion.
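
    Still, for readers who haven't seen it, a minimal sketch of the intrusive-list technique being described may help (the names Node, IntrusiveList and Job are mine, purely illustrative): the links live inside the object, so once the object exists, insertion and removal are plain pointer updates with no allocator involvement.

```cpp
// Minimal intrusive doubly-linked list (C++14). Once an object is allocated,
// linking and unlinking it never touch the allocator.
#include <cstdio>

struct Node {
    Node *prev = nullptr;
    Node *next = nullptr;
};

struct IntrusiveList {
    Node head;  // sentinel: an empty list points at itself
    IntrusiveList() { head.prev = head.next = &head; }

    // No allocation here: just pointer surgery on links that already exist.
    void push_back(Node &n) {
        n.prev = head.prev;
        n.next = &head;
        head.prev->next = &n;
        head.prev = &n;
    }
    static void remove(Node &n) {
        n.prev->next = n.next;
        n.next->prev = n.prev;
        n.prev = n.next = nullptr;
    }
};

// The payload carries its own links, as the quoted post describes.
struct Job {
    Node link;
    int  id;
};

int main() {
    IntrusiveList queue;
    Job a{{}, 1}, b{{}, 2};        // the only allocations are the objects themselves
    queue.push_back(a.link);
    queue.push_back(b.link);
    IntrusiveList::remove(a.link); // O(1), allocation-free
    std::printf("removed job %d\n", a.id);
}
```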



    • Originally posted by coder View Post
      there's nothing exceptional about asserting exclusive ownership of a cacheline
      Of course there is nothing exceptional. But it requires extra inter-CPU communication (and extra latency). And if other CPUs don't have the atomic variable in their caches, I'm not sure what they do. Probably they load it into the cache and then lock the cache line.

      Originally posted by coder View Post
      That's not at all bad, but I'm sure it's also highly dependent on a lot of factors.
      It's not good either. Otherwise all memory would be reference counted: just use std::shared_ptr for everything and forget about memory issues.



      • Originally posted by Sergey Podobry View Post
        Of course there is nothing exceptional. But it requires extra inter-CPU communication (and extra latency).
        You edited out the parts about writing to a cacheline. Literally every time an x86 CPU core writes to a new cacheline, it needs to assert exclusive ownership of it. That's exactly how unremarkable it is.

        I think other ISAs have more relaxed memory models which don't make such guarantees, but a well-known performance pitfall that at least applies to x86 and similar ISAs is called "false sharing", where two different cores fight each other for ownership of a cacheline by writing to variables at adjacent addresses.
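
        A minimal sketch of that pitfall (illustrative only; the 64-byte cacheline size and the timings are assumptions about a typical x86 part): two threads each write their own plain variable, first packed into one cacheline, then padded apart.

```cpp
// False-sharing demo. 'volatile' is only here so the optimizer doesn't
// delete the stores; the interesting part is the memory layout.
#include <chrono>
#include <cstdio>
#include <thread>

struct SameLine { volatile long a = 0; volatile long b = 0; };   // share one cacheline
struct Padded {
    alignas(64) volatile long a = 0;                             // separate cachelines
    alignas(64) volatile long b = 0;
};

template <class T>
double run() {
    T s;
    auto t0 = std::chrono::steady_clock::now();
    std::thread t1([&] { for (long i = 0; i < 50'000'000; ++i) s.a = i; });
    std::thread t2([&] { for (long i = 0; i < 50'000'000; ++i) s.b = i; });
    t1.join();
    t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::printf("same cacheline:     %.3fs\n", run<SameLine>());
    std::printf("padded to 64 bytes: %.3fs\n", run<Padded>());
}
```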



        Originally posted by Sergey Podobry View Post
        It's not good either. Otherwise all memory would be reference counted: just use std::shared_ptr for everything and forget about memory issues.
        Um, no. Ref-counting has a well-known deficiency of leaking cyclic references. Weak references are one way to break such cycles, but it takes a bit of thought & explicit intent to use them properly.

        std::shared_ptr<> is awesome, but you'd better know its limitations or you'll eventually get burnt. Another reason not to blindly use it everywhere is that an object's lifetime is sometimes a matter of correctness, in which case you'd like to know if it's owned exclusively or if it has shared ownership. If ownership is never shared, then using std::unique_ptr<> (or simply making it a direct member of the parent scope) clearly communicates that it's not.
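
        A minimal sketch of the cycle problem and the usual std::weak_ptr escape hatch (the Parent/Child names are mine, just for illustration):

```cpp
// With shared_ptr in both directions, Parent and Child would keep each
// other alive forever. The weak_ptr back-reference breaks the cycle.
#include <cstdio>
#include <memory>

struct Child;

struct Parent {
    std::shared_ptr<Child> child;   // owning reference downward
    ~Parent() { std::puts("~Parent"); }
};

struct Child {
    std::weak_ptr<Parent> parent;   // non-owning back-reference upward
    ~Child() { std::puts("~Child"); }
};

int main() {
    auto p = std::make_shared<Parent>();
    p->child = std::make_shared<Child>();
    p->child->parent = p;                        // no cycle of owning references

    if (auto locked = p->child->parent.lock())   // safe access while Parent lives
        std::puts("parent still alive");
}   // both destructors run here; with an owning cycle, neither would
```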



        • Of course I will not use any language from Google.
          I also don't like the Rust language. (Firefox itself is becoming 'outdated' compared to Vivaldi.)
          However, I have my own language, called Textfrog, and I'm quite happy with it.



          • Originally posted by coder View Post
            You edited out the parts about writing to a cacheline. Literally every time an x86 CPU core writes to a new cacheline, it needs to assert exclusive ownership of it. That's exactly how unremarkable it is.
            For non-atomic operations, writing to a cacheline is completely asynchronous. That's why they are fast.

            Originally posted by coder View Post
            Um, no. Ref-counting has a well-known deficiency of leaking cyclic references. Weak references are one way to break such cycles, but it takes a bit of thought & explicit intent to use them properly.

            std::shared_ptr<> is awesome, but you'd better know its limitations or you'll eventually get burnt. Another reason not to blindly use it everywhere is that an object's lifetime is sometimes a matter of correctness, in which case you'd like to know if it's owned exclusively or if it has shared ownership. If ownership is never shared, then using std::unique_ptr<> (or simply making it a direct member of the parent scope) clearly communicates that it's not.
            That's true. But it's still a valid option for garbage-collected languages, since they can deal with cycles. However, no modern GC relies on atomic reference counting.



            • Originally posted by neoe View Post
              I also don't like the Rust language. (Firefox itself is becoming 'outdated' compared to Vivaldi.)
              You don't like Rust because of Firefox? Explain how its obsolescence or whatever else you don't like about it is the fault of the language.



              • Originally posted by phoronix View Post
                Phoronix: Google Engineers Lift The Lid On Carbon - A Hopeful Successor To C++

                In addition to Dart, Golang, and being involved with other programming language initiatives over the years, their latest effort that was made public on Tuesday is Carbon. The Carbon programming language hopes to be the gradual successor to C++ and makes for an easy transition path moving forward...

                https://www.phoronix.com/scan.php?pa...ccessor-To-CPP
                I had chalked up developing Fuchsia, instead of using something like Genode/seL4, as their Linux successor on Android devices to it being a better/more optimal technical approach than what Genode/seL4 could provide... rather than pure hubris/NIH syndrome/desire for control and greed...

                Now I realize my assessment of Google was off: they have become legends in their own minds who would rather reinvent the wheel than contribute resources to develop/tweak Rust further to their needs, as well as the community's...

                I am quite confident there's a perfectly valid reason in some Google executive's mind for throwing resources at Carbon... and heck, given Google's size, this may in fact outlast similar efforts by Shuttleworth to sustain Mir... which, ironically, is now resting in Mir (peace), after Mark realized FOSS doesn't care what he thinks...

                So, I hope this is merely a pulse/temp check of the dev community by Google... and that Carbon rather quickly fossilizes into petrified sh... I mean, carbon... and that Google is healed of NIH and focuses on making Rust a premier player, a first-class citizen on its ChromeOS, Android and Fuchsia platforms, rather than reinventing Rust badly.

                Of course, Carbon may very well be the best thing since sliced bread... so I apologize in advance for my brainfart, if that's the case.

                G'day Bruce.



                • Originally posted by Sergey Podobry View Post
                  For non-atomic operations, writing to a cacheline is completely asynchronous. That's why they are fast.
                  But writing to a cacheline manages to ensure exclusivity without having to lock the CPU bus. And it also signals all CPUs in the system. Two things you indicated made atomics so bad.

                  There's something else we're glossing over: solutions like Intel's HLE (Hardware Lock Elision). Granted, they withdrew it for security reasons, but it's a way to implement atomics without blocking the core in the typical case. ARM also added atomic instructions in ARMv8.1, which can potentially run without blocking.

                  I'm not arguing that atomics can't be costly, but their cost has to be seen both in terms of how you use them and relative to the cost of memory allocation or garbage collection. If they're used judiciously, their overhead is typically a non-issue. Furthermore, the factors which make atomics more expensive will tend to cause you pain even if you aren't using them.
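
                  As a rough way to put that in perspective (an illustrative sketch, not a rigorous benchmark; results depend heavily on the allocator, CPU and contention), compare an uncontended relaxed atomic increment against a small heap allocation and free:

```cpp
// Uncontended atomic increment vs. small heap allocation, purely for scale.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <memory>

volatile long sink = 0;   // keeps the optimizer from discarding the work

template <class F>
double time_it(F &&f, long iters) {
    auto t0 = std::chrono::steady_clock::now();
    for (long i = 0; i < iters; ++i) f();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    constexpr long kIters = 10'000'000;
    std::atomic<long> counter{0};

    // One core, nobody else touching the cacheline: the cheap case.
    double atomics = time_it([&] {
        counter.fetch_add(1, std::memory_order_relaxed);
    }, kIters);

    // A small allocation, used once and freed immediately.
    double allocs = time_it([] {
        auto p = std::make_unique<long>(42);
        sink = *p;
    }, kIters);

    std::printf("atomic add:   %.3fs\n", atomics);
    std::printf("alloc + free: %.3fs\n", allocs);
}
```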



                  • Originally posted by coder View Post
                    You don't like Rust because of Firefox? Explain how its obsolescence or whatever else you don't like about it is the fault of the language.
                    It means: if Firefox is no longer that awesome, Rust could fade away without serving my needs.
                    BTW, I don't like Rust because it's verbose in grammar and in concept, which to me seems like 'much pay, less gain'. It cripples my programming experience, which is not acceptable, especially since I use the language not for a job but for my own innovation.



                    • Originally posted by kpedersen View Post
                      Carbon is a close superset of C++. This might actually have a chance of succeeding C++.

                      It is basically Rust with a C and C++ compiler bolted on. No need for creating / maintaining bindings or marshalling data via the FFI.
                      Question: why develop a brand-new language? Why not solve FFI between Rust and C++ at the ABI level, plus whatever mods/extensions to Rust's syntax would enable easier migration away from C++? GOOG would be in a unique position to tackle this via the ABI.

                      I would love to hear a technical explanation of why Rust-C++ FFI can't be solved at the ABI level almost entirely, if not entirely...
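
                      Part of the usual answer is that C++ itself doesn't expose a bindable ABI surface for templates, overloads, exceptions or standard-library types, so in practice the bridge ends up being a C-shaped shim. A hypothetical sketch (these names and functions are made up, not any existing binding):

```cpp
// A perfectly ordinary C++ API...
#include <cstddef>
#include <string>
#include <vector>

class Registry {
public:
    void add(std::string name) { names_.push_back(std::move(name)); }
    std::size_t size() const { return names_.size(); }
private:
    std::vector<std::string> names_;
};

// ...but name mangling, std::string's layout, templates and exceptions don't
// cross a language boundary, so what actually gets exported is a C-shaped
// wrapper, which Rust (or anything else) declares as `extern "C"` on its side.
extern "C" {
    Registry   *registry_new()                              { return new Registry(); }
    void        registry_free(Registry *r)                  { delete r; }
    void        registry_add(Registry *r, const char *name) { r->add(name); }
    std::size_t registry_size(const Registry *r)            { return r->size(); }
}
```

                      Whether that gap can be closed at the ABI level alone, without changes on either side, is exactly the open question here.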

