Google Engineers Lift The Lid On Carbon - A Hopeful Successor To C++


  • coder
    replied
    Originally posted by piotrj3 View Post
    Evolution of C into early C++? No.

    Evolution of C into C++20? I could agree.

    C++ was originally written like C with classes. Later you got a lot of stuff, and I would say that smart pointers were a truly giant improvement. But originally? No way.
    Templates + STL is a big deal. Constructors + destructors are a big deal. Exceptions are a big deal. IMO, these are the 3 killer features of C++. Everything else you really need is implementable atop them, as Boost showed us, and they were all in C++98.

    That said, I really like the stuff added in C++11, and some of what's followed. First-class support for lambdas (instead of Boost's template hack) is a breath of fresh air.

    Originally posted by piotrj3 View Post
    The issue is that it can't (or rather shouldn't) be used in the kernel, because if you impose all the sane C++ coding style guides for kernel mode, you almost have C.
    I strongly disagree.
    • Constructors help cut down on uninitialized variables & reduce the verbosity of explicit initialization. Destructors help cut down on leaks, dangling references, and dangling locks (see the lock-guard sketch below).
    • The kernel would benefit a lot from generic algorithms. I'm sure they hacked something in C for doing that, but using C++ templates would make it safer and more concise.
    • You can have smart string & buffer classes that minimize the chance for buffer overruns.
    • Namespaces are a nice way to get concise names without all the manual prefixes that C programmers normally have to use.
    • I'm on the fence about operator overloading, but they could put some rules & clear guidelines around it, to make sure it's used sanely.

    The only major feature I really wouldn't use in the kernel is exceptions.
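
    Returning to the constructor/destructor point: here's a minimal RAII sketch, assuming a stand-in spinlock (spinlock_t, spin_lock, and spin_unlock below are illustrative placeholders, not a real kernel API):

        #include <atomic>

        // Stand-in for a kernel spinlock; purely illustrative.
        struct spinlock_t {
            std::atomic<bool> locked{false};
        };
        inline void spin_lock(spinlock_t& l)   { while (l.locked.exchange(true, std::memory_order_acquire)) {} }
        inline void spin_unlock(spinlock_t& l) { l.locked.store(false, std::memory_order_release); }

        // The destructor releases the lock on every exit path (early returns
        // and error paths included), which is the "no dangling locks"
        // argument in a nutshell.
        class ScopedSpinLock {
        public:
            explicit ScopedSpinLock(spinlock_t& l) : lock_(l) { spin_lock(lock_); }
            ~ScopedSpinLock() { spin_unlock(lock_); }
            ScopedSpinLock(const ScopedSpinLock&) = delete;            // non-copyable
            ScopedSpinLock& operator=(const ScopedSpinLock&) = delete;
        private:
            spinlock_t& lock_;
        };

    In C, every early return has to remember the goto-unlock dance by hand; here the compiler does it for you.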

    Originally posted by piotrj3 View Post
    Function overloading? Due to name mangling, it sounds problematic.
    But you really want name mangling, to ensure type-safety. Even if you don't allow overloading, that's still a win, because it sanity-checks that the function signature a caller thinks it's using actually matches what it's linking against.
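
    A tiny illustration of that sanity check (the file split and the scale function are hypothetical):

        // a.cpp: the definition
        long scale(long x) { return x * 2; }

        // b.cpp: a caller with a mismatched declaration
        long scale(int x);               // wrong parameter type
        long use() { return scale(21); }

        // With C's un-mangled linkage, both sides would share the symbol
        // "scale" and the mismatch would go unnoticed until runtime. In C++
        // (Itanium ABI), the caller references _Z5scalei while the definition
        // exports _Z5scalel, so the link fails loudly instead.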

    Originally posted by piotrj3 View Post
    Another issue is that you can't quite use the STL or Boost in the kernel.
    Obviously. The kernel would have its own generic algorithms and data structures, which are customized to its needs, priorities, and constraints.

    It's all a moot point, but still kinda fun to contemplate.

  • coder
    replied
    Originally posted by arQon View Post
    has anyone with any nontrivial C experience, ever, NOT implemented fake OO/class support?
    Structs with function pointers are a common sight. These are often used in a manner equivalent to what you can do with C++ and single or multiple inheritance. In other cases, they're modified on-the-fly -- such as to model a state machine -- which I would do in C++ using a struct or class with std::function<> members (sketched below).
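
    A minimal sketch of that state-machine idea (all names invented):

        #include <functional>
        #include <iostream>

        // Instead of swapping raw C function pointers inside a struct,
        // swap std::function<> members on the fly.
        struct Machine {
            std::function<void(Machine&)> on_tick;
        };

        void running(Machine& m);
        void stopped(Machine& m) { std::cout << "stopped\n"; m.on_tick = running; }
        void running(Machine& m) { std::cout << "running\n"; m.on_tick = stopped; }

        int main() {
            Machine m{stopped};
            m.on_tick(m);   // prints "stopped", transitions to running
            m.on_tick(m);   // prints "running", transitions back
        }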

    The coolest thing I ever did with C was to use Boost's amazing preprocessor metaprogramming library to implement something like C++'s STL. It's more verbose, because you have to explicitly pass in a bundle of type information to every generic operation, but it scales in the same way (i.e. you can have nested containers and operate on them with generic algorithms).

    Originally posted by arQon View Post
    MI is something that in reality almost never solves more problems than it creates,
    Check out the "mix-in" pattern sometime. The 3 ways I use multiple inheritance are: mix-ins, interfaces (Java has these), and the diamond pattern (i.e. to separate common implementation from common interface). I'm not sure what you mean about MI creating problems. I've used all 3 for more than 15 years and never really had any problems with them (once I figured out how virtual inheritance interacts with constructors!).
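
    For the record, a bare-bones sketch of the mix-in + interface combination (class names are made up):

        #include <iostream>
        #include <string>

        struct Named {                        // mix-in: a small reusable capability
            std::string name;
            explicit Named(std::string n) : name(std::move(n)) {}
        };

        struct Printable {                    // interface, in the Java sense
            virtual void print(std::ostream& os) const = 0;
            virtual ~Printable() = default;
        };

        class Sensor : public Named, public Printable {
        public:
            explicit Sensor(std::string n) : Named(std::move(n)) {}
            void print(std::ostream& os) const override { os << "Sensor: " << name << '\n'; }
        };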

    Originally posted by arQon View Post
    a language that doesn't have it is not in any way inherently unsuitable for general purpose,
    The counter-argument to multiple inheritance you always see is that just about any MI hierarchy can conceivably be modeled as a single-inheritance hierarchy, but you incur a lot more complexity and fragility in doing so. It sounds counter-intuitive to argue that multiple inheritance leads to simpler code, but that has been exactly my experience (okay, maybe not in the diamond pattern case, but if you really wanted to maintain abstraction over the implementation, then you'd have to use single inheritance + some other mechanism, like pImpl).
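
    And a compact sketch of the diamond case (names invented), where virtual inheritance keeps a single copy of the common interface:

        struct Interface {
            virtual int value() const = 0;
            virtual ~Interface() = default;
        };

        // The common implementation inherits the interface *virtually*...
        struct CommonImpl : virtual Interface {
            int value() const override { return 42; }
        };

        // ...so a class that reaches Interface through a second path still
        // ends up with exactly one Interface subobject.
        struct Extra : virtual Interface {};
        struct Concrete : CommonImpl, Extra {};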

    Originally posted by arQon View Post
    IIRC it was driven entirely by the *potential* for people to do really stupid things (hello operator overloading).
    Obviously, operator overloading has caveats, but it does tend to cut down on the number of named temporaries you need, and ties in nicely with templates. The downsides can be managed with clear guidelines about what operators can be overloaded and which semantics they should have for which sorts of types.
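
    A toy example of the "fewer named temporaries" point (Vec3 is invented for illustration):

        struct Vec3 { double x{}, y{}, z{}; };

        inline Vec3 operator+(const Vec3& a, const Vec3& b) {
            return {a.x + b.x, a.y + b.y, a.z + b.z};
        }
        inline Vec3 operator*(double s, const Vec3& v) {
            return {s * v.x, s * v.y, s * v.z};
        }

        // Without overloading, this would be a chain of named temporaries:
        // t = mul(2.0, b); result = add(a, t);
        inline Vec3 blend(const Vec3& a, const Vec3& b) {
            return a + 2.0 * b;
        }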

    Originally posted by arQon View Post
    ... would have made adopting C++ for the kernel a massive win even if those were explicitly called out as the only C++ "extensions" permitted
    I think there's a lot to be said for constructors and destructors, especially when the RAII style is used. It definitely reduces potential for leaks and significantly improves code density, allowing you to focus more on the algorithmic aspects. I also can't believe there wouldn't be a lot of opportunities to benefit from templates, in the kernel.

    Originally posted by arQon View Post
    - but the impression I got was that the decision was based solely against the ability to write code that was not just even more illegible than bad C, but simply couldn't be trusted to actually be sane *in isolation*, i.e. unless you also had the header file open etc.
    A lot of thought would need to go into whether & how various aspects of the language are applied, and maybe the majority of senior kernel devs were uncomfortable with that undertaking, given their relative lack of experience, and worried about the consequences of getting it wrong.

    You make a very good point about code needing to support some baseline level of readability without either a header file or a smart IDE that can show you type definitions & function prototypes. This is one of my main reservations about the auto keyword, which I use more sparingly than many.

    Originally posted by arQon View Post
    I still think it was a poor decision, but after some of the clown code I've seen over the years I can certainly sympathize with the concern.
    You can write bad code in any language, but C++ definitely has more potential pitfalls than many. Good C++ style, which is mostly about negotiating that minefield, takes a long time to learn.

    It would be very instructive to investigate how Haiku has negotiated all of these issues. And they were hardly the first. I think BeOS was the first big OS project to use ISO-C++.

    I attended a talk by some of their devs, back in the late 90's, but I knew a lot less about C++ then and didn't get so much out of that part of the presentation. I actually had a copy of BeOS, and it ran amazingly well on my quad-Pentium Pro. I was so impressed that it could even mount my Linux ext2 filesystems and had a bash shell.

    Originally posted by arQon View Post
    The fact that drivers, more than almost any other code, will be highly dependent on using "unsafe" is also something that never seems to be mentioned,
    LOL, good point.

    Yeah, it'll be interesting to see how the Rust driver experiment shakes out. I haven't been following it too closely, but I wonder how many drivers are actually using it, so far.

  • piotrj3
    replied
    Originally posted by coder View Post
    I doubt that. C -> C++ is a vastly greater evolution than C++ -> Rust. This means one of two things: either we're converging on the "ideal" programming language (for mainstream uses), or that Rust is much more of an interim evolutionary step than anything wholly new. And for something to have staying power for the next 50 years, I think it needs to do a more comprehensive job of addressing the next 50 years' evolution in computing. I doubt Rust is up to that task, but that's just my opinion.


    I think you lean too heavily on this point. C++ has a lot of baggage: emotional, cultural, bad press, and historical issues with compiler support. It could have been used in the kernel, and to good effect. The reasons it wasn't weren't purely technical.

    Also, they're only enabling Rust for use in drivers. Maybe more, in the future, but even that much hasn't landed in a shipping kernel release.

    Oof. You sure won that argument! The best way to win over the doubters is obviously to call them insane.

    If you didn't notice, that post did have some claims of a technical nature that you conveniently ignored. Just because you'd rather talk about its features doesn't mean there aren't other potentially legitimate concerns.
    Depends.

    Evolution of C into early C++? No.

    Evolution of C into C++20? I could agree.

    C++ was originally written like C with classes. Later you got a lot of stuff, and I would say that smart pointers were a truly giant improvement. But originally? No way.

    The issue is that it can't (or rather shouldn't) be used in the kernel, because if you impose all the sane C++ coding style guides for kernel mode, you almost have C.

    Could you use class? Eh, not really, as the big existing codebase uses struct and it sounds sane to stick to that.

    Exceptions? Hahaha, no.

    Function overloading? Due to name mangling, it sounds problematic.

    Another issue is that you can't quite use the STL or Boost in the kernel.

  • coder
    replied
    Originally posted by sinepgib View Post
    Under whatever they're testing on, most likely some AWS instance due to the niche they cover.
    It sure would be interesting if Google used AWS instances...

  • coder
    replied
    Originally posted by arQon View Post
    That's an absolutely nonsensical statement. "Under 10ms" on *what*, exactly? A 5990X? A Raspberry Pi? A single-core 600MHz washing machine CPU? Under what sort of allocation load?
    I have no substantial experience with garbage collectors or languages that use them, but I think it's instructive to look at it as a proportion of execution time: a 10 ms pause occurring once per second is 1% of execution time. Since faster CPUs are faster both at allocating memory and freeing it, an efficient garbage collector could conceivably impose a similar amount of overhead on a fast CPU as on a slow one.
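
    Spelling out that arithmetic (the once-per-second collection frequency is just an assumption for the example):

        #include <iostream>

        int main() {
            const double pause_ms  = 10.0;    // fixed pause budget
            const double period_ms = 1000.0;  // assumed: one collection per second
            // The duty-cycle overhead is independent of CPU speed:
            std::cout << 100.0 * pause_ms / period_ms << "%\n";  // prints 1%
        }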

    Of course, it's unreasonable to assume you need the same amount of GC for all workloads. Something like a compiler should lean on GC very heavily, while deep learning is mostly dominated by raw computation. Hence the target of 10 ms rather than an absolute percentage, because certain programs might have to run it more frequently than others. The other consequence of a 10 ms target is simply that the GC must be partitioned into parcels of work small enough that a useful amount can always be completed in that window -- even on low-end CPUs.

    Perhaps a bigger variable than the CPU type is actually the amount of RAM a garbage collector has to manage. Here's where something like a big server app could be at more of a disadvantage than a small web app.
    Last edited by coder; 25 July 2022, 07:22 AM.

  • cynic
    replied
    Originally posted by coder View Post
    A few years after Go's introduction, I read an article about how one of Go's designers was marveling at its failure to grab the interest of the general C++ community. Replacing C++ was one of their explicit aims.
    I haven't been in the Go world long enough to remember those statements, so I might be wrong here.

    However, I've seen many of Rob Pike's talks about the reasoning behind the choices they made when designing Go, and there are absolutely no clues that it was trying to replace C++ as a general language: they based it on completely different premises.

    As far as I know, Go was meant to get rid of some C++, Java, and Python codebases inside Google (for different reasons for each language), but it was not intended as a general replacement for C++.

    Anyway, as I stated, I'm still learning the Go background and history, so, if you have interesting resources on the topic I'd like to learn!
  • cynic
    replied
    Originally posted by arQon View Post

    That's an absolutely nonsensical statement. "Under 10ms" on *what*, exactly? A 5990X? A Raspberry Pi? A single-core 600MHz washing machine CPU? Under what sort of allocation load?
    What? You can set and satisfy a time constraint regardless of the system you're running on.
    Some Java GCs do the same: G1GC has a 200 ms pause-time target by default, but you can modify it if you need to.

    Of course, there might be extreme load situations where that isn't sufficient, but your question is wrong anyway.
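
    For example, with HotSpot's G1 collector the pause goal is just a flag (200 ms is the default; the 50 ms below is an arbitrary choice, and MyApp is a placeholder):

        java -XX:+UseG1GC -XX:MaxGCPauseMillis=50 MyApp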

  • arQon
    replied
    Originally posted by coder View Post
    I doubt that. C -> C++ is a vastly greater evolution than C++ -> Rust. This means one of two things: either we're converging on the "ideal" programming language (for mainstream uses), or that Rust is much more of an interim evolutionary step than anything wholly new. And for something to have staying power for the next 50 years, I think it needs to do a more comprehensive job of addressing the next 50 years' evolution in computing. I doubt Rust is up to that task, but that's just my opinion.
    Rust certainly doesn't "feel" up to it, but a lot of that is for the same reasons that C "isn't" - that is, that you have to manually implement a lot of "stuff" that should ideally be part of the language, with all the guarantees and checks that come from it being so. IOW: has anyone with any nontrivial C experience, ever, NOT implemented fake OO/class support?
    MI is something that in reality almost never solves more problems than it creates, and a language that doesn't have it is not in any way inherently unsuitable for general purpose, but "simple" inheritance DOES map to an absolutely enormous set of real-world problems, and Rust's workaround for not supporting that is as clumsy as "switch(p->type) {" is.
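
    The contrast in question, sketched with the stock shapes example (nothing here is from Rust or the kernel):

        // Dispatch by hand, C-style: every new kind means editing this switch.
        enum class Kind { Circle, Square };
        struct ShapeC { Kind type; double dim; };

        double area_c(const ShapeC& s) {
            switch (s.type) {
                case Kind::Circle: return 3.14159265 * s.dim * s.dim;
                case Kind::Square: return s.dim * s.dim;
            }
            return 0.0;
        }

        // "Simple" inheritance: the dispatch lives with the type itself.
        struct Shape { virtual double area() const = 0; virtual ~Shape() = default; };
        struct Circle : Shape {
            double r;
            explicit Circle(double r_) : r(r_) {}
            double area() const override { return 3.14159265 * r * r; }
        };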

    > C++ ... could have been used in the kernel, and to good effect. The reasons it wasn't weren't purely technical.

    I haven't actually seen *any* technical reason claimed for it - or if I did, I've forgotten that part. IIRC it was driven entirely by the *potential* for people to do really stupid things (hello operator overloading). That's not to say there weren't some small technical differences at the time: const, for example, is still fundamentally broken in C with limitations that don't exist in C++, and that and proper locality for variables *alone* would have made adopting C++ for the kernel a massive win even if those were explicitly called out as the only C++ "extensions" permitted - but the impression I got was that the decision was based solely against the ability to write code that was not just even more illegible than bad C, but simply couldn't be trusted to actually be sane *in isolation*, i.e. unless you also had the header file open etc.
    I still think it was a poor decision, but after some of the clown code I've seen over the years I can certainly sympathize with the concern.
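
    To make the const point above concrete (kSize is a made-up name):

        const int kSize = 16;   // in C++, a genuine constant expression
        int table[kSize];       // fine in C++; rejected at file scope in C,
                                // where a const object is not a constant expression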

    > Also, they're only enabling Rust for use in drivers. Maybe more, in the future, but even that much hasn't landed in a shipping kernel release.

    Yeah, don't remind the fanboys that this isn't the first time this exact scenario has played out. :P

    I do think Rust is very likely to make it in someday, since the Rust ecosystem fully understands that if they don't hitch their wagon to a star of some kind Rust will simply be the 37th fad language to vanish into obscurity. The fact that drivers, more than almost any other code, will be highly dependent on using "unsafe" is also something that never seems to be mentioned, but I'm sure it'll all work out in the end. Even only having the trivial parts of the code be subject to some sort of additional static analysis does still have *some* value, regardless of how far it is from being the silver bullet that non-developers imagine it to be.

  • sinepgib
    replied
    Originally posted by arQon View Post
    That's an absolutely nonsensical statement. "Under 10ms" on *what*, exactly? A 5990X? A Raspberry Pi? A single-core 600MHz washing machine CPU? Under what sort of allocation load?
    Under whatever they're testing on, most likely some AWS instance due to the niche they cover.

  • arQon
    replied
    Originally posted by sinepgib View Post
    I thought the Go devs made it a point to keep stop-the-world pauses under 10ms.
    That's an absolutely nonsensical statement. "Under 10ms" on *what*, exactly? A 5990X? A Raspberry Pi? A single-core 600MHz washing machine CPU? Under what sort of allocation load?
