
Google Engineers Lift The Lid On Carbon - A Hopeful Successor To C++


  • Originally posted by kpedersen View Post
    Carbon is a close superset of C++. This might actually have a chance of succeeding C++.

    It is basically Rust with a C and C++ compiler bolted on. No need for creating / maintaining bindings or marshalling data via the FFI.
    Question: why develop a brand-new language? Why not solve FFI between Rust and C++ at the ABI level, plus whatever extensions to Rust's syntax would enable easier migration away from C++? GOOG would be in a unique position to tackle this via the ABI.

    I would love to hear a technical explanation of why Rust-C++ FFI can't be solved at the ABI level almost entirely, if not entirely...



    • Originally posted by Sergey Podobry View Post
      Oh, it seems the correct term is "supply and demand".
      Okay - in that case, this idea that "Salary depends on supply and demand in the market" is actually fairly wrong. (Sorry: there are going to be some terms in this that are likely to be awkward for you as a non-native speaker. I'll try to keep them to a minimum).

      The "supply and demand" principle is sort-of mostly true for simple commodities - food, for example - and it's pretty much where we, as children, get our first piece of understanding about economics. It becomes a lot less true as the *fungibility* of the commodity declines. To try and put that more concretely:

      Say you have a bushel of wheat. That wheat is fundamentally *completely interchangeable* with any other bushel of wheat. Likewise for a pound of gold, or a gallon of oil, etc etc. You can get it from place A, or place B on the other side of the world, and for all practical purposes it doesn't matter at all: at the end of the day, it's all wheat.

      Not all commodities work like that though. A bottle of this year's wine from Chile, aged in a plastic drum for 10 days, costs $3. A bottle of wine from Chateau Pretentious 1949 though might cost $50,000 (even though it will almost certainly be even less drinkable).

      People, depending on their role etc, can also span a similar spectrum. Staffing a McJob is very much like the wheat case: the company cares very little about *who* fills that job, only that they have *somebody* to do the work. The same mentality is what drives Amazon to consider its warehouse staff etc to be utterly disposable, and so on.
      Now take a look at what Amazon pays its VPs, whose "work" is arguably just as mentally demanding, and infinitely less so physically. (Or, if you prefer, a politician - a job which can be done by someone who didn't even manage to graduate high school).

      The "supply" of people capable of being an Amazon VP is nearly infinite. The demand for people to fill that job is extremely small. By your reasoning, there's an argument it should pay less than being a warehouse worker.
      Take a look at nursing in almost every country in the world: the demand is enormous, the supply is tiny, and yet nursing salaries are nearly always terrible.

      Say a consultant gets paid $200K for six months work to turn a failing project around. It ships on time, and generates $1B in revenue in its first year. Was the consultant overpaid, underpaid, both, or neither? How about if the company had hired 5 entry-level developers instead, none of whom had the specific knowledge that saved the project?

      Developers are not bushels of wheat, and aren't thought of as such by even the lowest-tier outsourcers etc. (Despite how often they're *treated* as if they were once they're in a job). It's only at the very lowest levels of development that you can think of them as being fungible. The "supply" isn't "people as raw body count", it's "people who can get *this job* done".

      How that relates to the "my language is better than yours" garbage earlier in this thread is... well, I can't answer that, because there wasn't anything like enough relevant information in that post, unsurprisingly. But, for example:
      * If you're hiring for a Rust project, it's pretty much guaranteed that the project is new, comparatively small, and comparatively simple. You're also hiring for a language that nobody has much experience with. All of those are good reasons to, as a hiring manager, be a *lot* more willing to take on junior staff for it and train them up.
      * Rust is "safe", so why pay for the sort of expertise required to get a C/C++ developer whose code isn't going to bring down production because they missed a race or overwrote 30MB of already-freed memory?
      * Rust is the current fad, so I can probably get good developers at a discount.
      * Even a mediocre C/C++ dev can become proficient with Rust in a few weeks at most. How many *years* do you think it takes a Java/Go/Rust/web developer to become equally capable going in the other direction?

      And that's just off the top of my head. In short, things are a lot more complicated than a one-sentence basic principle from 6th grade; and certainly far more complicated than a random list of languages with just a single number next to them that was pulled from thin air with no context.

      Some advice for you that will have value for your entire life: any time someone tries to offer you a massively-simplified "answer" to a complex question, you should be very wary of it. There are only two possible reasons for them doing so: either they don't understand it in the first place, or they are trying to deceive you. Either way, it would be a mistake to trust them. gl.



      • Originally posted by arQon View Post

        That comment is so hopelessly wrong that it took me several minutes to even understand what (I *think*) you were actually trying to say, but even if so it's still wrong, just in a more normal way.
        It doesn't help that you changed the context from what I was actually replying to, so maybe reading the post would have helped.

        > Of course there might be extreme load situation where it is not sufficient, but your question is wrong anyway.

        Two quick points: first, you mean you don't have the experience to understand the question; and second, it was rather obviously rhetorical.

        Yes, "load" - not "extreme load" or other attempts to hedge - is one of the reasons why "10ms" obviously isn't, and can't be, guaranteed, or even close to it.

        Anyway: I'm going to interpret your statement as attempting to say that what you think is that "When Go runs GC, it tries to do so in slices of not more than 10ms", right? If so, there may be some point in continuing with this. If that wasn't it, then let's get out of here before wasting any more time on this.
        Man, first of all, stay cool.

        10ms probably was a typo (it was supposed to be 100ms).

        Secondly, I've been fighting with the Java GC for the last 20 years, so I think I know a thing or two on the matter.
        But don't worry: I'm not going to waste your precious time on this, since all that you wrote was arrogant insults with 0 technical content.

        Have a nice day.



        • Originally posted by arQon View Post
          Okay - in that case, this idea that "Salary depends on supply and demand in the market" is actually fairly wrong.
          No, it's a pure market from the lowest to the top-level positions. Right now we're short of software developers (and especially good ones); that's why their salaries are high. If (or when) there are more developers than available jobs, salaries will go down.

          So if there are 5 Rust jobs and 100 suitable candidates you (as a Rust developer) are in a bad situation. Rust may be a super cool language. But the industry is not ready for it.



          • Originally posted by arQon View Post
            * Even a mediocre C/C++ dev can become proficient with Rust in a few weeks at most...
            First off, don't put C and C++ in the same boat. C is easier than Python. It just takes a lot of math background and formal education in algorithms and data structures to get comfortable with manual memory management. And this is backed by Intel's own stats on how long it takes them to bring their fresh recruits to a proficient level when (cherry) picked off uni.

            Having said that, consider the following:

            1. Even a mediocre English literate can become proficient in *phonetically consistent language script* in a few weeks at most. How many *years* do you think it takes a *phonetically consistent language* literate to become equally capable going in the other direction?

            2. Even a mediocre hanzi literate can become proficient in latin script in a few weeks at most. How many *years* do you think it takes a latin script literate to become equally capable going in the other direction?

            That is, there are plenty of examples of really shitty languages that dominate not because of their merits, or even despite their faults, but actually BECAUSE of their faults. That is, when you raise the barrier of entry for compiler developers and make it hard for devs to simply throw away 5+ years of their professional careers when making future product choices, not only is having a shitty, broken language not a problem, it's the very feature that lets you vendor-lock.



            • Originally posted by cynic View Post
              10ms probably was a typo (it was supposed to be 100ms).
              I made the claim, it was not a typo: https://go.dev/blog/ismmkeynote
              The math is completely unforgiving on this.

              A 99%ile isolated GC latency service level objective (SLO), such as 99% of the time a GC cycle takes < 10ms, just simply doesn’t scale. What matters is latency during an entire session or through the course of using an app many times in a day. Assume a session that browses several web pages ends up making 100 server requests during a session or it makes 20 requests and you have 5 sessions packed up during the day. In that situation only 37% of users will have a consistent sub 10ms experience across the entire session.

              If you want 99% of those users to have a sub 10ms experience, as we are suggesting, the math says you really need to target 4 9s or the 99.99%ile.

              So it’s 2014 and Jeff Dean had just come out with his paper called ‘The Tail at Scale’, which digs into this further. It was being widely read around Google since it had serious ramifications for Google going forward and trying to scale at Google scale.

              We call this problem the tyranny of the 9s.
              Originally posted by cynic View Post
              Secondly, I've been fighting with the Java GC for the last 20 years, so I think I know a thing or two on the matter.
              But don't worry: I'm not going to waste your precious time on this, since all that you wrote was arrogant insults with 0 technical content.

              Have a nice day.
              The complaint I've seen from some Java programmers is that they consider a GC "bad" if it has few knobs and sacrifices throughput so heavily for latency.
              In reality it's just a matter of use case. Real time requires low latency, period; that's the top priority. Go is focused on soft real time nowadays.
              Besides, knobs don't usually scale: tuning them correctly takes someone with a really deep understanding of how the GC does its job, such people are rare, and the settings often don't translate when you switch machines. Keeping it simple forces reasonable defaults on you, at the expense of flexibility.



              • Originally posted by Sergey Podobry View Post
                No, it's pure market from the lowest to top level positions. Now we're short of software developers (and especially good ones) that's why their salaries are high. If (or when) there will be more developers than available jobs the salaries go down.
                Or we realize we don't need as many developers. The financial bubble is bursting, and IT will be one of the most affected areas because it's one of the most overly pumped-up ones. We're already seeing that with hiring freezes and mass layoffs at the bigger companies; everything else that doesn't have a clear use will follow. That's the age-old compromise between "boring", lower-paying but stable jobs in the real economy and "edgy", high-paying, ambitious jobs with the potential to make you very rich; those also have the potential of not delivering on the high ambitions and leaving you on the street. In a recession, most companies take a conservative approach and stop pursuing the bold-but-not-yet-profitable ventures, which are a big part of IT right now. Salaries will go down soon, even before the market gets flooded with new programmers.



                • Originally posted by c117152 View Post
                  First off, don't put C and C++ in the same boat. C is easier than Python. It just takes a lot of math background and formal education in algorithms and data structures to get comfortable with manual memory management.
                  Those two claims pretty much contradict each other. If you need a lot of math background and formal education, then it's not easier. It's like claiming advanced calculus is actually easier than basic arithmetic because you just need a strong math background, while arithmetic builds on nothing, which makes it harder to exploit those foundations. And yet everyone else will easily understand it's the other way around, which is why most people "get" basic arithmetic but can't do multivariate calculus off the top of their head.
                  Besides, the problem with C is seldom manual memory management. That's a big pain point, and something you can screw up for your whole career, but the real issue is that it's hard to keep all the undefined-behavior footguns in mind, all the time. In that sense, C and C++ are more or less equally bad, although at least some coding guidelines make it easier to avoid the dangerous parts in C++. Heck, it's hard enough to make someone understand what UB even means.
                  Being able to write code that seemingly works is not the same as knowing the language well enough.

                  Originally posted by c117152 View Post
                  1. Even a mediocre English literate can become proficient in *phonetically consistent language script* in a few weeks at most. How many *years* do you think it takes a *phonetically consistent language* literate to become equally capable going in the other direction?
                  That is irrelevant. The point of replacing a language is that you don't need to cover the other way around. If Rust actually becomes mainstream, you simply won't need as many C programmers, so it doesn't matter whether your Rust programmer can gain proficiency in C. But now the industry has "good enough" programmers at cheaper prices.



                  • Originally posted by coder View Post
                    I have no substantial experience with garbage collectors or languages that use them, but (snip)
                    There are a few families of GC, and they're pretty much all worth a read if you have time. You want to do so chronologically, so you can see how they evolved.
                    Early ones operate pretty much the way you're imagining; later ones are, unsurprisingly, better both in general and with edge cases. Fundamentally though, the problems with them remain unsolved (obviously, or we'd all be using them :P), and they simply aren't functional for certain work.

                    > I think it's instructive to look at it as a proportion of execution time. That would mean if it runs once per second, that it takes 1% of execution time. Since faster CPUs are faster both at allocating memory and freeing it, an efficient garbage collector could conceivably impose a similar amount of overhead on a fast CPU as a slow one.

                    Not really, no. "Could conceivably", as you say, and you can certainly create scenarios where it does, but you're missing a critical factor in that train of thought: when the GC completes affects your footprint *and* your performance.

                    > Hence, the target of 10 ms rather than an absolute percentage, because certain programs might have to run it more frequently than others.

                    Not sure if what you're suggesting there is really what you meant to say. Regardless, yes: per the parable, there's a big difference between one boulder and many pebbles.

                    > The other consequence of a 10 ms target is simply that the GC be partitioned into parcels of work small enough that you can always complete useful amounts of work in that amount of time -- even on low-end CPUs.

                    Not really, which is where I was heading earlier. You can work out why pretty easily (the more recent GC designs *are* a lot better on that front). The guarantees also tend to get a bit unreliable pretty quickly unless the machine is idle, for obvious reasons, but the real problems start when the collection timeslice isn't enough for the GC to complete. It's *really* easy to have something appear to work fine even under "typical" load, but either balloon or stall under heavy load, just like every other GC ever. That leads to either significant extra cost from over-provisioning and/or having to expand the fleet, or a system that just "doesn't work" if you don't have that option (i.e. embedded).
                    Depending on what the rest of the system is doing and what's waiting on what, you can end up breaking things just because the performance characteristics of a *non-Go* piece of the system have changed, because of additional throughput or buffering, depending on the direction of the change. (Yes, that's a curiously specific example - ask me how I know... :P)

                    > Perhaps a bigger variable than the CPU type is actually the amount of RAM a garbage collector has to manage.

                    Less "perhaps" and more "sometimes", really. It's also generally less about the total memory use and more about *how* that memory is used, c.f. the parable again. 120MB of scattered small allocs needs a lot more management than a single 200MB chunk. The allocators tend to be pretty good about drawing from pools, though, so the smaller case at least doesn't usually also claim more pages than the larger one, although you can still be "unlucky" depending on what else is going on.
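                    The scattered-vs-chunk difference is easy to demonstrate in Go: a pointer-free 200MB block has nothing for the collector to trace, while ~120MB of small pointer-bearing nodes must be followed object by object. A rough sketch (the printed timings are illustrative only, not a rigorous benchmark):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// gcTime forces a full collection and reports how long it took.
func gcTime() time.Duration {
	start := time.Now()
	runtime.GC()
	return time.Since(start)
}

func main() {
	// One big chunk: a single pointer-free 200MB allocation.
	// The GC doesn't need to scan pointer-free memory at all.
	big := make([]byte, 200<<20)
	fmt.Println("one 200MB chunk, forced GC took:", gcTime())
	runtime.KeepAlive(big)

	// Scattered case: ~120MB as 1M small linked nodes (120 bytes
	// each), which the collector must actually trace.
	type node struct {
		next *node
		pad  [112]byte
	}
	var head *node
	for i := 0; i < 1_000_000; i++ {
		head = &node{next: head}
	}
	fmt.Println("~120MB of small linked nodes, forced GC took:", gcTime())
	runtime.KeepAlive(head)
}
```

                    On most machines the second collection is dramatically slower than the first, despite managing *less* total memory.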



                    • Originally posted by sinepgib View Post
                      That is irrelevant. The point of replacing a language is that you don't need to cover the other way around. If Rust actually becomes mainstream, you simply won't need as many C programmers, so it doesn't matter whether your Rust programmer can gain proficiency in C. But now the industry has "good enough" programmers at cheaper prices.
                      Right on all counts, but the point is that those "unneeded" C programmers can easily be turned into Rust ones, but the opposite is not true. If you have a pool of 20 "average" C devs and 10 "average" Rust devs, where neither have any experience with the other language, you effectively have 25+ Rust devs a month from now if you want them. OTOH, if your pool is 10 C devs and 20 Rust devs under the same conditions, you effectively still only have 10 C devs a month from now, maybe a dozen or so if you were really lucky.

                      This is a hugely important difference, because the majority of any given developer's value to you isn't their technical skill, it's their understanding of the company's systems and their business. Being able to keep all of that knowledge makes any sort of transition far more likely to succeed. For a lot of scenarios, Rust's "safety" has a lot more value in terms of development time than it does in any runtime aspects.

