Go 1.8 Baking Garbage Collector Improvements, Lower Cgo Overhead


  • Go 1.8 Baking Garbage Collector Improvements, Lower Cgo Overhead


    The first release candidate of Google's Go 1.8 programming language is now available ahead of the official launch expected next month...

    http://www.phoronix.com/scan.php?pag...&px=Go-1.8-RC1

  • brianmanee
    replied
    Originally posted by Palu Macil View Post

    I read that article a couple of weeks ago and found it pretty sloppy. It's obviously written by someone with a lot of experience, but it focuses on the initial release of the Go GC when there have been major updates in the year and a half since. Why didn't it talk about Go's current GC instead? Another odd thing: it claims Java has a GC with similar latency, then cites a Java low-latency GC option whose pauses are an order of magnitude longer than Go's. It also doesn't mention that Go has better escape analysis, which makes the GC unnecessary much of the time where it would be needed in Java. Overall, the author seemed very smart, but I couldn't help wondering whether someone with so much experience, yet such large errors in their writing, was blinded by an affection for Java and picked facts in a way that could be misleading.
    • A zero-pause (or almost zero-pause) collection regime significantly increases overheads for normal program execution. This is particularly true with multithreaded languages like Java.
    • Schemes that use a dedicated thread or threads to do garbage collection can get swamped if the application generates too much garbage.
    • Any GC scheme will give you poor performance if the application's memory usage patterns are too "lumpy" and/or you don't have enough physical and virtual memory.

    Brian

    How garbage collection works in Java: the garbage collector is a process that runs on the JVM and reclaims objects that the Java application no longer references.



  • andreano
    replied
    Tradeoffs: if the goal is to suck less, then as a user I know for sure what I would appreciate: a GC that runs often enough that memory usage goes up and down like a normal non-GC program's!

    With Java programs, I get the impression that they take more and more memory until the maximum heap size is consumed (half your RAM by default), then run the GC just enough to stay at that level, and never release anything back to the OS. That means if you have 128 GB of RAM, Java will use 64; 63 of them will be swapped out … oh, that's right, you can't have swap with Java, or the world will stop for real. But what really makes me avoid Java like the plague is that I, the user/sysadmin, have to tweak the maximum heap size depending on the workload – which varies.

    The Java GC is a leaky abstraction, but users/sysadmins hate to be bothered by it!

    If the Go GC doesn't give a damn about anything other than latency, and that can be leveraged to run it more often, that's music to my ears.
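
    For the record, Go does expose knobs for exactly this. A minimal sketch using the real `runtime/debug` APIs – the 50% figure is an arbitrary illustration, not a recommendation:

    ```go
    package main

    import (
    	"fmt"
    	"runtime"
    	"runtime/debug"
    )

    func main() {
    	// GOGC controls how far the heap may grow between collections:
    	// 100 means "collect when the live heap has doubled". Lowering it
    	// makes the GC run more often, trading CPU for a flatter heap curve.
    	debug.SetGCPercent(50)

    	// Allocate some short-lived garbage...
    	for i := 0; i < 1000; i++ {
    		_ = make([]byte, 1<<16)
    	}

    	// ...then ask the runtime to run a GC and return freed pages to
    	// the OS now, instead of holding them for later reuse.
    	debug.FreeOSMemory()

    	var ms runtime.MemStats
    	runtime.ReadMemStats(&ms)
    	fmt.Println("ran at least one GC cycle:", ms.NumGC >= 1)
    }
    ```

    The same knob is available without a rebuild via the GOGC environment variable, which is exactly the kind of thing a sysadmin can tune per workload.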
    Last edited by andreano; 01-13-2017, 03:10 PM.



  • Zan Lynx
    replied
    There's this from C++17: http://en.cppreference.com/w/cpp/str...ic_string_view



  • andreano
    replied
    Originally posted by Zan Lynx View Post
    instead of making a copy for each piece
    Ah, slices! They make sense in non-GC (Rust) and interpreted (Python) languages too. They avoid both the allocation and the copying. It's a shame C++ lacks a standard concept for this – passing pointer+length is more inconvenient and error-prone than std::string, and I don't know how many times I have reimplemented a Slice class in C++.
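
    To make the "no allocation, no copy" point concrete, here's a minimal Go sketch (the log line is a made-up example):

    ```go
    package main

    import (
    	"bytes"
    	"fmt"
    )

    func main() {
    	// One backing buffer; every "field" below is a sub-slice that
    	// shares its storage.
    	line := []byte("2017-01-13 kernel: oom-killer invoked")
    	parts := bytes.SplitN(line, []byte(" "), 3)
    	date, src, msg := parts[0], parts[1], parts[2]
    	fmt.Printf("%s | %s | %s\n", date, src, msg)

    	// bytes.SplitN copied nothing: the sub-slices point into `line`.
    	fmt.Println("shares storage:", &line[0] == &date[0])
    }
    ```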
    Last edited by andreano; 01-13-2017, 02:39 PM.



  • Zan Lynx
    replied
    The particular application I was thinking of was similar to this example: read large chunks of bytes, say 64K, from a file or network – log entries, for example. Now, instead of writing functions which -- wait for it -- create a copy of each line, and then more functions which chop that up into timestamp, system, program, PID, etc., making even more copies of each piece, you use Go byte slices directly into the read buffer. And when you're done with it, you put that chunk buffer's pointer into a buffered channel of chunk pointers, where it gets pulled out again and used to read more bytes, instead of waiting for GC and being reallocated while causing all kinds of memory fragmentation.

    As always, test for yourself, profile, etc.
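
    A minimal sketch of the buffered-channel recycling described above – the "read" is faked with a `copy`, standing in for something like `conn.Read(buf)`:

    ```go
    package main

    import "fmt"

    const chunkSize = 64 * 1024 // 64K chunks, as in the post above

    func main() {
    	// A buffered channel acts as a free list: a reader takes a chunk,
    	// fills it, hands out sub-slices, and when everyone is done the
    	// chunk goes back on the channel instead of becoming garbage.
    	free := make(chan []byte, 4)
    	for i := 0; i < cap(free); i++ {
    		free <- make([]byte, chunkSize)
    	}

    	for round := 0; round < 3; round++ {
    		buf := <-free // reuse an existing chunk
    		n := copy(buf, "stand-in for conn.Read(buf)")
    		fmt.Println(string(buf[:n])) // slice into the buffer, no copy
    		free <- buf                  // recycle it for the next read
    	}
    }
    ```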



  • patstew
    replied
    Originally posted by Zan Lynx View Post

    That isn't quite what I meant. Read better.

    Object recycling and avoiding copies is not anything like reimplementing malloc.
    What do you mean by "Use large byte buffers and slices of those buffers instead of copies. Recycle instead of releasing them." then? It sounds like you're reusing chunks of buffers, which means you need to know when you start and stop using a particular piece of the buffer, which means you're basically doing manual memory management in a garbage collected language.

    I don't mean to pick on you, and I'm not disagreeing that memory reuse is a good idea. My beef is more that you need to understand what's being allocated, and when, if you care at all about performance; a GC just gets in the way of knowing what's happening, and gives your program random latency spikes and excessive memory usage as a bonus.
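
    Worth noting that the "recycle instead of release" pattern being argued about has a stdlib-sanctioned form in Go, `sync.Pool`, which is less manual than it may sound – a minimal sketch:

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    )

    func main() {
    	// sync.Pool: Get may return a previously Put buffer, or fall back
    	// to New. The GC remains free to empty the pool between cycles,
    	// so correctness never depends on reuse -- only the allocation
    	// rate does. That's the difference from hand-rolled malloc.
    	pool := sync.Pool{
    		New: func() interface{} { return make([]byte, 0, 4096) },
    	}

    	buf := pool.Get().([]byte)
    	buf = append(buf, "request payload"...) // fills the reused buffer
    	fmt.Println("len:", len(buf), "cap:", cap(buf))
    	pool.Put(buf[:0]) // reset length before handing it back
    }
    ```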
    Last edited by patstew; 01-12-2017, 09:14 PM.



  • Zan Lynx
    replied
    Originally posted by patstew View Post

    Seriously? Use a GC to fix your problems using malloc, then make your own implementation of malloc on top because GCs are slow. That's completely ludicrous.
    That isn't quite what I meant. Read better.

    Object recycling and avoiding copies is not anything like reimplementing malloc.



  • bug77
    replied
    Originally posted by Zan Lynx View Post

    You might think so, but I find there's quite a lot of pressure to control memory usage.

    It makes all the difference in how many containers can be run on each host. Using Kubernetes, I can define memory limits. Say that a micro-service is known to stay under 64 MB. I can set its memory limit to 64 MB and now I can run hundreds of copies on one node. Or I can build it differently and make a single gigantic THING that tries to serve hundreds of clients per second using 2 GB and 200 threads. The second choice is a lot more fragile though, and harder to scale across a bunch of cheap, temporary EC2 nodes.
    True, but the memory model has relatively little to do with overall memory consumption. There are exceptions (like Java's string interning), but at the end of the day it comes down to how the program uses, stores, and caches its data.



  • patstew
    replied
    Originally posted by Zan Lynx View Post
    If I remember correctly, the other useful thing the Go GC does is that when it needs extra time for GC, it takes it from the threads that are doing allocation. This works out great because those are the same threads creating more garbage to collect later, so it assigns the cost to the right place.

    Like every GC language, the ultimate way to speed up the program is to not create any garbage memory. Use large byte buffers and slices of those buffers instead of copies. Recycle instead of releasing them. That technique alone can increase program speed dramatically and it is a lot easier to do in Go than it is in Java.
    Seriously? Use a GC to fix your problems using malloc, then make your own implementation of malloc on top because GCs are slow. That's completely ludicrous.

