Go 1.8 Baking Garbage Collector Improvements, Lower Cgo Overhead
-
Originally posted by Zan Lynx
You might think so, but I find there's quite a lot of pressure to control memory usage.
It makes all the difference in how many containers can be run on each host. Using Kubernetes, I can define memory limits. Say that a micro-service is known to stay under 64 MB. I can set its memory limit to 64 MB and now I can run hundreds of copies on one node. Or I can build it differently and make a single gigantic THING that tries to serve hundreds of clients per second using 2 GB and 200 threads. The second choice is a lot more fragile though, and harder to scale across a bunch of cheap, temporary EC2 nodes.
-
Originally posted by patstew
Seriously? Use a GC to fix the problems you had with malloc, then build your own implementation of malloc on top of it because the GC is slow. That's completely ludicrous.
Object recycling and avoiding copies is not anything like reimplementing malloc.
-
Originally posted by Zan Lynx
That isn't quite what I meant. Read better.
Object recycling and avoiding copies is not anything like reimplementing malloc.
I don't mean to pick on you, and I'm not disagreeing that memory reuse is a good idea. My beef is more that you need to understand what's being allocated, and when, if you care at all about performance; a GC just gets in the way of knowing what's happening, and gives your program random latency spikes and excessive memory usage as a bonus.
Last edited by patstew; 12 January 2017, 09:14 PM.
-
The particular application I was thinking of was similar to this example: read large chunks of bytes, say 64K, from a file or network -- log entries, for example. Now, instead of writing functions which -- wait for it -- create a copy of each line, and then more functions which chop that up into timestamp, system, program, PID, etc., making even more copies of each piece, you use Go byte slices that point directly into the read buffer. And when you're done with a chunk, you put its pointer into a buffered channel of chunk pointers, where it gets pulled out again and reused for the next read, instead of waiting for the GC and being reallocated while causing all kinds of memory fragmentation.
As always, test for yourself, profile, etc.
-
Originally posted by Zan Lynx
instead of making a copy for each piece
Last edited by andreano; 13 January 2017, 02:39 PM.
-
Tradeoffs: If the goal is to suck less, then as a user, I know for sure what I would appreciate: A GC that runs often enough that the memory usage goes up and down like a normal non-GC program!
With Java programs, I get the impression that they take more and more memory until the max heap size is consumed (half your RAM by default), then run the GC to keep it at that level, and never release anything back to the OS. That means if you have 128 GB of RAM, Java will use 64; 63 of them will be swapped out … oh, that's right, you can't have swap with Java, or the world will stop for real. But what really makes me avoid Java like the plague is that I, the user/sysadmin, have to tweak the maximum heap size depending on the workload – which varies.
The Java GC is a leaky abstraction, but users/sysadmins hate to be bothered by it!
If the Go GC doesn't give a damn about anything other than latency, and that can be leveraged to run it more often, that's music to my ears.
Last edited by andreano; 13 January 2017, 03:10 PM.
-
Originally posted by Palu Macil
I read that article a couple of weeks ago and found it pretty sloppy. It's obviously written by someone with a lot of experience, but it focuses on the initial release of the Go GC, when there have been major updates in the year and a half since. Why didn't it discuss Go's current GC instead? It was also odd in claiming that Java has a GC with similar latency, and then citing a low-latency Java GC option that is an order of magnitude slower than Go's. Nor does it mention that Go has better escape analysis, which makes the GC unnecessary much of the time where it would be needed in Java. Overall, the author seemed very smart, but I couldn't help wondering whether someone with so much experience, making such large errors, was blinded by an affection for Java and picked facts in a way that could be misleading.
- A zero-pause (or almost zero-pause) collection regime significantly increases overheads for normal program execution. This is particularly true with multithreaded languages like Java.
- Schemes that use a dedicated thread or threads to do garbage collection can get swamped if the application generates too much garbage.
- Any GC scheme will give you poor performance if the application's memory usage patterns are too "lumpy" and/or you don't have enough physical and virtual memory.
Brian