GC has proven to have both strengths and drawbacks. The biggest drawbacks, I think, are that the GC process can be slow, unpredictable, and memory-bandwidth dependent. Its biggest benefits show up in code you don't fully control, such as large web frameworks. When you build a desktop application, the code can become messy (I'm not talking about yours or mine) because of multiple programming styles, broken paradigms, and many contributors. For this kind of application, C++ can be painful to maintain. For that reason, people try other approaches that remove the small leak here, the buffer overflow there. Some use an SQLite database, or a shell script for a specific task; if that shell script leaks memory, it basically doesn't matter, because when the process stops, it gives all of its memory back to the operating system.
In the end, not all GCs are equal. Most GCs are optimized for enterprise workloads, meaning throughput. In other words: if the pauses can be tolerated (in most cases a load balancer hides them), what matters is reclaiming the most objects in the shortest time.
Another class of collectors is D's, and Boehm-style collectors in general: C-based collectors that scan the stack and are conservative. They are the slowest in their class, but they keep the program running safely.
It is hard to achieve a perfect GC, and a lot of tuning goes into making it smooth. Is it bad for a server when a GC pause blocks a node in a cluster for a minute? In some ways yes, but not to such a great extent. On the other hand, it is better to let one node block for a while doing GC if its tasks can be migrated to another node. Is it bad if GC happens in a Mono application, like Pinta? Mostly you will not notice it, because the extra-large allocations are made in a non-moving collector, so whether a picture is referenced or not, the GC leaves it where it was placed in memory. As for the small pauses that may appear on Undo/Redo, most users will not notice the extra 0.1 seconds when the GC triggers.
In the end, I think the question is: if someone can build a well-behaved application that meets our expectations, may it use std::string, or must it use char* because that is faster? Must it use raw arrays or the STL? I think the answer is in developers' hands, not in complaining users'. The same goes for GCs: use them, but only where you should. Focus on making all of us (mostly) happy, and don't believe every rumor that goes around!