Originally posted by bridgman
1) Reinvent the wheel
2) ...poorly
3) Creating your own thing means everyone else must understand it all the way from the ground up, following no common patterns.
4) Takes no advantage of shared libraries, meaning you actually increase memory use if there are a hundred applications like yours.
5) Your "light-weight" toolkit ends up taking on more and more crap, eventually becoming what you wanted to avoid, only worse.
6) You waste time micro-optimizing, using shorts and ints and whatnot to save 2-4 bytes here and there in completely non-critical code.
7) Because your documentation sucks and no one knows what it really does, people copy-paste snippets that they know work. Spaghetti...
What I have found is that there are very few cases where you're doing it "right" and still get unacceptable performance. Usually there's some other problem, for example that you put something stupid inside a loop or the wrong loop and the compiler didn't save your ass. Been there, done that - the difference between O(n) and O(n^2) is pretty high when n=50000. But I'd take 5 lines of library-calling code over 100 lines of DIY code any day.
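To make that O(n) vs O(n^2) point concrete, here's a minimal sketch (my own illustration, not from the original post): the same "is x in my collection?" job, once with a linear scan hidden inside the loop and once with a hash lookup. The variable names and the smaller n are mine; at the post's n=50000 the gap gets dramatic.

```python
# Hypothetical illustration: accidental O(n^2) vs O(n) for the same job.
import time

n = 5_000  # small enough to run quickly; the post's n=50,000 makes the gap huge
items = list(range(n))
lookups = list(range(n))

# O(n^2): 'in' on a list is a linear scan, and it's inside a loop
start = time.perf_counter()
found_slow = sum(1 for x in lookups if x in items)
slow = time.perf_counter() - start

# O(n): hash the items once, then each membership test is O(1)
start = time.perf_counter()
item_set = set(items)
found_fast = sum(1 for x in lookups if x in item_set)
fast = time.perf_counter() - start

print(f"list scan: {slow:.3f}s, set lookup: {fast:.3f}s")
```

Both loops look nearly identical in the source, which is exactly why the compiler (or interpreter) can't save you: the choice of data structure, not the loop itself, sets the complexity.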
This kind of thing you see at both ends of the stupid scale: those who are so clueless they don't know what the library can do for them, and those who know it can but reject it because it's not "good enough". If it's the latter, get a patch upstream to the library, and if you think that's too hard, the red blinkenlights should be on already.