Some key points.
* GL's object handles are super error-prone. All objects are identified by an int, so it's easy to fail to understand what to pass in where. Furthermore, a level of indirection is required between client-side API objects and handles, since you can't stuff a 64-bit pointer into a 32-bit int, and in some cases there are rules about IDs being generated sequentially anyway, which you can't satisfy with a pointer cast to an int.
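A minimal sketch of the handle problem, using ordinary GL 1.5-era calls (assuming full prototypes are available, e.g. via a loader; a live GL context is required to actually run this):

```c
#include <GL/gl.h>

/* Both handles are plain GLuints, so the compiler can't catch mix-ups. */
void handle_mixup(void)
{
    GLuint buffer, texture;
    glGenBuffers(1, &buffer);
    glGenTextures(1, &texture);

    /* Binding a *buffer* ID as a texture compiles cleanly and fails only
       at runtime, and only if anyone bothers to check glGetError: */
    glBindTexture(GL_TEXTURE_2D, buffer);
}
```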
* The GL latch/state system is poorly specified, and some GL|ES vendors implement it differently, all within spec. The latch system is where you don't just modify an object directly: you bind an object with one API call and then modify the bound object with a second. In some cases, what happens when you unbind objects varies wildly (VAOs behave differently across hardware vendors, for example). Larger game engines end up carrying significant work-arounds to deal with the variety of devices and vendors.
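The bind-to-edit pattern looks like this in practice (a sketch; requires a current GL context, and the function name is made up):

```c
#include <GL/gl.h>

/* Step 1 latches the object; step 2 modifies whatever is currently latched. */
void upload_vertices(GLuint vbo, const void *data, GLsizeiptr size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);                        /* latch */
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW); /* edit  */
    /* Side effect: the caller's GL_ARRAY_BUFFER binding has been silently
       replaced, a classic source of state-leak bugs. */
}
```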
* The GL API was originally released in 1992 for rasterization acceleration hardware. It has been extended - poorly - to keep up with modern hardware, usually years late and well behind D3D. Many parts of the 3.0 GL API were designed around what people "thought" hardware would be, and they guessed wrong. Many of those assumptions were not worked around in the API until 4.3, and some still remain. For back-compat reasons those assumptions are still present, it's usually harder to do things the "right" way than the wrong way, and inexperienced users are constantly led astray by out-dated tutorials and the highly redundant GL API. D3D, on the other hand, rather sensibly just releases a new version of the API when things have changed enough to warrant it, fixing old mistakes along the way.
* GL tries to hold your hand more often than D3D. This earns some "GL is easier to use" comments from beginners, but actually squeezing performance out of GL is a complicated process compared to D3D. You have to use obtuse, redundant APIs in non-obvious ways to get the best performance. Using GL|ES or the GL Core Profile is about as much work as D3D11, but most novices don't deal with those.
* The GL Core Profile / Compatibility Profile split was a misguided attempt to solve the crufty API problem. Of course, the only thing it did was disable old redundant APIs, without replacing the existing dual-usage APIs that could stand a refresh. Given the lack of a test suite, most drivers were (and still are) buggy as hell in Core Profile mode. The result is that everyone everywhere still writes Compatibility Profile code, except on OSX, where you can't get GL 3+ without enabling Core Profile. This causes far more portability problems than necessary. API cleanups have since been abandoned by Khronos because they think their idiotic attempt at "profiles" is the only way to do them.
* Desktop GL drivers are buggy and incomplete. Neither NVIDIA nor AMD has delivered a bug-free, fully-conformant GL 4.3 implementation almost a full year after release, and those driver paths are barely exercised. Khronos provides no test suite. Games using GL on the desktop typically have much higher support costs than D3D games, due almost entirely to the GL drivers. This is not super relevant to the API itself, of course, but it is a huge deal to developers shipping actual games - they can't ship GL versions of a game if they want to target the PC.
* The GL tools and debugging facilities are under-featured. Finding out why any particular GL call fails is an exercise in maximum frustration. Tools have gotten better in recent years, but the D3D tools have improved even further. Part of this is an API issue: the GL error reporting facilities are weak, and the API is prone to far more errors in the first place.
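To be concrete about how weak the reporting is: glGetError returns one queued error code per call, with no message text and no indication of which call actually failed. Defensive code ends up sprinkling a helper like this (a sketch; the function name is made up, and a current GL context is required) after every suspect call:

```c
#include <GL/gl.h>
#include <stdio.h>

/* Drain the GL error queue and report where we noticed the errors -
   note "noticed", not "caused"; the failing call could have been any
   GL call since the previous check. */
static void check_gl(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04X sometime before %s\n", err, where);
}
```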
* Parts of the API are overloaded and awkward. glVertexAttribPointer takes a void* as the final argument... but in all modern GL usage, you're actually passing in an integer here. Likewise, glTexImage2D and friends have redundant, pointless parameters that still must be set just right (and differently than most existing documentation states) to work properly.
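Concretely, with a VBO bound, that "pointer" is really a byte offset smuggled through a void*. A sketch of the standard idiom (the struct and function names here are made up for illustration):

```c
#include <stddef.h>

/* A hypothetical interleaved vertex layout. */
struct Vertex {
    float pos[3];
    float uv[2];
};

/* The final void* argument of glVertexAttribPointer is not a pointer at
   all but a byte offset into the currently bound buffer: */
static const void *uv_offset(void)
{
    return (const void *)offsetof(struct Vertex, uv);
    /* Passed along the lines of:
       glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE,
                             sizeof(struct Vertex), uv_offset()); */
}
```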
* GLSL does not reflect how the shader pipeline actually works until GLSL 4.30... which, again, you can't actually use in real life. Even then, assuming a console "does it right," the syntax in GLSL for doing things properly is incredibly cumbersome compared to doing it wrong, since back-compat was kept for the sake of some mythical CAD folks who currently use 2.x and might someday want 4.3.
* Despite years of clamoring from some of the biggest GL proponents and even NVIDIA, Khronos has rejected many API improvements, like "direct state access," from the core API.
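For reference, this is what the difference looks like with the long-standing EXT_direct_state_access vendor extension (a real extension; the point is that it remains an extension rather than core API). A sketch, requiring a current context and a driver exposing the extension:

```c
#include <GL/gl.h>

/* Core API: must disturb the texture binding just to set a parameter. */
void set_filter_core(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}

/* EXT_direct_state_access: the object is named directly, no bind step. */
void set_filter_dsa(GLuint tex)
{
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
```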
* Extensions are a huge pain in the ass. You can't just write a GL app. You have to write slightly-different-for-each-hardware-vendor GL paths in your code to make things work well. Consoles require this anyway, of course, but in the desktop space it matters.
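Every such vendor path starts with a query like this (a sketch using the GL 3.0 core-profile mechanism; assumes a loader provides the glGetStringi entry point and a context is current):

```c
#include <GL/gl.h>
#include <string.h>

/* Returns non-zero if the driver advertises the named extension. */
int has_extension(const char *name)
{
    GLint i, n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (i = 0; i < n; ++i)
        if (strcmp((const char *)glGetStringi(GL_EXTENSIONS, i), name) == 0)
            return 1;
    return 0;
}
```

From there the renderer branches per vendor, which is exactly the maintenance burden described above.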
* EGL is not really a thing except for GL|ES. Many of the features provided by DirectX's DXGI are unsupported without EGL, and even with EGL some bits are missing.
* Did I mention what a terrible error-prone pain in the ass using the API is?
* Hardware features that you should rely on are treated as "potential optimizations" in the GL API. This is especially harmful to newer users. It's still common here in 2013 to see people asking whether they should use vertex buffers or not.
* Some GL interfaces do not reflect how the hardware works, even in 4.3. Vertex Array Objects are kinda sorta like ID3D11InputLayout, except they also specify buffer bindings, which are dealt with separately on hardware. Likewise, FrameBuffer Objects do not reflect how render targets are bound on the hardware. It's difficult to minimize state changes, and you have to hope that the driver does the right thing (completely unacceptable on consoles, where you really, really need direct control).
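The VAO conflation is visible right in the setup code (a sketch; function name is made up, and a current GL 3.x context is required):

```c
#include <GL/gl.h>

void build_vao(GLuint vao, GLuint vbo)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* This one call bakes BOTH the attribute layout (3 floats, tightly
       packed) and the currently bound VBO into the VAO: */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glEnableVertexAttribArray(0);
}
/* Drawing the same layout from a different buffer means re-specifying the
   attribute or keeping one VAO per buffer combination, whereas in D3D11
   the input layout and the vertex buffer bindings are set independently. */
```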
* GL misses some features found in hardware, like DXT texture compression, in the name of patents. That's great for Mesa and FOSS but shitty for people who need to make actual products that actually use the expensive graphics hardware fully. Kind of a wash as D3D is missing some texture modes and such that GL has (not ones you really need, but still, I've heard noises from a developer at Microsoft that they want to fix this in a future update).
* GL does not support any kind of multi-threaded object management or command list generation. The old Display List functionality is deprecated and not equivalent in any case (if you send a draw call with a VBO to a display list, the VBO contents are copied, rather than merely recording the VBO handle). With some vendor-specific work-arounds you can do object management on multiple cores, but this also creates a full rendering context (bad!) and again doesn't handle command list generation.
* GL's global states mentioned previously don't just make the API more complex; they also cause frequent bugs, and they don't reflect hardware. Real hardware has state blocks. D3D mirrors these. Granted, D3D11 is a little out of date in this regard, as its concept of state blocks does not match modern hardware 100%, but this will most likely be corrected in D3D12. It will probably never be dealt with in GL, ever.
* GL binds its output surface/rendering context to its concept of a device handle. Recreating a rendering surface implies regenerating the whole context.
* GL is managed-resources only... sometimes. Don't want a buffer you upload kept in both GPU and CPU space? Up to the driver, not you. Necessary because...
* There is no proper "surface lost" or "device lost" signal for GL. Implementing proper display adapter switching or efficient hardware resets on GL is impossible without some new can't-be-integrated-properly extension that doesn't yet exist. This is part of why the previous point exists. Of course, GPU-only objects will still be lost.
* GL's API is designed for the days before compositing. On the desktop, D3D10+ assumes compositing. On a console, you can assume you might have overlays but not that you will be forced to share a single framebuffer with other processes. GL still maintains the concept of "pixel ownership" and this whole legacy cruft imposes some API nastiness that could be done away with.
* GL is slow to update. Microsoft works with hardware vendors during the definition of new hardware and APIs, acting as a liaison between the development community's requests for D3D features and hardware vendors. Microsoft ensures that their API supports the common functionality on hardware and helps push vendors towards progress. GL languishes in a pit until well after a D3D update is released, and then vendors scramble to define new extensions and combinations of old vendor-specific ones to get a half-assed spec out the door. It usually takes multiple versions to "get it right." GL 3.x was the worst; it's gotten better, but not enough. GL 3.0 was released years after D3D 10 and didn't hit "almost" feature parity until GL 3.3, even later. Likewise, it took until 4.3 to catch up _mostly_ to D3D 11, minus the raft of features still missing. If you're releasing a new cutting-edge platform, using GL as a basis means your API will be years behind your hardware.
* Numerous other small API warts, missing features, and a disconnect with hardware.
I used to be a pro-GL bigot too, back before I had any clue what the hell I was talking about. When I first moved to the games industry and was forced from Linux to Windows, I came kicking and screaming. I still recall some of the long debates I had trying to argue pro-GL merits to a variety of developers. After having every single point refuted, the only two advantages I could keep for GL were "some platforms require it, and it also works on Windows, so it's 'more portable'" and "extensions give you early access to some vendor-specific features." Not much.
What a long post of FUD and lies.