This multi-topic discussion was started by a user, Geert Uytterhoeven, asking in an email about the differences between kernel mode-setting (KMS) and simple fbdev drivers in the Linux kernel. Geert was also wondering whether Wayland can be run on a "dumb frame-buffer" while letting the CPU handle advanced operations like image transparency.
Since last year there has been experimental work to run Wayland on a Linux frame-buffer. Kernel mode-setting also provides support for panic messages and greater debugging support (like a Linux "Blue Screen of Death") when the system is hosed, but so far that work hasn't made a major debut. There is also Red Hat's Plymouth boot splash (now also used by Ubuntu and other Linux distributions), which depends explicitly upon the KMS APIs.
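Geert's transparency question boils down to whether the CPU can do the per-pixel compositing itself. As a purely illustrative sketch (not code from the thread), here is the Porter-Duff "over" blend a software compositor would have to run for every pixel when compositing a translucent window onto a dumb frame-buffer:

```python
def blend_over(src, dst):
    """Porter-Duff 'over' blend: composite an RGBA source pixel onto
    an opaque RGB destination pixel, 8 bits per channel."""
    sr, sg, sb, sa = src
    dr, dg, db = dst
    # Each output channel is the alpha-weighted source channel plus the
    # destination channel scaled by the inverse source alpha; the +127
    # rounds the integer division by 255 to nearest.
    return tuple((s * sa + d * (255 - sa) + 127) // 255
                 for s, d in zip((sr, sg, sb), (dr, db, db)[:3] if False else (dr, dg, db)))

# A 50%-opaque white pixel over a black background yields mid grey:
print(blend_over((255, 255, 255, 128), (0, 0, 0)))  # → (128, 128, 128)
```

Running this over every pixel of every frame is exactly the kind of work that falls on the CPU when no hardware compositing is available, which is why the bandwidth and power concerns come up later in the thread.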
Intel's Jesse Barnes responded that the DRM KMS APIs provide everything fbdev provides, while also adding memory management, a method of exposing hardware acceleration (via GEM/TTM), and an effective way to manage multiple display outputs.
One user then went on a rant about KMS and fbdev: "So if KMS is so cool and provides many advantages over fbdev and such... Why isn't more widely used instead of still relying on fbdev? Why still using fbdev emulation (that is partial and somewhat broken, it seems) instead using KMS directly? I know the graphic driver situation is quite bad on Linux, especially on the embedded world. Fbdev seems is still quite used there by binary blob drivers." He summarized his message with, "I hope all this gets to suck a bit less."
The reasons Jesse Barnes gives for fbdev still being widely used in the embedded world are inertia, since fbdev has been around for a long time already, and the fact that the DRM/KMS APIs provide more than what most developers looking for a basic frame-buffer really need. Alan Cox also interjected that there is more documentation available for fbdev than for KMS.
Corbin Simpson added that Linux KMS is still missing basic user-space utilities for manipulating KMS drivers/displays and a direct KMS console rather than one relying upon fbdev emulation. He is also after an "xf86-video-modesetting" driver: a generic KMS DDX driver that would work with the various KMS drivers lacking an actual DDX driver of their own. He has been working on kernel mode-setting support for some old GPUs, but without any user-space X driver that work is of limited use until Wayland goes mainstream. "One of the big goals of KMS was a generic userspace-facing API, like FB, but without the suck."
Tiago Vignatti also jumped onto the list to say that he has worked on running Wayland from a frame-buffer.
AMD's Alex Deucher was yet another developer taking part in the discussion. He brought up the SoC vendors that are adding an fbdev emulation layer on top of V4L, which comes with its own EDID, HDMI, and CEC handling to deal with.
Bringing up SoCs led Robert Fekete of Linaro to write about the situation: SoC/embedded graphics is just a big mess right now, and getting vendors to use DRM/KMS rather than V4L/fbdev is an uphill battle. Among the points from his message:
- Developments within V4L2 has mainly been driven by embedded devices while DRM is a result of desktop Graphics cards. And for some extent also solving different problems.
- Embedded devices usually have several different hw IP's managing displays, hdmi, camera/ISP, video codecs(h264 accellerators), DSP's, 2D blitters, Open GL ES hw, all of which have a separate device/driver in the kernel, while on a desktop nowadays all this functionality usually resides on ONE graphics card, hence one DRM device for all.
- DRM is closely developed in conjunction with desktop/Xorg, while X11 on an embedded device is not very 2011...wayland on the other hand is :-), but do wayland really need the full potential of DRM/DRI or just parts of it.
- Copying buffers is really bad for embedded devices due to lower memory bandwidth and power consumption while on a Desktop memory bandwidth is from an other galaxy (copying still bad but accepted it seems), AND embedded devices of today records and plays/displays 1080p content as well.
- Not all embedded devices have MMU's for each IP requiring physical contiguous memory, while on a desktop MMU's have been present for ages.
- Embedded devices are usually ARM based SoCs while x86 dominates the Desktop/Laptop market, and functionality provided is soon the very same.
- yada yada....The list can grow very long....There are also similarities of course.
The outcome is that SoC vendors likes the embedded friendliness of v4l2 and fbdev while "we" also glance at the DRM part due to its de-facto standard on desktop environments. But from an embedded point of view DRM lacks the support for interconnecting multiple devices/drivers mentioned above, GEM/TTM is valid within a DRM device, the execution/context management is not needed, no overlays(or similar), the coupling to DRI/X11 not wanted. SoCs like KMS/GEM but the rest of DRM will likely not be heavily used on SoCs unless running X11 as well. Most likely this worked on as well within the DRI community. I can see good features all over the place(sometimes duplicated) but not find one single guideline/API that solves all the embedded SoC problems (which involves use-cases optimized for no-copy cross media/drivers).
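The bandwidth point in Robert's list is easy to quantify. A back-of-the-envelope Python calculation (my numbers, not from the thread; assuming a 32-bit XRGB8888 format and 30 fps playback) shows what a single extra buffer copy of a 1080p video stream costs:

```python
# Cost of one extra buffer copy for a 1080p, 32-bit stream at 30 fps.
width, height = 1920, 1080
bytes_per_pixel = 4          # XRGB8888 (assumed format)
fps = 30                     # assumed playback rate

frame_bytes = width * height * bytes_per_pixel
# Each copy touches memory twice: one read of the source plus one
# write of the destination.
copy_bandwidth = frame_bytes * 2 * fps

print(f"{frame_bytes / 1e6:.1f} MB per frame")
print(f"{copy_bandwidth / 1e6:.1f} MB/s of memory traffic per copy")
```

Roughly half a gigabyte per second of memory traffic for just one redundant copy in the pipeline is tolerable on a desktop with tens of GB/s of bandwidth, but on a bandwidth- and power-constrained SoC it is exactly the kind of overhead the zero-copy use cases are trying to avoid.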
He then brought up a Linaro discussion currently taking place regarding memory management. With the open-source DRM/KMS drivers there are GEM (Graphics Execution Manager) and TTM (Translation Table Maps) to pick from: Intel uses GEM, while the ATI/AMD and Nouveau drivers use TTM while interfacing with the GEM APIs. The VIA Linux DRM work is also primarily using TTM. The SoC vendors, though, have a few other choices: there are also HWMEM, UMP, CMA, VCM, CMEM, and PMEM as possible memory managers when developing a driver, but those are not in the mainline kernel.
That's a quick overview of where the discussion stands right now.