Wayland Color Management Protocol Might Finally Be Close To Merging


  • elvis
    replied
    Originally posted by caligula View Post
    average users have
    I think this is becoming a theme on these forums lately - the concept that future-facing development is wasted because of what "average users have right here and right now".

    Colour management is critical to any display system where things are encoded in different colour standards. As above, we've been lucky for a short while now, where computer, phone and TV standards were all close enough that it didn't matter. That's changing right now. There's growing divergence between things already, and that gap widens by the day.

    Beyond that, development needs to happen now so that it's ready and faultless by the time "average users" catch up. There will absolutely be a day when all devices have wide colour gamut screens, because everything becomes cheaper and commodity over time. The cheap phone your hypothetical average user uses today is a supercomputer beyond the wildest dreams of someone living in 1980, and thank goodness software systems evolved along with it.

    Getting things like colour management working well today is absolutely worthwhile development effort, even if only 20% of the market has the equipment to use it right now. At some point, that will absolutely be 100%.

    Originally posted by caligula View Post
    can barely hit 100 nits anymore
    Colour management, and even HDR as a general concept, is about more than "nits". This is often lost in these discussions, but gamuts with different primaries are now common enough across devices that we need better tools to deal with the problem.

    Originally posted by caligula View Post
    Also, manipulating colors with look-up tables costs precious CPU cycles. Their GeForce GT 710 or Radeon 540 can barely render the double-buffered compositor desktop; there are no CUDA cores available for these extra tasks. Doing additions or multiplications per pixel is too much for their Atom / Core i3 / Celeron / Pentium Bronze. It's the same with full disk encryption: too costly for cheap devices.
    GPUs already do colour management right now. Various colour functions aren't done in sRGB - you can't do that sort of maths in a non-linear space. Most colour functions are done in linear colourspaces, and even then most things convert from sRGB -> Linear RGB -> XYZ, perform your various translations and changes there, and then convert back to linear RGB and finally back to sRGB.
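
    To make that concrete, here's a rough sketch of that round trip in Python, using the standard sRGB transfer function and the published sRGB/D65 matrices (nothing Wayland-specific, just the maths described above):

        # Sketch of the sRGB -> linear RGB -> XYZ -> linear RGB -> sRGB round trip.
        # Transfer function and matrices are the standard IEC 61966-2-1 sRGB/D65 ones.

        def srgb_to_linear(c):
            # Inverse sRGB transfer function, per channel, input in 0..1
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

        def linear_to_srgb(c):
            # Forward sRGB transfer function, per channel, input in 0..1
            return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

        # Linear sRGB <-> CIE XYZ (D65)
        RGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]]
        XYZ_TO_RGB = [[ 3.2406, -1.5372, -0.4986],
                      [-0.9689,  1.8758,  0.0415],
                      [ 0.0557, -0.2040,  1.0570]]

        def mat_mul(m, v):
            return [sum(m[r][i] * v[i] for i in range(3)) for r in range(3)]

        def round_trip(srgb_pixel):
            linear = [srgb_to_linear(c) for c in srgb_pixel]
            xyz = mat_mul(RGB_TO_XYZ, linear)   # colour work happens here
            back = mat_mul(XYZ_TO_RGB, xyz)
            return [linear_to_srgb(min(1.0, max(0.0, c))) for c in back]

        print(round_trip([0.5, 0.25, 0.75]))  # returns roughly the same pixel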

    Colour management changes one of those steps to whatever other colourspace is required. It's not adding some crazy amount of overhead, because we're already doing colourspace conversions internally anyway. More to the point, these are standard fixed functions inside GPUs. Colour management is a standard function of things like the tiniest of GPUs on small SBCs, and every single budget phone on the market. Every single TV on the market has a GPU in it that's about on par with a Raspberry Pi, and these things do colour management 100% of the time.

    This isn't crazy new stuff. This isn't extra overhead. This is fixing a thing that currently runs off bad assumptions so it does things better, and it takes zero extra processing on top of what's already happening now.

    I have a feeling you've assumed this is some herculean task. It isn't. Colour management existed all the way back on 1990s desktops. By every measure, even small SBCs are orders of magnitude more powerful today. You're not going to suddenly shed desktop performance because you've applied correct colour management. Not only because you're already doing it and didn't realise it, but also because the issue is entirely around development effort way down at the screen drawing layer, and has nothing to do at all with processing grunt.



  • caligula
    replied
    Originally posted by elvis View Post
    So responding to your original point - EVERYONE needs colour management. The reason we're seeing so many problems with HDR adoption and wide colour gamut content right now is that computers, phones, and all sorts of devices DON'T have proper colour management, and just assume everything is always in a specific colourspace. And we're seeing terrible stopgaps like this new "Gain Map" concept in photos being used to paper over the problem for end users, when the right solution is having standards-compliant colour management in every OS.

    Regardless of their implementations, Windows and Mac have colour management built in already. Wayland absolutely needs this as a base-level protocol so that any tool on top, from entire DEs to individual apps, can correctly show colour information to the end user without the developer of those applications needing an expert-level understanding of colour theory, and without the end user needing to know anything about the technical specs of their displays.
    I appreciate that you brought this up, but my point was that average users have < $200 phones with crappy cameras and screens. Heck, even the $2000 Librem phone has a very crappy sensor. People don't know what to expect; usually the devices just suck, and that's OK and what they expect. They know Apple's cameras are good and that high-end Samsung cameras can be good (maybe even OnePlus and Xiaomi flagship phones), but average budget devices just implement the bare minimum so that they can state the device comes with a camera, nothing else. Also, manipulating colors with look-up tables costs precious CPU cycles. Their GeForce GT 710 or Radeon 540 can barely render the double-buffered compositor desktop; there are no CUDA cores available for these extra tasks. Doing additions or multiplications per pixel is too much for their Atom / Core i3 / Celeron / Pentium Bronze. It's the same with full disk encryption: too costly for cheap devices.

    Also, their monitors are 10 years old and can barely hit 100 nits anymore. If you do calibration with some ColorMunki gadget and adjust the RGB values, you have to decrease brightness because there's too much difference between the R, G, and B channels. Also, they use the low-blue eye strain mode, which makes all photos look weird. Budget screens like my BenQ GW2765 don't stick to 100% sRGB; they always display some oversaturated 120% sRGB image. It won't look the same on any calibrated quality device.



  • elvis
    replied
    Originally posted by caligula View Post
    Well, FWIW, only a minority of people need color management. You basically need to calibrate your monitor. Colorimeters' sensors wear out even if you don't use them. I don't think this is a common practice among suckless and other small WM developers. Basically Gnome and KDE cover 99% of the relevant users.
    Not quite. Colour management is more than "calibrating your screen".

    For the longest time we've had sRGB on computers and BT.601/BT.709 on our TVs. Those standards are, by design, quite close. What that meant was that home-consumer video games, movies, photography, computer graphics and the rest could all be assumed to be somewhere in the same ballpark. There are occasionally slight errors in gamma or black levels, but by and large people don't notice these and/or can slightly adjust their brightness and contrast to solve the problem.

    It's very different now. Most phones are Display-P3. Most "HDR" TVs are BT.2020. Windows HDR internally can use scRGB (VERY different to sRGB). The end result is very confusing for end users when they try to capture or photograph something, share it with friends, and the colour or saturation levels are very, very wrong. Google "HDR screenshot washed out", and witness pages and pages of people not understanding what's going on behind the scenes, and struggling to share simple screenshots online.

    Apple iOS photo sharing is the same - sending a nice wide-gamut HDR photo between Apple devices is fine. Trying to export that is incredibly difficult, as Apple more or less force conversions back down to sRGB/SDR, and all that beautiful colour information is lost in a bid to remove user frustration.

    Colour management is absolutely necessary. When configured correctly, it seamlessly handles things like:
    • An HDR-only image being displayed on an SDR-only web browser or screen
    • An SDR-only image being displayed correctly on an HDR-capable screen
    • Tone mapping HDR content down to SDR for older screens (sketched below)
    • Enabling mixed SDR/HDR windowed content on specific screens
    • Enabling users to have dual-monitors, where one is SDR and one is HDR, and apps can be dragged from one to the other and still correctly show colours
    And again, remember that "HDR" isn't a single standard. Phones are commonly Display-P3, TVs are commonly BT.2020, Windows HDR games can be literally anything (BT.2020, scRGB, Display-P3, something else altogether), and PC displays are a mishmash of standards. And likewise, it's not just colourspaces: EOTFs are all over the place too.
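
    As a toy example of what "tone mapping HDR down to SDR" means in practice, here's a sketch assuming linear BT.2020 input and a crude Reinhard curve. Real compositors use much better tone mappers; the point is only that this is a per-pixel matrix multiply and a curve, not magic:

        # Toy sketch: linear BT.2020 HDR pixel -> XYZ -> linear sRGB, with a crude
        # Reinhard curve to squeeze HDR brightness into the SDR 0..1 range.
        # Matrices are the published BT.2020 and sRGB (D65) ones.

        BT2020_TO_XYZ = [[0.6370, 0.1446, 0.1689],
                         [0.2627, 0.6780, 0.0593],
                         [0.0000, 0.0281, 1.0610]]
        XYZ_TO_SRGB = [[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]]

        def mat_mul(m, v):
            return [sum(m[r][i] * v[i] for i in range(3)) for r in range(3)]

        def reinhard(c):
            # Maps [0, inf) into [0, 1); applied per channel, so hues shift a bit (it's a toy)
            return c / (1.0 + c)

        def hdr_bt2020_to_sdr_srgb(pixel):
            # pixel: linear BT.2020 RGB; HDR highlights can exceed 1.0
            xyz = mat_mul(BT2020_TO_XYZ, pixel)
            srgb_linear = mat_mul(XYZ_TO_SRGB, xyz)
            # Tone map, then clip anything still outside the sRGB gamut
            return [min(1.0, max(0.0, reinhard(c))) for c in srgb_linear]

        print(hdr_bt2020_to_sdr_srgb([4.0, 1.0, 0.2]))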

    The other factor here is the physical limitation of screens. No screen on Earth right now can display the full BT.2020 gamut or the full PQ brightness range. But you can create images and write software that do. Ensuring the content you create looks half decent on a screen requires knowing something about that screen. Many screens provide some sort of physical colour display limit information in their EDID (and manufacturers that do this are actually doing a good job of being honest about it, because it helps their screens look better - I have a budget 2009 BenQ PC monitor on my desk that does this). KDE Plasma 6, for example, can be set with a single click to read your monitor's EDID information and adjust colour on the fly, and it can do this on a per-screen basis. This completely removes the end user's need to know anything about colour at all, or the need to install a "monitor driver" like we used to back in the dark ages (these "drivers" were quite literally nothing more than an ICC profile).
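
    For reference, the gamut information mentioned above sits at fixed offsets in the base EDID block, so reading it is cheap. A rough sketch (byte layout per the EDID 1.3/1.4 spec; the sysfs path is just an example):

        # Rough sketch: pull the red/green/blue/white chromaticity coordinates out of
        # the base EDID block (bytes 25-34, 10-bit fixed point, value/1024). This is
        # the kind of data a compositor can use to build a per-screen colour profile.

        def edid_chromaticity(edid: bytes):
            lo_rg, lo_bw = edid[25], edid[26]
            hi = edid[27:35]  # Rx, Ry, Gx, Gy, Bx, By, Wx, Wy high 8 bits, in order
            low_bits = [
                (lo_rg >> 6) & 3, (lo_rg >> 4) & 3,  # red x, y
                (lo_rg >> 2) & 3, lo_rg & 3,         # green x, y
                (lo_bw >> 6) & 3, (lo_bw >> 4) & 3,  # blue x, y
                (lo_bw >> 2) & 3, lo_bw & 3,         # white x, y
            ]
            vals = [((h << 2) | l) / 1024.0 for h, l in zip(hi, low_bits)]
            names = ["red", "green", "blue", "white"]
            return {n: (vals[i * 2], vals[i * 2 + 1]) for i, n in enumerate(names)}

        # Example path only; pick whichever connector your screen is on
        with open("/sys/class/drm/card0-DP-1/edid", "rb") as f:
            print(edid_chromaticity(f.read()))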

    All operating systems with graphical displays need to handle this better. Historically, colour management has always been an afterthought in computing, precisely because of the above attitude that "it's only for professionals who calibrate". Indeed, all of these concerns about colour management were raised early in Wayland's planning, and people like Graeme Gill (colour science guru, X11/Xorg colour specialist, and author of ArgyllCMS) were completely ignored when they brought them up. Fast forward nearly 15 years, and we're seeing all sorts of shims and hacks going into projects like Steam Deck and Gamescope to compensate for the lack of good, standardised colour management protocols.

    So responding to your original point - EVERYONE needs colour management. The reason we're seeing so many problems with HDR adoption and wide colour gamut content right now is that computers, phones, and all sorts of devices DON'T have proper colour management, and just assume everything is always in a specific colourspace. And we're seeing terrible stopgaps like this new "Gain Map" concept in photos being used to paper over the problem for end users, when the right solution is having standards-compliant colour management in every OS.

    Regardless of their implementations, Windows and Mac have colour management built in already. Wayland absolutely needs this as a base-level protocol so that any tool on top, from entire DEs to individual apps, can correctly show colour information to the end user without the developer of those applications needing an expert-level understanding of colour theory, and without the end user needing to know anything about the technical specs of their displays.



  • oiaohm
    replied
    Originally posted by anda_skoa View Post
    Do you have an example of an interface definition in JSON?
    So far I've only encountered it as a data transfer format.


    I gave one, anda_skoa. There are others in JSON, but most don't target C/C++/common programming languages.

    All the JSON interface definition items I have seen end up being worse than the alternatives.

    The big thing that's special about the XCB and Wayland XML-based IDL is that it's designed to generate both the documentation for the interfaces and the interfaces themselves from a single file. Most IDLs are designed to generate just the interfaces.

    And notice that jIDL, being JSON, does not generate documentation of the interfaces from a single file.



  • anda_skoa
    replied
    Originally posted by caligula View Post
    A significant part of industry has switched from legacy XML formats to schema free JSON. It's great for SPA web apps.
    Do you have an example of an interface definition in JSON?
    So far I've only encountered it as a data transfer format.




  • oiaohm
    replied
    Originally posted by caligula View Post
    A significant part of industry has switched from legacy XML formats to schema free JSON. It's great for SPA web apps.
    Remember, in the Wayland case we are talking about an "interface description language", or IDL, with Wayland using an XML form of IDL.

    JSON IDL is not exactly the nicest.

    It turns out XML IDL is very nice to read, while JSON IDL becomes very hard to read very quickly.

    XML's explicit opening and closing tags, instead of JSON's bare {}, bring some major human-readability advantages.

    Look at the JSON example: how would you spot a missing }?

    These example IDL files demonstrate the fundamental constructs of interface definition.

    The reality is that with JSON IDL you end up back at old-school, painful-to-debug IDL files.

    caligula, you are right that a lot of the industry has changed to schema-free JSON. One of the problems is that you cannot do IDL files without a schema, even in JSON, because an IDL file needs to be transformed from what it is into programming code, be it C/C++/Python... An IDL file is not the final product; this is why all IDL files have to have some form of schema, or the transformation system is not going to work.

    Remember, the XML IDL files of Wayland only affect application building, where they are processed into C/C++/... whatever programming language the targeted use case requires. XML turns out to be more human readable than JSON, and it's simpler for a human to spot errors in.

    There are particular corner cases where XML fits very well, and IDL files are one of them.

    JSON's advantage over XML is that it is more compact, but that's not helpful for an IDL file given how it's used: humans will be reading and manually editing IDL files.





  • caligula
    replied
    Originally posted by anda_skoa View Post

    That is something the developers had established for X11 development and applied again when working on Wayland.

    XML is a good compromise for something that should be both machine and human readable.

    It is a widely used approach, e.g. for D-Bus, SOAP, SCXML (state machines), etc.
    A significant part of industry has switched from legacy XML formats to schema free JSON. It's great for SPA web apps.



  • oiaohm
    replied
    Originally posted by caligula View Post
    But how efficient is that? Doesn't wayland use xml quite a lot? I've seen some serialization formats where each pixel is stored as an XML entity with RGB attributes. What a waste of space.


    The wire format of Wayland is not XML, as shown above.

    Wayland's XML is really a way of doing interface description files and the documentation for those interfaces in a single file. Yes, this is like X11/xcb, D-Bus and others.

    Yes, this tool takes the XML of the Wayland protocol and makes the C code that implements the wire protocol. A different tool can take that same XML file and make a manual for that section of the protocol.

    One of the big problems of doing interface description files and documentation for those interfaces as two files is that they have a habit of falling out of sync. This is not a free lunch, of course: an interface description file without all the documentation would speed up building libwayland-client and libwayland-server a little bit. But notice that this only affects build time; it makes zero difference to the performance of applications using the libwayland parts, because the XML is not in the binaries.
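
    To make the "single XML file, two outputs" idea concrete, here's a toy sketch in Python of what such a scanner does. This is not wayland-scanner itself, and the protocol snippet is made up; it just shows the same shape of tool:

        # Toy illustration of the "one XML file -> code + documentation" idea.
        import xml.etree.ElementTree as ET

        PROTOCOL_XML = """
        <protocol name="toy_color">
          <interface name="toy_color_manager" version="1">
            <description summary="per-surface colour management">
              Lets a client tag a surface with the colour space of its contents.
            </description>
            <request name="set_color_space">
              <description summary="tag a surface"/>
              <arg name="surface" type="object"/>
              <arg name="color_space" type="uint"/>
            </request>
          </interface>
        </protocol>
        """

        root = ET.fromstring(PROTOCOL_XML)
        for iface in root.findall("interface"):
            # Documentation output comes from the same nodes as the code output,
            # so the two cannot drift apart.
            desc = iface.find("description")
            print(f"== {iface.get('name')} v{iface.get('version')}: {desc.get('summary')}")
            for req in iface.findall("request"):
                # Everything is emitted as uint32_t here purely to keep the toy short
                args = ", ".join(f"uint32_t {a.get('name')}" for a in req.findall("arg"))
                print(f"void {iface.get('name')}_{req.get('name')}({args});")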



  • anda_skoa
    replied
    Originally posted by caligula View Post
    Doesn't wayland use xml quite a lot?
    Not at runtime.

    Like X11/xcb or D-Bus it uses XML to describe the interfaces between server and clients.

    With tools that then generate APIs so that most developers don't have to deal with the messaging and wire-format parts.



  • caligula
    replied
    Originally posted by MrCooper View Post
    What you guys are describing isn't what this colour management protocol is about; in fact, that doesn't require any Wayland protocol at all. Wayland compositors such as mutter have supported applying monitor calibration profiles for years.

    The colour management protocol allows the client to describe to the compositor what colour space the contents of a surface are based on. This allows the compositor to display the contents correctly under all circumstances, even while multiple surfaces are visible with contents based on different colour spaces. This is something which X has never supported.

    It also allows the client to use colour spaces which extend beyond sRGB, i.e. wide gamut & HDR.
    But how efficient is that? Doesn't wayland use xml quite a lot? I've seen some serialization formats where each pixel is stored as an XML entity with RGB attributes. What a waste of space.

