Linux Developers To Meet Again To Work On HDR, Color Management & VRR


  • Anux
    replied
    Originally posted by Quackdoc View Post
    It wouldn't be a lot of work, but even if it were, scRGB covers the vast majority of the visible spectrum and then some. It would be a great option, since going with non-RGB-based color spaces can cause computational overhead that we would rather avoid.
    There is no more computational overhead than converting scRGB to any of the common color spaces, at least with Oklab: it's just a few 3x3 matrices, which are fast on the CPU and nearly cost-free on GPUs. Meanwhile, scRGB doesn't cover the whole visible spectrum (and in turn covers a huge invisible part), is not perceptually uniform and is not hue correct.
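    To make the "just a few 3x3 matrices" point concrete, here is a minimal sketch of linear sRGB/scRGB to Oklab (scRGB shares sRGB's primaries and is already linear, so the same matrices apply to in-gamut values; the coefficients are the published Oklab reference ones, worth double-checking against Björn Ottosson's write-up):

    Code:
    import math

    def linear_srgb_to_oklab(r, g, b):
        """Linear sRGB/scRGB -> Oklab: two 3x3 matrices plus a cube root."""
        # 3x3 matrix: linear RGB -> approximate cone (LMS) response
        l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
        m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
        s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b

        # The only non-matrix step: a (signed) cube root non-linearity
        l_, m_, s_ = (math.copysign(abs(x) ** (1 / 3), x) for x in (l, m, s))

        # 3x3 matrix: non-linear LMS -> Lab
        L = 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
        a = 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_
        b2 = 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
        return L, a, b2

    print(linear_srgb_to_oklab(1.0, 1.0, 1.0))  # white comes out as roughly (1.0, 0.0, 0.0)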



  • Myownfriend
    replied
    Originally posted by skeevy420 View Post
    GTK3 and the GNOME insistence on CSD started fracturing how well that worked...

    I like CSD. Even before I started using Linux, I felt like the title bar was a huge waste of space. Unrelated to that, I also felt like menu bars are an extremely lazy design decision for a lot of applications, so Gnome's design language appeals to me.

    Chrome started the trend of browsers putting tabs in their title bar. Then Adobe started styling Photoshop and Illustrator to use CSDs as well. Spotify on most platforms uses CSDs. All of macOS's applications use CSDs. All of these things happened outside of Linux. CSD was kind of inevitable because there are always going to be applications that have their own look to them, whether because of their choice of toolkit or an intentional widget theme. In those scenarios, it's more important to me that everything feels cohesive within a window/application than between applications. For example, I don't have an issue with Blender and Spotify having their own looks but I wish their headers matched lol

    Originally posted by skeevy420 View Post
    and then Wayland really put a hold on interoperability. Getting hit with the 1-2 combination of CSD/SSD and X11/Wayland probably gave Wayland a much worse reputation than it deserved.
    Personally, I feel like I would have made a lot of the same decisions as Wayland if I had to develop a windowing environment. I get that some people saw Linux as being more like a bunch of puzzle pieces that people can use to make their own OS but that was always going to hurt it more than anything. You can't expect a bunch of parts to be similar enough to each other that they can work with each other in any combination while also being unique enough for someone to pick one over the other.

    You can still use whatever file managers you want; that doesn't really have anything to do with X11 or Wayland. You can also still run different shells over Wayland compositors, and even some of the existing shells have ways to modify how the shell looks.

    Originally posted by skeevy420 View Post
    I never had tearing the few times I used multiple monitors. Granted, they all ran at 60 Hz, I used vsync, and DPI scaling is rather moot at 1080p and lower for the majority of people with good enough eyesight. But when you limit a back-in-the-day setup to a single monitor at a resolution where DPI scaling isn't necessary, the UIs from back then can be just as good as or better than modern systems.
    I have to disagree. I had wanted to try out Linux for years before I actually did. I would check back and see what was new, but I always distinctly remember thinking that Linux UIs felt dated. The UIs that did feel modern, like Gnome 3, looked like a lot of Android UIs at the time, which I thought was really ugly. Now I feel like Gnome, KDE, and Cosmic look pretty modern but the rest are lagging behind a bit. I've always been a bit of a UI snob.

    I used to screenshot applications that I thought looked bad and Photoshop my own concepts for how they could improve and stuff lol

    Originally posted by skeevy420 View Post
    When you consider that CSD/SSD, Wayland/X11, ALSA/Pulse, 1080p/4K, fixed Hz/VRR, 60/120/144/240, VGA/DP-HDMI, and more all hit us around the same time about 12 years ago, the Linux desktop is doing alright with all the changes that entails. Shit, ALSA/Pulse is now Pulse/Pipe; 1080p/4K is irrelevant because scaling is considered these days, frame rates and VRR are mostly done, CSD and SSD apps mostly blend together outside of stuff that tries to go its own way... the proverbial glass might not be all the way full, but it's well past half full.
    That's a nice outlook. It's good to see some positivity here.



  • Quackdoc
    replied
    Originally posted by Anux View Post
    It would be very unwise to settle on Rec. 2020 as the internal color space, like it was with sRGB (or linear RGB) in the past. What you need is a color space that covers all visible colors, like CIEXYZ, CIELAB or Oklab.

    It might be fine in the short term, but as soon as we get a new color space, say Rec. 2030, we would have to rework the whole HDR stack. Why not make it right from the beginning?
    It wouldn't be a lot of work, but even if it were, scRGB covers the vast majority of the visible spectrum and then some. It would be a great option, since going with non-RGB-based color spaces can cause computational overhead that we would rather avoid.


    It's as hard as tone mapping, maybe a little less visible because most of the time we don't use much of the available gamut.

    As we have with tone mapping, but it's not a solved problem since there is no mathematically correct way to do it. You are mapping bigger gamuts to smaller ones. It's a lossy process: you can clip or map/compress (with a plethora of different transfer curves), but there is no right way to do it, you can only do what looks subjectively good to you.

    It's as hard as gamut mapping; it's just more recognizable since we have much better brightness perception than color perception.
    Tone mapping is significantly harder to do than gamut mapping. We have perceptually good gamut mapping: you can map DCI-P3 or Rec. 2020 to sRGB primaries and it will look perceptually fine to the vast majority of people. This is not the case with tone mapping. There are many, many tone mappers and not a single one is agreed to be "perceptually good"; not even a handful of methods, like gamut mapping's rendering intents, can cover this use case.
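    As an illustration of that simplest kind of gamut mapping, here is a toy sketch: convert linear BT.2020 RGB to linear sRGB/BT.709 primaries with one 3x3 matrix, then hard-clip whatever ends up out of gamut. The matrix values are the commonly quoted BT.2020-to-BT.709 conversion (verify against the ITU documents before relying on them), and real implementations would use a smarter rendering intent than a plain clip:

    Code:
    # Linear BT.2020 RGB -> linear sRGB/BT.709 primaries, then clip.
    # Coefficients: commonly quoted BT.2020 -> BT.709 primaries matrix.
    BT2020_TO_BT709 = [
        [ 1.6605, -0.5876, -0.0728],
        [-0.1246,  1.1329, -0.0083],
        [-0.0182, -0.1006,  1.1187],
    ]

    def rec2020_to_srgb_clip(rgb2020):
        """One 3x3 matrix for the primaries change, then a hard clip to [0, 1]."""
        converted = [sum(row[i] * rgb2020[i] for i in range(3)) for row in BT2020_TO_BT709]
        return [min(max(c, 0.0), 1.0) for c in converted]  # out-of-gamut colors simply clip

    # A fully saturated BT.2020 green lies outside sRGB and collapses onto sRGB green:
    print(rec2020_to_srgb_clip([0.0, 1.0, 0.0]))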


    Looking good/better is very subjective. Imagine two viewing conditions, both with a 400-nit display, but one in a totally dark room and one in a lit room. For the dark room you could just make everything darker and still have a nice HDR experience, but in the lit room the same result would be much too dark, or if you adjust the average brightness most highlights will be blown out; either way it will look bad.
    I don't see how this is relevant to what I said about the transfer and display characteristics. On any single given display that can do both, HDR400, assuming the display isn't trash, will look better than SDR, assuming you can actually see it.
    Yes, as is always the case if we don't know the display properties.

    Even more in my opinion, EDID should contain a complete color profile, peak and average brightness as well as a room sensor to know the viewing conditions. This would allow for the best possible experience each device can offer.
    Agreed, though I'm not sure about a room sensor; that may be a bit overkill. I don't trust Samsung with that power lol



  • skeevy420
    replied
    Originally posted by Myownfriend View Post

    Even though I've only been properly using Linux since 2020, my earliest experience with Linux was in maybe 2005 when I messed with a live ISO of Ubuntu. I wanna say it was Dapper Drake or Intrepid Ibex? Anyway, I'm just saying I've been somewhat aware of what's been going on in the Linux world for a while, and I remember seeing videos of Linux desktops with transparency, wobble effects, virtual desktops on a cube, windows burning away, and stuff, and I definitely thought it was cool, but even when I was 18, some of it seemed kind of excessive. These days I think people would find those effects kind of cute but also way too campy and impractical.

    My earliest experience was around 1999 with Debian, I had a dual boot era between 2002-2006, was Linux full time from 2006-2019, and have been dual booting again since 2018...some stuff requires Windows, dammit...

    While it is easy to go very overboard with effects, slight effects make the difference between a smooth and a clunky-feeling system. Those effects, however, aren't what made 2006-2015 such a great time for the Linux desktop. They helped, but they weren't everything. It was how we were able to mix and match things from different environments to create really unique systems. Like swapping around panels, window managers, file managers, etc. from one project to the next. Being able to do all that and have spiffy effects was what was so great. Like, the KDE cube nowadays is a piece of shit joke when compared to the old cube. Sorry not sorry. It is.

    GTK3 and the GNOME insistence on CSD started fracturing how well that worked and then Wayland really put a hold on interoperability. Getting hit with the 1-2 combination of CSD/SSD and X11/Wayland probably gave Wayland a much worse reputation than it deserved.

    In another 5 years, Wayland might be where X11 systems were a decade ago. That comes out bad, but you gotta put that into perspective. X11 was nearly 20 years old 10 years back and Wayland is about 15 years old now. I figure that more and more Wayland stuff will be interchangeable once more and more of it supports the same parts of the protocols.

    That's more along the lines of the stuff I was thinking of. I know a lot of people who use multiple displays, and all the tearing, the lack of multi-DPI scaling, the inability to have mismatched frame rates, etc. would immediately turn them off. I can only imagine what they would think if they tried running a game that changes their desktop gamma settings and changes their resolution. Those things just feel hacky and archaic now.
    I never had tearing the few times I used multiple monitors. Granted, they all ran at 60 Hz, I used vsync, and DPI scaling is rather moot at 1080p and lower for the majority of people with good enough eyesight. But when you limit a back-in-the-day setup to a single monitor at a resolution where DPI scaling isn't necessary, the UIs from back then can be just as good as or better than modern systems.

    When you consider that CSD/SSD, Wayland/X11, ALSA/Pulse, 1080p/4K, fixed Hz/VRR, 60/120/144/240, VGA/DP-HDMI, and more all hit us around the same time about 12 years ago, the Linux desktop is doing alright with all the changes that entails. Shit, ALSA/Pulse is now Pulse/Pipe; 1080p/4K is irrelevant because scaling is considered these days, frame rates and VRR are mostly done, CSD and SSD apps mostly blend together outside of stuff that tries to go its own way... the proverbial glass might not be all the way full, but it's well past half full.



  • Anux
    replied
    Originally posted by Quackdoc View Post
    Gamut isn't really that hard; the Rec. 2020/2100 color space already covers that.
    It would be very unwise to settle on Rec. 2020 as the internal color space, like it was with sRGB (or linear RGB) in the past. What you need is a color space that covers all visible colors, like CIEXYZ, CIELAB or Oklab.
    The majority of content is mastered for DCI-P3. But we also have scRGB, which is a modification of sRGB to allow a much greater gamut and a linear transfer. Either scRGB or BT.2020 would be fine.
    It might be fine in the short term, but as soon as we get a new color space, say Rec. 2030, we would have to rework the whole HDR stack. Why not make it right from the beginning?

    This is two things, gamut mapping and tone mapping. Gamut mapping isn't that hard
    It's as hard as tone mapping, maybe a little less visible because most of the time we don't use much of the available gamut.
    we have been doing gamut mapping since the 90s or even before. It's mostly a solved problem.
    As we have with tone mapping, but it's not a solved problem since there is no mathematically correct way to do it. You are mapping bigger gamuts to smaller ones. It's a lossy process: you can clip or map/compress (with a plethora of different transfer curves), but there is no right way to do it, you can only do what looks subjectively good to you.
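    To make the clip-versus-compress point concrete, here is a toy single-channel sketch: both functions bring an extended range back under a limit, but one simply discards everything above it while the other rolls it off from an arbitrarily chosen knee. Neither is "the correct" mapping; they just trade information loss for a different look (the knee value below is an arbitrary illustration, not any standard):

    Code:
    def hard_clip(x, limit=1.0):
        """Everything above the limit is lost; in-range values are untouched."""
        return min(x, limit)

    def soft_compress(x, limit=1.0, knee=0.8):
        """Keep values below the knee, roll off the rest toward the limit.
        The knee position is an arbitrary choice -- a different curve gives a
        different (equally 'valid') look."""
        if x <= knee:
            return x
        # Asymptotically approach `limit` instead of slamming into it.
        excess = x - knee
        headroom = limit - knee
        return knee + headroom * (excess / (excess + headroom))

    for v in (0.5, 0.9, 1.5, 3.0):
        print(v, hard_clip(v), round(soft_compress(v), 3))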

    Tone mapping is much harder
    It's as hard as gamut mapping; it's just more recognizable since we have much better brightness perception than color perception.

    Tone mapping actually doesn't look that bad; even when mapping down to 600 or even 400 nits, it still looks way better than sRGB, it's just a matter of properly mapping it.
    Looking good/better is very subjective. Imagine two viewing conditions, both with a 400-nit display, but one in a totally dark room and one in a lit room. For the dark room you could just make everything darker and still have a nice HDR experience, but in the lit room the same result would be much too dark, or if you adjust the average brightness most highlights will be blown out; either way it will look bad.

    Fixing the peak nits of the display fixes the majority of the issues.
    Yes, as is always the case if we don't know the display properties.
    IMO this is something that should be signaled in EDID, but it's usually fine to ask the user or default to 1k nits.
    Even more, in my opinion: EDID should contain a complete color profile, peak and average brightness, as well as a room sensor to know the viewing conditions. This would allow for the best possible experience each device can offer.



  • yump
    replied
    Originally posted by Noitatsidem View Post

    Back when you couldn't get a linux desktop without tearing too - that was indeed an instant turn off for many when it came to linux.
    Actually, Compiz did have an OpenGL vsync option that would prevent tearing. Back in my "every frame is perfect" phase I ran it on my EEE 701 for that reason.



  • Myownfriend
    replied
    Originally posted by skeevy420 View Post

    Modern Linux or back in the Golden Era of Linux desktops when we had Compiz, Emerald, etc? Because if it's back in the day when 1080p was the highest resolution ever, before GTK3, when we had all those spiffy desktop effects with X11 compositors, and we're limiting it to using a single monitor, they'd be sold on old-school Linux. It's modern Linux, X11 or Wayland, that just isn't as good as it used to be. Make UIs Great Again.

    MUGA

    Build That Firewall
    Even though I've only been properly using Linux since 2020, my earliest experience with Linux was in maybe 2005 when I messed with a live ISO of Ubuntu. I wanna say it was Dapper Drake or Intrepid Ibex? Anyway, I'm just saying I've been somewhat aware of what's been going on in the Linux world for a while, and I remember seeing videos of Linux desktops with transparency, wobble effects, virtual desktops on a cube, windows burning away, and stuff, and I definitely thought it was cool, but even when I was 18, some of it seemed kind of excessive. These days I think people would find those effects kind of cute but also way too campy and impractical.

    Originally posted by Noitatsidem View Post
    Back when you couldn't get a linux desktop without tearing too - that was indeed an instant turn off for many when it came to linux.

    That's more along the lines of the stuff I was thinking of. I know a lot of people who use multiple displays, and all the tearing, the lack of multi-DPI scaling, the inability to have mismatched frame rates, etc. would immediately turn them off. I can only imagine what they would think if they tried running a game that changes their desktop gamma settings and changes their resolution. Those things just feel hacky and archaic now.
    Last edited by Myownfriend; 20 February 2024, 01:42 AM.



  • Myownfriend
    replied
    Originally posted by skeevy420 View Post
    I don't know what you said, but it's OK. I can see how any talk of a CoC can be taken wrong, especially on Phoronix where we're allowed to voice our out-there thoughts about subjects.
    Thanks for understanding!

    Originally posted by skeevy420 View Post
    The people I know that aren't good people, a CoC isn't going to make them suddenly become a better person. They're the kind of people who see violating a CoC in that manner as a badge of honor and will act like the punishment to their bigotry makes them a First Amendment, Freedom of Speech, Champion...which is funny because this is happening in Spain and I'd love to see someone go off about 1A in not-America. Granted, I'd be surprised if any of them even knew WTF a Linux even was to manage to make it to an HDR event in Spain. Big-headed people like them are why airports have "Foreigners and Americans This Way" signs.
    I don't think the CoC is meant to make bad people into good people. It's meant more so to tell contributors what a community's stance is on certain things, so they feel assured that the project will stand up for them if and when they get harassed by the aforementioned bad people. There are people who contribute to open source projects who get a significant amount of abuse outside of open source, and participating in open source projects can put a larger target on them, so they need some level of protection.


    Originally posted by skeevy420 View Post

    I'd like to think that us Linux users are better than that. Regardless of our backgrounds, we all have a respect for FOSS, openness, cooperation, and participation and I'd like to think that those ideals would expand into us also having tolerance, acceptance, and kindness.

    Thoughts and assumptions. Silly me
    I wish that were the case too, but I've seen so much contempt for developers both on these forums and elsewhere. I know Georges Stavracas said his daughter received death threats from Linux users.



  • Theriverlethe
    replied
    I'm really glad they're working together on this to hopefully come up with agreed upon standards. I'd love to use Linux on my desktop but it's just not there yet in terms of modern features like HDR and VRR support. I was surprised to learn that Wayland has no standard way of allowing applications to switch refresh rate, resolution, etc. Unless the different compositors create a standard API for games, video players, etc. to tell the environment they need 24Hz or HDR/SDR, Linux will never be suitable for gaming or HTPC use.

    (Granted, very few applications on Windows use HDR/SDR switching correctly. Kodi, Jriver Media Center and Doom Eternal are the only ones I can think of off the top of my head.)
    Last edited by Theriverlethe; 19 February 2024, 08:10 PM.



  • Quackdoc
    replied
    Originally posted by Anux View Post
    HDR has many obstacles that need to be addressed, not all can be addressed by software developers though.

    First you need an internal color space that is a superset of all color spaces you want to support. And this alone is an imperfect decision because there is no perfect color space to use. Oklab is probably one of the best in terms of color accuracy but it can not account for viewing conditions (dark vs. bright room) and is only optimized for D65 white point.
    Gamut isn't really that hard; the Rec. 2020/2100 color space already covers that. The majority of content is mastered for DCI-P3. But we also have scRGB, which is a modification of sRGB to allow a much greater gamut and a linear transfer. Either scRGB or BT.2020 would be fine.
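    For context on how scRGB gets that greater gamut and range: it keeps sRGB's primaries and white point but is linear and unbounded, so HDR brightness is just values greater than 1.0 and wider-gamut colors show up as negative components. A tiny sketch, assuming the common convention (used e.g. by Windows' scRGB swap chains) that 1.0 equals 80-nit SDR reference white:

    Code:
    SDR_WHITE_NITS = 80.0  # scRGB convention: 1.0 == sRGB reference white

    def nits_to_scrgb(nits):
        """Linear scale; no transfer curve, no clamp -- HDR values exceed 1.0."""
        return nits / SDR_WHITE_NITS

    def scrgb_to_nits(value):
        return value * SDR_WHITE_NITS

    print(nits_to_scrgb(80))    # 1.0  -> SDR white
    print(nits_to_scrgb(1000))  # 12.5 -> a 1000-nit HDR highlight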

    Then you need to clip or map the different color spaces to what the display supports and we have no idea which display has which limits. Only calibrated displays can get good tone mapping.
    This is two things, gamut mapping and tone mapping. Gamut mapping isn't that hard; we have been doing gamut mapping since the 90s or even before. It's mostly a solved problem. (It's not super simple, which is why we have the rendering intent system.) Tone mapping is much harder, but ideally your HDR should be calibrated to your display on the PC side, and only a basic "general nit brightness" is necessary for that calibration. (This is actually a major issue on Windows, since the default is something stupid like 1200 or 1500 nits; Windows' HDR looks a lot better when you correct this.)

    Most modern HDR displays will do the necessary mappings themselves when fed with Rec. 2020 and HDR metadata, but like with sRGB, quality highly depends on that implementation.
    While this is true, it is also just the status quo as it has been for decades now, so not really relevant IMO.

    HDR material (movies) will typically get mastered with 1000 nits or higher and mapping that down to less nits will result in blown out details. With games you could do a much more flexible approach, if you know the characteristics of the display.
    Tone mapping actually doesn't look that bad; even when mapping down to 600 or even 400 nits, it still looks way better than sRGB, it's just a matter of properly mapping it. MPV's spline tonemapper is probably the most "flexible", but bt.2446a is quite reliable too. It is for sure less than ideal, but it still looks and works fine. Even when feeding my crappy HDR10-only display, PQ1000 looks way better than sRGB content, even when using a variety of tonemappers like spline, bt.2446a, 2390, hable, etc.
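    To give a rough feel for what such a mapper does, here is a toy luminance-only curve in the extended-Reinhard family (deliberately not MPV's spline, bt.2446a, 2390 or hable, just the simplest roll-off that hits the display peak exactly): the content peak lands on the display peak while lower values are compressed progressively less:

    Code:
    def tonemap_nits(luma_nits, content_peak=1000.0, display_peak=400.0):
        """Extended-Reinhard-style roll-off: content_peak maps exactly onto
        display_peak, small values pass through nearly unchanged."""
        x = luma_nits / display_peak
        w = content_peak / display_peak
        y = x * (1.0 + x / (w * w)) / (1.0 + x)
        return y * display_peak

    for nits in (10, 100, 203, 600, 1000):
        print(nits, "->", round(tonemap_nits(nits), 1))  # 1000 nits ends up at exactly 400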

    When you look at Microsoft's HDR implementation, it is really shit because it can't know what display properties it's working with, while Apple has equipped all their devices with P3 displays and can therefore make much better assumptions about what the result will look like.
    And Valve has a similar (known display) situation with their steam devices but lack the proper support OS wise.
    See my above statement on MS's HDR: the vast majority of the issue is Microsoft's very dumb defaults. Fixing the peak nits of the display fixes the majority of the issues. IMO this is something that should be signaled in EDID, but it's usually fine to ask the user or default to 1k nits.
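    That fallback chain is easy to express in code. The sketch below is hypothetical policy logic, not any real compositor's API; the EDID decode assumes the CTA-861.3 HDR static metadata formula for "desired content max luminance" (50 * 2^(cv/32) cd/m²), which is worth verifying against the spec:

    Code:
    def edid_max_luminance_nits(cv):
        """Decode the CTA-861.3 'desired content max luminance' byte (0 means unspecified).
        Assumed formula: 50 * 2^(cv/32) cd/m^2 -- check the spec before relying on it."""
        return 50.0 * 2.0 ** (cv / 32.0) if cv else None

    def pick_peak_nits(edid_cv=None, user_override=None, default=1000.0):
        """Hypothetical policy: trust the user first, then EDID, then fall back to 1k nits."""
        if user_override:
            return user_override
        from_edid = edid_max_luminance_nits(edid_cv) if edid_cv is not None else None
        return from_edid or default

    print(pick_peak_nits(edid_cv=96))                        # 50 * 2^3 = 400 nits from EDID
    print(pick_peak_nits())                                  # no info at all -> 1000-nit default
    print(pick_peak_nits(edid_cv=96, user_override=600.0))   # explicit user setting wins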

