Wayland 1.19 Is Set To Come Soon As First Update In Nearly One Year


  • Originally posted by microcode View Post
    Huh, no. Modes are requested by userspace in most cases, not sure why you are so keen on your position on this, and capable of writing so much fluff boiling down to you not understanding either the subject matter or the use case others see. It's great if you've never dealt with a monitor not exposing your preferred mode, but you are better left out of this conversation.
    I have absolutely dealt with monitors not exposing the mode I need. I have also hit cases where the only way to enable the mode was to use the custom EDID option.

    This is the problem: I have dealt with this.

    There are many reasons why a mode you want to use will not work.

    1) The monitor does not have the mode in its EDID. This is fixable with a custom EDID, and it is what you fix with xrandr at the moment (sketched below).
    2) The monitor has the mode in its EDID but it does not work. This can be another EDID fault, where the EDID tells the video card to use HDMIa when it is really a HDMIb device, with the result that the HDMI link is not clocked up to the speed the mode needs. These faults are happening more often.
    3) Your monitor's colour is badly wrong. Colour profile information for modern HDR monitors comes across in the EDID, and the processing of these corrections is hardware accelerated in GPUs.
    There are in fact more faults you can hit.
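    For reference, the xrandr route mentioned in point 1 looks roughly like this. A minimal sketch only: the HDMI-1 output name and the modeline numbers are placeholders for whatever cvt and xrandr report on your setup.

    Code:
    # generate a modeline for the resolution/refresh you want (example numbers)
    cvt 1920 1080 60
    # register the modeline with the X server, then attach it to the output
    xrandr --newmode "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
    xrandr --addmode HDMI-1 1920x1080_60.00
    xrandr --output HDMI-1 --mode 1920x1080_60.00
    As the rest of this thread covers, this only helps where the driver accepts modes that are not in the EDID, and it does not exist on Wayland.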

    microcode, I have handled true garbage-tier monitors. These are monitors with issues so bad that the maker is not willing to sell them to customers for anything other than spare parts. Yes, when you want to scrap these monitors you want to fire them up and find out if the LCD panel is any good before going to all the effort of de-casing them.

    With a garbage-tier monitor, firing up in any mode that works with the default EDID counts as lucky. So the idea that I have never dealt with a monitor not exposing my preferred mode is absolutely wrong. The big difference here is that I get to see all the different ways a screwed-up EDID can do you over, and there are quite a few screwed-up EDID problems that will result in custom modes set by xrandr not working.

    Yes, on test benches you use the cheapest ARM boards you can find, just in case the reason the monitor is garbage tier is something like sending live 240 V straight down the HDMI cable. This is how I know about different ARM GPUs not allowing xrandr to set modes that are not in the EDID: I used to use xrandr to put in custom modes, then one day it did not work and I had to find out why.



    • Originally posted by microcode View Post
      Huh, no. Modes are requested by userspace in most cases, not sure why you are so keen on your position on this, and capable of writing so much fluff boiling down to you not understanding either the subject matter or the use case others see. It's great if you've never dealt with a monitor not exposing your preferred mode, but you are better left out of this conversation.
      Also, dealing with garbage-tier monitors gives you a lot of extra knowledge about why particular monitors don't expose particular modes even though the chipset inside them supports them. It is a stupid reality that different mode scaling causes the chipset inside the monitor to generate different levels of heat. The cooling solution of the monitor may not be good enough to take this heat away if particular modes are used. So you set your preferred mode that the monitor is not exposing in its EDID, and inside 6 months your monitor is dead because you did that and overheated the chipset.

      The fact that you can be destroying your hardware by setting custom modes not in the EDID is part of the reason why I don't see a problem with adding custom modes requiring a reboot and a few extra hoops to jump through, as this may make you double-check your manufacturer's details on the monitor before you do it and ruin it.

      PS: running custom modes that are not in the EDID voids your warranty on lots of monitors/TVs as well, and they can have a nice fuse inside set up to blow as soon as you use a non-approved mode.
      Last edited by oiaohm; 20 December 2020, 08:10 PM.



      • Originally posted by Grinness View Post

        I replied on the specifics a few posts above:

        Since the introduction of KMS -- Kernel Mode Settings -- creating modes that are not in the EDID is done differently. See:
        https://www.kernel.org/doc/html/late...uide/edid.html

        It seems to me that all boils down to the use of the nvidia blob.
        If this is the case, complain with your vendor -- not with the community driven efforts of improving the overall stack.

        If the above is not the case, in the link provided you find info on how to create your EDID and get the kernel to load it
        I don't disagree that EDID should be handled by the kernel, but from a user's perspective xrandr is a lot easier to use than creating your own EDID file (and finding out what your bootloader is and how to change kernel parameters). It is also applied instantly rather than requiring a reboot, and that's assuming there aren't issues with the parameters requiring even more changes and thus more reboots.

        As wxedid was mentioned (by another user) I decided to install it:

        1) I opened the program and there really wasn't anything to interact with; I checked the menu and saw the option to import an EDID.

        2) I started searching for how to export the monitor's EDID so as to have something to use with wxedid.

        3) From searching I found that get-edid can dump to a binary file; however, with the Chinese TV being tested I got no actual EDID. In fact, get-edid never stopped running until I manually ended the process after 30 minutes.

        4) I searched for another way to get the EDID; ddccontrol seemed promising but ended up being a dead end. I found a script that actually relies on xrandr to generate the EDID binary ( https://gist.github.com/mvollrath/9aa0198264e6b4890914 ). I'd have liked to avoid relying on xrandr when the whole point is that it isn't available on Wayland, but I grew tired of searching and bit the bullet; even that failed, as xrandr failed to get the EDID of this cursed display. So I decided to use https://github.com/akatrevorjay/edid-generator to get one from scratch.

        5) After cloning and running make I got several EDID files. I decided to grab the 1920x1080 file, but wxedid threw an error at me and suggested I disable the EDID error checks. Even then it didn't display anything; there was just a message in the footer of the program saying allowed sizes ranged from 256 to 4096 bytes, and sure enough, the EDID was only 128 bytes. By compiling edid-decode I was able to read the EDID file and it output the mode information.

        6) Feeling daring, I moved the EDID file to /usr/lib/firmware/edid, then added the following kernel parameter: drm.edid_firmware=edid/1920x1080.bin

        7) Problems started as soon as I saw systemd's output in low resolution, and then, when attempting to load either an Xorg or Wayland session, the display would freeze, not even allowing me to switch TTYs.
        Perhaps I'm a big idiot and there's a super easy way to do this, but even if everything had gone fine (and it didn't in my case) it'd still require a lot more steps and end up being a lot harder than this: https://wiki.archlinux.org/index.php...ed_resolutions

        Considering how much harder it is to get a _bad display_ to output its native resolution on Wayland, I don't think it's surprising there are people staying on Xorg to avoid the headache.

        ----

        Reading the comments I see that KDE now also has a tool to add custom resolutions on Wayland; perhaps both KDE and sway recognized how much of a pain it can be to edit an EDID file and then add it to your kernel parameters in order to get a _bad display_ working.



        • Originally posted by verude View Post
          I don't disagree that EDID should be handled by the kernel, but from a user's perspective, xrandr is a lot easier to use than creating your own edid file (and finding what your bootloader is, and how to change kernel parameters), as well as it's applied instantly rather than requiring a reboot, and that's if there aren't issues with the parameters requiring even more changes and as such more reboots.
          To be correct, I have been doing it the old but more dependable way.


          Since kernel 3.15, to load an EDID after boot, you can use debugfs instead of a kernel command line parameter. This is very useful if you swap the monitors on a connector or just for testing. Once you have an EDID file as above, run:

          # cat correct-edid.bin > /sys/kernel/debug/dri/0/HDMI-A-2/edid_override
          And to disable:

          # echo -n reset > /sys/kernel/debug/dri/0/HDMI-A-2/edid_override
          You can change the EDID in 3.15 and later without reboots. But on the true worst problem-child monitors, fully power cycling the system with an override on the kernel command line, so the EDID of the monitor is never probed, is required. I am in the habit of presuming the worst of a monitor, not the best.

          Originally posted by verude View Post
          1) I open the program and there really isn't anything to interact with, I checked the menu and saw the option to import EDID.
          You were still on the right track at this point.

          Originally posted by verude View Post
          2) I started searching how to export monitor's EDID as to have something to use with wxedid.
          You started to take a wrong turn at this point. You are after a working template; the EDID does not have to come from the problem monitor. I have a stack extracted from different monitors that work, which I use as my base templates.
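          If you want to build such a template stack yourself, one way is to dump the EDID the kernel has already probed from a known-good monitor out of sysfs. A rough sketch; the card0-HDMI-A-1 connector name is just an example and varies per machine:

          Code:
          # list the connectors the kernel knows about
          ls /sys/class/drm/
          # save the probed EDID of a working monitor as a reusable template
          cat /sys/class/drm/card0-HDMI-A-1/edid > good-template.edid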

          Originally posted by verude View Post
          3) From search I find that get-edid can dump to a binary file, however, with the tested chinese tv, I got no actual edid, in fact get-edid never stopped running until I manually ended the process after 30 minutes running.
          Yes, this is exactly what happens; you can end up with a never-ending EDID read.

          Originally posted by verude View Post
          4) Search for another way to get edid, ddccontrol seemed promising but ended up being a dead end. Found a script that actually relies on xrandr to generate the edid binary,
          No, that does not generate an EDID; it is just the xrandr equivalent of get-edid, and if the monitor's EDID is broken that is not going to work either.

          Originally posted by verude View Post
          So I decided to use akatrevorjay/edid-generator to get one from scratch.
          These are bare-bones EDIDs; they will not help you with truly jackass monitors.

          Originally posted by verude View Post
          5) After cloning and running make I got several EDID files, decided to grab the 1920x1080 file but wxedid threw an error at me, and suggested I disabled EDID error checks, but even then, it didn't display anything , there was just a message in the footer of the program saying allowed sizes ranged from 256 to 4096 bytes and sure enough, the EDID was only 128 bytes. Compiling edid-decode I was able to read the EDID file and it output the mode information.
          The 256-byte minimum is so that you have all the extra metadata for the connection and so on. 128 bytes is if you only have the mode information but none of the monitor description, like whether HDMIa or HDMIb is the connection protocol to use.
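          A quick way to sanity-check what you actually have before feeding it to wxedid; a sketch, with my-edid.bin standing in for whatever file you generated:

          Code:
          # 128 bytes = base block only; 256+ = base block plus extension block(s)
          stat -c %s my-edid.bin
          # decode it to confirm which modes and connection details it actually carries
          edid-decode my-edid.bin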

          Originally posted by verude View Post
          6) Feeling daring, I moved the EDID file to /usr/lib/firmware/edid, then added the following line to the kernel parameter: drm.edid_firmware=edid/1920x1080.bin

          Code:
          #define GENERIC_EDIDS 6

          static const char *generic_edid_name[GENERIC_EDIDS] = {
                  "edid/800x600.bin",
                  "edid/1024x768.bin",
                  "edid/1280x1024.bin",
                  "edid/1600x1200.bin",
                  "edid/1680x1050.bin",
                  "edid/1920x1080.bin",
          };
          Trap for the unaware: don't name your custom EDID edid/1920x1080.bin or anything like it. If you do, you risk ending up with the generic EDID built into the kernel, which is exactly what happened to you. So you were not using your own generated EDID but the generic single-mode one built into the kernel. That is not going to behave itself on a horrible monitor.
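          A sketch of how to dodge that trap. The file and connector names here are placeholders, and the per-connector form of drm.edid_firmware is only in newer kernels; the plain drm.edid_firmware=edid/&lt;file&gt; form works too:

          Code:
          # give the custom EDID a name that cannot collide with the kernel's built-in generics
          cp my-panel-fixed.bin /usr/lib/firmware/edid/my-panel-fixed.bin
          # then on the kernel command line, override only the problem connector:
          #   drm.edid_firmware=HDMI-A-1:edid/my-panel-fixed.bin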

          Originally posted by verude View Post
          7) Problems started as soon as I saw systemd's output in low res, and then when attempting to load either a xorg or wayland session the display would freeze not even allowing to switch TTY.
          Originally posted by verude View Post
          Perhaps I'm a big idiot and there's a super easy way to do this, but even if everything were to went fine (and it didn't in my case) it'd still require a lot more steps, and end up being a lot harder than this xrandr#Adding_undetected_resolutions
          On some of the really bad TVs from China this is not going to work with xrandr either, as the monitor will be saying it is HDMIa when it is a HDMIb monitor. Sending a signal as HDMIa to a HDMIb device ends in all kinds of wacky behaviour, including lockups. Yes, this is a value only settable in the EDID.


          Originally posted by verude View Post
          Reading comments I see that kde now also has a tool to add custom resolutions on wayland, perhaps both kde and sway recognized how much of a pain it can be to edit an EDID file and then adding it to your kernel parameters in order to have a _bad display_ working.
          Yes and no. The reality here is that both the KDE and sway workarounds do fail on particular monitors, and then it is back to a custom EDID.

          I am not saying that the custom EDID tooling at the moment does not suck badly; it does. A lot of people don't consider copying a working EDID from a working monitor as a starting point. Yes, I wish I had legally generated full monitor configuration templates I could point people at, but I don't. EDIDs extracted from monitors are technically protected by copyright.

          For some of the China-made monitors with broken EDIDs, if you have someone who can speak the language and read the vendor's website, you can find an EDID file for download on their site that magically fixes everything with the monitor: not just the modes, but the audio out and the colour of the screen become 100 percent correct. Also, if the TV/monitor came with a driver disc (a scary sign that almost always means a broken EDID out of the box), check the Windows .inf drivers; in a lot of cases with badly behaving ones you will find a premade, exact-to-the-monitor EDID file. Yes, days of cursing while attempting to generate an EDID that worked for a monitor, only to find out the correct one was on the driver disc the complete time; I was not happy.



          • Originally posted by oiaohm View Post

            Also, dealing with garbage-tier monitors gives you a lot of extra knowledge about why particular monitors don't expose particular modes even though the chipset inside them supports them. It is a stupid reality that different mode scaling causes the chipset inside the monitor to generate different levels of heat. The cooling solution of the monitor may not be good enough to take this heat away if particular modes are used. So you set your preferred mode that the monitor is not exposing in its EDID, and inside 6 months your monitor is dead because you did that and overheated the chipset.

            The fact that you can be destroying your hardware by setting custom modes not in the EDID is part of the reason why I don't see a problem with adding custom modes requiring a reboot and a few extra hoops to jump through, as this may make you double-check your manufacturer's details on the monitor before you do it and ruin it.

            PS: running custom modes that are not in the EDID voids your warranty on lots of monitors/TVs as well, and they can have a nice fuse inside set up to blow as soon as you use a non-approved mode.
            My monitors just happen to have been among the best money could buy when I got them, I just have interest in more of the modes they are capable of than they expose.

            I don't see why this is such a hard concept for you to grasp. Not everyone just runs their monitors in the default mode, and you don't have to be a jerk about it.


            PS: no, setting non-advertised modes on a monitor will not "destroy" it, that is absurd. What mechanism in the monitor would even accomplish that?
            Last edited by microcode; 21 December 2020, 09:19 AM.



            • Originally posted by microcode View Post
              My monitors just happen to have been among the best money could buy when I got them, I just have interest in more of the modes they are capable of than they expose.
              The hard concept for you to get is that not everyone has the best money could buy. Something that is safe with the best you can buy can seriously be life-threatening with the cheapest you can buy.

              Originally posted by microcode View Post
              I don't see why this is such a hard concept for you to grasp. Not everyone just runs their monitors in the default mode, and you don't have to be a jerk about it.
              I am playing with garbage-tier hardware where the default mode is that the device does not work. So it's not a hard concept for me to grasp that people don't want to run their monitors in the default mode.

              Originally posted by microcode View Post
              PS: no, setting non-advertised modes on a monitor will not "destroy" it, that is absurd. What mechanism in the monitor would even accomplish that?
              This is horribly wrong when your monitor is not the best you can buy. I will list the common ways that setting a non-advertised mode equals a destroyed monitor.

              1) Not enough cooling on the chipset for every possible upscale mode, so the chip in the monitor overheats and dies when you use a mode that was not exposed or listed on the maker's data sheet. Yes, at first it appears to work. This is the good outcome at the cheap end heading towards garbage tier.
              2) The power supply in the monitor being under-rated to do all the modes the monitor chipset supports. This is when you are getting into really dangerous territory. Monitor driver chipsets have ratings for how many watts they draw in each of the different modes, and this is where things get evil for those designing monitors: the designer cuts the most power-hungry/heat-generating modes out of the EDID, then cuts down the power supply to match. An overloaded power supply has the A, B and C failures below if you use one of the modes the design intentionally cut out, because the design said those modes would never be used.
              A) This failure is good-ish: the monitor just goes black. It's a thermal fuse that resets itself when you leave the monitor powered off for a while.
              B) It's a true fuse of the blow-once kind and the monitor is dead. Still not the worst outcome.
              C) The power supply in the monitor overloads, goes completely stupid, and due to poor circuit design sends insane voltage back down the HDMI cable; kiss goodbye to the GPU/motherboard. The worst fry-outs I have seen from garbage-tier power supplies in monitors have put 240 V AC at 10 amps out the HDMI port of the monitor. Yes, that 240 V AC at 10 amps could have turned the case of your computer live as well if it happens to be something like a metal-cased laptop that is not earthed.

              Setting non-advertised modes on garbage-tier and close-to-garbage-tier hardware can get really expensive and, in the worst cases, life-threatening/corpse-making. Please note I might be playing with garbage tier, but relatives of what I am playing with were sold to consumers without the broken EDID or the like.

              Basically, microcode, you have bought into the myth that setting non-advertised modes never poses a danger. Before setting a non-advertised mode you really should check that your monitor was in fact built to do it and that there was not any cost cutting in either the cooling design or the power supply inside the monitor.

              microcode, I mentioned using cheap ARM boards for testing these garbage-tier and close-to-garbage-tier monitors; there is no way I am going to use, say, a few-thousand-dollar gamer rig when it could go completely wrong and fry the lot.

              I am not against doing a custom EDID to add in modes that the vendor decided not to expose, by the way. But that had better be something the monitor vendor decided not to expose, not random probing of a monitor. Randomly probing whether X modes work on garbage-tier or close-to-garbage-tier hardware is a do-you-feel-lucky operation. If you are not careful with garbage-tier or close-to-garbage-tier monitors/TVs it is really simple to end up with a stack of dead hardware and possibly you dead as well.

              Yes the grade of monitor you are dealing with is very important to take into account.

              There are some quite high-end monitors with high power-efficiency claims that got there by cutting back the power supply and restricting the EDID. Yes, some of these are monitors people would think are among the best for the money. They are fine monitors if you read the specification sheet and notice that it lists a stack of modes disabled and not to be used.

              microcode, Apple laptops have been a great example of people buying something thinking it's the best for the money and it not performing well because Apple did a crap cooling solution. There are monitors as bad as Apple laptops.

              How do you under-clock a monitor chipset so it produces less heat and works with a bad thermal design? That's right: cut the supported modes. Once you have cut the supported modes you cut the power supply back, because no one is going to be using all that power. Then a person like you comes along and goes, hey, I will set a custom mode on that. Yep, you have just basically overclocked the thing with an unsuitable power supply and cooling. Things don't always end well from there.



              • Originally posted by oiaohm View Post
                C) The power supply in the monitor overloads, goes completely stupid, and due to poor circuit design sends insane voltage back down the HDMI cable; kiss goodbye to the GPU/motherboard. The worst fry-outs I have seen from garbage-tier power supplies in monitors have put 240 V AC at 10 amps out the HDMI port of the monitor. Yes, that 240 V AC at 10 amps could have turned the case of your computer live as well if it happens to be something like a metal-cased laptop that is not earthed.
                This is ridiculous. There may be somewhere in the world where electronics work like that, but I guarantee you my UL listed power supplies are not going to magically short to case ground, and there aren't even any exposed metal parts on most monitors (mine included, one of which uses an external power supply with no ground lead to the power supply in the first place!). Furthermore, the most common effect of exceeding the design pixel rates of a monitor chipset is a blank screen, and one of the most common ways people actually use custom modelines, because display overclocking is not generally that fruitful, is to go considerably lower than the design rates.


                No, my monitor is not going to burst into flames because I ran it at a seventh the designed maximum framerate at its native resolution. You can keep yelling the reddit phrase “garbage tier” but I don't own any monitors fitting that description, and I still want to use more modes than the monitor advertises.

                I have no idea why you are so invested in dunking on people just sharing their experiences here. Clearly you don't understand how electronics work, how digital display connectors work, how monitor chipsets work, how the kernel works; you have one tiny bit of information where you are aware that people can edit EDID blobs, but you fall down when it comes to failure modes, and basically all other information about the subject. Why not just learn instead of saying so much nonsense? Have you ever actually run a monitor chipset to failure? I suspect not. Do you have any idea what makes a bad power supply or what impact higher rates will have on an actual monitor's chipset? I get the impression you do not.

                Stop taking shots in the dark and go learn something.

                My PG279Q is not going to short live to chassis ground because I ran it at 24Hz, despite your ludicrous scaremongering; it doesn't even have a chassis ground, and even if it did there would be no path between that and the power supply.
                Last edited by microcode; 21 December 2020, 11:00 AM.




                • Originally posted by microcode View Post
                  This is ridiculous. There may be somewhere in the world where electronics work like that, but I guarantee you my UL listed power supplies are not going to magically short to case ground, and there aren't even any exposed metal parts on most monitors (mine included, one of which uses an external power supply with no ground lead to the power supply in the first place!).
                  It's the integrated power supply you see more commonly in TVs from China. They are found in quite a few places.

                  Originally posted by microcode View Post
                  Furthermore, the most common effect of exceeding the design pixel rates of a monitor chipset is a blank screen, and one of the most common ways people actually use custom modelines, because display overclocking is not generally that fruitful, is to go considerably lower than the design rates.
                  It's not that straightforward: just because you are sending less data across the HDMI does not mean the chipset in the monitor is not working harder on scaling. Yes, the blank screen is a common effect, but it is not the only failure effect.

                  Originally posted by microcode View Post
                  No, my monitor is not going to burst into flames because I ran it at a seventh the designed maximum framerate at its native resolution.
                  Yours is not, but the reality is that running at a seventh of the designed maximum frame rate at native resolution, on particular monitor chipsets with particular LCDs, means 4 times the power consumed by the chipset. The fun of mid-frame corrections with HDR. It has been one of the fun ones: people thought that variable refresh rate monitors always save power at the lower frequencies as well, but the truth is that if you put a power meter on some monitors the highest power draw is at the lower refresh rates.

                  I am not misunderstanding the failure modes. Please note I did not say the monitor is going to burst into flames; that is insanely rare, to the point it never happens any more. Even 240 V at 10 amps out through the HDMI does not do that; just a lot of hardware dies, and possibly a human or two.

                  Originally posted by microcode View Post
                  My PG279Q is not going to short live to chassis ground because I ran it at 24Hz, despite your ludicrous scaremongering; it doesn't even have a chassis ground, and even if it did there would be no path between that and the power supply.
                  Not everyone is playing with a PG279Q-class monitor with an external power-brick supply. There are still a lot of monitors out there with internal 240/110 V AC power supplies. Not all of those internal power supplies are well designed, and the fatality risk does not stop there. Some switch-mode external power bricks have also ended up directly passing the AC through due to bad design when overloaded. Yes, your PG279Q with a cheap, poorly made China-clone power brick could put 110/240 V AC live on the DC ground, which ends up going straight out the ground shielding of all the ports on that monitor. When this happens nothing catches fire; some wiring fuses out, but before that happens enough amps are passed to kill a human and a large number of circuits.

                  Yes, the PG279Q does in fact draw more power than you expect with a 24 Hz feed; it draws more power than when it is fed a full-rate signal, in fact. It would have paid to check your own hardware before commenting, microcode. The PG279Q has one of the problem-child chipsets in it; of course it is fine, because the ROG monitors combine that chipset with a well and truly over-specified power supply and proper cooling. That proper cooling and proper power supply is not there in all monitors that use that chipset.

                  This is the problem: those using high-end monitors mistakenly think that what they can do safely with their monitors, people with the low-end versions of those monitors can do as well. Sorry to break your bubble, but that is not the case. You would have been warned of this if you were not presuming that a lower-Hz signal to the monitor equals less processing and power usage in the chipset inside the monitor.

                  Really, in some ways I am lucky you have a PG279Q; you could have had one of the ones that does properly scale its power usage based on Hz, but those are rare. This is why it is horrible. I guess you were not thinking that 24 Hz could be pulling down as much power as running the monitor at 165 Hz.

                  Yes, that fun argument: "I did not overclock my monitor", and the vendor says it fried because you under-clocked it, and yes, that is a real, valid answer.

                  Before setting any mode, be it higher or lower Hz or a different resolution needing different scaling, it pays to check that the monitor is truly built to do it. In your case the ROG PG279Q is designed to do it. But not everyone's monitors are.

                  microcode, like it or not, I am not scaremongering. Like it or not, playing around with monitor modes that the maker of the monitor did not test can bring out some very horrible problems, particularly when you are talking about cheaper-tier stuff.

                  The PG279Q chipset, if you don't know, microcode, is rated for above 30 Hz; 24 Hz is under-clocked, leading to the chipset running well and truly outside its specification and pulling down a lot more power than you would first expect. Why it is so bad is that the panel has a minimum of 30 Hz as well, so you have just triggered a lot more colour-correction processing. If you wanted to treat your PG279Q inside the specification of what it is meant to drive, you would not have gone under 30 Hz. From 165 Hz down to 30 Hz it is power saving all the way down. Power usage starts going up once you go under 30 Hz, and quite steeply in fact.

                  This is a true "here be dragons". Before setting modes you really do need to have read your monitor's specifications and understand when you are going outside them: either too low a Hz, too high a Hz, or some scaling the hardware does not like. Any one of these is going to lead to higher power usage than the monitor designer could have expected, and higher power usage also equals higher heat generation.

                  This is something else people don't normally do: have a power meter hooked up to the monitor when trying new modes. It's useful to see if you have hit a quirk.



                  • Originally posted by rastersoft View Post
                    That is the forum where the people who create the code for the main desktops talk about what is needed and how to do it in the best way. And AFTER they all decided that, is then when they start coding.

                    But maybe you think that the best way of doing it is to just let each party implement their own vision, and hope that, magically, all of them end being interoperable.
                    I was talking about features not being implemented at all, duh. This can mean both delayed implementation but also skipping some stuff altogether due to lack of interest or resources. Third scenario is an existing protocol being extended, but some implementation remaining stuck with the older spec, causing confusion for end-users over why something works sometimes but then just doesn't. (Similar to the DnD mess of today.)

                    Although right now we can also talk about proprietary protocols, as the features required by real DEs such as Gnome and KDE are not there as of today, so both DEs have their own implementations for certain features. Depending on the complexity and rewrites needed, once (IF) these protocols are standardized, it will take a long time for the proprietary bits to be replaced with standard stuff.
                    Last edited by curfew; 22 December 2020, 06:47 AM.



                    • Originally posted by curfew View Post
                      I was talking about features not being implemented at all, duh. This can mean both delayed implementation but also skipping some stuff altogether due to lack of interest or resources. Third scenario is an existing protocol being extended, but some implementation remaining stuck with the older spec, causing confusion for end-users over why something works sometimes but then just doesn't. (Similar to the DnD mess of today.)

                      Although right now we can also talk about proprietary protocols, as the features required by real DEs such as Gnome and KDE are not there as of today, so both DEs have their own implementations for certain features. Depending on the complexity and rewrites needed, once (IF) these protocols are standardized, it will take a long time for the proprietary bits to be replaced with standard stuff.
                      Ok, I see your point. But the question is... how can that be solved? If your solution is to stay with X11, how do you propose adding modern capabilities like deep color to it? More extensions to an already bloated protocol? (Check how many protocol extensions X11 already has, for example.) Also, remember that a lot of the mess today is due to the X11 design being based on "mechanism, not policy", which means that each program (not only each DE, but each program) can implement things like DnD however it prefers.

                      Wayland is a solution to a real problem. X11 has a lot of design problems that cannot be fixed via extensions, and it also has a lot of mandatory functionality that nobody uses today but that can't be removed. Of course, we are in a transition period, so it is normal that there are still things that aren't as polished as we would want, but also don't forget that you can still use X11, and will be able to for many more years.

                      And, what is more important, remember that there IS a lack of resources, as you already said: there are a lot of people demanding things, but very few people doing the actual work, coding all these things. This means that they must focus first on the most important and common features, and in this specific case, being able to set non-standard resolutions with a common app is not a common case, so it makes sense to send it to the back of the queue. The day has only 24 hours, and even programmers need 8 for sleeping, and to watch a movie sometimes.

