Open-Source Linux Driver Support For 4K Monitors


  • #11
    Somehow, I suspect that most of this "4k support" stuff involves two considerations:
    1) That the necessary protocols (i.e., HDMI 2.0, DP 1.2) are actually implemented in software,
    2) Making sure that things aren't *broken* in ways that just haven't been discovered with conventional displays. E.g., something that divides the screen into 4 quadrants and switches them around would be considered "broken".



    • #12
      I can tell you this: once those 4K monitors with the new HDMI 2.0 are available next year, I will be doing some gaming and media testing with Ubuntu (maybe Debian too, depending on how I can set things up) on the latest GPUs I can get my grubby little paws on, and I will share the results with you wonderful penguin-loving oddballs.



      • #13
        Originally posted by schmidtbag View Post
        And how exactly do you expect the software to support something the hardware doesn't support?
        Forward thinking specs. Why shouldn't the kernel support 10k resolutions? Maybe someone will announce a 10k TV tomorrow. There's no technical reason to limit screen resolution output within the kernel. Sure, no device on the planet may support them yet, but the support is there for the day one does.

        Now, if you want to limit what outputs you can use within the graphics card's driver layer, that's fine. But there's NO reason the kernel should be arbitrarily limited in this way. The Kernel should be able to support any possible screen resolution.

        It doesn't matter what the software is capable of if the hardware can't do it, so it's better to not let the drivers say "hey look, I can do 4k screens!" and someone attaches one only to find out it doesn't work due to a hardware limitation.
        Driver != Kernel. If the GPU can handle 4k output, if the device can handle 4k output, and the transport layer can handle 4k output, then it should be allowed as an output. Even if this isn't the case, the Kernel should have the ability to output 4k should these conditions become true.

        This isn't the same thing as having a CPU too slow to play a game, because the game will still run. If you buy a 4K screen because your drivers SAY they can support it, you're going to be pretty unhappy to find out the screen won't even leave standby.
        Again, Driver != Kernel. If the screen doesn't support 4k, the drivers should disable it as an option. The Kernel, however, should be fully capable of handling 4k. And 8k. And 200k. It's the driver layer which specifies what the actual output is going to be for a given device.

        Setting screen resolutions is a lot more complicated than most people are aware of - there's a lot more than width, height, color depth, and refresh rate. It isn't as simple as just flicking a switch and suddenly getting 4K resolutions.
        All of which the kernel can also handle. There is no technical reason why the Kernel shouldn't have support for every possible screen resolution, color depth, and refresh rate under the sun. If you want to limit what is output for a connected device in driver land, fine, but the kernel itself shouldn't have any limitations because "it's not supported by hardware yet".
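
        To make the split being argued here concrete, below is a minimal standalone C sketch of the idea: generic code can describe any mode, and a driver-side check decides whether a particular piece of hardware can actually drive it. It is loosely modeled on the concept behind DRM's mode_valid connector callback, but the types, limits, and names here are simplified illustrations, not the real kernel API.

        Code:
        #include <stdio.h>

        /* Simplified mode description; a real mode carries far more
         * (sync positions, blanking, flags, etc.). */
        struct display_mode {
            unsigned clock_khz;          /* pixel clock */
            unsigned hdisplay, vdisplay; /* active resolution */
        };

        enum mode_status { MODE_OK, MODE_CLOCK_HIGH, MODE_TOO_BIG };

        /* Per-device limits that only the driver knows about. */
        struct hw_limits {
            unsigned max_clock_khz;
            unsigned max_hdisplay, max_vdisplay;
        };

        /* Driver-side filter: the generic layer stays resolution-agnostic,
         * the driver rejects whatever its hardware cannot drive. */
        static enum mode_status mode_valid(const struct hw_limits *hw,
                                           const struct display_mode *m)
        {
            if (m->clock_khz > hw->max_clock_khz)
                return MODE_CLOCK_HIGH;
            if (m->hdisplay > hw->max_hdisplay || m->vdisplay > hw->max_vdisplay)
                return MODE_TOO_BIG;
            return MODE_OK;
        }

        int main(void)
        {
            struct hw_limits old_card = { 165000, 1920, 1200 }; /* single-link era */
            struct display_mode uhd = { 594000, 3840, 2160 };   /* 4K@60 */

            printf("4K on the old card: %s\n",
                   mode_valid(&old_card, &uhd) == MODE_OK ? "accepted" : "rejected");
            return 0;
        }

        The limit lives in the per-device check, not in the generic mode handling, which is the split being argued for above.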



        • #14
          While someone could design hardware to potentially work with 10k modes or whatever, it's probably not a good use of resources. Support for arbitrary modes comes down to a few things:
          - link support. You need to make sure the spec supports high enough clocks over the link to support the mode. E.g., HDMI uses single-link TMDS. The original TMDS spec only supported clocks <= 165 MHz. Newer versions of HDMI have increased this. (A quick arithmetic check of these limits follows this list.)
          - clock support. The clock hardware on the ASIC has to be validated against higher clock rates. If there is no equipment that uses the higher clocks, it's harder to validate, and it increases validation costs to support something that may or may not come to be during the useful lifetime of the card.
          - line buffer support. Once again, it takes die space for the line buffers. Extra die space costs money. You don't want to add and validate more line buffer than you need to cover a reasonable range of monitors that will likely come to be during the card's useful lifetime.
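
          As a rough illustration of the link-support point above, here is a small standalone C program. It is not driver code: the timings are the standard CEA-861 totals for 3840x2160, and the link ceilings are the commonly quoted approximate TMDS clock limits for original single-link DVI/HDMI, HDMI 1.4, and HDMI 2.0.

          Code:
          #include <stdio.h>

          /* Simplified mode timing: total pixels per line/frame and refresh rate. */
          struct mode {
              const char *name;
              unsigned htotal;   /* active + blanking, horizontal */
              unsigned vtotal;   /* active + blanking, vertical   */
              unsigned vrefresh; /* Hz */
          };

          /* Pixel clock in kHz = htotal * vtotal * refresh / 1000. */
          static unsigned long pixel_clock_khz(const struct mode *m)
          {
              return (unsigned long)m->htotal * m->vtotal * m->vrefresh / 1000;
          }

          int main(void)
          {
              /* CEA-861 totals for 3840x2160: 4400 x 2250. */
              struct mode m4k30 = { "3840x2160@30", 4400, 2250, 30 };
              struct mode m4k60 = { "3840x2160@60", 4400, 2250, 60 };
              struct mode *modes[] = { &m4k30, &m4k60 };

              /* Approximate TMDS clock ceilings, in kHz. */
              unsigned long single_link = 165000; /* original single-link TMDS */
              unsigned long hdmi_1_4    = 340000; /* HDMI 1.3/1.4 */
              unsigned long hdmi_2_0    = 600000; /* HDMI 2.0 */

              for (int i = 0; i < 2; i++) {
                  unsigned long clk = pixel_clock_khz(modes[i]);
                  printf("%s needs %lu kHz: single-link %s, HDMI 1.4 %s, HDMI 2.0 %s\n",
                         modes[i]->name, clk,
                         clk <= single_link ? "ok" : "no",
                         clk <= hdmi_1_4    ? "ok" : "no",
                         clk <= hdmi_2_0    ? "ok" : "no");
              }
              return 0;
          }

          The same arithmetic shows why 4K@30 fits under HDMI 1.4's ~340 MHz budget while 4K@60 (594 MHz) needs HDMI 2.0.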

          Also, the current kernel and even older kernels can handle 4k timings just fine using the standard mode structs. What got added in 3.12 was support for fetching 4K modes from the extended tables in the monitor's EDID.
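
          For anyone curious what "handled by the standard mode structs" looks like, here is a simplified sketch of a 4K mode entry. The field names are loosely modeled on the kernel's struct drm_display_mode and the numbers are the CEA-861 timing for 3840x2160@60; the real struct has more fields, so treat this as an illustration rather than actual kernel code.

          Code:
          /* Simplified mode description, loosely modeled on struct drm_display_mode. */
          struct display_mode {
              unsigned clock;    /* pixel clock in kHz */
              unsigned hdisplay, hsync_start, hsync_end, htotal;
              unsigned vdisplay, vsync_start, vsync_end, vtotal;
              unsigned vrefresh; /* Hz */
          };

          /* 3840x2160@60 (CEA-861 timing, 594 MHz pixel clock). Nothing here
           * outgrows ordinary integer fields, which is why even older kernels
           * could already carry 4K timings; 3.12 added pulling such modes out
           * of the EDID's extension tables. */
          static const struct display_mode mode_2160p60 = {
              .clock = 594000,
              .hdisplay = 3840, .hsync_start = 4016, .hsync_end = 4104, .htotal = 4400,
              .vdisplay = 2160, .vsync_start = 2168, .vsync_end = 2178, .vtotal = 2250,
              .vrefresh = 60,
          };
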
          Last edited by agd5f; 10 September 2013, 10:21 AM.



          • #15
            Originally posted by gamerk2
            Forward thinking specs. Why shouldn't the kerne..
            Originally posted by agd5f View Post
            While someone could design hardware....
            thank you for clearing this up!

