Wayland Protocols 1.30 Introduces New Protocol To Allow Screen Tearing


  • #51
    Originally posted by birdie View Post

    You started your post with claiming that I praise NVIDIA and Intel. Lying through your teeth again, again, again, again, again and fucking again. Show me a single fucking post where I praise either of these companies. I dare you.
    You defend Intel or Nvidia if someone puts blame on them.

    Phoronix: NVIDIA Makes The PhysX 5.1 SDK Open-Source Back in 2019 NVIDIA open-sourced the PhysX 4.1 SDK and was working on a PhysX 5.0 open-source code drop while we haven't heard anything more on the matter in the past two years. Coming out this morning as a surprise is the NVIDIA PhysX 5.1 SDK open-source release...

    Phoronix: Intel Core i5 12400 "Alder Lake": A Great ~$200 CPU For Linux Users Formally announced at CES, the Core i5 12400 and other Alder Lake non-K desktop CPUs are beginning to appear in retail channels. Last week I was able to buy an Intel Core i5 12400 "Alder Lake" from a major Internet retailer for


    Here it looks like you're trying to sell people on Intel CPUs over AMD. When people aren't feeling it, you complain that "It's amazing that AMD has charged top dollar for the past two years now and still everyone here keeps talking about Intel and NVIDIA as greedy companies. AMD bias on Phoronix is simply insane." When someone calmly said they were happy with their AMD CPU and GPU, you responded by listing off what features Nvidia offers.

    Here's you asking someone else to do the same thing you just asked me, well, almost. You asked them to show the exact posts where you defended Nvidia or attacked Linux. So I guess I and others are just making up your bias for Nvidia and Intel and against Linux.

    In fact, that whole topic is full of you posting benchmarks and reviews where you're going hard for Intel.

    Originally posted by birdie View Post
    Won't read the rest of your shit, I know my worth
    Oh I'm aware. I remember when you claimed you fixed a bunch of bugs in the kernel.

    Originally posted by birdie View Post
    (that's just for the kernel) and I don't need no-ones from Phoronix.com shitting on me. That won't work, that can't work. LMAO.
    A bunch have. Denying it doesn't mean it doesn't happen.

    Originally posted by birdie View Post
    The only reason why I'm replying is because VBulletin doesn't hide your posts despite you being on my BL.
    Yeah, me and a bunch of other people that you've said the same thing to. What value do you think there is in telling someone they're on your block list, especially when it apparently doesn't work?

    Originally posted by birdie View Post
    Amazing how my initial reply in the topic was on-point and you starting talking about me. Ad hominem is strong with lowly people having nothing to say
    And the debate lord stuff comes out...again lol

    "That's called ad hominem. That's a fucking logical fallacy, you've completely failed, and I'm getting the fuck out of this intellectually handicapped discussion for good."

    "Yeah, let's start calling names and use argumentum ad hominem. Kudos! You can be proud of yourself."

    "In your rant you have not addressed a single argument I've given but you've surely used ad hominem to no end.​"

    You mention ad hominems like crazy. Ever wonder why you're so frequently on the receiving end of them? It must be because everyone else you've interacted with is at fault and not you /s

    Comment


    • #52
      Originally posted by TemplarGR View Post

      You don't need 130% scaling though.... You think you do, but you don't. It will always end up blurry and distorted. A 30% increase is not really different enough to be worthwhile anyway. Just increase the font size and call it a day. It is what I have done, and I have a better picture without the scaling BS.
      This is absolutely wrong. I'm not even going to address the "does 30% make a difference" comment, because that is entirely dependent on the user and their setup; it's not something you could possibly know in the first place, and acting like it is shows great ignorance.

      First of all, it's not just 30% scaling, it's anything from 10% to 90%, which makes this a moot point: 200% scaling is far too big, 100% is far too small, and anything in between is fucked.

      Secondly, increasing the font size does not "just work" across all apps. It might work for some Qt apps, and it might work for some GTK apps, but no: the universal way to support scaling is for the compositor to tell the app what scale to use. This affects UI placement and icon scaling (very important for SVG icons).

      And third, proper fractional scaling explicitly prevents the blur and distortion as much as possible. Sure, some frameworks might not handle scaling well; Qt, for instance, at least handles fractional scaling somewhat well. But in the end, you need a cross-platform way to tell the UI kit what to scale by: some UI kits like GTK don't handle it at all, some kits handle it perfectly fine, and some things might just not scale at all.

      So a protocol is needed to mediate this, since the UI kit should always be the one to handle it whenever possible: there is a good chance it will handle it best, as UI-aware scaling mitigates most if not all of the problems with fractional scaling. The compositor also needs feedback, since sometimes having the client scale part of the way and the compositor scale the rest is the next best thing, so the kit needs to be able to tell the compositor, "OK, how about I scale to X and you do the rest" (same as when the kit only supports integer scaling, like some lesser kits such as GTK).
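
      To make that compositor-to-toolkit handshake concrete, here is a minimal, self-contained C sketch of the arithmetic a UI kit might do once a compositor hands it a preferred fractional scale. It borrows the convention from the fractional-scale-v1 proposal of expressing the scale as an integer count of 1/120ths; the function names and the rounding choice are illustrative, not real protocol bindings.

```c
/* Toy model of fractional-scale-aware buffer sizing.  Assumes the compositor
 * advertises the preferred scale as an integer number of 1/120ths (e.g.
 * 150 == 1.25x), as in the fractional-scale-v1 proposal; everything else
 * here is illustrative. */
#include <stdio.h>

/* Convert a logical size to physical pixels for a scale given in 1/120ths.
 * Rendering at this size and presenting the buffer 1:1 is what lets the
 * client avoid the resampling blur that compositor-side scaling causes.
 * Ceiling division is one plausible rounding choice. */
static unsigned physical_size(unsigned logical, unsigned scale_120ths)
{
    return (logical * scale_120ths + 119) / 120;
}

int main(void)
{
    const unsigned logical_w = 800, logical_h = 600;
    /* A few scales a compositor might advertise: 100%, 125%, 150%, 175%. */
    const unsigned scales[] = { 120, 150, 180, 210 };

    for (unsigned i = 0; i < sizeof scales / sizeof scales[0]; i++) {
        printf("scale %.2fx -> %ux%u buffer for a %ux%u logical surface\n",
               scales[i] / 120.0,
               physical_size(logical_w, scales[i]),
               physical_size(logical_h, scales[i]),
               logical_w, logical_h);
    }
    return 0;
}
```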

      Comment


      • #53
        Originally posted by Quackdoc View Post

        This is absolutely wrong. I'm not even going to address the "does 30% make a difference" comment, because that is entirely dependent on the user and their setup; it's not something you could possibly know in the first place, and acting like it is shows great ignorance.

        First of all, it's not just 30% scaling, it's anything from 10% to 90%, which makes this a moot point: 200% scaling is far too big, 100% is far too small, and anything in between is fucked.

        Secondly, increasing the font size does not "just work" across all apps. It might work for some Qt apps, and it might work for some GTK apps, but no: the universal way to support scaling is for the compositor to tell the app what scale to use. This affects UI placement and icon scaling (very important for SVG icons).

        And third, proper fractional scaling explicitly prevents the blur and distortion as much as possible. Sure, some frameworks might not handle scaling well; Qt, for instance, at least handles fractional scaling somewhat well. But in the end, you need a cross-platform way to tell the UI kit what to scale by: some UI kits like GTK don't handle it at all, some kits handle it perfectly fine, and some things might just not scale at all.

        So a protocol is needed to mediate this, since the UI kit should always be the one to handle it whenever possible: there is a good chance it will handle it best, as UI-aware scaling mitigates most if not all of the problems with fractional scaling. The compositor also needs feedback, since sometimes having the client scale part of the way and the compositor scale the rest is the next best thing, so the kit needs to be able to tell the compositor, "OK, how about I scale to X and you do the rest" (same as when the kit only supports integer scaling, like some lesser kits such as GTK).
        What scaling in general does is "fix" displays whose resolution is too high for their physical size. If you have, for example, a 60 inch 4K monitor, you don't need scaling at all. But if you have a 15 inch 4K monitor, you do, because everything will look tiny. Still, integer scaling is perfect for such resolutions and works just fine without any issues.

        Problems arise when someone has a lower resolution than 4K, for example 1440p. Then scaling like 200% is too large, but a resolution like 1440p is not too far off 1080p. While 30% scaling might seem nice for such a case, most of the time you can be served by just increasing font sizes. On Windows, IIRC, you can even increase window panel sizes and other elements alongside the fonts; you used to be able to in the past, at least. So you can get by without a universal 130% scaling that will always be blurry and/or use more resources.

        For me, small scaling factors don't have any real use or purpose for the vast majority of PC monitors, and 200% scaling is perfectly fine for those who have small 4K monitors. Sure, it would be fine if fractional scaling existed; more features never hurt. But why spend resources on implementing it when eventually everyone and their dog is going to be using a 4K display?

        Comment


        • #54
          Originally posted by treba View Post

          Is that really still a thing if one can simply get a 240Hz screen?
          No, it's not, which is why 240+ Hz screens are one of the primary use cases for this feature, especially if you play shooters where latency matters more than making sure a single screen refresh out of millions is free of tearing at the cost of globally lower FPS.

          If you are playing a game that can spit out 240+ FPS quite consistently, there is no point in having VSync: even if there is the very rare tear you won't notice it, so in such cases enabling VSync just hurts performance for no reason.

          On another note, the fact that there seems to have been pushback against such a feature in the initial design of Wayland shows how out of touch it was with users in the early days, but it's good to see that it has been added.
          Last edited by mdedetrich; 22 November 2022, 06:52 AM.

          Comment


          • #55
            Originally posted by stormcrow View Post

            Get over yourself. There's no reason gamers can't have both proper display output and correct input latency. The problem isn't the display stack and it's not v-sync. It's gaming engines that date back to single threaded ancestors that don't properly separate out input threads, display/rendering threads, and storage threads. Many game engines still tie all their physics and input into their display engine, hence input lag. The problem isn't v-sync. The problem is in the engines themselves.

            Multithreading, concurrency, and parallelism are the answer, but many gaming houses aren't looking for genuine solutions. They're pushing content out as fast as they can using off-the-shelf solutions that were designed and mostly coded when systems still couldn't handle more than one or two threads.
            This is only really half true. While it is true that games were coded like this, that is fast becoming a relic, and modern game engines are now properly multithreaded (yay Vulkan/DX12/Metal for allowing developers to create properly multithreaded rendering engines without getting PTSD). More critically, the same games that are programmed in the primitive way you describe are the ones that now run at 120+ FPS on a modern graphics card, so it's kind of a moot point that they are not that efficient: by virtue of being older they run at higher FPS anyway (and this will only get better as time progresses).

            Now if we get to modern AAA games (most of which don't have the problems you describe): unless you have a 4090, on higher resolution screens and depending on the game you may have trouble getting a consistent 120+ fps, and in such cases disabling VSync and using FreeSync/G-Sync can help and is a far better solution than VSync. The thing is that VSync, regardless of whether you have a properly multithreaded game engine or not, is not free: double/triple buffering has a cost, and that is by design.

            Heck, with FreeSync/G-Sync, at least when it comes to gaming, the concept of a tear-free screen via VSync is primitive and old. That's why FreeSync/G-Sync were created: they let the monitor dynamically adjust its refresh rate to whatever frames the GPU can output, which is a far more efficient design than buffering frames and comparing them. With FreeSync being open, at some point it wouldn't be surprising if almost every monitor ships with FreeSync apart from the crap you might buy at Walmart. Desktop compositing is different, but that gets to the point of how Wayland's initial design was so basic/small that it only cared about desktop compositing and nothing else.
            Last edited by mdedetrich; 22 November 2022, 07:07 AM.

            Comment


            • #56
              Originally posted by Alexmitter View Post

              That is not even a thing at 60 Hz; there can be a delay in frame-to-screen time, but it's so low that even sensitive people should not be able to truly notice it.
              On a 90 Hz, 120 Hz or higher screen, that time is even lower, and I have serious doubts that even the most pro gamer(TM) can feel anything.

              I for myself would rather have correctly drawn and presented frames than a tiny bit less latency.
              That is not the biggest issue (although it is part of it).
              In the next few sentences I am assuming 60 Hz.
              One frame takes 16.66 ms to produce. In most reaction-time benchmarks on a computer I score below 200 ms, which means 16.6 ms is close to 10% of my latency. That is definitely noticeable. Many people on a configuration like mine are even faster (160-180 ms), and for them 16.6 ms would be relatively even more significant. Keep in mind I am talking here about full end-to-end latency: from human reaction, to mouse, to processing, to display, to pixel response time.

              OK, but this is not the worst part.

              With double-buffered VSync, you always end up at 30 fps or 60 fps. So if the game is internally running at 50 fps, you end up constantly swapping between 30 fps and 60 fps pacing. That means you get microstutters: some frames are fluid (those presented at a 60 fps cadence) and some are not (33 ms is a lot), and this very easily degrades the experience. You could use triple buffering, which allows rates like 45 fps and improves fluidity, but triple buffering also means up to 2 additional frames of lag, which implies 33 ms of extra lag in the best case and even 66 ms during a temporary lag spike.
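
              The 30/60 fps quantization is easy to see with a toy model. The following self-contained C sketch (illustrative frame times, ignoring pipelining and triple buffering) snaps each frame's presentation to the next 60 Hz vblank and prints the resulting frame pacing:

```c
/* Simplified model of double-buffered vsync on a 60 Hz display: a finished
 * frame can only be shown on a vblank boundary, so the time between two
 * presentations snaps to a whole multiple of 16.67 ms.  Render times that
 * hover just above the vblank period therefore flip between 60 fps and
 * 30 fps pacing (microstutter).  Pipelining is deliberately ignored. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double vblank_ms = 1000.0 / 60.0;              /* ~16.67 ms */
    /* Render times (ms) of a game averaging roughly 50-55 fps. */
    const double render_ms[] = { 18.0, 15.5, 19.2, 16.0, 20.5, 15.0, 18.8 };
    double prev_present = 0.0, present = 0.0;

    for (unsigned i = 0; i < sizeof render_ms / sizeof render_ms[0]; i++) {
        /* Rendering of frame i starts when frame i-1 is presented, and the
         * result waits for the next vblank after it is finished. */
        double ready = present + render_ms[i];
        present = ceil(ready / vblank_ms) * vblank_ms;
        if (i > 0)
            printf("frame %u: rendered in %.1f ms, presented %.1f ms after "
                   "the previous one (%.0f fps pacing)\n",
                   i, render_ms[i], present - prev_present,
                   1000.0 / (present - prev_present));
        prev_present = present;
    }
    return 0;
}
```

              With render times hovering just above 16.7 ms, the pacing flips between 16.7 ms and 33.3 ms intervals, which is exactly the microstutter being described.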

              Comment


              • #57
                I like how they call this feature "enable tearing" instead of "disable vsync".
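
                For what it's worth, the naming reflects how the protocol actually works: a client doesn't "disable vsync" globally, it attaches a per-surface presentation hint that the compositor is free to honour or ignore. Below is a rough C sketch of the client side, assuming the wayland-scanner generated bindings for tearing-control-v1 and omitting all registry and surface boilerplate (the header name and setup details depend on your build):

```c
#include <wayland-client.h>
#include "tearing-control-v1-client-protocol.h"  /* generated by wayland-scanner */

/* Ask the compositor to present this surface asynchronously (tearing allowed).
 * The hint is per-surface state applied on commit; the compositor may still
 * choose to keep presenting synchronized. */
static struct wp_tearing_control_v1 *
request_tearing(struct wp_tearing_control_manager_v1 *manager,
                struct wl_surface *game_surface)
{
    struct wp_tearing_control_v1 *tc =
        wp_tearing_control_manager_v1_get_tearing_control(manager, game_surface);

    /* ASYNC = "present immediately, tearing is acceptable";
     * VSYNC = the default synchronized behaviour. */
    wp_tearing_control_v1_set_presentation_hint(
        tc, WP_TEARING_CONTROL_V1_PRESENTATION_HINT_ASYNC);

    return tc;
}
```

                Switching back is just a matter of sending the vsync hint again; what actually ends up on screen remains the compositor's call.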

                Comment


                • #58
                  Originally posted by MaxToTheMax View Post
                  Glad Wayland is adding a lower-latency mode for games that allows tearing. Wayland devs are continuing to check off the remaining items that keep me on Xorg. Now about middle-click paste...
                  That probably differs from one compositor to another, but it was already implemented some time ago and works just fine on KDE for me. Is there anything wrong with primary selection / middle-click pasting?

                  Comment


                  • #59
                    How long did I tinker back in the day to make Intel video tear-free, so that there is no tearing when a movie or stream is played?!
                    For the last five years I believed that "real" tearing was a problem mankind had finally (almost) solved.

                    But now this - I don't see a step forward in this solution.

                    Comment


                    • #60
                      Originally posted by piotrj3 View Post

                      That is not the biggest issue (although it is part of it).
                      In the next few sentences I am assuming 60 Hz.
                      One frame takes 16.66 ms to produce. In most reaction-time benchmarks on a computer I score below 200 ms, which means 16.6 ms is close to 10% of my latency. That is definitely noticeable. Many people on a configuration like mine are even faster (160-180 ms), and for them 16.6 ms would be relatively even more significant. Keep in mind I am talking here about full end-to-end latency: from human reaction, to mouse, to processing, to display, to pixel response time.

                      OK, but this is not the worst part.

                      With double-buffered VSync, you always end up at 30 fps or 60 fps. So if the game is internally running at 50 fps, you end up constantly swapping between 30 fps and 60 fps pacing. That means you get microstutters: some frames are fluid (those presented at a 60 fps cadence) and some are not (33 ms is a lot), and this very easily degrades the experience. You could use triple buffering, which allows rates like 45 fps and improves fluidity, but triple buffering also means up to 2 additional frames of lag, which implies 33 ms of extra lag in the best case and even 66 ms during a temporary lag spike.
                      I think what's much more critical is a constant latency that a human can automatically adjust to. If you have big latency fluctuations in a first-person shooter, you're going to miss often.
                      If you have tearing, you have half a frame with minimal latency and the other half with +1 frame of latency; then it depends on where the object is.
                      The best solution is to have extremely high refresh rates; then it doesn't matter whether you have tearing (hardly visible) or use vsync (slightly higher but constant latency), and that's what most pro gamers already do.

                      FreeSync, while being the best solution, has other drawbacks like gamma flickering.

                      Originally posted by CochainComplex View Post
                      How long did I tinker back in the day to make Intel video tear-free, so that there is no tearing when a movie or stream is played?!
                      For the last five years I believed that "real" tearing was a problem mankind had finally (almost) solved.
                      The problem has long been solved with vsync; it only gets difficult if you have a problem with the additional latency. How are you supposed to display something that is only half ready? FreeSync would be the optimal solution but, as mentioned, has other drawbacks.
                      Last edited by Anux; 22 November 2022, 10:29 AM.

                      Comment
