Trend Micro Uncovers Yet Another X.Org Server Vulnerability: CVE-2023-1393


  • Weasel
    replied
    Originally posted by mSparks View Post
    Of course you are. If you update what is being drawn while the monitor is drawing, you get tearing; that is how tearing happens.
    The only thing "special" about Wayland is that it spent some 12 of its 14 years of life insisting vsync must always be enabled. While abandoning that requirement does make it less shit, it doesn't make it better than X11, which always had that flexibility.
    And it will need another 10 years before you can even query absolute window positions from another app (like a script).

    And then probably another 10 before it allows you to move other app windows with your own (again, a script being most common example).

    Progress!



  • mSparks
    replied
    Originally posted by oiaohm View Post
    On AMD and Intel under Xwayland, from the start of Xwayland it has been no more broken than under bare-metal X11.
    Only because it isn't possible to get more broken than completely broken, which is why no one bought them twice if they wanted GPU acceleration.

    Originally posted by oiaohm View Post
    Tear-free does not equal what people commonly call vsync at all. Vsync is the method you fall back on when you don't have some form of "locked output buffering" to use.
    VSYNC is the only method to stop tearing. Tearing is caused by the display being told to draw something different partway through drawing it; the vsync signal is the only mechanism available for a GPU to know when a display is and is not drawing, and it's 100% effective at preventing the GPU from updating what is being drawn halfway through the display drawing it.
    Originally posted by oiaohm View Post
    you are not locked to the monitor refresh rate like vsync is.
    Of course you are. If you update what is being drawn while the monitor is drawing, you get tearing; that is how tearing happens.
    The only thing "special" about Wayland is that it spent some 12 of its 14 years of life insisting vsync must always be enabled. While abandoning that requirement does make it less shit, it doesn't make it better than X11, which always had that flexibility.
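    The mechanism being argued about can be shown with a toy model (this is an illustrative simulation, not any real driver API): the "display" reads the framebuffer one row at a time, and swapping the buffer mid-scanout mixes rows from two frames, which is exactly a tear. Swapping only at the vsync boundary yields a coherent frame.

    ```python
    # Toy scanout model: a display reads the framebuffer row by row.
    # Changing the buffer while scanout is in progress produces a frame
    # with rows from two different images -- a visible tear line.

    ROWS = 4

    def scanout(framebuffer, swap_at=None, new_frame=None):
        """Read the framebuffer row by row; optionally swap it mid-scan."""
        seen = []
        for row in range(ROWS):
            if swap_at is not None and row == swap_at:
                framebuffer = new_frame      # update while the display is drawing
            seen.append(framebuffer[row])
        return seen

    frame_a = ["A"] * ROWS
    frame_b = ["B"] * ROWS

    torn = scanout(frame_a, swap_at=2, new_frame=frame_b)   # swap without vsync
    clean = scanout(frame_b)                                # swap done at vsync
    ```

    Here `torn` contains rows from both frames while `clean` is one coherent image; the only difference is when the swap happened.
    
    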
    Last edited by mSparks; 13 May 2023, 12:06 PM.



  • oiaohm
    replied
    Originally posted by mSparks View Post
    Do take note that hardware acceleration was historically not enabled by default, has never worked on 90% of acceleration hardware, and is still very broken even for non-essential software like any of the web browsers.
    On AMD and Intel under Xwayland, from the start of Xwayland it has been no more broken than under bare-metal X11.

    Originally posted by mSparks View Post
    vsync and "tearing free" are not however
    Because they are not the same. You don't understand the fundamentals.

    Remember, Wayland is based on EGL, which comes from embedded devices.

    There is a vsync-free way to do tear-free.

    You would have heard of double buffering/triple buffering. But have you ever heard of the term "locked output buffering"? (You find this in the EGL documentation.)

    This is simple. You have a simple display output device. This is something a lot of embedded devices, just a little more complex than Aspeed devices, can do.

    The "locked output buffering" logic is really simple:

    1) The application tells the output device which buffer to output, and that buffer is locked against writes.
    2) While outputting, the output device does not switch the buffer it is outputting; an instruction to change the output buffer only takes effect when output is not happening.
    3) When the output device is done with the output buffer, it removes the lock.

    Yes, this is right: the application cannot reuse a buffer until it is unlocked. This locked-output-buffer logic makes unintentional tearing impossible.

    So as long as the application does not write to the output buffer, there is no tearing. Add a write lock to the output buffer that is only released after the output device is done with it, and you have tear-free with no vsync. Yes: locked output buffering.

    AMD and Intel: when you enable TearFree under bare-metal X11, guess what, you are using "locked output buffering". With locked output buffering, applications don't need to care about vsync to remain tear-free.
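    The three-step logic above can be sketched as follows. This is a minimal conceptual model, not the EGL API; all class and method names are made up for illustration. The key property is that the on-screen buffer is write-locked, so a client can never modify pixels mid-scanout, and swap requests only take effect between scanouts.

    ```python
    # Minimal model of "locked output buffering": the buffer being scanned
    # out is locked against writes; a requested swap is deferred until the
    # current scanout completes, so tearing cannot occur.

    class OutputDevice:
        def __init__(self, buffers):
            self.buffers = buffers      # buffer name -> pixel list
            self.current = None         # buffer currently locked for scanout
            self.pending = None         # swap requested during scanout

        def set_output_buffer(self, name):
            # Step 1: client hands a buffer to the device; it becomes locked.
            if self.current is None:
                self.current = name
            else:
                self.pending = name     # Step 2: applied only between scanouts

        def write(self, name, pixels):
            # Writes to the locked (on-screen) buffer are refused.
            if name == self.current:
                raise RuntimeError(f"{name} is locked for scanout")
            self.buffers[name] = pixels

        def scanout(self):
            frame = list(self.buffers[self.current])
            # Step 3: scanout done -> lock released, pending swap applied.
            if self.pending is not None:
                self.current, self.pending = self.pending, None
            return frame

    dev = OutputDevice({"front": ["A"] * 4, "back": ["B"] * 4})
    dev.set_output_buffer("front")
    dev.set_output_buffer("back")    # queued; "front" stays locked
    dev.write("back", ["C"] * 4)     # fine: "back" is not on screen yet
    frame1 = dev.scanout()           # still all "A"; mid-scan tear impossible
    frame2 = dev.scanout()           # the swapped-in "C" frame
    ```

    Note that the application never consulted a refresh clock: the lock alone guarantees each emitted frame is internally consistent, which is the point being made about not needing vsync.
    
    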

    NVIDIA Fast Sync and AMD Enhanced Sync are forms of "locked output buffering". Embedded GPUs had features like this as early as 1998, which is the reason EGL direct-to-hardware supports "locked output buffering" so well.



    Tear-free does not equal what people commonly call vsync at all. Vsync is the method you fall back on when you don't have some form of "locked output buffering" to use.

    The advantage of the "locked output buffering" method is that you are not locked to the monitor refresh rate like vsync is. Think of items like early e-paper that don't have a refresh rate. Yes, EGL was designed to deal with e-paper output devices; this is why for a very long time it has included vsync-free ways to do tear-free.

    It is true that tear-free and vsync are not the same thing. Vsync is one of many methods that can be used to implement tear-free. Vsync has the worst latency costs; the best methods are the "locked output buffering" ones.

    Yes, "vsync respecting" is also not the same thing as vsync either. "Locked output buffering" and using vsync to get tear-free both fall under the "vsync respecting" group of functionality, but they are very different.

    I think the terms being used are the problem here.

    Originally posted by mSparks View Post
    So why do you keep posting the opposite for both?
    Maybe the problem here is that you have been wrong. Command buffers and per-process state in OpenGL are different things. And there are many ways to implement tear-free.

    Yes, games used vsync to sync to the monitor clock rate, but this has the most overhead. EGL in embedded devices had to deal with output devices that did not have a refresh rate yet still had to be tear-free. Remember, Wayland core development is based on EGL; if you have been using the logic of Windows and X11 graphics, you are in the wrong department. The embedded world is its own form of beast when it comes to GPUs and display output devices.

    The Wayland developers' demand to aim for 100 percent tear-free makes sense when you have the legal requirements and EGL specifications in your hands. The "locked output buffer" methods have very minimal overheads; this is why Wayland at times keeps up with bare-metal X11 with tearing while Wayland is tear-free.



  • mSparks
    replied
    Originally posted by oiaohm View Post
    Do take note that from the start Wayland has been using EGL.
    Do take note that hardware acceleration was historically not enabled by default, has never worked on 90% of acceleration hardware, and is still very broken even for non-essential software like any of the web browsers.
    Originally posted by oiaohm View Post
    mSparks per process state and command buffers are two different things.
    lol
    vsync and "tearing free" are not however

    So why do you keep posting the opposite for both?



  • oiaohm
    replied
    Originally posted by mSparks View Post
    bullshit, seriously, you should get a job with the BBC, they adore people with this capacity for truth denial.

    Even if that was true, it has existed for as long as there have been GPUs. Wayland did not invent it, and they don't do anything magical with it. "Per application" is not possible when all applications share a global state, such as when using OpenGL prior to the introduction of command buffers. Anything not built on X11 and using Xwayland basically still doesn't.
    Until very recently, Wayland didn't even support GPU acceleration, which is a helluva lot more mandatory than a checkbox for vsync.
    Nothing in that post is right, again. Yes, the vsync write-up is based on common GPU designs, not the embedded case.

    Where is your evidence that Wayland was not accelerated? Not Nvidia, by the way: AMD and Intel.

    Do take note that from the start Wayland has been using EGL. Xwayland had working Intel and AMD GPU acceleration for OpenGL from the start, and Vulkan when Vulkan came out.


    In DRI2, instead of a single shared (back) buffer, every DRI client gets its own private back buffer, along with their associated depth and stencil buffers, to render its window content using the hardware acceleration.
    DRI2 landed September 4, 2008; Wayland followed on 30 September 2008. Guess what was one of the triggers for the Wayland redesign.

    Originally posted by mSparks View Post
    "per application" is not possible when all applications share a global state such as when using openGL prior to the introduction of command buffers.
    That is what the Nvidia documentation tells you. One problem: it is totally wrong for everyone who is not Nvidia. DRI2 with the open-source drivers saw per-process state implemented. The original OpenGL on IRIX also had per-application state; the old Silicon Graphics GPUs did not have command buffers. Without it you are restricted to a single OpenGL application at a time without risking major problems.

    EGL, being designed for embedded devices with poorer GPUs, has a few different ways of pulling off vsync functionality that you are not considering.

    mSparks, vsync to prevent tearing existed before we had GPUs. Aspeed video out does not have a GPU, yet using EGL you can still have functional vsync, with the CPU doing the heavy lifting. Before GPUs, the display output part tells you when the last vsync was, and from the Hz of the output you have to guess when the next one will be and complete your output buffer writes before then. Yes, totally doable; this is when you need a compositor or some other software using the CPU.
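    The prediction step described above can be sketched in a few lines. This is a conceptual illustration, not code for any real display driver; the function name and parameters are invented for the example. Given the last reported vsync timestamp and the output's refresh rate, software computes the next vsync boundary so buffer writes can be finished before it.

    ```python
    # Software vsync prediction: hardware only reports when the last vsync
    # happened, so the next one is extrapolated from the refresh period.

    def next_vsync(last_vsync, now, refresh_hz):
        """Return the absolute time of the first vsync boundary after `now`."""
        period = 1.0 / refresh_hz
        elapsed = now - last_vsync
        periods_passed = int(elapsed // period) + 1   # next whole period boundary
        return last_vsync + periods_passed * period

    # 60 Hz output, last vsync reported at t=1.000s, it is now t=1.025s:
    deadline = next_vsync(1.000, 1.025, 60)
    budget = deadline - 1.025   # time left to finish writing the buffer
    ```

    A compositor using this scheme races `budget`: if the frame cannot be written before `deadline`, it holds the swap for the following period rather than tear.
    
    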

    Even with the GPU doing the heavy lifting with vsync, you still need software to use buffers in ways the GPU can use. Raw writing to the output buffer and setting flags for whether this is a fully completed frame or a frame currently being updated, as the bare-metal X.org X11 server wants to do by default, does not give the GPU much to work with.

    mSparks, command buffers help you with multi-threaded processes. Per-process state for OpenGL appeared on Linux in September 2008 for the open-source drivers.

    The reason closed-source Nvidia was pushing EGLStreams well after this is that Nvidia had not implemented per-process state, and only started trying to build it after Nvidia had added command buffers to their drivers. There are issues because Nvidia is trying to use command buffers with data leaking between processes, causing problems, because Nvidia has not implemented correct per-process state.

    Yes, mSparks, shock horror: IRIS GL, the thing before OpenGL by Silicon Graphics, also supported per-process state. We had 16 years of horrible OpenGL on Linux for no good reason. MS Windows had per-process state as soon as it added OpenGL; yes, Windows 95 had per-process-state OpenGL in Microsoft's OpenGL software renderer. Of course, when you installed Nvidia drivers, Nvidia broke that.

    Nvidia drivers have been busted for a very long time, and it is not limited to Linux. The open-source OpenGL included in Linux distributions was bad for 16 years, 1992-2008; after DRI 2.0, things started changing in a big way.

    mSparks, per-process state and command buffers are two different things. Per-process state becomes important when you have applications using three different versions of AMD Vulkan: all three versions can be using a different command buffer format, yet this is going to work. Why? You have per-process state containing a flag that tells the driver the command buffers submitted from this process are in XYZ format; the driver can then tell the GPU that this command buffer is in XYZ format, or convert it to the format the GPU is currently using.

    The means to mix and match Mesa driver versions, and to mix and match AMD closed-source with open-source drivers on the same system, is all down to per-process state.



  • mSparks
    replied
    Originally posted by oiaohm View Post
    Do note the Wayland "Allow Screen Tearing" protocol allows vsync respecting to be turned off on a per-application basis, with vsync respecting on by default.
    bullshit, seriously, you should get a job with the BBC, they adore people with this capacity for truth denial.
    Originally posted by oiaohm View Post
    Vsync is not just a GPU switch.
    It is

    Originally posted by oiaohm View Post
    Tear-free on for general desktop applications in business has been a mandatory enterprise requirement since the Vista time frame.
    Even if that was true, it has existed for as long as there have been GPUs. Wayland did not invent it, and they don't do anything magical with it. "Per application" is not possible when all applications share a global state, such as when using OpenGL prior to the introduction of command buffers. Anything not built on X11 and using Xwayland basically still doesn't.
    Until very recently, Wayland didn't even support GPU acceleration, which is a helluva lot more mandatory than a checkbox for vsync.



  • oiaohm
    replied
    Originally posted by mSparks View Post
    Oh, and of course, mesa isn't available for windows is it?

    Mesa has different drivers that work on Windows. Zink and llvmpipe work on Windows. RADV is half ported to Windows.


    The reality is that more Mesa drivers could be ported to Windows if there was the will (as in, some party willing to pay developers for it).

    RADV development is not run by AMD. Red Hat, Valve and many other parties fund RADV development; all AMD does is provide the technical documentation and answer questions when the documentation appears to be wrong.

    Originally posted by mSparks View Post
    The issues are not specific to X-Plane, but XP is very specifically impacted by them.
    mSparks, name another program that uses OpenGL and Vulkan in the same process. Yes, there are a lot of write-ups online describing how to attempt OpenGL/Vulkan in the same process. Guess what: most application developers who test this on AMD/Intel see it explode, and change to OpenGL in one process and Vulkan in the other with a memory map between the two; then under MS Windows the active-window priority boost kicks you where it hurts. At that point the developer normally sends out word to plugin developers: sorry guys, your old plugins are not going to be supported, please write new ones for Vulkan.

    Like it or not, X-Plane is doing something outside the OpenGL/Vulkan specifications. And what X-Plane does is very unique. For the number of production applications that use OpenGL and Vulkan at the same time, Zink is most likely the correct solution. mSparks, there are fewer than 5, by the way; it will not be simple to find another one.

    X-Plane opens a Pandora's box of undefined behavior. The X-Plane developers are stuck between a rock and a hard place. They have a lot of third-party plugins they don't have source code to, whose developers are no longer around to recode them, and they are not willing to scrap them. Most parties who had this problem scrapped the old OpenGL parts once they worked out that mixing Vulkan and OpenGL is a no-go.

    Remember, since OpenGL and Vulkan in one process is not defined behavior of the OpenGL/Vulkan standards, Nvidia is free to break this support at any point in the future as well. The correct route forward for anyone wanting OpenGL and Vulkan in one process will be to support Zink development in Mesa.



  • oiaohm
    replied
    Originally posted by mSparks View Post
    VSYNC is not a wayland/X11/windows/gnome/kde feature.

    It's a GPU driver switch.

    There is a catch: a bare-metal X11 server, without an X11 compositor loaded and without the TearFree options on the X11 drivers (this includes Nvidia), totally disregards vsync. It will alter the output framebuffer at the point of output, causing tearing.

    Vsync is not just a GPU switch. For vsync to give you tear-free, the application must not be playing with the output buffer. GPUs have different forms of vsync implementation.

    Aspeed, as your worst case, only informs applications when vsync will happen; there is no form of automatic buffer swapping. Something like Nvidia/AMD/Intel tear-free hardware, which you turn on with the TearFree option on the open drivers (there is an equivalent option for closed-source Nvidia), takes the most recently declared complete frame.

    With something like Aspeed, you need a compositor to process the vsync timing to know when to transfer to the output buffer without causing tearing. Of course the bare-metal X.org server fails at this where Wayland compositors work.

    Originally posted by mSparks View Post
    The only thing "special" about Wayland is that the original developers invested significant effort into fighting anyone who told them forcing it always-on was a really bad idea, until they abandoned the project and the new guys finally capitulated and allowed it to be disabled like everyone else.
    Do note the Wayland "Allow Screen Tearing" protocol allows vsync respecting to be turned off on a per-application basis, with vsync respecting on by default.
    1) X11 vsync respecting is off by default.
    2) If you use the TearFree options on the drivers under X11, there is no option to turn this back off at runtime.
    3) The only way to have tearing allowed and tear-free at the same time with X11 is to have an X11 compositor loaded. Now, how does the X11 compositor know whether an application wants tear-free or not? The correct answer: the X11 compositor is guessing, because it does not have the information about whether the application wants tear-free or not.

    Tear-free on for general desktop applications in business has been a mandatory enterprise requirement since the Vista time frame. The original developers were not wrong to fight, when you look at the early proposals.

    No, the new lead developer did not capitulate. The Wayland "allow screen tearing" protocol proposal was the first time someone proposed something that obeyed the enterprise and embedded requirement for tear-free to be on for all general desktop applications while allowing selective opt-out of tear-free. Remember, game developers who display tearing should, to be legal in many countries, display a warning. Adding a flag to Weston to turn tear-free off for every application was a common early proposal, made over and over again.

    A lot of Wayland development has been slowed down by parties attempting to make Wayland the same as X11 and hitting a solid wall of no. Yes, go read the mailing-list arguments for allowing tearing under Wayland; you will notice the same mistake of making screen tearing a global all-or-nothing option over and over again, until we get to the "allow screen tearing" protocol.

    Also, those pushing for screen tearing to be allowed in Wayland were not considering the other side's requirements. Yes, the developer who made the "allow screen tearing" protocol was one of the first to ask the question of why tear-free was required; the answer given on the Wayland mailing list is good. It includes references to many countries' laws, and that post was from the current Wayland lead developer.

    Yes, there is a bigger problem when you get into the laws. A person sets an application full screen; the application does not display a warning about flicker/tearing; the application is a normal desktop application where the user has a right to expect no flicker; but the compositor incorrectly turns screen tearing on, and the person has a fit because of the flicker caused by tearing and gets injured. You are now in legal-liability hell.

    With bare-metal X11 you either end up with tear-free stuck on, or an interface set up in ways you can get sued over. This also explains why parties like Red Hat and Canonical are pushing for Wayland so much.

    Tearing is a problem that needs to be solved correctly. Yes, in lots of ways the X.org X11 server really needs to change defaults and add something to the X11 protocol like Wayland's "allow screen tearing", so applications can tell the X11 compositor that they are expecting to have tearing on screen.

    Originally posted by mSparks View Post
    OpenGL is based on a global state model, Vulkan is based on graphic objects; sync is about as important to the difference between them as the number of bugs between AMD driver versions.
    No, you need to go read the OpenGL standards. Because, sorry, the global state model is not in fact true of OpenGL.


    The reason you don't multi-thread OpenGL if you can avoid it is that there is no global state model.

    OpenGL is implicit sync, where your synchronization depends on making the API calls in the correct order, and it is limited to the current context/thread. Yes, you can have multiple OpenGL contexts in a single process across multiple threads. And yes, attempting to sync them causes the same kind of explosion as doing Vulkan/OpenGL in a single process on AMD and Intel drivers. OpenGL is not compatible with OpenGL even inside the same process, and this is in the specification.

    For some data-processing/CAD applications you will use multiple OpenGL contexts across multiple threads inside the same process and never try to sync them, because syncing is doomed.
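    The "current context is per-thread, not global" point can be illustrated with a toy model (this is not a real GL binding; `make_current` and `draw` are invented names standing in for calls like eglMakeCurrent plus GL commands): two threads each bind their own context and issue calls, and no state crosses between them.

    ```python
    # Toy model of OpenGL context binding: the "current context" is
    # implicit per-thread state, so each thread's commands land only in
    # the context that thread made current -- there is no global state.

    import threading

    _current = threading.local()       # each thread sees its own slot

    def make_current(ctx):
        _current.ctx = ctx             # analogous to eglMakeCurrent

    def draw(call):
        ctx = getattr(_current, "ctx", None)
        if ctx is None:
            raise RuntimeError("no current context on this thread")
        ctx.append(call)               # command recorded in this thread's context

    ctx_a, ctx_b = [], []              # two "contexts" as command lists
    results = {}

    def worker(name, ctx):
        make_current(ctx)
        draw(f"clear-{name}")
        draw(f"swap-{name}")
        results[name] = list(ctx)

    ta = threading.Thread(target=worker, args=("a", ctx_a))
    tb = threading.Thread(target=worker, args=("b", ctx_b))
    ta.start(); tb.start(); ta.join(); tb.join()
    # ctx_a holds only thread A's calls, ctx_b only thread B's; the main
    # thread, which never bound a context, cannot draw at all.
    ```

    This is also why cross-context synchronization is so awkward: there is no shared state to synchronize through, only per-thread bindings.
    
    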

    The fact that OpenGL does not in fact have a global state is why trying to link it to Vulkan is so hard, and why multiple OpenGL contexts are next to impossible to sync with each other. Graphic-object sync, which Vulkan uses, is a global state model that works. One of the selling points of Vulkan should be a global state model that works.

    OpenGL by specification does not have a global state model; yes, that is why OpenGL is in a lot of cases single-context with a single thread: there is no global state to share.

    OpenGL by specification is an evil bit of work. OpenGL behaving exactly to specification has many ways to ruin your day.



  • mSparks
    replied
    Originally posted by Democrab View Post
    I'm inclined to agree with oiaohm, specifically with the points about X-Plane, because what you're saying completely goes against what I've seen with a whole swath of other applications using OpenGL. I'm not saying the issues seen with X-Plane aren't there, but... well, if the dev wants the GPU to run Vulkan and OpenGL at the same time when that's specifically been singled out as an unsupported use case, and then chooses to complain about undefined behaviour
    The issues are not specific to X-Plane, but XP is very specifically impacted by them. It's similar to the "60fps desktop vs 25,000fps glxgears" question.
    You can't really tell the difference in general terms, but there are very specific circumstances when the difference is really important.

    In the case of XP, the "universal source of woe" is the occasional frame with very high latency. Generally speaking, even for first-person shooters no one noticed or cared; worst case, two or three shots didn't go where you expected.
    But in a flight sim, that short burst of very high latency can be the difference between crashing the plane into the ground killing everyone on board and a butter-smooth landing. (The same issues apply to driving sims.)
    Using it for instrument plugins meant that there are occasions when OpenGL takes so long to finish rendering that all the displays come out blank and you get a really obvious display flicker.
    VR is impacted the worst, and here it applies to all VR. The best way I can describe it is that it feels a lot like being smashed over the head with a baseball bat: sure, the impact only lasted 50 milliseconds, but omg you really can't miss it.

    Oh, and of course, mesa isn't available for windows is it?
    Last edited by mSparks; 08 May 2023, 07:59 PM.



  • mSparks
    replied
    Originally posted by oiaohm View Post
    A "tearing-free desktop" << this selling point of Vista is critical. Tearing generates a form of flicker.
    VSYNC is not a wayland/X11/windows/gnome/kde feature.

    It's a GPU driver switch.

    The only thing "special" about Wayland is that the original developers invested significant effort into fighting anyone who told them forcing it always-on was a really bad idea, until they abandoned the project and the new guys finally capitulated and allowed it to be disabled like everyone else.

    That was also a significant part of their proposed USP: "no tearing ever", aka "no one needs more than 60fps for anything ever".

    Originally posted by oiaohm View Post

    The OpenGL API is designed around implicit sync. The Vulkan API is designed around explicit sync.

    No idea where you got that from.
    OpenGL is based on a global state model, Vulkan is based on graphic objects; sync is about as important to the difference between them as the number of bugs between AMD driver versions.
    Try listening to someone who knows what they are talking about rather than the authors of a failed desktop protocol for a change.
    Last edited by mSparks; 08 May 2023, 05:45 PM.

