Trend Micro Uncovers Yet Another X.Org Server Vulnerability: CVE-2023-1393


  • oiaohm
    replied
    Originally posted by mSparks View Post
    Wayland's performance issue will last as long as you can't disable the compositor...

    Not supported by benchmarks. Wayland performance under AMD would not be as good as it is if that were the problem.

    Originally posted by mSparks View Post
    And even then it can still only perform as well as X11, since X11 with no compositing has no impact on performance.
    This is in fact incorrect. DRM leasing exists for VR usage precisely because X11 with no compositing does in fact have overhead and does impact performance.
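    To make the DRM leasing point concrete, here is a minimal sketch (my own illustration with libdrm, not code from X.Org or any compositor; the object ids are placeholders) of what the display server does when it grants a lease for a VR headset:

        /* Sketch: grant a DRM lease so a VR runtime can scan out to the HMD
         * connector directly, bypassing X11/Wayland compositing entirely. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <xf86drm.h>
        #include <xf86drmMode.h>

        int grant_hmd_lease(int drm_master_fd, uint32_t connector_id,
                            uint32_t crtc_id, uint32_t plane_id)
        {
            uint32_t objects[] = { connector_id, crtc_id, plane_id };
            uint32_t lessee_id = 0;

            /* Returns a new DRM fd restricted to the leased objects; that fd is
             * what gets handed to the VR compositor (RandR 1.6 lease on X11,
             * drm-lease-v1 on Wayland). Negative errno-style value on failure. */
            return drmModeCreateLease(drm_master_fd, objects, 3,
                                      O_CLOEXEC, &lessee_id);
        }

    The leased fd behaves like a private KMS device for just those objects, which is why the VR runtime can present frames without the X11 or Wayland path adding overhead.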



    For VR you 100 percent bypass the X11 server or Wayland compositor if you want proper performance, on platforms without broken drivers.

    This is the problem here: an application needing ideal performance needs to bypass the X.Org X11 server completely. Of course Nvidia drivers have been a pure nightmare, where they have failed to function right without the X11 server loaded on consumer hardware.

    On Nvidia pro and server hardware Wayland performance is very different to consumer hardware, mostly due to the requirement that EGLDevice applications, as in applications that go straight to the GPU without an X11 server, need to perform correctly.

    Leave a comment:


  • mSparks
    replied
    Originally posted by oiaohm View Post
    Wayland performance issue as a temporary one
    Wayland's performance issue will last as long as you can't disable the compositor...
    And even then it can still only perform as well as X11, since X11 with no compositing has no impact on performance.
    Which it won't, because of all the extra marshalling of IO

    Originally posted by oiaohm View Post
    Nothing, because you are not paying for it.
    "would I have to"
    That means, say I just paid $100,000 for a DGX from Nvidia.
    How much more would it cost to buy it from AMD and why would anyone stick wayland on it?
    Last edited by mSparks; 15 May 2023, 04:29 PM.

    Leave a comment:


  • oiaohm
    replied
    Originally posted by mSparks View Post
    How much extra would I have to pay to do this on AMD/Wayland?
    Nothing, because you are not paying for it.

    Originally posted by mSparks View Post
    Oh, that's right, I'd have to basically buy AMD the company and redirect them to be not shit, plus fire all the Wayland devs and replace them with competent developers. Because it's just not possible there.
    So you want to fire Nvidia's head of Unix/Linux driver development, do you? "Fire all Wayland developers" includes the head of Nvidia Unix/Linux driver development and most of that team, thanks to their work on KDE.

    Originally posted by mSparks View Post
    I'll stick with Nvidia/X11 thanks.... waaaaaay cheaper.
    Yes, you just killed that with that stupid idea. There is a reason why I class the Nvidia Wayland performance issue as a temporary one caused by bugs in the Nvidia drivers: because that is exactly what Nvidia driver developers tell KDE Wayland developers.

    Nvidia provides a full-time developer to KDE for Wayland development. This started because they were so sure EGLStreams could be made workable, which turned out not to be the case. This has put Nvidia about a decade behind everyone else in doing DMABUF and other things.

    Next, remember RADV and Mesa OpenGL for AMD hardware are not developed by AMD. AMD allows anyone to develop drivers for their GPU hardware.

    The X-Plane developer is working with the Valve-funded Zink developer. X-Plane could work with the Valve-funded Mesa radeonsi/RADV developers as well, or fund their own.



    Originally posted by mSparks View Post
    It literally says in there:
    RESOLVED: This extension provides exactly one new feature: the ability to import/export between dma_buf and VkDeviceMemory. This feature, together with features provided by VK_KHR_external_memory_fd, is sufficient to bind a VkBuffer to dma_buf.​

    So wtf are you so excited about?
    Nothing like being wrong. VK_KHR_external_memory_fd is an abstraction.

    Not counting Android's special unique one, Linux and BSD have two things that the fd VK_KHR_external_memory_fd hands around could be: a DMABUF or a host allocation.



    With the host option you are stuffed: no flag for locked output buffering and no concept of process-unique memory. Guess what form of external memory Nvidia uses under X11? That's right, host. What form is the Nvidia driver forced to use under Wayland these days? DMABUF. Yes, EGLStreams-backend Wayland compositors also use host, with problem after problem, with Nvidia finally admitting the host path to buffer management is not workable. Host buffer management has a stack of bad problems.

    The undefined behaviour of host mode lets you bring out some really creative issues with Nvidia drivers, and what they are is nicely written up in the Vulkan documentation.

    Nvidia drivers are just not optimized for using DMABUF yet, so there are performance problems when they are forced to operate this way.

    X11 protocol buffers are host style, with all the same defects. Yes, one of the big changes between Wayland and X11 is the change of buffer style.

    This is also what delayed Wayland development: while Nvidia was pushing EGLStreams/host-style buffers and had not decided the future would be DMABUF, Wayland had to keep being designed for the case where two different buffer management solutions would be used alongside each other. With X-Plane on AMD OpenGL/Wayland you have seen how bad mixing two different buffer memory systems can be, and how deep a rabbit hole of unsolvable problems you can end up in.

    The AMD/Intel/ARM... idea, the parties behind DMABUF, is that inside a process you have host memory management and between processes you have DMABUF, with host memory management being unique per process and unique to the graphics API, Vulkan or OpenGL...

    Nvidia's original idea was to use host buffer management as a global thing with no process separation. This is why mixing Nvidia OpenGL/Vulkan can kind of work. One problem: with a host-everywhere model you cannot optimize the buffer implementation inside a process to suit a pure OpenGL or pure Vulkan application, resulting in pure OpenGL and Vulkan applications performing worse than they should.

    There is no such thing as a free lunch. Interoperability between OpenGL and Vulkan has a price to pay. Everyone bar Nvidia decided to go for higher performance with pure OpenGL/Vulkan applications. Choosing higher performance for pure OpenGL/Vulkan applications means you cannot mix OpenGL/Vulkan inside the same process without using an abstraction like Zink, so that you are only using one host buffer management system.

    The AMD and Intel issues with X-Plane trace to a design choice allowed by the OpenGL/Vulkan specifications. Yes, the per-process design of host memory is in fact partly written into the Vulkan specification. The X-Plane developer tried to avoid having an OpenGL-to-Vulkan abstraction.

    3) Can the application free the host allocation?

    RESOLVED: No, it violates valid usage conditions. Using the memory object imported from a host allocation that is already freed thus results in undefined behavior.

    This, on the host memory page, is what goes wrong when you do OpenGL to Vulkan using AMD/Intel inside the one process: either the OpenGL code or the Vulkan code frees a buffer, and now you end up with black nothing sections.
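    To show what that host path looks like in Vulkan terms, here is a rough sketch of my own (using VK_EXT_external_memory_host as one way the host-allocation style is exposed; it is not X-Plane's actual code). The spec rule quoted above is exactly the trap: the imported block must not be freed while the VkDeviceMemory still exists.

        /* Sketch: import a plain host allocation into VkDeviceMemory.
         * Assumes VK_EXT_external_memory_host is enabled on the device. */
        #include <vulkan/vulkan.h>

        VkDeviceMemory import_host(VkDevice dev, void *host_ptr, VkDeviceSize size,
                                   uint32_t memory_type_index)
        {
            VkImportMemoryHostPointerInfoEXT import = {
                .sType = VK_STRUCTURE_TYPE_IMPORT_MEMORY_HOST_POINTER_INFO_EXT,
                .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_HOST_ALLOCATION_BIT_EXT,
                .pHostPointer = host_ptr,   /* must stay valid for the memory's lifetime */
            };
            VkMemoryAllocateInfo alloc = {
                .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
                .pNext = &import,
                .allocationSize = size,     /* multiple of minImportedHostPointerAlignment */
                .memoryTypeIndex = memory_type_index,
            };
            VkDeviceMemory mem = VK_NULL_HANDLE;
            vkAllocateMemory(dev, &alloc, NULL, &mem);
            /* free(host_ptr) is only legal after vkFreeMemory(dev, mem, NULL);
             * freeing it earlier is the undefined behaviour described above. */
            return mem;
        }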

    Remember, you don't use DMABUF inside a single process; you use a host allocation instead. That's right, this is the trap of VK_KHR_external_memory_fd: it does two different things depending on whether the fd has been transferred between processes or not. An fd transferred between processes is a DMABUF; an fd inside the same process is a host allocation.
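    For the cross-process half, here is a rough sketch of my own (not from the thread) of exporting a VkDeviceMemory as a DMABUF fd with vkGetMemoryFdKHR, using the DMA_BUF handle type from VK_EXT_external_memory_dma_buf; it assumes the memory was allocated with VkExportMemoryAllocateInfo requesting that handle type:

        /* Sketch: export device memory as a dma_buf fd that another process,
         * e.g. a Wayland compositor via linux-dmabuf, can import. */
        #include <vulkan/vulkan.h>

        int export_dmabuf(VkDevice dev, VkDeviceMemory mem)
        {
            PFN_vkGetMemoryFdKHR get_fd =
                (PFN_vkGetMemoryFdKHR)vkGetDeviceProcAddr(dev, "vkGetMemoryFdKHR");

            VkMemoryGetFdInfoKHR info = {
                .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
                .memory = mem,
                .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT,
            };
            int fd = -1;
            if (!get_fd || get_fd(dev, &info, &fd) != VK_SUCCESS)
                return -1;
            return fd;  /* safe to pass over a socket to another process */
        }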

    X-Plane code mixing OpenGL/Vulkan breaks the rules of using host memory mode as written in the OpenGL and Vulkan specifications. Just because Nvidia allows it to work does not mean the specifications say it should.

    Leave a comment:


  • mSparks
    replied
    Originally posted by oiaohm View Post
    You have video output ports, don't you? You paid extra to have those.
    How much extra would I have to pay to do this on AMD/Wayland?

    I think the solution is to use this for the audio and a small "from cockpit" view and underlay a fullscreen max settings 2D replay for the visuals


    Oh, that's right, I'd have to basically buy AMD the company and redirect them to be not shit, plus fire all the Wayland devs and replace them with competent developers. Because it's just not possible there.

    I'll stick with Nvidia/X11 thanks.... waaaaaay cheaper.

    It literally says in there:
    RESOLVED: This extension provides exactly one new feature: the ability to import/export between dma_buf and VkDeviceMemory. This feature, together with features provided by VK_KHR_external_memory_fd, is sufficient to bind a VkBuffer to dma_buf.​

    So wtf are you so excited about?

    Leave a comment:


  • oiaohm
    replied
    Originally posted by mSparks View Post
    I paid "extra" for CUDA and a GPU that works,
    Although I wouldn't say anyone is paying extra when Quadros were $10k a pop.

    Nvidia is launching a GPU range designed just for mining, which should hopefully ease the shortages of its current RTX series.

    You have video output ports, don't you? You paid extra to have those. People have attempted to use those no-video-out cards in PRIME setups, as in using the on-board Intel/AMD graphics for output. Guess what: no "locked output buffer support", hello tearing in PRIME mode.

    Originally posted by mSparks View Post
    And why is that better than Vulkan?


    Buffers. Vulkan and OpenGL are abstractions over them.

    Leave a comment:


  • mSparks
    replied
    Originally posted by oiaohm View Post
    In fact you have paid extra for "adaptive sync technology" with Nvidia cards. Nvidia does make some cards for server usage that do not have that feature. Seeing as you have paid for that feature, don't you want to take full advantage of it?
    I paid "extra" for CUDA and a GPU that works,
    Although I wouldn't say anyone is paying extra when Quadros were $10k a pop.

    Originally posted by oiaohm View Post
    Remember how I have mentioned DMABUF over and over again. An application connecting to Wayland on AMD and Intel, and on Nvidia in future, is set up using DMABUF. Guess what: DMABUF happens to be a type of buffering suitable for "locked output buffering"/"adaptive sync technology" to prevent tearing.

    mSparks, has it never crossed your mind that it is the type of buffer the application renders into that decides whether "adaptive sync technology"/"locked output buffering" can work? You need to be using a type of buffer with sync data to say when the buffer is complete, so it can be queued into the "locked output buffering" system.

    The buffer problem is one of the reasons. The buffers of the X11 protocol are not designed with a flag to say when a buffer is complete. Think of the X11 requirement to glue everything into one global image. Think of the X11 requirement to use old legacy X11 image formats that don't match modern-day GPUs.
    And why is that better than Vulkan?

    Leave a comment:


  • oiaohm
    replied
    Originally posted by mSparks View Post
    TBH, never paid extra for the hardware that supports it or looked into what the driver is doing on the wire, 60Hz has always been big enough for me, and less is always going to be shitter.

    But it's still "only" a GPU driver - display thing.
    "adaptive sync technology"/"locked output buffering" to work correctly requires applications providing buffers to the GPU/2d accelerator/CRT controller(yes CRT controller incldues hdmi/displayport...) play by particular set of rules.

    In fact you have paid extra for "adaptive sync technology" with Nvidia cards. Nvidia does make some cards for server usage that do not have that feature. Seeing as you have paid for that feature, don't you want to take full advantage of it?

    Yes, to take full advantage of the features you paid for, Nvidia needs to fix their drivers.

    Originally posted by mSparks View Post
    Plus wayland leaves rendering all to the applications, so any buffering offered by wayland is meaningless to 99% of the desktop.
    Remember how I have mentioned DMABUF over and over again. An application connecting to Wayland on AMD and Intel, and on Nvidia in future, is set up using DMABUF. Guess what: DMABUF happens to be a type of buffering suitable for "locked output buffering"/"adaptive sync technology" to prevent tearing.

    mSparks, has it never crossed your mind that it is the type of buffer the application renders into that decides whether "adaptive sync technology"/"locked output buffering" can work? You need to be using a type of buffer with sync data to say when the buffer is complete, so it can be queued into the "locked output buffering" system.
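    For the "buffer with sync data to say when the buffer is complete" point, the Wayland side of that contract is visible even at the core protocol level: the compositor explicitly tells the client when it has finished with a buffer. A small sketch assuming libwayland-client (the struct frame wrapper is my own naming):

        /* Sketch: track whether the compositor is still reading a wl_buffer
         * via the wl_buffer.release event, instead of guessing from vsync. */
        #include <wayland-client.h>

        struct frame { struct wl_buffer *buffer; int busy; };

        static void handle_release(void *data, struct wl_buffer *buffer)
        {
            struct frame *f = data;
            f->busy = 0;   /* compositor is done with this buffer; safe to rerender */
        }

        static const struct wl_buffer_listener buffer_listener = {
            .release = handle_release,
        };

        void track_buffer(struct frame *f)
        {
            f->busy = 1;   /* mark busy when the buffer is attached and committed */
            wl_buffer_add_listener(f->buffer, &buffer_listener, f);
        }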

    Originally posted by mSparks View Post
    Personally, I think all that effort would have been better spent just fixing any issues in xorg-server first and moving on something actually useful, but part of having freedom of choice is the freedom to make dumb choices.
    The buffer problem is one of the reasons. The buffers of the X11 protocol are not designed with a flag to say when a buffer is complete. Think of the X11 requirement to glue everything into one global image. Think of the X11 requirement to use old legacy X11 image formats that don't match modern-day GPUs.

    Leave a comment:


  • mSparks
    replied
    Originally posted by oiaohm View Post
    "adaptive sync technology"
    TBH, never paid extra for the hardware that supports it or looked into what the driver is doing on the wire, 60Hz has always been big enough for me, and less is always going to be shitter.

    But it's still "only" a GPU driver - display thing.

    Plus wayland leaves rendering all to the applications, so any buffering offered by wayland is meaningless to 99% of the desktop.

    Originally posted by Weasel View Post
    And it will need another 10 years before you can even query absolute window positions from another app (like a script).

    And then probably another 10 before it allows you to move other app windows with your own (again, a script being most common example).

    Progress!

    Maybe then they can finally start work on fixing the security vulnerabilities they inherited from copy-pasting all that functionality from the xorg-server codebase.

    Personally, I think all that effort would have been better spent just fixing any issues in xorg-server first and moving on something actually useful, but part of having freedom of choice is the freedom to make dumb choices.
    Last edited by mSparks; 14 May 2023, 06:17 AM.

    Leave a comment:


  • oiaohm
    replied
    Originally posted by Weasel View Post
    And it will need another 10 years before you can even query absolute window positions from another app (like a script).

    And then probably another 10 before it allows you to move other app windows with your own (again, a script being most common example).
    For a variety of cases it's desirable to have a method for negotiating the restoration of previously-used states for a client's windows. This helps for e.g., a compositor/client...

    I have pointed out that it is not that what you want cannot already be done, just that doing it is not friendly.

    Of course you miss the case where a script attempts to move a window while the user is moving the window...

    Session management stuff has never been locked down. Weasel, I think you were using XTest under X11. You did not have proper window manager integration, so you could get into issues with the user fighting the script and the window manager fighting the script.

    Window placement control has always been kind of buggy. One of the things that has slowed down Wayland is how much of X11 is in fact broken once you look closely, and has to be redesigned from the ground up.
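    For reference, a sketch of the kind of Xlib calls such scripts lean on under X11 (my own illustration; the window id is a placeholder you would normally get from xwininfo or similar). These work behind the window manager's back, which is exactly where the "user versus script versus window manager" fights come from:

        /* Sketch: query a window's absolute position and ask for a move. */
        #include <X11/Xlib.h>
        #include <stdio.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy)
                return 1;

            Window target = 0x2600005;   /* placeholder toplevel window id */
            Window root = DefaultRootWindow(dpy), child;
            int abs_x = 0, abs_y = 0;

            /* Absolute (root-relative) position of the window's origin. */
            XTranslateCoordinates(dpy, target, root, 0, 0, &abs_x, &abs_y, &child);
            printf("window at %d,%d\n", abs_x, abs_y);

            /* Request a move; a reparenting window manager may intercept or
             * re-place it, which is the fighting described above. */
            XMoveWindow(dpy, target, 100, 100);
            XFlush(dpy);
            XCloseDisplay(dpy);
            return 0;
        }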

    Leave a comment:


  • oiaohm
    replied
    Originally posted by mSparks View Post
    VSYNC is the only method to stop tearing. tearing is caused by the display being told to draw something different part way through drawing it, the vsync signal is the only mechanism available for a GPU to know when a display is and is not drawing, and it's 100% effective at preventing the GPU from updating what is being drawn half way through the display drawing it.

    Nvidia GPU-Sync and AMD Free Sync can be used as an alternative to V Sync. They prevent screen tearing by using adaptive sync technology. They do the same thing, adapting the refresh rate of your monitor to match your GPUs.​
    What do you do when you can change the monitor Hz as much as you like? VSYNC fails to work.

    Epaper does not have a Hz, because what is on the screen stays on the screen with no updates, and these are not the only screens like this. Epaper screens don't put out a vsync signal. Modern adaptive sync screens behave a lot like epaper.

    How does adaptive sync technology work? The answer is simple: EGL-style "locked output buffering", as you use with epaper screens.

    The reality here is that you don't need to use VSync. GPUs that support "adaptive sync technology" or "locked output buffering" (the epaper tech in embedded GPUs) both work the same way: you make a buffer, tell the GPU to set that buffer as the output, and that buffer is then locked until it is replaced by another buffer and is no longer in use by the GPU.

    Both "adaptive sync technology" and "locked output buffering" are designed where the display device vsync alters to match the GPU output rate instead of normal vsync solution where the GPU is attempting to match the monitors vsync.

    With a display that has a fixed vsync/Hz, using "locked output buffering" costs up to 4 times the output screen in VRAM:
    1) the output buffer currently being scanned out (locked);
    2) the waiting buffer that will become the output buffer when the current one is done (locked);
    3) the "oops, we have updated" buffer attempting to replace the head of the queue (locked);
    4) the buffer being got ready for output.

    When working with a GPU that supports some form of "locked output buffering", the application does not need to care about vsync to be tear free. Just keep feeding locked buffers to the GPU and let the GPU unlock them once they are no longer required. "No longer required" means either they have been sent or they have been declared superseded. Yes, in that 1-2-3-4 list, 1 and 2 could both be unlocked at the same time and 3, the "oops" buffer, goes straight on to the output to the monitor.
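    As a concrete sketch of "feed a buffer and let it unlock later" (my own illustration using the legacy KMS page-flip API, not any compositor's code): you hand the next framebuffer to the CRTC, and only once the flip event comes back is the previous buffer safe to touch again.

        /* Sketch: present fb_next on crtc_id; the previously scanned-out
         * framebuffer stays "locked" until the page-flip event fires. */
        #include <xf86drm.h>
        #include <xf86drmMode.h>

        static void flip_done(int fd, unsigned int seq, unsigned int sec,
                              unsigned int usec, void *data)
        {
            *(int *)data = 1;   /* old buffer no longer scanned out: safe to reuse */
        }

        int present(int drm_fd, uint32_t crtc_id, uint32_t fb_next)
        {
            int done = 0;
            if (drmModePageFlip(drm_fd, crtc_id, fb_next,
                                DRM_MODE_PAGE_FLIP_EVENT, &done))
                return -1;      /* flip rejected; keep rendering to a spare buffer */

            drmEventContext ev = {
                .version = 2,
                .page_flip_handler = flip_done,
            };
            while (!done)
                drmHandleEvent(drm_fd, &ev);   /* wait for the flip to complete */
            return 0;
        }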

    This is a lot simpler system than working with vsync, and it works no matter how much you mess around with the Hz. Yes, anything from 0 to infinity.

    In a "locked output buffer" system there is no replacement buffer for epaper or a monitor who controller can maintain output nothing is sent and the output buffer will unlock. Now for a normal monitor that has a vsync this behaves differently the "lock output buffer" remains locked and keeps on being sent.

    mSparks, there are LCD monitors in different devices that can maintain their output even if the connected computer fully powers down. With those monitors you don't need to send output every vsync.

    mSparks, like it or not, there is hardware where the concept of vsync simply does not work.

    With adaptive sync, where you are altering the monitor update Hz, how do you know how long until the next vsync? The answer is you don't, because the GPU is able to set the vsync to whenever it wants it to be.

    Epaper and hold-state LCD screens, guess what, don't always send a vsync signal. Yes, some hold-state LCD screens have locked output buffers implemented in the monitor's controller, so their vsync signal stays inside the monitor and never comes back to the GPU.

    Like it or not, there has been a need for quite some time for tear-free output without vsync, because of hold-state LCD screens and epaper. Of course some cheap adaptive sync monitors fake being adaptive sync by running at max Hz with a hold-state LCD controller, so a lower-Hz signal just results in frames being replayed by the controller.
    "Simple rule: don't alter the output buffer you are sending to the output device."
    Any method that follows that rule means you don't have tearing.

    mSparks, any idea why the software-aligned VSYNC method was first created? Think early CGA graphics cards and the like: they did not have enough RAM to use a "locked output buffer" solution, and old graphics cards did not have a GPU or 2D accelerator to handle "locked output buffers". At some point we need to grow out of the old VSYNC method and just use "locked output buffer" solutions. For a GPU with gigabytes of RAM a few extra output buffers are not a problem. For a Raspberry Pi 4 a few extra output buffers are not a major problem either.

    With a locked output buffer solution you can change the next output buffer many times mid-vsync, no problem.

    Yes, it used to be a choice: either no tearing, using vsync alignment in software, or tearing but being able to update the output mid-vsync with minimal delay. "Locked output buffers" mean you don't have to make that choice. There is a slight overhead of course with "locked output buffers": if you attempt to change the output buffer while it is being sent, the change can only happen once the buffer being sent has completed.

    Leave a comment:
