Trend Micro Uncovers Yet Another X.Org Server Vulnerability: CVE-2023-1393


  • oiaohm
    replied
    Originally posted by Democrab View Post
    The compositor is on most of the time; it only switches off automatically when a heavy 3D application starts. My general desktop rendering speed is responsive enough that Windows 10 feels sluggish on the same machine installed to the same SSD. The difference was big enough that when I first set everything up I genuinely checked whether I had accidentally changed some performance-sensitive setting that hit Windows particularly hard.
    You would be on a dGPU. The parties behind Wayland are enterprise. The shocking reality is that roughly 80% of desktop PCs produced every year don't have a dGPU, and over 70% will be scrapped without ever having one, running on an iGPU/APU instead.

    The majority of machines with no dGPU are owned by enterprise and government institutions. Those are also the parties funding most Wayland development and setting most of its priorities.

    Originally posted by Democrab View Post
    Didn't you keep claiming Wayland was faster for gaming for most of this thread?
    For most of the thread I said it was equal for AMD and Intel for gaming, with benchmarks showing that was the case. mSparks has been claiming that Wayland is way worse, but that really only applies to Nvidia users, and a lot of that is down to Nvidia driver issues that Nvidia itself admits to.

    The major difference is in general desktop applications when the compositor is actually on. For gaming the compositor is normally disabled one way or another. glxgears running in a window, as I did, shows an interesting issue.

    Compared to what most enterprise desktop computers have, a Fury Nano is a very powerful GPU. The iGPU/APU they have had, and will keep having for another decade, is nowhere close.

    Now, to comply with workplace health and safety requirements that reduce staff risk, systems to prevent screen tearing have to be on. So running X11 without a compositor, or Windows with its compositor disabled, is not an option for enterprise desktops. Enterprise does want better performance, but it's a different kind of performance.

    Yes, the core Wayland developers' write-ups about Wayland performance improvements were written from the enterprise requirement point of view: not running games, but running normal desktop applications on absolutely horrible iGPU/APUs.

    Please note that all the glxgears speeds I gave were running in a window; not one was full screen. This is how you would benchmark if you were looking at this from the enterprise desktop point of view. The test where I turned the compositor off is not allowed for enterprise desktop computers in many countries, because someone could come in off the street and have a photosensitive seizure: screen tearing produces flicker at known seizure-triggering speeds.

    What you have to do in business with safety equipment and what you get away with at home are two very different things.



  • oiaohm
    replied
    Originally posted by mSparks View Post
    You don't actually need it for rectangular application windows or single applications at full screen - they do nothing but add latency. Compositors give you things like those nice rounded corners, fancy animations like wobble or explode, or "whole desktop" effects like spinning on a cube or zoom out. Microsoft loved it so much they put 90% of their development team or something into Windows Aero and brought the world Windows Vista to try and compete (did for a while, then Windows 8 happened)
    Spell check for you: "regular application windows" is what you were going for. You also missed something critical.

    Photosensitive seizure/epilepsy warnings started appearing on games in 1991, and there has been a lot of study into it since. Microsoft did not add a compositor to Vista just for the effects. The "tearing-free desktop" selling point of Vista is critical. Tearing generates a form of flicker, and it happens to be a form of flicker the studies say can cause problems: it might be a seizure where a person falls to the floor, or it might be a cognitive effect where a human makes an increased number of errors.
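    To illustrate the mechanism being argued about: tearing happens when the application redraws the framebuffer while the display is mid-scanout, so the screen shows part of one frame and part of the next, which is what produces the flicker. A compositor (or vsync) avoids this by only flipping complete frames. A toy sketch, with purely illustrative names (this is not any real display API):

    ```python
    # Toy model of why unsynchronized drawing tears, and why flipping
    # only whole frames (vsync/compositor behaviour) prevents it.

    def scanout(buffer):
        """The 'display' reads the buffer top to bottom and returns what it showed."""
        return list(buffer)

    # --- Tearing: the app overwrites the buffer halfway through scanout ---
    front = ["frame1"] * 4          # 4 'scanlines', all from frame 1
    shown = front[:2]               # display has scanned out the top half...
    front[:] = ["frame2"] * 4       # ...when the app draws frame 2 in place
    shown += front[2:]              # bottom half now comes from frame 2
    assert shown == ["frame1", "frame1", "frame2", "frame2"]  # a torn frame

    # --- Double buffering: draw into a back buffer, flip between scanouts ---
    front = ["frame1"] * 4
    back = ["frame2"] * 4           # app renders the next frame off-screen
    assert scanout(front) == ["frame1"] * 4   # scanout sees one whole frame
    front, back = back, front       # flip happens between scanouts (vblank)
    assert scanout(front) == ["frame2"] * 4   # next frame is also whole
    ```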

    Originally posted by mSparks View Post
    You realise this is fundamentally an argument that no one cares about application rendering speed for general desktop applications?
    Because you don't care does not mean nobody does. Enterprise does care, because they measure staff performance, and a lower-latency desktop does equal higher productivity in general applications. They also care about not having workplace health and safety/OHS issues caused by a preventable problem; this is where tear-free performance comes in.

    Originally posted by mSparks View Post
    There are other situations when you really do care about application rendering performance; in those situations you want the compositor gone. On X11, KDE supports disabling it for windowed applications and GNOME automatically disables it for fullscreen applications. On Wayland you can never make it gone; the compositor is an integral part of the Wayland protocol. "Ooops".

    This one is more interesting. KDE supports what is called direct scan-out under Wayland, and so does GNOME. You know how Wayland uses dmabuf? With direct scan-out, the Wayland compositor tells the GPU that this application's dmabuf is the output: display its contents directly. Yes, the application can keep on updating that dmabuf.
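    A rough sketch of the difference direct scan-out makes, using made-up names (a real compositor does this with KMS planes and dmabuf handles, not Python objects): normally the compositor composites the client's buffer into its own framebuffer, an extra copy per frame; with direct scan-out it just points the display at the client's buffer.

    ```python
    # Toy model of composited output vs direct scan-out. Illustrative only.

    class Display:
        def __init__(self):
            self.scanout_buffer = None   # what the hardware reads each refresh

    class Compositor:
        def __init__(self, display):
            self.display = display
            self.framebuffer = []        # compositor-owned buffer

        def composite(self, client_buffer):
            # Normal path: copy the client's pixels into our framebuffer.
            self.framebuffer = list(client_buffer)
            self.display.scanout_buffer = self.framebuffer

        def direct_scanout(self, client_buffer):
            # Direct scan-out: no copy; the display reads the client's
            # buffer itself, so client updates are visible immediately.
            self.display.scanout_buffer = client_buffer

    display = Display()
    comp = Compositor(display)
    app_buffer = ["pixel"] * 4

    comp.composite(app_buffer)
    assert display.scanout_buffer is not app_buffer   # extra copy exists

    comp.direct_scanout(app_buffer)
    assert display.scanout_buffer is app_buffer       # zero-copy path
    app_buffer[0] = "new pixel"                       # app keeps rendering
    assert display.scanout_buffer[0] == "new pixel"   # display sees it directly
    ```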

    Remember how with kwin Wayland you can restart the compositor? Let's say you have an application that takes input directly from the kernel, so it doesn't depend on the Wayland compositor to forward input. Something interesting can be done: you can stop the kwin Wayland compositor without the Wayland applications terminating, and keep using that Wayland program, as long as the program was full screen, direct scan-out had triggered, and it has direct-from-kernel input.

    One of the long-term objectives stated by the KDE developer working on kwin Wayland restart is the means to swap from kwin to another Wayland compositor and back.

    A Wayland compositor targeted at a particular program/game can be way lighter than the X.org server.

    DRM lease exists for VR because, guess what, even bare-metal X.org with no compositor adds too much overhead for VR. Yes, DRM lease was created for the X.org server first.

    DRM lease is a method to completely bypass the X.org X11 server or Wayland compositor and go straight to the hardware.

    Originally posted by mSparks View Post
    Because AMD's drivers are broken; it applies throughout the entire AMD GPU range. This can change; things like Zink are what will help there. If you are not familiar with what makes Vulkan such a huge leap forward over previous graphics APIs, the X-Plane owner/CEO did a really nice lecture on why Vulkan was so important to them here
    The problem is, I know that Vulkan is a huge leap forward. OpenGL and Vulkan are built on two totally different core designs.

    The OpenGL API is designed around implicit sync. The Vulkan API is designed around explicit sync.
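    A minimal sketch of what that design difference means, with invented names (real drivers track GPU buffer dependencies and fences, not Python events): under implicit sync the driver sees which buffers each operation touches and inserts the waits itself; under explicit sync the application passes fences around by hand.

    ```python
    import threading

    # --- Implicit sync (OpenGL-style): the driver orders work for you ---
    class ImplicitDriver:
        def __init__(self):
            self.log = []
        def submit(self, name, touches):
            # The driver sees which buffer each op touches and serializes
            # conflicting ops automatically; the app never sees a fence.
            self.log.append((name, touches))

    drv = ImplicitDriver()
    drv.submit("render", touches="bufA")
    drv.submit("display", touches="bufA")  # driver orders this after render
    assert [n for n, _ in drv.log] == ["render", "display"]

    # --- Explicit sync (Vulkan-style): the app wires up fences itself ---
    fence = threading.Event()
    result = []

    def render():
        result.append("render")
        fence.set()            # app signals: bufA is ready

    def display():
        fence.wait()           # app must wait on the fence explicitly
        result.append("display")

    t = threading.Thread(target=display)
    t.start()
    render()
    t.join()
    assert result == ["render", "display"]
    ```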

    AMD and Intel both took the view that someone migrating to Vulkan would not want to mix in OpenGL. They also did not want to have to deal with creating a new set of OpenGL problems.

    Nvidia put a lot of work in, but when you start running conformance test suites, Nvidia has broken their OpenGL. They have also broken their Vulkan support in places.

    Nvidia's problems come from a set of areas. Nvidia has no stable ABI for their GPU firmware and no stable ABI between their kernel driver and user space.

    Mixing two different sync systems is a really quick way to run into race-condition hell, and the different conformance test suite results say this is exactly what is happening to Nvidia. AMD and Intel both looked at joining OpenGL and Vulkan in the one process and said no, we are never doing that. That choice means legacy OpenGL applications that worked with AMD and Intel have stayed working. Nvidia, on the other hand, has legacy OpenGL applications that used to work and don't any more.

    Zink is being designed to make legacy OpenGL applications work on Vulkan. Why does this have an advantage over the Nvidia path? Simple: Vulkan is a defined API. A Zink developer cannot decide to add or change something at random. Nvidia's issues come from a lack of ABI stability, where the OpenGL/Vulkan/CUDA... stacks can all be altering the core interfaces.

    Let's say AMD attempted to join Vulkan to OpenGL directly in a single process. With AMD you have 2 OpenGL stacks and 3 Vulkan stacks for direct hardware access. That's a lot of combinations. All of them have their own unique in-process memory management, optimized for different use cases. There is nothing defined in the OpenGL/Vulkan standards to allow this direct joining. Zink is a form of wrapper/indirect joining.

    Ever heard the saying about a square peg in a round hole? Nvidia is getting more and more unstable performance and more odd CTS failures because they are attempting a single in-process memory management solution for CUDA, OpenGL and Vulkan.

    mSparks, by the Vulkan and OpenGL specifications, everything the AMD drivers are doing to X-Plane is exactly to specification, because that area is marked undefined and the GPU drivers can do whatever they like there. It would be a different matter if the OpenGL parts had been running in their own process: OpenGL in one process and Vulkan in a different process, sharing buffers between them, is covered by the Vulkan and OpenGL specifications.
    Last edited by oiaohm; 07 May 2023, 11:46 PM.



  • Democrab
    replied
    Originally posted by oiaohm View Post
    General desktop application (Firefox, Chrome, LibreOffice...) rendering speed is something people don't measure much. With an X11 compositor loaded, your general desktop rendering speed is quite slow.

    The business use case is worried about tear-free performance, and that means the compositor has to be on. The business case also cares about out-of-the-box performance, before you tweak anything.
    The compositor is on most of the time; it only switches off automatically when a heavy 3D application starts. My general desktop rendering speed is responsive enough that Windows 10 feels sluggish on the same machine installed to the same SSD. The difference was big enough that when I first set everything up I genuinely checked whether I had accidentally changed some performance-sensitive setting that hit Windows particularly hard.

    Originally posted by oiaohm View Post
    There was a lot of talk about Wayland having better performance. People presumed this meant top-end games would go faster, but those are mostly highly optimized already, so getting faster by any amount is going to be hard. The low-hanging fruit is general desktop applications, which are faster and respond a lot more smoothly using Wayland.
    Didn't you keep claiming Wayland was faster for gaming for most of this thread?

    Besides, it wasn't any faster for general desktop usage when I was trying it on my machines either, at least not in any noticeable way. I didn't look into it beyond trialling it for a week and noticing little to no unexpected differences, apart from missing or buggy features that I'm used to just working with xorg.

    Originally posted by oiaohm View Post
    On the R9 Fury Nano I would suspect you are depending on the AMD driver's tear-free option with the X11 compositor disabled. My numbers are what you would get from a clean Debian install with KDE.

    Gaming performance in particular games can be down by 50%+ because you chose X11. General desktop latency is way worse because you chose X11. Not all of the performance of the RX570 card I have is usable when using bare-metal X11. I am not getting all the performance the RX570 in fact offers.
    It's running stock standard X11 + KDE Plasma from the Arch repos with zero modifications; I keep the HTPC pretty much completely vanilla apart from the GE Proton/Wine releases, for simplicity's sake.

    I'm sure it can be down by 50%+ in some games on some machines. It isn't in the various games I've tested on mine, though; Wayland was often slightly slower when measuring the framerates, although it didn't feel any different and was close enough that I was willing to put it within the margin of error. That Fury Nano is getting the exact same performance it achieved in my old PC and in my current PC, both of which had two OSes running entirely different GPU drivers to test under (Arch and Windows 10), so I doubt there's much performance left on the table, if any, that isn't down to the inherent bottlenecks of the Fury GPU's design or, for some games, the 4GiB framebuffer being too small.

    Originally posted by oiaohm View Post
    The bare-metal X.org X11 server, like it or not, is a bottleneck killing performance. There have been hacks implemented to work around the issues, but they are only hacks.
    Like it or not, there are plenty of people such as myself who are not seeing a performance bottleneck from X11. You can claim big performance drops and latency problems, and I'm sure they exist on some machines and in some cases, but it completely goes against what a lot of us experience in the real world.



  • mSparks
    replied
    Originally posted by oiaohm View Post
    9052.790 FPS — used Alt+Shift+F12 to disable the compositor, i.e. what a no-compositor X11 server does.
    That sounds like you don't actually know what a compositor is or does, never used Linux before we had them maybe?
    Compiz was the first one I used:


    You don't actually need it for rectangular application windows or single applications at full screen - they do nothing but add latency. Compositors give you things like those nice rounded corners, fancy animations like wobble or explode, or "whole desktop" effects like spinning on a cube or zoom out. Microsoft loved it so much they put 90% of their development team or something into Windows Aero and brought the world Windows Vista to try and compete (did for a while, then Windows 8 happened)

    Originally posted by oiaohm View Post
    General desktop application(like firefox,chrome libreoffice..) rendering speed is something people don't measure that much. X11 compositor loaded your general desktop rendering speed is quite slow.​
    You realise this is fundamentally an argument that no one cares about application rendering speed for general desktop applications?

    So take all that garbage you posted about Wayland vs X11 performance in that situation and bin it. No one cares; it is never going to be enough of a difference for a half-working Wayland to replace a fully functional X11.

    There are other situations when you really do care about application rendering performance; in those situations you want the compositor gone. On X11, KDE supports disabling it for windowed applications and GNOME automatically disables it for fullscreen applications. On Wayland you can never make it gone; the compositor is an integral part of the Wayland protocol. "Ooops".

    Originally posted by oiaohm View Post
    The business use case is worried about the tear free performance
    It's called VSync; it has existed as long as there have been GPUs, and there really is nothing exciting, special or new there. Even the newer G-Sync/FreeSync is still a GPU/display function, not an X11/Wayland function. Wayland propagandists have been pretending otherwise for years, but that's pure bullshit.
    Originally posted by oiaohm View Post
    I am not getting all the performance RX570 in fact offers..
    Because AMD's drivers are broken; it applies throughout the entire AMD GPU range. This can change; things like Zink are what will help there. If you are not familiar with what makes Vulkan such a huge leap forward over previous graphics APIs, the X-Plane owner/CEO did a really nice lecture on why Vulkan was so important to them here:

    [Video: "Vulkan/Metal is here for X-Plane! This is how it works, and why we did it!"]
    Last edited by mSparks; 07 May 2023, 06:39 AM.



  • oiaohm
    replied
    Originally posted by Democrab View Post
    You have broken something if you're only getting 1k fps in glxgears out of an RX570, irrespective of display server or DE/WM. I get 25000fps on a 6700XT running just a vanilla X11 install, and 5000fps out of the R9 Fury Nano in my HTPC (which is a tad under the RX580 in terms of performance and an even older GPU, being from 2015), despite it also running the KDE shader wallpaper plugin at 4k, again with stock standard X11 and Plasma.
    Nothing is broken as such. Depending on how I run glxgears with the RX570 you get all kinds of different performance numbers:
    1534 frames in 5.0 seconds = 305.892 FPS — KWin X compositor enabled, so no tearing, with effects: vblank_mode=0 glxgears
    45264 frames in 5.0 seconds = 9052.790 FPS — used Alt+Shift+F12 to disable the compositor, i.e. what a no-compositor X11 server does: vblank_mode=0 glxgears
    98188 frames in 5.0 seconds = 19637.557 FPS — gamescope on top with the KWin X compositor enabled: vblank_mode=0 gamescope glxgears. Yes, XWayland on top of bare-metal X11 ends up with better performance.
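    A quick sanity check on those figures: glxgears computes FPS as frames divided by elapsed seconds, and the "5.0 seconds" it prints is rounded, which is why 45264 / 5.0 (= 9052.8 exactly) differs slightly from the reported 9052.790 FPS.

    ```python
    # Recover the true elapsed time from each reported frames/FPS pair and
    # confirm it is a hair over the rounded "5.0 seconds" glxgears prints.
    for frames, reported in [(45264, 9052.790), (98188, 19637.557)]:
        elapsed = frames / reported               # true elapsed time
        assert 5.0 < elapsed < 5.001              # just over five seconds
        assert abs(frames / 5.0 - reported) < 1.0 # consistent with "5.0 s"
    ```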

    glxgears does not trigger the X compositor to automatically step out of the way. Bare-metal KWin Wayland is about the same as gamescope.

    General desktop application (Firefox, Chrome, LibreOffice...) rendering speed is something people don't measure much. With an X11 compositor loaded, your general desktop rendering speed is quite slow.

    The business use case is worried about tear-free performance, and that means the compositor has to be on. The business case also cares about out-of-the-box performance, before you tweak anything.

    Yes, my 5-year-old card can get 19637 FPS using gamescope, so not that horrible compared to the new 6700XT. This kind of explains why more and more games have Steam automatically running gamescope.

    There was a lot of talk about Wayland having better performance. People presumed this meant top-end games would go faster, but those are mostly highly optimized already, so getting faster by any amount is going to be hard. The low-hanging fruit is general desktop applications, which are faster and respond a lot more smoothly using Wayland.

    On the R9 Fury Nano I would suspect you are depending on the AMD driver's tear-free option with the X11 compositor disabled. My numbers are what you would get from a clean Debian install with KDE.

    Gaming performance in particular games can be down by 50%+ because you chose X11. General desktop latency is way worse because you chose X11. Not all of the performance of the RX570 card I have is usable when using bare-metal X11. I am not getting all the performance the RX570 in fact offers.

    This is one of those cases where people overclock their cards to get more performance, when in fact there was more performance they could have got out of their cards without overclocking at all.

    The bare-metal X.org X11 server, like it or not, is a bottleneck killing performance. There have been hacks implemented to work around the issues, but they are only hacks.





  • Democrab
    replied
    Originally posted by oiaohm View Post
    "AMD Radeon RX 570 Series (polaris10, LLVM 15.0.6, DRM 3.49, 6.1.0-7-amd64)" Its only a 2017 card and to be worse its a IPX form factor one without ideal cooling. Its still better than a APU or IGPU..
    You have broken something if you're only getting 1k fps in glxgears out of an RX570, irrespective of display server or DE/WM. I get 25000fps on a 6700XT running just a vanilla X11 install, and 5000fps out of the R9 Fury Nano in my HTPC (which is a tad under the RX580 in terms of performance and an even older GPU, being from 2015), despite it also running the KDE shader wallpaper plugin at 4k, again with stock standard X11 and Plasma.

    Then again, I've been running X11 without any real issues for years now despite having a triplehead setup.

    Originally posted by oiaohm View Post
    Yes, the Wayland route is low enough cost that you can have a compositor on an iGPU/APU without being in stutter hell.
    Before I got the 6700XT, the Fury Nano was in my main desktop and the HTPC was using the integrated graphics (FM2+ APU) without any stuttering problems from the X11 compositor. In fact it's still running the same software stack; I just put the Fury into the PCIe slot and switched over which HDMI port was being used. (That reminds me, I need to get around to setting up PRIME)

    Originally posted by mSparks View Post
    You get a boost in CPU frame time on Linux over Windows, which removes several CPU bottlenecks. But AMD's OpenGL drivers are just as bad on Linux as on Windows.
    The proprietary drivers for sure, but the Mesa ones are a night-and-day difference compared to those drivers and tend to be much more competitive with nVidia.

    I'm inclined to agree with oiaohm specifically on the points about X-Plane, because what you're saying completely goes against what I've seen with a whole swath of other applications using OpenGL. I'm not saying the issues seen with X-Plane aren't there, but if the dev wants the GPU to run Vulkan and OpenGL at the same time when that's specifically been singled out as an unsupported use case, and then chooses to complain about undefined behaviour after still doing so, I'm not going to use that as a point about driver quality, especially when it still appears to be causing issues (albeit less serious) for nVidia as well. That's more of a "dev needs to rethink what they're doing" issue than anything to do with any of the GPU makers.

    This is further shown by the times other devs have publicly spoken about driver quality: the Dolphin devs said the Mesa AMD drivers were second only to nVidia's for their purposes way back in 2013, when fglrx was still a relevant thing, with the only drawbacks being a few bugs they were able to get fixed fairly quickly, while the official drivers were very buggy and had no real means of submitting bug reports. Considering Mesa has gone from strength to strength since 2013, it'd be a tad odd if it had somehow also gotten a lot worse at the same time. Plus y'know, there's also what I can see with my own eyes when testing which games and emulators work best in which OS on my dual-booting system; with that experience alone I have absolutely no idea how anyone could conclude that the Mesa OpenGL drivers are bad, let alone as bad as AMD's Windows OpenGL driver.



  • oiaohm
    replied
    Originally posted by mSparks View Post
    No you can't. Unlike AMD, Nvidia's drivers are not broken such that a single OpenGL call will ever take 30+ milliseconds like is common with AMD. Start paying more attention to the minimums on the benchmarks.
    Please go and run the OpenGL conformance test suite. The Nvidia drivers' worst test-suite-triggered screw-up, being stuck in a single OpenGL call, is four and a half hours. AMD is capped on how bad this can get: AMD and Intel will start rendering wrong once a stall in an OpenGL call exceeds a particular value. Part of my delay here was checking whether that was still the case.

    People just call the game crashed when Nvidia goes wrong.

    Same with your numbers: Zink with Nvidia lifts the minimums.


    Yes, take a look again at the data you quoted: the Zink stuff was done with Nvidia.

    Also, those numbers are from older versions of X-Plane on AMD, with a lot of workarounds.



  • mSparks
    replied
    Originally posted by oiaohm View Post

    You did not get what I was getting at: that picture does not mean what you think it does. I could do up a garbage picture like that for Nvidia as well, and the same goes for X-Plane results.
    No you can't. Unlike AMD, Nvidia's drivers are not broken such that a single OpenGL call will ever take 30+ milliseconds like is common with AMD. Start paying more attention to the minimums on the benchmarks.

    Originally posted by oiaohm View Post
    Nvidia has a broader performance spread than what you expect out of AMD and Intel as well.
    You mean like, sometimes nvidia is bad, and AMD/Intel is always bad??
    No shit.

    Originally posted by oiaohm View Post
    You have to wake up to cherry-picked numbers.
    here, have a few kilograms of cherries. MMMMMMMMMmmmmmmm yum yum





  • oiaohm
    replied
    Originally posted by mSparks View Post
    It's a Vulkan thing: Vulkan uses command buffers rather than states, a fundamentally different way of writing software that works much better on multicore systems.
    Nvidia already uses command buffers for its OpenGL drivers (CUDA), but their Vulkan build is slightly better, yielding 10-20% performance improvements.
    There is a problem: OpenGL is implicit sync; command buffers are explicit sync.
    With drivers 384 and 390, Vulkan on Dawn of War 3 is slower than OpenGL in the in-game benchmark at 1440p: Vulkan min 24 / max 50 vs OpenGL min 37 / max 67.

    Not a new problem. You are not seeing a 10 to 20% performance increase with Nvidia drivers all the time with Vulkan over OpenGL. In fact, you sometimes see the reverse, and it can be a lot worse.

    Originally posted by mSparks View Post
    zink is that blue DrawingHook bar that is 10% of the size on the AMD radeon in the screenshot

    Ohhhh, you think nvidia makes AMD Radeon..... that explains a lot....
    You did not get what I was getting at: that picture does not mean what you think it does. I could do up a garbage picture like that for Nvidia as well, and the same goes for X-Plane results.

    Basically, the picture means nothing good for your case. I provided a targeted test-suite example. A test suite that tests the performance of OpenGL-Vulkan interop without attempting to fix the tearing and other artifacts is in fact insanely fast. Yes, that test suite is scary fast when you consider it is restarting the GPU and doing all kinds of extra horrible things you would think should be slowing it down. You can rerun that test suite with Zink, have the tearing and artifacts gone, and it's only slightly slower.

    Now the interesting point: using Zink fixed the problem while adding no extra locking, in the very test suite that demos the problem.

    The problem is a race condition. One of the evils of dealing with a race condition is that it can trick you into lowering your program's performance by adding useless operations: the lower the performance, the lower the odds of hitting the race condition. So as the program gets slower it appears you are slowly solving the problem, when in fact you are solving nothing and just digging the hole deeper.

    Writing a conformance-test-suite-grade test that hits these problems can in fact warn you when all you are doing is digging yourself deeper, because it gives you the unfixed performance value and something to run a few million times to see whether your fix in fact fixed it or not.

    Remember, OpenGL is meant to be implicit sync. Gallium3D, which Zink uses as its core, has internal structures that can process OpenGL implicit sync and optimize the explicit-sync usage.

    Zink does not demand that the OpenGL application support explicit sync to perform well. Nvidia's OpenGL on command buffers is the problem here: Nvidia expects the OpenGL program to be modified to provide explicit sync directions. Now you have another problem with Nvidia: you end up with an explicit-sync lock storm, as too many implicit sync operations are turned into explicit sync operations. Operations that Zink's Gallium3D core would have bundled into a single explicit sync lock end up with the Nvidia driver issuing roughly one each.
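    The "lock storm" point can be sketched like this (invented names; a real driver emits GPU fences, not list entries): a naive translation emits one fence per implicit-sync operation, while a Gallium3D-style layer groups a run of operations on the same buffer behind a single fence at a flush point.

    ```python
    # Toy illustration of an explicit-sync "lock storm" vs batching.

    ops = ["draw bufA"] * 100 + ["present bufA"]   # 101 implicit-sync ops

    # Naive translation: every implicit-sync op becomes its own fence.
    naive_fences = [f"fence for {op}" for op in ops]
    assert len(naive_fences) == 101                # one fence each: a storm

    # Batching translation (Gallium3D-style): consecutive ops touching the
    # same buffer are grouped, and one fence covers the whole batch.
    batched_fences = []
    current_batch = []
    for op in ops:
        current_batch.append(op)
        if op.startswith("present"):       # a flush point ends the batch
            batched_fences.append(f"fence for {len(current_batch)} ops")
            current_batch = []
    assert len(batched_fences) == 1        # one fence covers all 101 ops
    ```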

    Zink takes a complete OpenGL stack that would normally connect to the GPU and plugs it straight on top of Vulkan. There are some serious advantages from doing this:
    1) There is no need to modify OpenGL programs designed for implicit sync to know about explicit sync, because they just see a normal implicit-sync stack.
    2) You get to take advantage of traditional OpenGL implicit-sync optimization.
    3) You avoid crossing the streams of implicit sync and explicit sync.
    4) There is a strict boundary between what is Zink and what is Vulkan.

    Nvidia's method is: let's just replace the OpenGL operation buffers with command buffers.
    1) You break the traditional OpenGL implicit-sync optimizations in those operation buffers, which Nvidia ends up having to hack back in on a per-application basis (costing tonnes of man-hours), or application developers have to modify their applications. Yes, this leads to the X-Plane issue where the X-Plane developer thinks the problem just needs more locking... because more locking is what Nvidia developers said was required for Nvidia OpenGL to perform.
    2) Different performance problems happen due to too many cooks. Developers working on OpenGL think the command buffers are theirs, so they add options to help them. Then the Vulkan developers think the command buffers are theirs and add options to help them. The CUDA developers think the command buffers are theirs, so they add options to help them. Of course some of all these added options are going to be incompatible.
    3) In places in the Nvidia OpenGL implementation you have crossings of implicit-sync and explicit-sync operations that cause very bad outcomes.

    Traditional drivers like AMD's and Intel's are still doing implicit-sync optimizations when doing OpenGL. This is why you don't see a 10-20% performance advantage using Vulkan over OpenGL on them; you see a 10% to -5% advantage to Vulkan instead, much tighter numbers. Basically the difference is the overhead of the in-driver optimized OpenGL processing versus how good the Vulkan application code is. Of course these traditional drivers don't mix with Vulkan in the same process, and if you do you end up with race conditions on your hands, doing stupid things like hooking implicit-sync locking straight to explicit sync and wondering why things don't perform. There is a need for processing between implicit sync and explicit sync, to optimize the explicit-sync side.

    Nvidia has a broader performance spread than what you expect out of AMD and Intel as well.

    You have to wake up to cherry-picked numbers. A lot of the numbers saying Vulkan is better than OpenGL are cherry picked: they only went looking for the numbers that said Vulkan was better and did not go looking for where the numbers said worse. You really need both. The Nvidia case is a higher performance advantage of Vulkan over OpenGL in lots of cases, with a higher performance disadvantage in many others.

    Yes, the good old accuracy-vs-precision problem. AMD and Intel GPU performance has very high precision. Nvidia is kind of all over the place in precision, which mostly gets overlooked because they make the most powerful GPUs; making the most powerful GPU, they can lose a lot of performance and people fail to notice.
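    The precision point in statistics terms, with made-up sample numbers purely for illustration: two frame-time distributions can have the same average while one has a wild spread, which is exactly why minimums and spread matter as much as mean FPS.

    ```python
    # Two hypothetical frame-time samples (ms) with the same mean but very
    # different spread; the numbers are invented purely for illustration.
    import statistics

    steady = [16, 17, 16, 17, 16, 18]      # tight: high precision
    spiky = [10, 10, 34, 10, 10, 26]       # same mean, wild spread

    assert statistics.mean(steady) == statistics.mean(spiky)

    # Averages hide the difference; worst frame time (i.e. minimum FPS)
    # and standard deviation expose it.
    assert max(steady) == 18 and max(spiky) == 34
    assert statistics.stdev(spiky) > statistics.stdev(steady)
    ```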



  • mSparks
    replied
    Originally posted by oiaohm View Post

    Is that true or is that another Nvidia thing?
    It's a Vulkan thing: Vulkan uses command buffers rather than states, a fundamentally different way of writing software that works much better on multicore systems.
    Nvidia already uses command buffers for its OpenGL drivers (CUDA), but their Vulkan build is slightly better, yielding 10-20% performance improvements.
    Originally posted by oiaohm View Post
    Notice Zink has done 4 frames in the time Nvidia did 2.5.
    Zink is that blue DrawingHook bar that is 10% of the size on the AMD Radeon in the screenshot
    Originally posted by oiaohm View Post
    Thank you for providing the screenshots that show Nvidia is defective.
    Ohhhh, you think nvidia makes AMD Radeon..... that explains a lot....

