X.Org Server 1.20 RC5 Released, Adds EGLStreams To Let NVIDIA Work With XWayland

  • #61
    Originally posted by Khrundel View Post
    Do not insult AMD; they don't deserve it. Being too lazy to test their "maintenance" driver with a new kernel... That is a terrible accusation.
    Not really; this is you being clueless. By 2010 the writing was on the wall about KMS and Wayland. The existing fglrx kernel mode driver at the time was not suitable for KMS support, so AMD cut their losses. The announcement back then was clear: maintenance would be slowed, with as many resources as possible put into the open source Radeon driver, and once the open source Radeon driver outperformed the old fglrx kernel mode driver, or there was no longer demand for fglrx, it would be stopped completely. Running in this mode, testing new kernel releases makes no sense, because any kernel release containing the open source radeon driver could overtake the fglrx driver and trigger the termination. It took a few years for that termination to trigger.

    So this was not an accusation; it is what happened.

    Originally posted by Khrundel View Post
    I admit I was wrong about the nouveau-only part, but the actual story doesn't prove your point. Same old shit about "we won't do anything, you are too late" and nvidia trying to fix everything from their side without rewriting the driver.
    For Windows Vista, Nvidia had to rewrite large sections of their driver. The problem is that there comes a point where the core OS changes. Nvidia's driver design on Linux is still as it was before Linux had KMS, before 2008. Yes, they have attempted to weld on a proxy for KMS, but it does not work well.

    Originally posted by Khrundel View Post
    I bet for the same reason the client binding libraries of some commercial software can be free (as in beer). But that doesn't mean those libraries are not for that software.
    DRI did not start with Mesa. That is why its license was historically different.

    Also, the issues with GLX come from DRI1. The Nvidia driver has been requiring x.org to keep maintaining deprecated parts of DRI1.

    DRI is meant to be a joint development between those wishing to have drivers on Linux and Unix platforms.

    Originally posted by Khrundel View Post
    So, have you read "The Hitchhiker's Guide to the Galaxy"?
    Oh yes I have, but the example in there does not apply in this case. There is just a point where a party gets later and later with support until the rest move on without them.


    Originally posted by Khrundel View Post
    That's why they are bothering with alignment agreements within their proposed allocator? Just to make excuses for why NIH GBM doesn't fit them?
    Basically yes, because they got PRIME to work without the alignment agreements. The trap here is that the PRIME buffer allocation the Nvidia driver supports is basically the same as a GBM buffer. So they have basically already implemented the code and have decided not to expose it. Things not invented by Nvidia, like PRIME and KMS, they normally have to be dragged kicking and screaming into implementing.

    So it is possible for Nvidia to support GBM until they have a replacement ready; a rough sketch of what that buffer path looks like is below. We would not be in this situation if Nvidia had been actively involved in DRI development.
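
    To make that concrete, here is a minimal sketch, in C against the real libgbm API, of the scanout buffer allocation a GBM-backed compositor performs; the post's argument is that Nvidia's PRIME support already allocates essentially this kind of buffer internally. The device path is an assumption and error handling is omitted.

    ```c
    /* Minimal GBM allocation sketch; error handling trimmed for brevity. */
    #include <fcntl.h>
    #include <gbm.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR); /* device node varies */
        struct gbm_device *gbm = gbm_create_device(fd);

        /* Ask the driver for a buffer usable for both rendering and scanout. */
        struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                          GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

        /* Export as a dma-buf (PRIME) fd; this handle is what KMS and other
         * devices share, which is the overlap the post describes. */
        int prime_fd = gbm_bo_get_fd(bo);
        (void)prime_fd;

        gbm_bo_destroy(bo);
        gbm_device_destroy(gbm);
        return 0;
    }
    ```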

    Originally posted by Khrundel View Post
    Nvidia advertises their memory compression, which changes from one generation to another, so there are different buffer formats.
    The fun of reading marketing. Yes, Nvidia uses lossy compression to try to get more memory bandwidth. The buffer formats have not changed on the card, but it causes quality loss the game developer has no control over. It also causes a bad performance loss when the textures and other parts will not compress; this is what lets AMD win particular games by a large margin, due to this cheat. By the way, using compression is part of the standard, and how compressed content is buffered and declared is part of the standards. So there are not different buffer formats. There is different compression, yes.
    Originally posted by Khrundel View Post
    But the lack of power management doesn't prevent AMD from getting better performance with mining?
    Mining holds the GPU at close to 100% load, which basically disables power management. Most games don't hold the GPU at close to 100 percent load, so the power management logic becomes a problem. Also, any form of quality cheating does not work in tasks like mining.

    Originally posted by Khrundel View Post
    No cheating can explain how the 1060 competes with a chip which has 30% more... of everything.
    Because this bit is a total fib: the 1060 does not compete in a lot of uses. There are games where the 1060 loses big time, as the cheat backfires by downgrading both quality and speed. Compression is not a free lunch. This has shown up in Phoronix benchmarks for a long time: when Nvidia's cheat works they gain a lot; when it does not, they lose a lot. The reason Nvidia changes the compression from release to release is that they keep finding they are ruined in particular AAA games that are benchmarked a lot.

    AMD cards, instead of taking shortcuts, just provide more raw performance; this means their results are stable and predictable, at the price of power usage.

    Comment


    • #62
      Originally posted by oiaohm View Post
      Not really; this is you being clueless. By 2010 the writing was on the wall about KMS and Wayland. The existing fglrx kernel mode driver at the time was not suitable for KMS support, so AMD cut their losses.
      So, AMD didn't check their driver (supported, and their only one at that time) to cut costs. That doesn't save them much, as they still had to fix it just a month later. And this was a rational decision: two hours of an employee's time worth the lost reputation. OK.
      For Windows Vista, Nvidia had to rewrite large sections of their driver. The problem is that there comes a point where the core OS changes. Nvidia's driver design on Linux is still as it was before Linux had KMS, before 2008. Yes, they have attempted to weld on a proxy for KMS, but it does not work well.
      When Linux has at least 10% of desktops, maybe they will rewrite their driver.
      DRI did not start with Mesa. That is why its license was historically different.
      Read your own link. The first email there mentions Mesa.
      There is just a point where a party gets later and later with support until the rest move on without them.
      Or it is just the usual pathetic excuse.
      Basically yes, because they got PRIME to work without the alignment agreements. The trap here is that the PRIME buffer allocation the Nvidia driver supports is basically the same as a GBM buffer. So they have basically already implemented the code and have decided not to expose it. Things not invented by Nvidia, like PRIME and KMS, they normally have to be dragged kicking and screaming into implementing.
      Maybe they've found out that without control over allocation they lose performance. It can be tolerated on laptops, but when their flagship product would perform at the same level as AMD's just because Wayland was designed in a dumb way... they don't agree.
      So it is possible for Nvidia to support GBM until they have a replacement ready. We would not be in this situation if Nvidia had been actively involved in DRI development.
      And it is possible to support EGLStreams until the replacement is ready. Who should fix problems? Usually whoever introduced them, or whoever needs the fix most. In this case that is Wayland on both counts.

      I'm done with the "cheating" and "lossy compression" parts. Everybody who has an nvidia card is pretty happy with the image quality; only you, an AMD owner, keep going on about scary quality drops. It looks like, for you, any feature which makes an nvidia card better is unfair cheating, because it makes godlike AMD look worse.

      Comment


      • #63
        Originally posted by Khrundel View Post
        So, AMD didn't check their driver (supported, and their only one at that time) to cut costs. That doesn't save them much, as they still had to fix it just a month later. And this was a rational decision: two hours of an employee's time worth the lost reputation. OK.
        The conformance test suite for OpenGL is now open source; you can run it against the last fglrx closed source kernel driver and see how many faults they did not fix. By waiting for bug reports they fixed roughly 2 percent, so far more than two hours of employee time was saved. Note that when I say roughly fixed, a lot of the fixes were turning features off, not coding anything. It's not like the ATI/AMD driver had a good reputation anyhow. AMD made quite a good choice, considering their position.

        Originally posted by Khrundel View Post
        When Linux has at least 10% of desktops, maybe they will rewrite their driver.
        GPUs are not only used by desktop computers. AMD replaced their driver with an open source one, and Arm's closed source Mali driver was redone as soon as DRI2 was released. So why should 10% of the desktop mean anything?

        Originally posted by Khrundel View Post
        Read your own link. The first email there mentions Mesa.
        Mesa is mentioned as a sample implementation, not the reference implementation. SGI licenses are scattered all over DRI1 because the SGI implementation was the reference implementation.

        Originally posted by Khrundel View Post
        Maybe they've found out that without control over allocation they lose performance. It can be tolerated on laptops, but when their flagship product would perform at the same level as AMD's just because Wayland was designed in a dumb way... they don't agree.
        Sorry, Nvidia did benchmark it: EGLStreams wired into the open source driver they use on arm, versus libgbm, and the result was that libgbm won on performance. It was when EGLStreams failed to demo good benchmarks that NVIDIA decided they had better improve GBM and do a GBM2. So control over allocation has nothing to do with it. The real issue is that a lot of the buffer setup in the Nvidia closed source driver is done on the userspace side, like a usermode X11 driver, so fixing this will mean some serious rewrites.

        Originally posted by Khrundel View Post
        And it is possible to support EGLStreams until the replacement is ready. Who should fix problems? Usually whoever introduced them, or whoever needs the fix most. In this case that is Wayland on both counts.
        Supporting more than one back end increases the testing load for compositors; the sketch below shows the kind of split code path it forces. The big one: the EGLStreams path is slower than the libgbm path, and that is known. So why should people implementing Wayland compositors put up with second rate performance from Nvidia?
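
        As a hedged illustration of that testing burden, here is a sketch in C of the two initialisation paths a compositor would have to carry. The EGL entry points and platform tokens are real; the surrounding function and its parameters are hypothetical scaffolding.

        ```c
        /* Sketch: one display-open routine, two back ends to test. */
        #include <EGL/egl.h>
        #include <EGL/eglext.h>
        #include <gbm.h>

        EGLDisplay open_display(struct gbm_device *gbm, EGLDeviceEXT nv_dev,
                                int use_streams)
        {
            PFNEGLGETPLATFORMDISPLAYEXTPROC get_display =
                (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                    eglGetProcAddress("eglGetPlatformDisplayEXT");

            if (!use_streams)
                /* Mesa path: display backed by a GBM device; buffers are
                 * dma-bufs shared directly with KMS. */
                return get_display(EGL_PLATFORM_GBM_KHR, gbm, NULL);

            /* NVIDIA path: display backed by an EGLDevice; each output
             * additionally needs an EGLStream wired from producer to
             * consumer -- an entirely separate path to maintain and test. */
            return get_display(EGL_PLATFORM_DEVICE_EXT, nv_dev, NULL);
        }
        ```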

        Originally posted by Khrundel View Post
        I'm done with the "cheating" and "lossy compression" parts. Everybody who has an nvidia card is pretty happy with the image quality; only you, an AMD owner, keep going on about scary quality drops. It looks like, for you, any feature which makes an nvidia card better is unfair cheating, because it makes godlike AMD look worse.
        Not everyone who has an Nvidia card is happy. You have people doing game mods on Nvidia whose models look perfect in Blender; they export them for use in a game, and the colours and details are off. Yet if the game is loaded by someone using an AMD card, everything looks the same as in the modelling tool and in the game engine. Of course, whenever someone points this out, they must be an AMD user. I got an AMD card because I was told the problem was Nvidia. Beforehand I was an NVIDIA user, wondering why I was having this strange issue and why the same model, loaded in three different games with the same texture set, looked different. Of course I was incorrectly blaming the game engines, until you see the three game engines under AMD render the model exactly the same. Nvidia is cheating when the quality differs engine to engine. It makes benchmarks better if you can downgrade quality and only a limited number of people notice.

        Comment


        • #64
          Originally posted by Khrundel View Post
          Nvidia has presented technical arguments. The answer from the Wayland team: you are too late.
          "You are too late" is only half. The other half of what is wrong is "EGLStreams is not useful to open source drivers".
          NVidia are only now trying to correct the second half with their new allocator library. Which doesn't see much development, and even the original author seems more interested in other stuff (only two commits this year so far, and both seem to be authored more than a month before they were committed).

          Originally posted by Khrundel View Post
          No, I'm just taking you down from some magic cloud. There is no entire FOSS community that agreed on GBM. Personally, I've changed about 10 lines of code in two GPL projects nobody cares about, so, formally, I am part of the FOSS community. Nobody ever asked for my consent about GBM. And I'm not alone: there are thousands of programmers who write or maintain FOSS code; they are the FOSS community, and you can't pretend they all agreed to GBM or even thought about it. There are people who agreed to GBM; they are a minority, and their consent means "I don't object to that architecture", not "I want GBM no matter the cost". So the actual team behind the decision to use GBM is, I suspect, fewer than 10 guys, who should at least have asked the opinion of the manufacturer of a third of all desktop GPUs. So please, stop exerting authority where there is no authority at all.
          The FOSS community is in most cases a meritocracy. You do the work, you get to decide. This happens on behalf of the entire community.

          Asking NVidia? Development happened in the open, and NVidia did not speak up for years.

          Originally posted by Khrundel View Post
          Utter nonsense. But the funny part is that you dare to speak about freedom in this thread, where you advocate for the party that is trying to take freedom away.
          I think you do not understand what user freedom is.

          Let me quote from the FSF what free software is:
          • The freedom to run the program as you wish, for any purpose (freedom 0).
          • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
          • The freedom to redistribute copies so you can help others (freedom 2).
          • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
          Or, if you think the OSI is better, here is their list of requirements for open source:
          1. Free Redistribution
          2. Source Code
          3. Derived Works
          4. Integrity of The Author's Source Code
          5. No Discrimination Against Persons or Groups
          6. No Discrimination Against Fields of Endeavor
          7. Distribution of License
          8. License Must Not Be Specific to a Product
          9. License Must Not Restrict Other Software
          10. License Must Be Technology-Neutral
          The kind of "freedom" you are after is not user freedom. What you want is the ability to buy and use any hardware you want. Windows gives you that "freedom". Go use it.

          Originally posted by Khrundel View Post
          Sure, it was not polished and user ready, partially because of GBM. It is assumed that beta code will change and that some functionality will be added. One of Wayland's advertised features is protocol versioning, meaning some parts can be changed later in incompatible ways. So how could they know that the only part not subject to change was GBM?
          The compositor implementation was ready. What was not yet ready were the many applications that still had too much X11-specific stuff inside (e.g. assumptions about how input, windowing, etc. work).

          Originally posted by Khrundel View Post
          Replace the nvidia part with something like... I don't know, "being gay" or "not seeing the light of God", and you get a speech any terrorist could give. "Taking hostages" isn't about the people you target, their vices and sins. It is when somebody intentionally threatens to harm a third party to force someone to do things his way. I have no problem with guys who say: "I don't care about nvidia users, I will not do anything to support EglStreams". But the dominant opinion is different: "Stand your ground; anybody who has added EglStreams to his display server is a traitor; if you have a voice, veto any patches for EglStreams. No pasarán!" So FOSS zealots are actually trying to take nvidia users hostage; they think this is a good opportunity to either force nvidia to go open source or banish it from Linux.
          Do you ever listen to yourself? That tirade has got to be the new intellectual low point of the Phoronix forums.
          I never called anyone a traitor; you fantasized that in your rage. I explained why implementing EGLStreams in XWayland (and Weston, mutter, KWin, Sway, etc.) is a bad idea. But it is ultimately the prerogative of the meritocratic rulers of each project, and there are no hostages or treason involved. That is all in your imagination.

          The only recent comparison to hostage-taking that comes to mind is the situation with Diesel cars. Manufacturers lied about the emissions of their cars, and are refusing to fix or take back the cars. Now car owners are threatened with driving bans due to violations of emissions regulations.

          But NVidia did not lie about anything. They officially have proprietary-only driver support; nobody got duped into thinking otherwise. The buyers made a fully conscious decision to go with a company that is not open-source friendly. All consequences of this are fully deserved.

          Originally posted by Khrundel View Post
          They don't have to avoid anything. They are standing now where they stood 10 years ago. The duty to avoid collisions is on the moving party. That is obvious.

          You are on the wrong side here. That's why you have to lie about GBM being "the entire FOSS community's decision" and keep proposing nvidia an impossible alternative (go back in time and state your veto). Without this shit, the whole situation would look like what it really is: a couple of guys who managed to convince many people to support their new protocol made a mistake, and instead of fixing it are trying to push their solution.
          Let me return that remark about lying. You can't even get your facts straight in your blind rage (or was it love for NVidia?).

          I wrote about what NVidia did wrong in the past that led to the current situation. This is not a proposal of an alternative. Some things can be addressed (the lack of cooperation now), and some cannot (the lack of cooperation in 2011); that is the nature of things that happened in the past.

          Comment


          • #65
            Originally posted by oiaohm View Post
            The conformance test suite for OpenGL is now open source; you can run it against the last fglrx closed source kernel driver and see how many faults they did not fix. By waiting for bug reports they fixed roughly 2 percent, so far more than two hours of employee time was saved. Note that when I say roughly fixed, a lot of the fixes were turning features off, not coding anything. It's not like the ATI/AMD driver had a good reputation anyhow. AMD made quite a good choice, considering their position.
            Their fglrx was failing to load. What conformance test suite? Their customers were unable to boot into X.
            GPUs are not only used by desktop computers. AMD replaced their driver with an open source one, and Arm's closed source Mali driver was redone as soon as DRI2 was released. So why should 10% of the desktop mean anything?
            We are talking about Wayland, so only the desktop counts.
            Mesa is mentioned as a sample implementation, not the reference implementation. SGI licenses are scattered all over DRI1 because the SGI implementation was the reference implementation.
            The letter says: "Our intention is to create a sample implementation of the 3D infrastructure using Mesa and XFree86 with full source available under an XFree86-style license."
            So, please stop this bullshit.
            Sorry, Nvidia did benchmark it: EGLStreams wired into the open source driver they use on arm, versus libgbm, and the result was that libgbm won on performance. It was when EGLStreams failed to demo good benchmarks that NVIDIA decided they had better improve GBM and do a GBM2. So control over allocation has nothing to do with it. The real issue is that a lot of the buffer setup in the Nvidia closed source driver is done on the userspace side, like a usermode X11 driver, so fixing this will mean some serious rewrites.
            Never heard about it. Considering the parts above, where you've invented a maintenance mode for fglrx dating from 2010 and "sample, not reference, Mesa", I don't trust your judgement, sorry.
            Supporting more than one back end increases the testing load for compositors. The big one: the EGLStreams path is slower than the libgbm path, and that is known. So why should people implementing Wayland compositors put up with second rate performance from Nvidia?
            So, Wayland compositor authors may push nvidia to rewrite their driver, which is complicated software, and that is a legitimate demand, while nvidia may not push compositor authors to add a hundred lines of code. And you call yourself just?
            Not everyone who has an Nvidia card is happy. You have people doing game mods on Nvidia whose models look perfect in Blender; they export them for use in a game, and the colours and details are off. Yet if the game is loaded by someone using an AMD card, everything looks the same as in the modelling tool and in the game engine. Of course, whenever someone points this out, they must be an AMD user. I got an AMD card because I was told the problem was Nvidia. Beforehand I was an NVIDIA user, wondering why I was having this strange issue and why the same model, loaded in three different games with the same texture set, looked different. Of course I was incorrectly blaming the game engines, until you see the three game engines under AMD render the model exactly the same. Nvidia is cheating when the quality differs engine to engine. It makes benchmarks better if you can downgrade quality and only a limited number of people notice.
            Utter drivel. How a model looks in-game depends on the model's details and on the rendering process. "Cheating" nvidia drivers can't just simplify a model in realtime, as that would make streaming from disk terrible, so they could change the rendering process instead, I don't know... replace the texture filtering mode, swap a shader for a simpler one, use simpler lighting, or modify threshold values to decrease the tessellation degree or select a less detailed mipmap level. But all these modes and shaders are part of the game engine, and what's more, the game engine can easily select a simplified model depending on the graphics settings. So if your model looks different depending on the engine, that is normal, and the natural guess is that the engines render the same model in different ways from the start. Assuming you are a sane person and would not replace an old video card with a new one of the same performance, I bet you are just comparing max settings on your new AMD GPU against lower settings on the old nvidia. Different engines adjust quality in different ways.

            Comment


            • #66
              Originally posted by Khrundel View Post
              Their fglrx was failing to load. What conformance test suite? Their customers were unable to boot into X.
              This is conforming behaviour for a deprecated feature under x.org, as covered at the conference in 2008: you break something, and if no one screams about it being broken, you remove the feature.
              Originally posted by Khrundel View Post
              We are talking about Wayland, so only the desktop counts.
              Try again. Wayland is not just about the desktop.

              embedded and mobile use cases
              Like it or not, the x.org X11 server did end up being used embedded. Wayland is the fusion between desktop and embedded. The most active instances of Wayland are on arm processors, and they can be using Mali drivers.

              Originally posted by Khrundel View Post
              The letter says: "Our intention is to create a sample implementation of the 3D infrastructure using Mesa and XFree86 with full source available under an XFree86-style license."
              So, please stop this bullshit.
              Sorry, it's not bullshit; you have to look at why large sections of the early code were from SGI. The top of the page lists who started the project. Redhat wanted the SGI graphics.

              Originally posted by Khrundel View Post
              Never heard about it.
              That does not surprise me, as the issue with EGLStreams, and the closed source driver going deprecated and into a maintenance-class mode, were both published on the mesa mailing list, which you don't read.

              Originally posted by Khrundel View Post
              So, Wayland compositor authors may push nvidia to rewrite their driver, which is complicated software, and that is a legitimate demand, while nvidia may not push compositor authors to add a hundred lines of code. And you call yourself just?
              Wayland is not the only problem. Nvidia maintains the nvidia open source driver for the arm platform so that the FBDEV interfaces work, as well as KMS and GBM, for the embedded users. Those embedded users are already using Wayland. The Unix Buffer Allocator thing from Nvidia is coming from their arm nvidia driver team. There is no promise that the Nvidia closed source driver will take it on board.

              One of the big reasons why Linux distributions do not have a clean boot-up splash screen is Nvidia. You really do need 2D buffers for output, and modesetting, to start when the kernel starts, and to be able to hand over all the way through the boot process cleanly.

              So, on the desktop, Nvidia's current driver design does not work with your normal start-up splash screens. If you write an EGLStreams splash screen, you cannot get a smooth transition from boot loader to splash screen, or from splash screen to compositor/X11 server (a sketch of the buffer facility a splash screen actually needs follows). It does not work well with Wayland. It blocks the complete removal of all the old DRI1 features from X11 servers. Like it or not, the current NVIDIA closed source driver really does not integrate into Linux distributions correctly and is missing the features needed to do so.
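
              To make the splash-screen point concrete, here is a minimal sketch in C, against the real libdrm API, of the KMS "dumb buffer" path a boot splash relies on from the moment the kernel driver loads; the device path is an assumption and error handling is omitted.

              ```c
              /* Sketch, assuming a KMS-capable driver: allocate a plain 2D
               * buffer and scan it out; no 3D driver or X server involved. */
              #include <fcntl.h>
              #include <stdint.h>
              #include <xf86drm.h>
              #include <xf86drmMode.h>

              int main(void)
              {
                  int fd = open("/dev/dri/card0", O_RDWR); /* node varies */

                  drmModeRes *res = drmModeGetResources(fd);
                  drmModeConnector *conn =
                      drmModeGetConnector(fd, res->connectors[0]);
                  drmModeModeInfo mode = conn->modes[0]; /* assume connected */

                  /* Ask the kernel for a linear buffer the CRTC can display. */
                  struct drm_mode_create_dumb creq = {
                      .width = mode.hdisplay, .height = mode.vdisplay, .bpp = 32,
                  };
                  drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);

                  uint32_t fb;
                  drmModeAddFB(fd, creq.width, creq.height, 24, 32,
                               creq.pitch, creq.handle, &fb);

                  /* Light the display with that buffer; this is the early
                   * modeset a splash screen must later hand to the compositor. */
                  drmModeSetCrtc(fd, res->crtcs[0], fb, 0, 0,
                                 &res->connectors[0], 1, &mode);
                  return 0;
              }
              ```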

              Originally posted by Khrundel View Post
              How a model looks in-game depends on the model's details and on the rendering process. "Cheating" nvidia drivers can't just simplify a model in realtime, as that would make streaming from disk terrible, so they could change the rendering process instead, I don't know... replace the texture filtering mode, swap a shader for a simpler one, use simpler lighting, or modify threshold values to decrease the tessellation degree or select a less detailed mipmap level. But all these modes and shaders are part of the game engine, and what's more, the game engine can easily select a simplified model depending on the graphics settings. So if your model looks different depending on the engine, that is normal, and the natural guess is that the engines render the same model in different ways from the start. Assuming you are a sane person and would not replace an old video card with a new one of the same performance, I bet you are just comparing max settings on your new AMD GPU against lower settings on the old nvidia. Different engines adjust quality in different ways.
              This is garbage; you were not paying attention to what I wrote. What I have seen is cases where the models should come out with the same appearance. You have many games using, say, the Unreal engine: almost exactly the same render engine, with the same texture and model formats required, so I am feeding in the same assets. On AMD, on the same driver version, they all look the same. On Nvidia, some of them look the same and some don't. Then you get Nvidia driver updates where a group that used to look the same now doesn't. You know what is going on here: in the games where quality drops, performance goes up. You also notice that the ones going horrible are the ones different parties regularly use for benchmarking.

              So this is not me just comparing to AMD. I was seeing the issue on Nvidia across driver version changes. When you have two different games looking the same, and after a driver update they no longer do, there is a problem. Also, you can roll the driver back and have them return to looking the same. This is truly something Nvidia is doing, and it is a cheat. I was comparing one AMD card running programs, with its results as one group, and one Nvidia card running programs, with its results as another group, with both of them changing driver versions. The AMD results compared against other AMD results were consistent; the Nvidia results compared against other Nvidia results turned out not to be.

              Now, if you are entering a game tournament, you don't want to find that the reason you were losing is that you updated your graphics drivers, so the graphics looked slightly different from what you had practised with, all because dropping quality allowed more speed. So there are some groups who have noticed the Nvidia problem.

              Comment
