
Nouveau Developers Remain Blocked By NVIDIA From Advancing Open-Source Driver


  • #51
    Originally posted by karolherbst View Post

    Well, Nouveau works best, in terms of performance (tm), on the 780 Ti; that's just the way it is. You can also reclock a 980 Ti or similar, but then there is no fan control, because we lack the signed firmware needed to control the fans. I have a branch for this, though, so at least laptop users will be able to reclock those cards sooner or later.

    It is just the way it is
    Ok, thanks for the info. It's AMD or nothing, I guess.
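For readers curious what the "reclocking" mentioned above looks like in practice: Nouveau exposes manual performance-level selection through a debugfs file. The sketch below is an assumption-laden illustration (the card index `0`, the level id `"0f"`, and the output format vary per GPU; the interface requires root, debugfs, and the nouveau driver), not a definitive tool:

```python
from pathlib import Path

# Nouveau exposes manual reclocking through a debugfs file (root required).
# The card index (0) and the level id ("0f") used below are examples; the
# real levels appear when reading the file on a supported GPU.
PSTATE = Path("/sys/kernel/debug/dri/0/pstate")

def list_pstates():
    """Return the available performance levels, or None if unavailable."""
    if not PSTATE.exists():
        return None
    return PSTATE.read_text().splitlines()

def set_pstate(level: str) -> bool:
    """Request a performance level, e.g. "0f". Returns False if unavailable."""
    if not PSTATE.exists():
        return False
    PSTATE.write_text(level + "\n")
    return True

if __name__ == "__main__":
    levels = list_pstates()
    if levels is None:
        print("pstate interface not available (need nouveau + root + debugfs)")
    else:
        print("\n".join(levels))
```

On machines without the interface the script just reports that it is unavailable instead of failing.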

    Comment


    • #52
      Originally posted by Otus View Post
      By new I meant the 1000 series. I'm not interested in buying a four-year-old GPU.
      I think this post drives my point home perfectly. It's why open source is so important, imo.

      Comment


      • #53
        AMD should stop with the overvolting, and should also stop the rumor that the reason is ASIC quality. The worst ASIC would lose perhaps 50 MHz of boost yet gain performance, because it would run cooler and so could stay in a higher frequency state.

        https://translate.google.de/translat...tml&edit-text=

        Comment


        • #54
          Originally posted by duby229 View Post
          What distro specific packaging issues are you seeing? I'd like to know more about it if you're willing to share.
          Lack of packages.

          Nobody is building the distro-specific packages that would make it easy for people to use the not-yet-upstream code. There are some userspace packages, but, for example, I don't think anyone is building packages with the latest amdgpu staging code merged into a distro kernel tree of the same vintage; the same goes for ROCm kernel trees with the open-source KFD code you need in order to run open-source OpenCL.

          We are working on including open-source components as an option for the what-was-just-hybrid package set, but there are a whole lot more interesting things that could be done.
          Last edited by bridgman; 09-23-2017, 04:00 PM.

          Comment


          • #55
            Originally posted by artivision View Post
            AMD should stop with the overvolting, and should also stop the rumor that the reason is ASIC quality. The worst ASIC would lose perhaps 50 MHz of boost yet gain performance, because it would run cooler and so could stay in a higher frequency state.

            https://translate.google.de/translat...tml&edit-text=
            Well, on Nvidia, depending on the quality of the board, the voltage selection changes, giving some clock states a voltage requirement that can no longer be reached (even after taking temperature into account); there is an on-board value for this that can be read out. So Nvidia doesn't increase the voltage if the board quality is bad, but simply removes clock states, resulting in lower clocks. Temperature doesn't really matter much here for most cards, killing off maybe 45 MHz at worst when comparing idle versus full-power temperatures. In the end it doesn't matter much if you reduce the clocks beforehand, because you gain nothing: heat would kill that higher clock state sooner or later anyway. Reducing the clock from the start just removes the benefit of boosting.
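The per-board mechanism described above can be sketched as a small model: each clock state carries a minimum voltage requirement, and states whose requirement exceeds the board's readable voltage cap are simply dropped, lowering the top clock instead of raising the voltage. All names and numbers below are illustrative, not real VBIOS data:

```python
from dataclasses import dataclass

@dataclass
class ClockState:
    mhz: int
    required_mv: int  # minimum voltage this state needs, in millivolts

def usable_states(states, board_max_mv):
    """Drop clock states whose voltage requirement exceeds what this
    particular board can supply; the voltage itself is never raised."""
    return [s for s in states if s.required_mv <= board_max_mv]

# Illustrative values only.
states = [
    ClockState(1000, 900),
    ClockState(1200, 1000),
    ClockState(1400, 1100),
]

good_board = usable_states(states, board_max_mv=1150)  # keeps all three states
weak_board = usable_states(states, board_max_mv=1050)  # loses the 1400 MHz state

print(max(s.mhz for s in good_board))  # → 1400
print(max(s.mhz for s in weak_board))  # → 1200
```

The weaker board ends up with a lower maximum clock at the same voltage, matching the "removes clock states" behavior described in the post.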

            Comment


            • #56
              Originally posted by artivision View Post
              AMD should stop with the overvolting, and should also stop the rumor that the reason is ASIC quality.
              Stopping rumors is really hard, except for those rare cases where the reality is more interesting than the rumor.

              Agreed that there does seem to be some potential for lowering voltages while maintaining reliable operation, but I'm not sure whether all boards shipped that way would work in all expected conditions. I think the direction chosen was to make it easier for users to undervolt instead (the whole WattMan thing), but I'm not sure.
              Last edited by bridgman; 09-23-2017, 04:00 PM.

              Comment


              • #57
                Originally posted by birdie View Post
                It's some internal politics I cannot find any rational explanation to.
                Possibly a case of NIH syndrome: NVIDIA would rather work on its own driver than on someone else's.

                Originally posted by karolherbst View Post
                1. there won't be two drivers supporting the same devices
                2. NVGPU only supports a small subset of what Nouveau supports; it's mainly for Tegra
                Not sure about your other points, but against these two there are a number of historic and current counterexamples in the kernel, where two drivers for the same hardware coexisted for years. To name a few:
                • e100 vs. eepro100
                • b43 vs. brcma
                • juju vs. 1394
                • IDE vs. libata PATA
                • hpsa vs. cciss
                • I'm sure there are more


                Originally posted by bridgman View Post
                These days it seems that even building and publishing packages from published open source driver code has become an "oh we can't do that only AMD can do it" thing.
                Originally posted by bridgman View Post
                Nobody is building the distro-specific packages which would make it easy for people to make use of the not-yet-upstream code. There are some userspace packages but for example I don't think anyone is building packages with latest amdgpu staging code merged into distro kernel tree of the same vintage, same for ROCm kernel trees with the KFD open source you need to run open source OpenCL.
                It's not hard; e.g. for Ubuntu there are a number of PPAs from which recent DC/DAL code can conveniently be installed. For ROCm there was less interest, because it only works on very specific hardware and there was too little overlap between the people who owned the hardware and the people who were interested in packaging the software.

                However, not all is/was great. Building a kernel that had both the DC/DAL patches and the ROCm ones was tricky for most of that time, because the two targeted different kernel versions.
                Last edited by chithanh; 09-23-2017, 04:26 PM.

                Comment


                • #58
                  Originally posted by bridgman View Post

                  Stopping rumors is really hard, except for those rare cases where the reality is more interesting than the rumor

                  Agree that there does seem to be some potential for lowering voltages while maintaining reliable operation, but I'm not sure if all boards shipped that way would work in all expected conditions. I think the direction chosen was making it easier for user to undervolt instead (the whole WattMan thing) but not sure.
                  ASICs are complex; you cannot do the same things with them as with CPUs. So your frequency-fluctuation driver (which is not even hardcoded) confuses and massively throttles the ASIC. No, you cannot jump from 1100 to 1600 MHz and back down and up several times per second. Also, if you remember your reference design: two RX 480s at 90% utilization and 150 W consumption for both together beat a GTX 1080 by +15% in Ashes. The 480M (RX 560) was supposed to be 35 W, not 75 W, as well. And what kind of low-power cut is this that needs 2+ times the consumption to cover the last 10-15% of the frequency range? Your own invention?

                  In my case, undervolting an RX 470/480 or RX 570/580 reduces noise, temperature, and consumption to around 80 W, plus or minus, without losing performance. The same goes for the RX 460/560, which drops below 40 W. A dozen RX-series GPUs tested so far and no performance loss. I don't have anyone with a Vega to test that on. I hope you didn't plan ahead to sell mobile GPUs/APUs at higher prices.
                  Last edited by artivision; 09-23-2017, 04:35 PM.
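A rough way to see why modest undervolting can cut a lot of power without costing clocks, as reported above: dynamic power scales roughly with C·V²·f, so lowering voltage at an unchanged frequency reduces power quadratically. A back-of-the-envelope sketch, where the capacitance factor, voltages, and clock are illustrative values rather than measurements:

```python
def dynamic_power(cap_factor, volts, mhz):
    """Approximate dynamic switching power: P ~ C * V^2 * f (arbitrary units)."""
    return cap_factor * volts ** 2 * mhz

stock = dynamic_power(1.0, 1.15, 1340)        # stock voltage at stock clock
undervolted = dynamic_power(1.0, 1.05, 1340)  # same clock, 100 mV lower

savings = 1 - undervolted / stock
print(f"power saved: {savings:.1%}")  # → power saved: 16.6%
```

So a roughly 9% voltage drop alone yields about a 17% dynamic-power reduction at identical frequency, before any secondary gains from running cooler.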

                  Comment


                  • #59
                    Originally posted by chithanh View Post
                    Not sure about your other points, but against these two there are a number of historic and current counterexamples in the kernel, where two drivers for the same hardware coexisted for years. To name a few:
                    • e100 vs. eepro100
                    • b43 vs. brcma
                    • juju vs. 1394
                    • IDE vs. libata PATA
                    • hpsa vs. cciss
                    • I'm sure there are more
                    Well, it always depends on the case. But in the current state we are fairly sure it would require a lot of convincing from Nvidia's side that it is a good idea. We could also just work together on it instead; it's Nvidia's choice.

                    Comment


                    • #60
                      I wish that at least X11 window compositors would work better with Nouveau here on a Pascal GPU. KWin gives me obvious corruption in context menus (no matter the settings), and GNOME's Mutter sometimes also shows corruption with certain windows, or occasionally broken vsync. That's on Arch with the latest components.

                      Comment
