AMD Pushes Updated AMDVLK Vulkan Code Following Adrenalin 2020 Unveil

  • #11
    Originally posted by aufkrawall View Post
    You can't override AA in any modern game on Windows either.
    Though it would be nice to have it for old games in Wine, especially SSAA.
    On Windows you can override antialiasing from the Radeon Settings control panel. You can do it globally for all games, or even per profile for specific games.

    If you're talking about running Windows games on Linux with Wine or Proton, then no, you can't, because Mesa doesn't support overriding antialiasing. On Linux the only option is to change antialiasing via the in-game settings, even for a Windows game. The problem is that many games don't have the appropriate in-game settings, and then a driver override is the only option, which Mesa doesn't support.
    Last edited by duby229; 13 December 2019, 07:34 AM.



    • #12
      Originally posted by Ciren View Post
      Can somebody help me understand what's the point of AMDVLK? Why wouldn't AMD contribute to RADV directly to make Mesa better as a whole and to improve on already great RADV? The more I read about AMDVLK, the more I become confused...
      We have a unified Vulkan driver that is shared across OSes. There is no Linux-specific Vulkan team; about the only difference in the Vulkan driver is the backend ioctl interfaces (one set for each OS). So we are writing the driver anyway. Despite what others say, it's less resource intensive for us to open source our unified Vulkan driver than it would be to fund a team to work on radv. Radv regularly uses amdvlk for inspiration. We also have customers that use it. Why not provide it as open source? If you don't want to use it, don't. When it comes to open source contributions, let's be fair, the vast majority of the contributions for mesa, kernel, llvm, etc. comes from AMD, so it's not like community projects are some sort of panacea of contributions. I would argue that the more vendors contribute to community projects, the fewer contributions are made by community members.

      While I am on my soap box, I would argue that upstream, at least the kernel, is more of a hindrance to good device support in Linux than anything else. The cycles are long and rarely align with product cycles. Once cycles are complete, no new features are allowed so if schedules are not aligned, you can't backport the features to an older kernel even if you wanted to. And the kernel doesn't allow any backwards compatibility code. Add to that the fact that every distro and customer uses a different older kernel for production. Few use the official LTS kernels. I'd guess that most hardware vendors would have twice the engineering capacity if we didn't have to deal with backporting or providing out of tree drivers to the various different versions our customers require. We put in a ton of effort to break out common code and add core helpers and get agreement from the community when we upstream code, only to reimplement internal versions of all that in order to support an older kernel that does not have that core functionality. Oh yeah, and while you are at it, don't break all the other backported or out of tree vendor code with your changes because everyone is in the same boat.



      • #13
        Originally posted by agd5f View Post

        We have a unified Vulkan driver that is shared across OSes. There is no Linux-specific Vulkan team; about the only difference in the Vulkan driver is the backend ioctl interfaces (one set for each OS). So we are writing the driver anyway. Despite what others say, it's less resource intensive for us to open source our unified Vulkan driver than it would be to fund a team to work on radv. Radv regularly uses amdvlk for inspiration. We also have customers that use it. Why not provide it as open source? If you don't want to use it, don't. When it comes to open source contributions, let's be fair, the vast majority of the contributions for mesa, kernel, llvm, etc. comes from AMD, so it's not like community projects are some sort of panacea of contributions. I would argue that the more vendors contribute to community projects, the fewer contributions are made by community members.

        While I am on my soap box, I would argue that upstream, at least the kernel, is more of a hindrance to good device support in Linux than anything else. The cycles are long and rarely align with product cycles. Once cycles are complete, no new features are allowed so if schedules are not aligned, you can't backport the features to an older kernel even if you wanted to. And the kernel doesn't allow any backwards compatibility code. Add to that the fact that every distro and customer uses a different older kernel for production. Few use the official LTS kernels. I'd guess that most hardware vendors would have twice the engineering capacity if we didn't have to deal with backporting or providing out of tree drivers to the various different versions our customers require. We put in a ton of effort to break out common code and add core helpers and get agreement from the community when we upstream code, only to reimplement internal versions of all that in order to support an older kernel that does not have that core functionality. Oh yeah, and while you are at it, don't break all the other backported or out of tree vendor code with your changes because everyone is in the same boat.
        Much respect, but the fact that the kernel is upstream is the -only- reason the DAL code is in -any- kind of useful condition, as just one example. I would bet every dollar I'll ever make that the quality of your kernel code is hundreds of times better -because- of the conditions the upstream kernel imposes. You guys choose to work outside the kernel, then shove in enormous, ugly pull requests, and then you suffer the consequences of your own decisions, with no ability to bisect, debug, or even understand them.

        That's all I have to say.

        EDIT: I guess one more thing. You say the upstream kernel is a hindrance to good device support, except the fact is that the Linux kernel has -the- most comprehensive device support of -anything- else in the world. Nothing else even comes close, not even Windows.
        Last edited by duby229; 13 December 2019, 03:12 PM.



        • #14
          Linux is a drop in the bucket for their revenues. Never forget it.



          • #15
            Originally posted by duby229 View Post

            Much respect, but the fact that the kernel is upstream is the -only- reason the DAL code is in -any- kind of useful condition, as just one example. I would bet every dollar I'll ever make that the quality of your kernel code is hundreds of times better -because- of the conditions the upstream kernel imposes. You guys choose to work outside the kernel, then shove in enormous, ugly pull requests, and then you suffer the consequences of your own decisions, with no ability to bisect, debug, or even understand them.

            That's all I have to say.

            EDIT: I guess one more thing. You say the upstream kernel is a hindrance to good device support, except the fact is that the Linux kernel has -the- most comprehensive device support of -anything- else in the world. Nothing else even comes close, not even Windows.
            You are missing my point. It's not the kernel code itself or the coding style requirements or any of that; I'm not talking about throwing garbage over the wall. It's the process. If the process were better, hardware vendors could support more features and write better code. The current process is a hindrance to that.

            Consider this. You develop a new GPU. An OEM wants to use that new GPU in a product they are launching, and wants to offer Distro A and Distro B on it. Neither distro's release schedule aligns with the OEM's product launch. Your GPU driver code is ready 3 months before launch, but you happen to be mid-cycle with the kernel release. New ASIC support is a new feature, so it can't go into the current kernel and you have to wait for the next one, which won't release until 2 months after the product is supposed to ship. On top of that, Distro A is using a kernel from 6 months ago and Distro B is using a kernel from 9 months ago. You didn't get silicon back in time to align with those kernels, so it's not like you could have just backed out your development to better align. Not to mention there are 5 other GPU programs also running concurrently at different stages of development.

            So what do you do? You pay several additional teams to backport those features from the upstream kernel to Distro A's kernel, and other people to backport them to Distro B's kernel, and both of those kernels are too old to support the new core infrastructure you need for your new feature, so now you have to backport that too. If you didn't have to backport to multiple old kernels, most of those developers could have been working on new features and other driver improvements; instead they are doing backports. What if stable kernels allowed new features? What if newer kernels provided better internal backwards compatibility? What if distros could align better on what kernels they use?

            The kernel process really only works well for products with long lead times and life cycles. It doesn't work well for fast-moving devices like GPUs.



            • #16
              Originally posted by duby229 View Post

              On Windows you can override antialiasing from the Radeon Settings control panel. You can do it globally for all games, or even per profile for specific games.
              No, you can't. It's not possible for drivers to force AA into complex renderers, apart from simple low-quality post-processing.
              While you can set whatever you like in the driver UI and game profiles, in the best case it will have no effect, and otherwise it will waste performance for no benefit, or worse.
              There are some exceptions, but the main point doesn't change.



              • #17
                Originally posted by agd5f View Post

                You are missing my point. It's not the kernel code itself or the coding style requirements or any of that; I'm not talking about throwing garbage over the wall. It's the process. If the process were better, hardware vendors could support more features and write better code. The current process is a hindrance to that.

                Consider this. You develop a new GPU. An OEM wants to use that new GPU in a product they are launching, and wants to offer Distro A and Distro B on it. Neither distro's release schedule aligns with the OEM's product launch. Your GPU driver code is ready 3 months before launch, but you happen to be mid-cycle with the kernel release. New ASIC support is a new feature, so it can't go into the current kernel and you have to wait for the next one, which won't release until 2 months after the product is supposed to ship. On top of that, Distro A is using a kernel from 6 months ago and Distro B is using a kernel from 9 months ago. You didn't get silicon back in time to align with those kernels, so it's not like you could have just backed out your development to better align. Not to mention there are 5 other GPU programs also running concurrently at different stages of development.

                So what do you do? You pay several additional teams to backport those features from the upstream kernel to Distro A's kernel, and other people to backport them to Distro B's kernel, and both of those kernels are too old to support the new core infrastructure you need for your new feature, so now you have to backport that too. If you didn't have to backport to multiple old kernels, most of those developers could have been working on new features and other driver improvements; instead they are doing backports. What if stable kernels allowed new features? What if newer kernels provided better internal backwards compatibility? What if distros could align better on what kernels they use?

                The kernel process really only works well for products with long lead times and life cycles. It doesn't work well for fast-moving devices like GPUs.
                Regardless of whether I agree with you or not, I do appreciate all the hard work and effort AMD puts in to support Linux, and I'm sure millions of other people do as well. I'm not in your shoes, so I can't say I understand your frustrations, but like I said, I have much respect for you.

