DXVK 0.41 Released, Slightly More CPU Efficient & Offers A Heads-Up Display
Originally posted by artivision:
"First of all, there is no such a thing as D3D11 -> Hardware; D3D11 is a high-level API."

You know exactly what I mean. Do Windows drivers translate their stuff to Vulkan? No, they don't. Do they have to deal with the inefficiencies of a Vulkan-based implementation, then? No, they don't, and their internal APIs can be designed to be a better fit for a D3D11 implementation than Vulkan will ever be. Gallium is quite similar to D3D11 in many ways; Vulkan isn't. DXVK has its own internal API as well, so if you're being pedantic, DXVK does D3D11 -> DXVK API -> Vulkan -> Hardware.
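To sketch that chain (illustrative only; every name below is invented for the example and is not DXVK's actual internals):

```cpp
// Schematic sketch of the D3D11 -> DXVK API -> Vulkan layering described
// above. NOT DXVK's actual code; all names here are made up.
#include <cstdint>
#include <iostream>

// Stand-in for the Vulkan layer: records a draw into a command buffer.
void vkCmdDrawStub(uint32_t vertexCount, uint32_t firstVertex) {
  std::cout << "vkCmdDraw(" << vertexCount << ", " << firstVertex << ")\n";
}

// "DXVK API" layer: an internal command queued by the D3D11 front-end
// and later flushed into a Vulkan command buffer.
struct DxvkDrawCmd {
  uint32_t vertexCount;
  uint32_t firstVertex;
  void record() const { vkCmdDrawStub(vertexCount, firstVertex); }
};

// "D3D11" layer: the application-facing entry point translates the call
// into an internal DXVK command instead of touching Vulkan directly.
struct D3D11DeviceContextSketch {
  void Draw(uint32_t vertexCount, uint32_t startVertex) {
    DxvkDrawCmd cmd{vertexCount, startVertex};
    cmd.record();  // the real thing would queue this for a worker thread
  }
};

int main() {
  D3D11DeviceContextSketch ctx;
  ctx.Draw(3, 0);  // D3D11 -> DXVK command -> Vulkan -> hardware
}
```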
Originally posted by artivision:
"Second, AMD's D3D11 solution is the one that reduces D3D11 multi-threaded games to single-threaded ones, because [...] command lists are missing from AMD's D3D11."

And nobody cares, because hardly any game even uses deferred contexts for any significant amount of rendering. For games that do, DXVK has the exact same issue as AMD's driver: it cannot record Vulkan command buffers on deferred contexts, because of various API quirks. Primary Vulkan command buffers don't work because they can't inherit active queries (unlike command lists in D3D11); secondary command buffers don't work either, because they don't allow new render passes to be begun. A mixture of the two, where some commands are recorded directly into secondary command buffers and others are recorded to a primary buffer when the command list is executed on the immediate context, is probably possible, but it would end up being an unmaintainable mess with questionable gains, and potentially even higher overhead when the command buffers end up too small. D3D11's ability to map resources on a deferred context further complicates the matter.
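To make the secondary-command-buffer limitation concrete, here is roughly what recording one looks like. This is only a sketch against the Vulkan 1.0 API: it assumes a device, command pool, render pass and framebuffer already exist, and it skips all error handling.

```cpp
// Why secondary command buffers are a poor fit for D3D11 deferred
// contexts: they must be recorded relative to a render pass that the
// primary command buffer begins, and query inheritance is very limited.
#include <vulkan/vulkan.h>

void recordSecondary(VkCommandBuffer cmdBuf, VkRenderPass renderPass,
                     VkFramebuffer framebuffer) {
  // A secondary command buffer has to declare up front which render
  // pass instance it will execute inside of.
  VkCommandBufferInheritanceInfo inheritance = {};
  inheritance.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
  inheritance.renderPass  = renderPass;
  inheritance.subpass     = 0;
  inheritance.framebuffer = framebuffer;
  // Query inheritance is limited: only an occlusion query begun on the
  // primary can remain active, and only if this flag is set (and the
  // device supports the inheritedQueries feature).
  inheritance.occlusionQueryEnable = VK_TRUE;

  VkCommandBufferBeginInfo beginInfo = {};
  beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
  // RENDER_PASS_CONTINUE means this buffer lives entirely inside a
  // render pass instance begun on the primary command buffer.
  beginInfo.flags = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
  beginInfo.pInheritanceInfo = &inheritance;

  vkBeginCommandBuffer(cmdBuf, &beginInfo);
  // Draw calls are fine here, but vkCmdBeginRenderPass is NOT allowed,
  // so a D3D11 command list that binds new render targets mid-stream
  // cannot be mapped onto a single secondary command buffer.
  vkEndCommandBuffer(cmdBuf);
}
```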
Originally posted by artivision:
"So I expect DXVK to end up with 85% of Nvidia's speed [...] and I also expect it to end up 15% faster than AMD's solution, due to its use of Vulkan's command abilities."

Then prepare to be disappointed. AMD's solution is faster, sometimes by a significant amount, in both CPU- and GPU-limited scenarios.

Last edited by VikingGe; 09 April 2018, 06:38 AM.
artivision, I have no idea what your point is, but basically: if you have a GPU load of 100% on both native Windows and DXVK, and you get 60 FPS on Windows, you can expect around 45 FPS on DXVK, i.e. roughly 75% of native performance. That's the point I'm trying to make.

And assuming that games will never be CPU-bound is not correct either. Some will be, some won't, and of course it depends on the hardware and settings used.
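The arithmetic behind that estimate, spelled out as a quick sketch (the 0.75 factor is just the rough ratio claimed above, not a measured constant):

```cpp
// Back-of-the-envelope math for the GPU-bound case: if DXVK reaches
// roughly 75% of native GPU-limited performance, 60 FPS native maps to
// 45 FPS, i.e. about 5.6 ms of extra GPU time per frame.
#include <iostream>

int main() {
  double nativeFps  = 60.0;
  double dxvkFactor = 0.75;                // assumed efficiency ratio
  double dxvkFps    = nativeFps * dxvkFactor;
  double nativeMs   = 1000.0 / nativeFps;  // ~16.7 ms per frame
  double dxvkMs     = 1000.0 / dxvkFps;    // ~22.2 ms per frame
  std::cout << dxvkFps << " FPS ("
            << dxvkMs - nativeMs << " ms extra per frame)\n";
}
```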
Originally posted by VikingGe:
"No, it doesn't. How would you even come to that conclusion?"

Just run stuff like the Unigine benchmarks on native Windows and on DXVK and see for yourself...
Well I certainly don't (on Polaris), and on Nvidia it's actually slower than the OpenGL renderer. And only one of the games I regularly use for testing gets roughly the same performance as on Windows. Expecting it to consistently deliver the same performance or even to outperform Windows is expecting far too much.
Originally posted by VikingGe:
"Well I certainly don't (on Polaris), and on Nvidia it's actually slower than the OpenGL renderer. And only one of the games I regularly use for testing gets roughly the same performance as on Windows. Expecting it to consistently deliver the same performance or even to outperform Windows is expecting far too much."

Also, as a developer, if I program an effect to consume 200 GFLOPS, for example, it will do that regardless of how many layers it passes through. And if I develop a variable effect that can scale between 100 and 200 GFLOPS, and when a user chooses High in the game options it almost always runs at 100, that means you are actually cheating compared to an implementation that runs at ~150, and you get more noise in your picture.

For a given GPU usage X and shader workload Y you will always get W FPS, regardless of the API or the emulation; that is the rule, and beyond that rule you are outside the region of logic.

Last edited by artivision; 09 April 2018, 12:08 PM.