RadeonSI/RADV Mesa 17.3 + AMDGPU DC vs. NVIDIA 387.12 Linux Gaming Performance

  • oleyska
    replied
    Originally posted by Strunkenbold View Post

    I was about to ask the same. You can see the performance difference the way it actually should be in the older games and in Unigine Superposition, but nobody really cares about a synthetic benchmark and two old-school shooters. In Mad Max and Deus Ex, the performance delta is a joke. So the question is: are those games under-performing in Mesa in general, or is there still something in the tank for Vega?
    Right now it makes no sense to buy a new Vega when you can get a used Fury for 250€ less and have little to no performance loss.
    Vega performs a lot better than Fury on Windows, but Deus Ex in particular performs badly on Linux, imho.



  • Strunkenbold
    replied
    Originally posted by theriddick View Post
    The Vega 64 is only 10% faster than an R9 Fury card; truly my brain struggles with this... (in actual game performance)

    This sort of issue really needs to be investigated by AMD; the Vega 64 should be at minimum 30% faster than a standard R9 Fury, surely!?
    I was about to ask the same. You can see the performance difference the way it actually should be in the older games and in Unigine Superposition, but nobody really cares about a synthetic benchmark and two old-school shooters. In Mad Max and Deus Ex, the performance delta is a joke. So the question is: are those games under-performing in Mesa in general, or is there still something in the tank for Vega?
    Right now it makes no sense to buy a new Vega when you can get a used Fury for 250€ less and have little to no performance loss.



  • oleyska
    replied
    Originally posted by starshipeleven View Post
    4K makes sense on 43" screens, as there it has a similar PPI (pixels per inch) to a normal full HD PC monitor, so you actually have more space.

    If you buy smaller screens, then you need to upscale your applications or everything is too small to read, and that defeats the point of a 4K screen.
    Have you seen a 4K screen?
    I use a 28" screen, and the quality compared to 1440p at 28" is absolutely staggering.
    We have full HD on 4" phones for a reason.
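    A quick back-of-the-envelope check of the PPI numbers being argued here (an illustrative sketch; the 21.5" full HD size is an assumption for a "normal" monitor):

```c
/* ppi.c - pixels per inch = diagonal resolution in pixels / diagonal in inches.
 * Build with: cc ppi.c -lm */
#include <math.h>
#include <stdio.h>

static double ppi(double w_px, double h_px, double diag_in)
{
    return sqrt(w_px * w_px + h_px * h_px) / diag_in;
}

int main(void)
{
    printf("4K  @ 43.0\": %.1f PPI\n", ppi(3840, 2160, 43.0)); /* ~102 */
    printf("FHD @ 21.5\": %.1f PPI\n", ppi(1920, 1080, 21.5)); /* ~102 */
    printf("4K  @ 28.0\": %.1f PPI\n", ppi(3840, 2160, 28.0)); /* ~157 */
    return 0;
}
```

    So 4K at 43" is indeed roughly the same density as full HD at 21.5", while 4K at 28" is much denser, which is where the quality difference described above comes from.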



  • johanb
    replied
    Originally posted by pal666 View Post
    You've already been told that it is a useless metric because it doesn't say how often it happens. If it happened once, you can ignore it.
    Which is why it's becoming more popular nowadays to show 1% and 0.1% minimum framerates, as well as frametimes, instead of just average fps.
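    For reference, a minimal sketch of one common convention for those "1% / 0.1% low" figures: sort the per-frame times, average the slowest 1% (or 0.1%) of frames, and report that as fps. The frame-time trace below is made up, and tools differ in the exact convention they use.

```c
/* lows.c - 1% / 0.1% low framerate from a frame-time log (milliseconds). */
#include <stdio.h>
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b)
{
    double d = *(const double *)b - *(const double *)a;
    return (d > 0) - (d < 0);
}

static double low_fps(double *ms, size_t n, double fraction)
{
    qsort(ms, n, sizeof *ms, cmp_desc);        /* slowest frames first */
    size_t k = (size_t)(n * fraction);
    if (k == 0)
        k = 1;                                 /* short trace: just the worst frame */
    double sum = 0.0;
    for (size_t i = 0; i < k; i++)
        sum += ms[i];
    return 1000.0 / (sum / k);                 /* mean frame time -> fps */
}

int main(void)
{
    /* hypothetical trace: mostly ~16.7 ms (60 fps) with a few spikes */
    double trace[] = { 16.7, 16.6, 16.8, 40.0, 16.7, 16.7, 90.0, 16.6,
                       16.7, 16.8, 16.7, 33.0, 16.7, 16.6, 16.7, 16.7 };
    size_t n = sizeof trace / sizeof trace[0];
    printf("1%%   low: %.1f fps\n", low_fps(trace, n, 0.01));
    printf("0.1%% low: %.1f fps\n", low_fps(trace, n, 0.001));
    return 0;
}
```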



  • theriddick
    replied
    The Vega 64 is only 10% faster than an R9 Fury card; truly my brain struggles with this... (in actual game performance)

    This sort of issue really needs to be investigated by AMD; the Vega 64 should be at minimum 30% faster than a standard R9 Fury, surely!?
    Last edited by theriddick; 26 October 2017, 03:55 AM.



  • Wielkie G
    replied
    Originally posted by Leopard View Post

    Min fps is a major metric.

    Have you even encountered one of them in games like CS:GO, Rocket League, or Dota 2?

    It is not that important in single-player games, but it is critical in multiplayer.

    Also, go do your AMD fanboying somewhere else. I also love AMD, but I'm smart enough not to call bad results good ones.
    OK, so everyone who doesn't agree with you is a fanboy of the opposite side? That's the very definition of fanboyism.

    Minimum fps doesn't tell you much, as it just shows one single moment. A GPU could perform smoothly overall but hitch for a single moment and receive a harsh min-fps rating. Another GPU could have a better min fps but hit it more often, and the whole experience would be worse due to the lack of smoothness.

    What you really want is the 95th or 99th percentile, which tells you the minimum fps over the best 95% or 99% of frames. That way you remove outliers from the data and can actually say something about smoothness.

    Also, take a look at the first Mad Max result, where the Fury has a min fps of 88.25 and the 1070 has 1.67. Are you telling me the Fury is ~50x smoother than the 1070 there? I think this shows how useless a metric min fps is.
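    To put numbers on that outlier argument, here is a small sketch with two made-up frame-time traces: one run at a steady 60 fps with a single long hitch, and one that stutters constantly. Min fps ranks the steady run as far worse, while the 99th-percentile figure matches the actual experience.

```c
/* percentile.c - min fps vs. 99th-percentile fps on synthetic traces. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_asc(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

static void report(const char *name, double *ms, size_t n)
{
    qsort(ms, n, sizeof *ms, cmp_asc);
    double worst = ms[n - 1];                       /* slowest frame */
    double p99   = ms[(size_t)(0.99 * (n - 1))];    /* 99th-percentile frame time */
    printf("%-18s min fps: %5.1f   99th percentile fps: %5.1f\n",
           name, 1000.0 / worst, 1000.0 / p99);
}

int main(void)
{
    enum { N = 1000 };
    static double smooth[N], choppy[N];
    for (size_t i = 0; i < N; i++) {
        smooth[i] = 16.7;                           /* steady 60 fps */
        choppy[i] = (i % 10 == 0) ? 50.0 : 16.7;    /* every 10th frame is slow */
    }
    smooth[500] = 600.0;                            /* one 0.6 s hitch */

    report("one hitch:", smooth, N);
    report("constant stutter:", choppy, N);
    return 0;
}
```

    The one-hitch run reports a min fps of ~1.7 against ~20 for the constant stutterer, even though its 99th-percentile figure stays near 60 fps, which is exactly the problem with quoting a single minimum.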



  • marek
    replied
    bridgman nuetzel The problem is that vertex buffers are in RAM, not VRAM. I can get 4x better FPS if I put them in VRAM. This is a driver decision, not an app decision - it's fixable universally. That should get us to 18 FPS for Vega. The next bottleneck is GPU vertex throughput. I'm not sure what we can do about that - maybe primitive shaders on Vega, which would be an insane amount of work.
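    For context on "driver decision, not app decision": in plain OpenGL the application only passes a usage hint when creating a vertex buffer; which heap the buffer actually lands in (system RAM/GTT or VRAM) is chosen by the driver, e.g. radeonsi. A minimal sketch with standard GLFW/GLEW boilerplate, nothing radeonsi-specific:

```c
/* vbo.c - create a vertex buffer; where it lives is up to the driver.
 * Build with: cc vbo.c -lglfw -lGLEW -lGL */
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit())
        return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);       /* offscreen GL context */
    GLFWwindow *win = glfwCreateWindow(64, 64, "vbo", NULL, NULL);
    if (!win)
        return 1;
    glfwMakeContextCurrent(win);
    glewInit();

    static const float verts[] = { 0.f, 0.f, 1.f, 0.f, 0.f, 1.f };

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* GL_STATIC_DRAW is only a usage hint: the app cannot force VRAM
     * placement; the driver decides where the backing storage lives. */
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    printf("vertex buffer %u created; heap placement is the driver's call\n", vbo);

    glDeleteBuffers(1, &vbo);
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```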



  • boxie
    replied
    Originally posted by pal666 View Post
    You've already been told that it is a useless metric because it doesn't say how often it happens. If it happened once, you can ignore it.
    Especially if you get that 3fps on a loading screen!



  • pal666
    replied
    Originally posted by Leopard View Post
    Min fps is a major metric.
    You've already been told that it is a useless metric because it doesn't say how often it happens. If it happened once, you can ignore it.



  • Michael
    replied
    Originally posted by marek View Post
    bridgman nuetzel False alarm. I know where the bottleneck is. I don't know how to make it faster yet.
    Not sure if this bit of info is worth anything as inspiration or for code-path comparison, but... ParaView was originally recommended to me for workstation graphics testing by the Intel guys working on the SWR rasterizer, as one of the workstation graphics tests where SWR vs. LLVMpipe can show a big difference in performance between their code paths.

