More Benchmarks Showing How Gallium3D With RX Vega Smacks AMDGPU-PRO's OpenGL Proprietary Driver


  • #31
    Originally posted by mphuZ View Post
    But what's the point of testing the Pro driver?
    We were using the -PRO driver as an interim consumer solution while the open source stack picked up GL 4.5 support and the devs had a chance to start working on performance. That work (on the open stack) has progressed well, and Michael's tests confirm it.

    At this point the interesting thing would be to shift AMDGPU-PRO testing from consumer to workstation SKUs and apps, which is where fglrx and the -PRO driver have always been focused. I'll talk to the workstation BU and see if we can make some WX boards available for testing against whatever the NVidia equivalent is (Quadro?).

    It would be a different set of apps, though, so some test development work would probably need to happen first.



    • #32
      Originally posted by fallenbit View Post
      Vega supports several new features in hardware:
      - Draw Stream Binning Rasterizer (according to ComputerBase, working under Windows)
      - High Bandwidth Cache Controller (HBCC) (disabled by default under Windows?)
      - Primitive Shaders (?) (missing under Windows)
      Which of these need driver support, and which does RadeonSI already support?
      Originally posted by andrei_me View Post
      Do these features belong to AMDGPU or RadeonSI? bridgman agd5f
      The Draw Stream Binning Rasterizer is not being used in the open drivers yet. Enabling it would be mostly in the amdgpu kernel driver, but optimizing performance with it would be mostly in radeonsi and game engines.

      HBCC is not fully enabled yet, although we are using some of the foundation features like 4-level page tables and variable page size support (mixing 2MB and 4KB pages; see the sketch after this post). On Linux we are looking at HBCC more for compute than for graphics, so the SW implementation and exposed behaviour would be quite different from Windows, where the focus is more on graphics. Most of the work for Linux would be in the amdgpu kernel driver.

      Primitive shader support - IIRC this is part of a larger NGG (next-generation geometry) feature. Some initial work has been done on primitive shader support, but I don't know if anything has been enabled yet. I believe the work would mostly be in radeonsi, but I haven't looked closely.

      For both DSBR and NGG/PS I expect we will follow the Windows team's efforts, while I expect HBCC on Linux will get worked on independently of Windows efforts.
      Last edited by bridgman; 18 August 2017, 06:20 PM.
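
      As a rough illustration of the "foundation features" bridgman mentions, here is a minimal Python model of a 4-level page table that mixes 2MB and 4KB pages. This is a conceptual sketch, not the actual amdgpu kernel code, and every name in it is invented for illustration; the point is only that a 2MB-aligned, 2MB-sized range can be mapped with a single entry one level up instead of 512 separate 4KB entries.

      ```python
      # Conceptual model (NOT actual amdgpu code) of a 4-level page table
      # mixing 2MB and 4KB pages. All names here are hypothetical.

      PAGE_4K = 4 * 1024
      PAGE_2M = 2 * 1024 * 1024

      def indices(va):
          """Split a 48-bit virtual address into four 9-bit table indices."""
          return [(va >> (12 + 9 * lvl)) & 0x1FF for lvl in (3, 2, 1, 0)]

      def insert_entry(root, va, pa, page_size):
          """Install a leaf entry; a 2MB mapping stops one level early."""
          idx = indices(va)
          depth = 3 if page_size == PAGE_4K else 2
          node = root
          for lvl in range(depth):
              node = node.setdefault(idx[lvl], {})
          node[idx[depth]] = (pa, page_size)

      def map_range(root, va, size, pa):
          """Map [va, va+size), preferring 2MB pages wherever alignment allows."""
          end = va + size
          while va < end:
              if va % PAGE_2M == 0 and pa % PAGE_2M == 0 and end - va >= PAGE_2M:
                  insert_entry(root, va, pa, PAGE_2M)   # one huge-page entry
                  va, pa = va + PAGE_2M, pa + PAGE_2M
              else:
                  insert_entry(root, va, pa, PAGE_4K)   # ordinary 4KB entry
                  va, pa = va + PAGE_4K, pa + PAGE_4K

      root = {}
      map_range(root, va=0x200000, size=PAGE_2M + PAGE_4K, pa=0x40000000)
      # -> one 2MB entry at level 3 plus one 4KB entry at level 4
      ```

      The real driver walks hardware-defined page-table formats, but the alignment check in map_range is roughly the idea behind "variable page size support".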



      • #33
        Thanks for the information.
        The tragedy of being the first part of a new architecture: not all of the silicon is in use yet.
        Will be interesting to see how performance develops over time.

        Now if only the miners don't grab all the parts...
        Last edited by fallenbit; 21 August 2017, 12:19 PM. Reason: 'the' (information)...



        • #34
          Originally posted by bridgman View Post
          I really, really wanted to get a Vega GPU to see how the progress goes on the open source side. With how well it compares to the GTX 1080 on Linux, I can't wait to see how it compares once it has more than half the features working. Too bad nobody can actually get their hands on one, let alone for a reasonable price...



          • #35
            Originally posted by LinuxID10T View Post
            If you get one, you still have to play the silicon lottery. There are people out there who can push their cards beyond 1600 MHz at 0% Power Target (undervolting P-State 1 down to 1.1V), and there are cards that can't reach 1400 MHz at 50% PT with 1.2V.
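
            For reference, the Power Target tweaks described above are done with Windows tools; on Linux the closest knob is the power cap that amdgpu exposes through hwmon sysfs. A minimal sketch, assuming the Vega shows up as card0 (power1_cap and power1_cap_max are real amdgpu hwmon files, in microwatts; the helper names are mine, and writing the cap needs root):

            ```python
            # Minimal sketch: read and set the amdgpu power cap via hwmon sysfs.
            # The hwmon index varies per system, so resolve it via glob.
            import glob

            def hwmon_path(card="card0"):
                # The device's hwmon directory, e.g. .../device/hwmon/hwmon3
                matches = glob.glob(f"/sys/class/drm/{card}/device/hwmon/hwmon*")
                if not matches:
                    raise FileNotFoundError(f"no hwmon node found for {card}")
                return matches[0]

            def get_power_cap_watts(card="card0"):
                with open(hwmon_path(card) + "/power1_cap") as f:
                    return int(f.read()) / 1_000_000  # microwatts -> watts

            def set_power_cap_watts(watts, card="card0"):
                """Raise/lower the cap, clamped to the board limit (needs root)."""
                path = hwmon_path(card)
                with open(path + "/power1_cap_max") as f:
                    limit = int(f.read())
                value = min(int(watts * 1_000_000), limit)
                with open(path + "/power1_cap", "w") as f:
                    f.write(str(value))

            print(f"current cap: {get_power_cap_watts():.0f} W")
            # e.g. set_power_cap_watts(get_power_cap_watts() * 1.3)  # roughly "+30% PT"
            ```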

            Vega - if you get a good chip - is a fine card. While still a little more power hungry, it can be tweaked to perform right between the 1080 and the 1080 Ti in Windows DX11 benches (taking those as a reference point because they're popular). On Linux, that can very well translate into >1080 Ti performance because we have better drivers. But it all depends on the chip you get.

            You play the lottery on the HBM2 and you play the lottery on the chip itself. I'm quite happy with my card, as it reaches 1700 MHz on the core at +30% PT and 1100 MHz on the HBM2. That's a middle-of-the-road chip with somewhat average-to-lucky HBM2. Nothing special, but also not a potato. If you get a chip/HBM2 combination that's worse than that, things start to get uncomfortable: either too much power draw to cool in order to hit the desired frequencies, or below-average performance.

            I'd be willing to pay more for a better chip/HBM2 combination, as you can squeeze a lot of performance and power efficiency out of better chips. But so far I haven't seen binning on both the core and the HBM2.

            So in the end: if you get one, test it. If stock performance doesn't reach the advertised boost clock (you can check in Superposition), you probably have a below-average card and might not be happy with it in the long run.
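
            One hedged way to do that test on Linux, assuming the card is card0: sample the active core clock from amdgpu's pp_dpm_sclk (a real sysfs file, where a '*' marks the active DPM level) while Superposition runs, and compare the peak against the advertised boost (1546 MHz for a reference air-cooled RX Vega 64):

            ```python
            # Sketch: watch the active core clock while a benchmark runs and
            # report the peak. pp_dpm_sclk lists DPM levels like "7: 1630Mhz *",
            # with '*' marking the active one. Assumes card0.
            import re
            import time

            SCLK = "/sys/class/drm/card0/device/pp_dpm_sclk"

            def current_sclk_mhz():
                with open(SCLK) as f:
                    for line in f:
                        if "*" in line:  # active DPM level
                            match = re.search(r"(\d+)\s*mhz", line, re.IGNORECASE)
                            if match:
                                return int(match.group(1))
                return None

            def watch_peak(seconds=120, advertised_boost=1546):  # RX Vega 64 spec
                peak = 0
                deadline = time.time() + seconds
                while time.time() < deadline:
                    mhz = current_sclk_mhz()
                    if mhz:
                        peak = max(peak, mhz)
                    time.sleep(0.5)
                verdict = "reaches" if peak >= advertised_boost else "falls short of"
                print(f"peak sclk {peak} MHz {verdict} boost {advertised_boost} MHz")

            if __name__ == "__main__":
                watch_peak()
            ```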

