The First Radeon Vega Frontier Linux Benchmark Doesn't Tell Much


  • #11
    Originally posted by vortex View Post
    There is simply no way that the card is nerfed that much. Maybe, 5-10%, but, that is about it.
    AMD even said that the drivers are not gimped for the FE, they are just "older".
    I never said it was nerfed; I assumed it is not working as intended.

    Originally posted by vortex View Post
    As for tiled rendering, it IS doing that, it was already proven it is doing that on the pcper 'live' benchmark.
    No, it's the exact same behaviour every AMD card shows.

    Originally posted by vortex View Post
    It sure seems like they just hit a brick wall with the process tech from GloFlo. Once they push up the voltages to get their desired speed, TDP skyrockets. This is why pretty much all the cards are only rarely hitting 1600MHz, it just is guzzling too much power to stay under the 300w envelope they wanted.
    The PCPER test showed that the board power remained well under 300 Watts. And regardless of the clock rates and power consumption, the observed performance is too low for the expected GPU with 64 CUs at 1400-1500 MHz.
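    As a back-of-envelope sanity check, assuming the usual GCN layout of 64 shaders per CU: 64 CUs × 64 shaders × 2 FLOPs × ~1.45 GHz ≈ 11.9 TFLOPS FP32 on paper, which does not square with the Polaris-class numbers being measured.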

    Comment


    • #12
      1920x1200 in Xonotic looks like a purely CPU-limited scenario.

      Comment


      • #13
        It's been my impression from the beginning that nVidia's tile-based rendering doesn't actually improve performance at all. It just compresses data in a certain tile-based pattern before memory accesses. I can see memory transfers improving a tiny bit, but absolutely nothing else. It looks highly overhyped, and very unlikely to be capable of delivering on said promises.

        EDIT: Now that I'm thinking about it, it seems like a great way for one competitor to screw another. Competitor 1 can make claims about this technique that intrigue competitor 2, and then competitor 2 spends years and many dollars trying to make it work just as well for themselves, only to find out in the end it was all just a farce.
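
        To make "compresses data in a certain tile-based pattern" concrete: one common generic scheme for this kind of locality is a Z-order (Morton) swizzle, which interleaves the x/y bits of a pixel address so 2D-nearby pixels land at nearby memory addresses. A minimal sketch below, purely illustrative, with no claim that it matches nVidia's actual layout:

        /* Z-order (Morton) swizzle -- one generic way to tile pixel data so
         * 2D-nearby pixels map to nearby memory addresses. Purely
         * illustrative; no claim this matches nVidia's internal layout. */
        #include <stdint.h>
        #include <stdio.h>

        /* Spread the low 16 bits of v so each ends up in an even position. */
        static uint32_t part1by1(uint32_t v)
        {
            v &= 0x0000ffff;
            v = (v | (v << 8)) & 0x00ff00ff;
            v = (v | (v << 4)) & 0x0f0f0f0f;
            v = (v | (v << 2)) & 0x33333333;
            v = (v | (v << 1)) & 0x55555555;
            return v;
        }

        /* Interleave x and y bits: morton(x, y) = ... y1 x1 y0 x0 */
        static uint32_t morton(uint32_t x, uint32_t y)
        {
            return part1by1(x) | (part1by1(y) << 1);
        }

        int main(void)
        {
            /* A 4x4 pixel block occupies 16 consecutive Morton addresses;
             * in row-major order its rows would be a full pitch apart. */
            for (uint32_t y = 0; y < 4; y++) {
                for (uint32_t x = 0; x < 4; x++)
                    printf("%3u", morton(x, y));
                printf("\n");
            }
            return 0;
        }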
        Last edited by duby229; 30 June 2017, 03:25 PM.

        Comment


        • #14
          Originally posted by juno View Post
          But why would AMD release a card that underperforms by 50%?
          They didn't. It is not a gaming card, so there's no point in measuring gaming performance, and Michael could have saved his $1k.

          Comment


          • #15
            Originally posted by duby229 View Post
            It's been my impression from the beginning that nVidia's tile-based rendering doesn't actually improve performance at all. It just compresses data in a certain tile-based pattern before memory accesses. I can see memory transfers improving a tiny bit, but absolutely nothing else. It looks highly overhyped, and very unlikely to be capable of delivering on said promises.

            EDIT: Now that I'm thinking about it, it seems like a great way for one competitor to screw another. Competitor 1 can make claims about this technique that intrigue competitor 2, and then competitor 2 spends years and many dollars trying to make it work just as well for themselves, only to find out in the end it was all just a farce.
            NVidia has never said that the technique helped them. They never advertised it at all, in fact, which is why it was quite a surprise when people discovered what it was doing.

            The thinking is that it should help improve caches internally on the card by increasing memory locality, but obviously it's impossible to know that without having internal knowledge of what NVidia is doing. I think it's safe to say, though, that the design of this feature would have gone through simulation testing by AMD before they committed to going that way in hardware so they must have seen some kind of benefit. You don't bet millions of dollars on a hunch.
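
            To illustrate the locality argument, here is a toy two-pass binning sketch. This is only the general idea, certainly not how the real hardware is implemented: triangles get binned into screen-space tiles first, then each tile is shaded in full, so all framebuffer traffic for one tile stays within a cache-sized footprint.

            /* Toy two-pass tile binning -- illustrative only, nothing like a
             * real GPU implementation. Bounding boxes stand in for real
             * triangle/tile intersection tests. */
            #include <stdio.h>

            #define TILE    32                      /* tile edge in pixels */
            #define TILES_X (256 / TILE)
            #define TILES_Y (256 / TILE)

            struct tri { int minx, miny, maxx, maxy; }; /* screen-space bounds */

            int main(void)
            {
                struct tri tris[] = { {10, 10, 60, 40}, {100, 100, 220, 200} };
                int n = sizeof tris / sizeof tris[0];
                int bins[TILES_Y][TILES_X][8];
                int counts[TILES_Y][TILES_X] = {{0}};

                /* Pass 1: bin every triangle into each tile its bounds touch. */
                for (int i = 0; i < n; i++)
                    for (int ty = tris[i].miny / TILE; ty <= tris[i].maxy / TILE; ty++)
                        for (int tx = tris[i].minx / TILE; tx <= tris[i].maxx / TILE; tx++)
                            bins[ty][tx][counts[ty][tx]++] = i;

                /* Pass 2: shade tile by tile; all framebuffer traffic for one
                 * tile stays inside a TILE*TILE footprint that fits in cache. */
                for (int ty = 0; ty < TILES_Y; ty++)
                    for (int tx = 0; tx < TILES_X; tx++)
                        if (counts[ty][tx])
                            printf("tile (%d,%d): %d triangle(s), first id %d\n",
                                   tx, ty, counts[ty][tx], bins[ty][tx][0]);
                return 0;
            }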

            Comment


            • #16
              Originally posted by smitty3268 View Post

              NVidia has never said that the technique helped them. They never advertised it at all, in fact, which is why it was quite a surprise when people discovered what it was doing.

              The thinking is that it should help improve caches internally on the card by increasing memory locality, but obviously it's impossible to know that without having internal knowledge of what NVidia is doing. I think it's safe to say, though, that the design of this feature would have gone through simulation testing by AMD before they committed to going that way in hardware so they must have seen some kind of benefit. You don't bet millions of dollars on a hunch.
              Perhaps you're right; we can all hope that AMD isn't that dumb. I'm not so sure myself. We've seen them waste not just millions but billions of dollars in the past, and that in turn kept them from earning additional billions. (EDIT: Read about Dirk Meyer; he really hurt AMD, cost them literally billions of dollars, and meanwhile he sold their fabs!)
              Last edited by duby229; 30 June 2017, 03:53 PM.

              Comment


              • #17
                Originally posted by Shevchen View Post

                In games... it's Vega. AMD said tiled rendering is a feature, so you would expect it to be active when benchmarking games. It isn't, because the driver does not support it (for whatever reason).

                The net right now is full of rumors, and honestly, all this speculation is slowly giving me a headache. What we can say for now:

                The per-shader performance is about Polaris level (meaning the Vega architectural improvements are not active). These are expected to kick in with the RX Vega and its "real" gaming drivers. 3DCenter said there is no binning on Vega cards, tiled rasterization is off, HBCC is off, the new geometry pipeline is off/nerfed/whatever, the gaming part of the driver is from January, and so on...

                In general, it's possible that the card runs with a 50% handbrake due to the drivers, and we have no way to guess whether it stays at GTX 1080 performance or will beat the Titan Xp.
                Originally posted by juno View Post

                To reduce bandwidth requirements and boost efficiency. Basically what Nvidia does to be that competitive since Maxwell



                Ooh, I wasn't aware that desktop cards had already started doing tiled rendering. I wouldn't have expected that to happen so soon. I guess they're adjusting their architectures to be more like mobile architectures, for laptops and the like as well. I can definitely understand why NVIDIA would do it first, given that they are going to be making more of their money from Tegra.

                Comment


                • #18
                  I just installed my Frontier Edition a few minutes ago.
                  It looks like amdgpu-pro is the only option at this point?
                  It's complaining about missing firmware with the regular amdgpu driver.

                  4.12.0-rc5

                  ...
                  [ 3.433732] [drm] amdgpu kernel modesetting enabled.
                  [ 3.434004] [drm] initializing kernel modesetting (VEGA10 0x1002:0x6863 0x1002:0x6B76 0x00).
                  [ 3.434123] amdgpu 0000:0b:00.0: VM size (-1) must be a power of 2
                  [ 3.434261] [drm] register mmio base: 0xFDB00000
                  [ 3.434377] [drm] register mmio size: 524288
                  [ 3.434492] [drm] probing gen 2 caps for device 1022:1471 = 700d03/e
                  [ 3.434606] [drm] probing mlw for device 1022:1471 = 700d03
                  [ 3.434719] [drm] UVD is enabled in VM mode
                  [ 3.434825] [drm] UVD ENC is enabled in VM mode
                  [ 3.434933] [drm] VCE enabled in VM mode
                  [ 3.462921] [drm] BIOS signature incorrect 73 7
                  [ 3.463031] amdgpu 0000:0b:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff
                  [ 3.463219] ATOM BIOS: 113-D0501100-109
                  [ 3.463336] [drm] GPU post is not needed
                  [ 3.463442] [drm] Changing default dispclk from 2Mhz to 600Mhz
                  [ 3.463564] [drm] vm size is 262144 GB, block size is 9-bit
                  [ 3.463677] amdgpu 0000:0b:00.0: VRAM: 16368M 0x000000F400000000 - 0x000000F7FEFFFFFF (16368M used)
                  [ 3.463855] amdgpu 0000:0b:00.0: GTT: 16368M 0x000000F7FF000000 - 0x000000FBFDFFFFFF
                  [ 3.463972] [drm] Detected VRAM RAM=16368M, BAR=256M
                  [ 3.464082] [drm] RAM width 2048bits HBM
                  [ 3.464257] [TTM] Zone kernel: Available graphics memory: 32950134 kiB
                  [ 3.464377] [TTM] Zone dma32: Available graphics memory: 2097152 kiB
                  [ 3.464490] [TTM] Initializing pool allocator
                  [ 3.464600] [TTM] Initializing DMA pool allocator
                  [ 3.464724] [drm] amdgpu: 16368M of VRAM memory ready
                  [ 3.464835] [drm] amdgpu: 16368M of GTT memory ready.
                  [ 3.464948] [drm] GART: num cpu pages 4190208, num gpu pages 4190208
                  [ 3.465134] [drm] mmhub_v1_0_gart_enable -- in
                  [ 3.465279] [drm] PCIE GART of 16368M enabled (table at 0x000000F400040000).
                  [ 3.465438] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
                  [ 3.465547] [drm] Driver supports precise vblank timestamp query.
                  [ 3.465692] amdgpu 0000:0b:00.0: amdgpu: using MSI.
                  [ 3.465866] [drm] amdgpu: irq initialized.
                  [ 3.466477] amdgpu 0000:0b:00.0: Direct firmware load for amdgpu/vega10_sos.bin failed with error -2
                  [ 3.466649] amdgpu 0000:0b:00.0: psp v3.1: Failed to load firmware "amdgpu/vega10_sos.bin"
                  [ 3.466762] [drm:0xffffffffa015720b] *ERROR* Failed to load psp firmware!
                  [ 3.466872] [drm:0xffffffffa00e76d4] *ERROR* sw_init of IP block <psp> failed -2
                  [ 3.466989] amdgpu 0000:0b:00.0: amdgpu_init failed
                  [ 3.469497] [TTM] Finalizing pool allocator
                  [ 3.469603] [TTM] Finalizing DMA pool allocator
                  [ 3.469722] [TTM] Zone kernel: Used memory at exit: 0 kiB
                  [ 3.469830] [TTM] Zone dma32: Used memory at exit: 0 kiB
                  [ 3.469938] [drm] amdgpu: ttm finalized
                  [ 3.470047] amdgpu 0000:0b:00.0: Fatal error during GPU init
                  [ 3.470156] [drm] amdgpu: finishing device.
                  [ 3.470261] [TTM] Memory type 2 has not been initialized
                  [ 3.470455] amdgpu: probe of 0000:0b:00.0 failed with error -2
                  ...
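
                  For reference, the -2 is -ENOENT: the kernel's firmware loader simply can't find amdgpu/vega10_sos.bin under /lib/firmware (or in the initramfs). A minimal sketch of the request path, illustrative rather than the actual amdgpu code:

                  /* Minimal sketch of the firmware request path -- illustrative,
                   * not the real amdgpu code. request_firmware() searches
                   * /lib/firmware (and the initramfs), so -2 (-ENOENT) just
                   * means vega10_sos.bin isn't installed there. */
                  #include <linux/firmware.h>
                  #include <linux/device.h>
                  #include <linux/module.h>

                  static int load_psp_fw(struct device *dev)
                  {
                      const struct firmware *fw;
                      int err = request_firmware(&fw, "amdgpu/vega10_sos.bin", dev);

                      if (err) {               /* -2 == -ENOENT: file not found */
                          dev_err(dev, "psp firmware missing: %d\n", err);
                          return err;          /* amdgpu aborts init right here */
                      }

                      /* ... hand fw->data / fw->size to the PSP block ... */

                      release_firmware(fw);
                      return 0;
                  }

                  MODULE_LICENSE("GPL");

                  Dropping the vega10_*.bin firmware files into /lib/firmware/amdgpu/ and regenerating the initramfs should get the open driver past this, assuming firmware for this chip is actually available for your setup.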
                  Last edited by Soul_keeper; 30 June 2017, 10:00 PM.

                  Comment


                  • #19
                    Originally posted by microcode View Post



                    Ooh, I wasn't aware that desktop cards had already started doing tiled rendering. I wouldn't have expected that to happen so soon. I guess they're adjusting their architectures to be more like mobile architectures for laptops and the suchlike as well. I can definitely understand why NVIDIA would do it first, given that they are going to be making more of their money from Tegra.
                    The idea of tiled rendering on a desktop graphics card is nothing new. Imagination Technologies, then named VideoLogic, did it a decade and a half ago with their legacy PowerVR-based chips like the PCX, Neon 250 and the Kyro series. Too bad STMicro closed their graphics division in 2001.
                    Nvidia used tiled rendering in 2009, from the shell of 3DFX.

                    Comment


                    • #20
                      Bummer, no binary blob. I guess it'll have to sit unused in the box for a while.

                      Comment
