Radeon RX Vega Launch Postponed To SIGGRAPH

  • #21
    Originally posted by mibo View Post
Why is the word "postponed" used in the headline? Where was an earlier release date announced that has now been postponed?
The press statement was "GPU products based on the Vega architecture are expected to ship in the first half of 2017", and I don't think that has changed since late last year.

Considering that some Vega-based cards are shipping in June (the Frontier/Pro cards, I believe), I wouldn't say that Vega has been postponed at all, at least going by press statements rather than rumour. To be honest, I'm not sure an official release date for the consumer Vega GPUs has even been announced yet.



    • #22
Well, RX Vega should mainly use 4-Hi 4 GB HBM2 stacks (Raja mentioned a possible 16 GB version, but the first batch should be an 8 GB card), while the Frontier Edition uses the more complex 8-Hi 8 GB HBM2 stacks, which should have lower yields.
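The capacity arithmetic behind those stack heights can be sketched roughly as follows. This is an illustrative back-of-the-envelope model only, assuming 8 Gb (1 GB) HBM2 dies and two stacks per package, which matches the rumoured configurations rather than any confirmed spec:

```python
# Illustrative HBM2 capacity math (assumptions, not confirmed specs):
# an HBM2 stack is built from 8 Gb (1 GB) DRAM dies, so total capacity
# scales with stack height and the number of stacks on the interposer.

DIE_GB = 1  # one 8 Gb HBM2 die = 1 GB

def stack_capacity_gb(height, stacks):
    """Total memory for `stacks` HBM2 stacks of `height` dies each."""
    return DIE_GB * height * stacks

# Two 4-Hi stacks (rumoured consumer RX Vega):
print(stack_capacity_gb(height=4, stacks=2))  # -> 8
# Two 8-Hi stacks (Frontier Edition):
print(stack_capacity_gb(height=8, stacks=2))  # -> 16
```

Taller stacks mean more dies bonded through-silicon-via in one package, which is why 8-Hi parts are plausibly harder to yield than 4-Hi ones.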



      • #23
Okay, let's go full speculation here and reflect for a moment before the god of deductions to find out why Vega won't suck.

1. Prototypes and their benches
The leaked benches for the 1600 MHz version showed us overclocked 1080 Ti results in a synthetic benchmark. Games, however, are not synthetic and often suffer with non-optimized drivers, because game performance depends heavily on driver optimization. The fact that we get some decent numbers even out of a synthetic benchmark gives me the impression that:

a) The drivers are coming along nicely (which I doubt, but can't rule out)
b) The Vega arch itself is more robust against specific Nvidia optimizations (which is more believable, looking at the overall arch summaries by various people who know more about it than I do)

2. Taking the historical "AMD's drivers suck" argument into the ring, I wouldn't be surprised if performance kicked up about 10% within the first 3 months after release. This has been more or less the case with previous generations, and it will be even more so with Vega, as the change to the arch is more than just a beefed-up GCN improvement.

3. Raw power and the street. AMD has had trouble bringing its horsepower onto the street with most of its cards. The new Vega arch, however, is designed to circumvent most of those bottlenecks by design: if the programmers don't come to AMD, AMD has to come to the programmers. Sad, but true. Again, according to people who know more about GPU architecture than I do.

4. Volta, as the main competitor, has nothing on the table for gamers besides some minor latency improvements. I wouldn't be surprised if Volta delivers the same per-shader performance as Pascal, with the overall improvement coming from more shaders plus higher frequencies alone. If that is the case, and AMD can bring its horsepower onto the street, Vega will be a real competitor for Volta. Nvidia will not let the crown go and will surely bring out some sort of Titan Xv, but the price of such a monster may be close to 2k. If AMD has a similarly powerful GPU for 500 bucks and people still buy Nvidia... well, then I guess most people swim in money.

5. HBM is very well suited to low-latency computing and can be regarded as a sort of "big cache". The possible use cases for this type of memory haven't even been fully explored yet. I'd imagine a lot of creative people in VR are very interested in lower latency.

        edit:
We also saw Jensen Huang say something about a "plateau" in his talk, suggesting that the GPU market (as some of us may already have noticed) is not increasing in performance the way it did in earlier generations. If that's the case and Vega can bring its horsepower onto the street, Vega will be an arch that stays on the road for a long, long time. In that case, we shouldn't really care about a couple of weeks' delay, beside minor inconveniences like a dead GPU that needs replacing.
        Last edited by Shevchen; 31 May 2017, 04:45 PM.



        • #24
          Originally posted by Shevchen View Post
Okay, let's go full speculation here and reflect for a moment before the god of deductions to find out why Vega won't suck. …
Everything from CPUs to GPUs has peaked or is going to peak. You just can't shrink transistors forever; the electrons will find a way to tunnel out. The hard speed limit for CPUs is 4 GHz, mostly 3 GHz in the general desktop world and less than that for laptops, and it's been that way for 10 years. Multi-core has also plateaued for most use cases at 4 cores, maybe 6 for games; 8 and more is a waste for most workloads. Multithreaded programming is still in its infancy vis-à-vis ease of use and manageability. So that leaves frequency, and that means shrinkage to keep speed without increasing heat, but there you're running into the laws of physics. And there you have it: peak.

The way out? Not so much further shrinkage of CPUs and GPUs, which are hard-limited by physics, but shrinkage of memory, so that you load everything up with tons of it. Everything becomes one big pool of inline memory, kind of like HP's experimental "Machine". That, plus parallel programming tools developed to the point where coding for multithreaded everything becomes as simple as Pascal.



          • #25
Well, I can't afford a Vega card until the end of July anyway; let's hope AMD is able to release a few more Linux Vega driver updates before then. It would be nice to see an attempt to merge the DAL/DC driver into kernel 4.13, since it missed the 4.12 window.



            • #26
              Originally posted by Jumbotron View Post

Everything from CPUs to GPUs has peaked or is going to peak. You just can't shrink transistors forever; the electrons will find a way to tunnel out. …
Well, in general, yes. But we shouldn't disregard the overclocking world records, which show us the real physical limits of semiconductor production. The 4 GHz limit on desktop CPUs is more of a practical limit, because we want our CPUs to last more than a month before they die of degradation. The rest is a limit imposed by heat: water-cooled CPUs go well above 4 GHz, not only because they can be cooled easily, but also because a cooler CPU suffers less leakage, giving you a larger margin before you have to push your VCore beyond 1.4 V to reach 4+ GHz.
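A first-order sketch of why that VCore ramp hurts: dynamic CPU power scales roughly as C·V²·f (the textbook approximation; the numbers below are purely illustrative, not measured data for any real chip):

```python
# Illustrative sketch (not measured data): dynamic CPU power scales roughly
# as P = C_eff * V^2 * f, so pushing VCore to reach higher clocks costs
# power (and heat) much faster than the frequency gain itself.

def dynamic_power(c_eff, vcore, freq_ghz):
    """First-order dynamic power model: P = C_eff * V^2 * f (arbitrary units)."""
    return c_eff * vcore**2 * freq_ghz

base = dynamic_power(c_eff=1.0, vcore=1.2, freq_ghz=4.0)  # stock-ish settings
oc   = dynamic_power(c_eff=1.0, vcore=1.4, freq_ghz=4.6)  # overclocked

# ~15% more clock at ~17% more voltage costs ~57% more dynamic power.
print(round(oc / base, 2))  # -> 1.57
```

And that model ignores leakage, which grows with temperature on top of the quadratic voltage term, which is why water cooling buys extra headroom beyond just removing heat faster.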

A small side-step to another discussion (a forum write-up on CPU degradation and overclocking voltages) gives a rough idea of that "sweet spot".


Is this the end of semiconductor progress? Well, nope. Both Intel and IBM have prototypes of optical chips in their secret chambers. The "only" problems are the production cost and the latency of the converters. And if anything holds back progress, it's a decreased profit margin: no company would let that slip just to introduce a new tech, with the exception of Elon Musk.

CPUs nowadays don't cost more than about $20 in silicon production; the development costs are in roughly the same ballpark, and in the end you make a profit if you sell your CPUs for anything beyond $100. People willingly pay over 1k for CPUs, and THIS margin would shrink with optical chips. Intel is not interested in pushing the tech; Intel is interested in selling you products.



              • #27
                Originally posted by Marc Driftmeyer View Post
                GF switched 14nm nodes from LPE to LPP for both CPU/GPU and presumably APU with HBM2 SoC. TSMC and GF are the exclusive partners for AMD. Samsung isn't currently in the mix beyond HBM2/SK Hynix production of HBM2. Samsung is in the middle of heavy production across several fabs for much larger contracts, including their own for the mobile space. Those get first priority.
Not sure if you're actually trying to make a point or just randomly droning on, trying to sound very knowledgeable. I never claimed that AMD would use Samsung foundries for anything beyond the memory they're buying, and I don't see how GlobalFoundries switching from the 14nm LPE to the LPP process is relevant either, when the main difference between the two is that LPP has higher output.

As for Samsung's foundry capacity, stuff like this generally gets worked out through contracts well ahead of time, so it shouldn't matter if Samsung wants its memory foundries cranking out LPDDR memory when it has a contract with AMD to make HBM memory instead.

This isn't rocket science. Vega FE will be a very profitable release, and one that stays in Q2, thus bumping up AMD's books and keeping Wall Street happy. Consumer Vega will get a full quarter's run alongside FE and will continue the graphics growth. The CPU growth is very substantial, and with Threadripper and Naples [Epyc] you have the enterprise chomping at the bit to get their hands on this stuff.
Now you're essentially just repeating what I just said.
