The Handheld Steam Machine With Linux & AMD SoC Moves Ahead


  • #21
    Well, seeing how all new GPUs AMD has introduced over the last couple of years have been GCN-based, it would be really odd if this were based on something completely different. Kind of in the league of Intel suddenly starting to sell MIPS CPUs.

    Also, the display will only be 1080p if they reach the stretch goal of 1.25 million euros, when their base goal is 900,000 euros (which I doubt they'll actually reach). Otherwise it'll be 720p, which should be quite enough when the screen is about the same size as on the regular iPhone 6/6S, Galaxy S 3, 4, 5 and 6, Lumia 930 and 950.
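    For what it's worth, a quick pixel density check, assuming a roughly 4.7 to 5 inch panel (the screen size class of the phones above, an assumption on my part):

    ```latex
    % Diagonal pixel count of a 1280 x 720 panel:
    \sqrt{1280^2 + 720^2} \approx 1469 \text{ px}
    % Density at the two assumed diagonals:
    1469 / 4.7 \approx 312\ \text{PPI}, \qquad 1469 / 5.0 \approx 294\ \text{PPI}
    ```

    Either figure is right around the ~300 PPI commonly cited as the limit of what the eye resolves at phone viewing distance, so 720p really should be plenty at that size.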



    • #22
      I did some research, and yes, at least the GPU part is GCN, but it still only supports OpenGL up to and including 4.0. So the claim that it will be able to run all the games on the market is definitely wrong, as there are games on Steam that require 4.1 or even 4.3 (not to mention that this SoC is nowhere near powerful enough to run many Steam games anyway).
      By the way: the GPU has 2 CUs (making 128 shader processors). Let's better not compare this to the Carrizo solution with full GCN 1.3 and 8 CUs...
      The claim in diagram 2 of update 2, suggesting the GPU has its own memory/memory controller, is also definitely wrong. The SoC has a single-channel DDR3 controller (up to DDR3-1600), nothing more.
      The SoC is a GX-415GA with a 15 W TDP, equivalent to the Kabini APU A4-5000, if you'd like to verify.
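      For scale, here's the simple arithmetic on that memory spec, assuming the usual 64-bit (8-byte) channel width:

      ```latex
      % Peak bandwidth of a single 64-bit DDR3-1600 channel:
      1600\ \text{MT/s} \times 8\ \text{B/transfer} = 12.8\ \text{GB/s}
      ```

      And that 12.8 GB/s is shared between the CPU cores and the GPU, while even entry-level discrete cards typically have several times as much to themselves.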

      Originally posted by L_A_G View Post
      Also, the display will only be 1080p if they reach the stretch goal of 1.25 million euros
      Well, that's not really a good argument, since the gaming experience would only suffer if they reach their stretch goal.
      Last edited by juno; 14 December 2015, 10:19 AM.



      • #23
        AMD's own proprietary driver has had full support for at least OpenGL 4.4, and if I recall correctly they now fully support 4.5 as well. It's only the open source drivers that have all those deficiencies in terms of OpenGL support, and with AMD being actively involved in the development of this, it's obvious they're going to use AMD's closed source drivers, not the open source ones.

        If you're thinking about how hilariously broken Alien: Isolation was when they tested it here, it was only that broken on the open source drivers. It works just fine on the proprietary ones, and you can check it on YouTube if you don't believe me.

        Also, seeing how this is coming out late next year, it's obvious that AMD's current lineup is not what they're going to use in the final production model. Most probably they're going to be looking at the 14nm parts that have yet to even be announced, rather than the ones based on the old 32nm process they've been using since 2011.



        • #24
          Couple of things...

          1. From the Kickstarter blurb I think they're developing with the GX-415 but plan to ship with something newer

          2. re: 32nm, pretty sure all the APUs from Kaveri/Kabini onward have been 28nm



          • #25
            Well, I haven't really paid all that much attention to AMD's APU offerings, or AMD's offerings in general, after how disappointing the first couple of series of Bulldozer chips turned out to be. I don't see that big of a difference between 32 and 28 nm, so mentioning it is mostly just being a bit anal. However, the jump to 14 nm FinFET (which they're definitely capable of, seeing how Apple is already using GlobalFoundries as a second source for 14 nm FinFET chips) will be pretty massive.



            • #26
              Not to downplay the impact of 14nm, but the difference between the 32nm and 28nm processes is a lot more than the raw numbers suggest.

              The 32nm SOI process was good for speed but wasn't so good for high density; between Richland and Kaveri we almost doubled the number of transistors while keeping die size pretty close (moving to a wider/slower approach to reduce power usage), and that was before moving to the higher density libraries for Carrizo.



              • #27
                I quite frankly don't care that much about transistor density, seeing how we're talking about embedded systems where power consumption is of paramount importance. The main factor in how much power a chip draws comes from the transistors switching between their "on" and "off" states. The lowered power consumption when you move to a smaller process node mainly comes from the fact that you reduce the current difference between the "on" and "off" states of the transistors, meaning that there's less energy being wasted every time a transistor goes from one state to the other.
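                For reference, the usual first-order CMOS relation behind this (a textbook approximation, not specific to any AMD part):

                ```latex
                % Dynamic (switching) power; static leakage adds a further V_dd * I_leak term.
                P_{\text{dyn}} = \alpha \, C \, V_{dd}^{2} \, f
                ```

                Here alpha is the activity factor (the fraction of the chip switching per cycle), C the switched capacitance, V_dd the supply voltage and f the clock frequency; the big wins from a shrink come mostly through the quadratic V_dd term, while adding transistors raises C and leakage.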

                While cramming more and more transistors onto the same chip might do a lot of good for performance, it doesn't help with efficiency, which is why we've been seeing TDPs of AMD's higher end chips reach as high as 220 W. While I do believe you that these extra transistors have helped improve performance, I don't think they've done anywhere near as much good for power efficiency.



                • #28
                  Originally posted by L_A_G View Post
                  AMD's own proprietary driver has had full support for at least OpenGL 4.4, and if I recall correctly they now fully support 4.5 as well. It's only the open source drivers that have all those deficiencies in terms of OpenGL support, and with AMD being actively involved in the development of this, it's obvious they're going to use AMD's closed source drivers, not the open source ones.

                  If you're thinking about how hilariously broken Alien: Isolation was when they tested it here, it was only that broken on the open source drivers. It works just fine on the proprietary ones, and you can check it on YouTube if you don't believe me.

                  Also, seeing how this is coming out late next year, it's obvious that AMD's current lineup is not what they're going to use in the final production model. Most probably they're going to be looking at the 14nm parts that have yet to even be announced, rather than the ones based on the old 32nm process they've been using since 2011.
                  Please read before you answer. AMD itself states that the G-series SoCs support OpenGL 4.0, so your argument is invalid. Even if it works on desktop hardware, that is not transferable to this SoC. Also, if a game simply checks for Nvidia hardware or for OpenGL version x.y and refuses to start when there is no match, the claim is wrong, even if it would work with tweaks. Consumers want to buy a handheld console here, not a Linux PC in a funny case to carry around.
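                  To illustrate that last point, here's a minimal sketch of the kind of hard version gate a game might run at startup; the GLFW setup and the 4.3 requirement are hypothetical, purely for illustration:

                  ```c
                  /* Hypothetical startup version gate, sketched with GLFW.
                   * A title gated like this never starts on a driver reporting only GL 4.0. */
                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <GLFW/glfw3.h>

                  int main(void)
                  {
                      if (!glfwInit())
                          return EXIT_FAILURE;

                      /* Hidden window just to get a current GL context to query. */
                      glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
                      GLFWwindow *win = glfwCreateWindow(64, 64, "probe", NULL, NULL);
                      if (!win) {
                          glfwTerminate();
                          return EXIT_FAILURE;
                      }
                      glfwMakeContextCurrent(win);

                      /* The version string starts with "<major>.<minor>" on any GL version. */
                      int major = 0, minor = 0;
                      sscanf((const char *)glGetString(GL_VERSION), "%d.%d", &major, &minor);

                      if (major < 4 || (major == 4 && minor < 3)) {
                          fprintf(stderr, "OpenGL 4.3 required, driver reports %d.%d\n",
                                  major, minor);
                          glfwTerminate();
                          return EXIT_FAILURE;   /* refuses to start, no matter the hardware */
                      }

                      puts("Version check passed, continuing to load...");
                      glfwTerminate();
                      return EXIT_SUCCESS;
                  }
                  ```

                  No amount of "same architecture as the desktop" helps here; what matters is the version the driver actually reports.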

                  Originally posted by L_A_G View Post
                  I quite frankly don't care that much about transistor density, seeing how we're talking about embedded systems where power consumption is of paramount importance.
                  As is the price. And die size = price.

                  Originally posted by L_A_G View Post
                  The main factor in how much power a chip draws comes from the transistors switching between their "on" and "off" states. The lowered power consumption when you move to a smaller process node mainly comes from the fact that you reduce the current difference between the "on" and "off" states of the transistors, meaning that there's less energy being wasted every time a transistor goes from one state to the other.

                  While cramming more and more transistors onto the same chip might do a lot of good for performance, it doesn't help with efficiency, which is why we've been seeing TDPs of AMD's higher end chips reach as high as 220 W. While I do believe you that these extra transistors have helped improve performance, I don't think they've done anywhere near as much good for power efficiency.
                  That's why you don't just blindly put more transistors inside a chip. Well, to be exact you do, within one generation and one series of chips at different performance levels, but otherwise not. Every new generation brings architectural improvements that raise power efficiency, even though, or even because, more transistors are being used compared to the previous generation. Just compare the iterations of Bulldozer up to Carrizo, or AMD's Tahiti vs. Tonga, or Nvidia's GM200 vs. GK110 and GM204 vs. GK104, etc.


                  Also, there are no FinFET SoCs announced. The focus will be on bigger Zen dies and Arctic Islands GPUs. But maybe bridgman knows something more that he wants to tell us about.
                  While it was planned for this year, AMD hasn't even announced Peregrine Falcon yet, the equivalent of the Puma update from Kabini/Temash to Beema/Mullins. Not to mention the subsequent update, which will then, maybe, bring 14/16 nm to the G-series.
                  Last edited by juno; 14 December 2015, 01:31 PM.



                  • #29
                    Originally posted by juno View Post
                    Please read before you answer. AMD itself states that the G-series SoCs support OpenGL 4.0, so your argument is invalid. Even if it works on desktop hardware, that is not transferable to this SoC. Also, if a game simply checks for Nvidia hardware or for OpenGL version x.y and refuses to start when there is no match, the claim is wrong, even if it would work with tweaks. Consumers want to buy a handheld console here, not a Linux PC in a funny case to carry around.
                    According to their own website (which I just checked), the G-series parts that are out right now support up to 4.2, and since it's using literally the same architecture as on the desktop and a full Linux-based OS, it IS transferable. Not only that, this is going to come out late next year, meaning they're going to have a lot of time to work on newer hardware and better drivers.

                    If you're going to try to make it seem like you're doing your homework better than someone else, don't do a half-assed job at it...

                    Originally posted by juno
                    As is the price. And die size = price.
                    A smaller process node also means that the price of producing a wafer goes up while yields go down. It's not so clear cut that a 14 nm process is going to make chips that much cheaper than a well-tested 28 nm process, where there are not only going to be fewer problems, but also a lot more time spent on ironing those bugs out.
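                    As a purely illustrative calculation (every number here is hypothetical):

                    ```latex
                    % Cost per good die = wafer cost / (dies per wafer * yield)
                    \text{mature 28 nm:} \quad \$3000 / (100 \times 0.90) \approx \$33
                    \text{early 14 nm, half the die area:} \quad \$6000 / (200 \times 0.60) = \$50
                    ```

                    So even with twice the dies per wafer, a pricier wafer and a worse yield can leave the shrunk chip more expensive per good die until the process matures.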

                    Originally posted by juno
                    That's why you don't just blindly put more transistors inside a chip. Well, to be exact you do, within one generation and one series of chips at different performance levels, but otherwise not. Every new generation brings architectural improvements that raise power efficiency, even though, or even because, more transistors are being used compared to the previous generation. Just compare the iterations of Bulldozer up to Carrizo, or AMD's Tahiti vs. Tonga, or Nvidia's GM200 vs. GK110 and GM204 vs. GK104, etc.
                    If you look at the TDPs of AMD's desktop processors from the Bulldozer family, you're generally going to see them getting higher and higher as time goes on. The reason Intel has been able to keep TDPs stable is that they haven't been stuck on the same node and have been able to continuously introduce new ones. You can also see this with AMD's desktop GPUs, which have also been getting hotter and hotter, culminating in the mini furnace known simply as the 390X.



                    • #30
                      Originally posted by L_A_G View Post
                      According to their own website (which I just checked), the G-series parts that are out right now support up to 4.2, and since it's using literally the same architecture as on the desktop and a full Linux-based OS, it IS transferable. Not only that, this is going to come out late next year, meaning they're going to have a lot of time to work on newer hardware and better drivers.
                      On the official product page it still says 4.0. I won't dig deeper again now, but all I know is I also checked a datasheet and it also said 4.0. Sorry if that info is wrong or outdated, or if they only said that because the drivers weren't ready, or whatever. If that is the case, AMD really should work on it. They should maintain something like Intel does with their ARK.
                      Just because the shader architecture is "GCN", the GPU doesn't have to support all the hardware features that are available on the desktop. You also don't get the same features on a FirePro and an mGPU, even if both are GCN, and I'm not talking about the cut-down FP64 performance.
                      Of course they have time. But we are talking about the specific piece of hardware that they are advertising and want to "sell" right now.


                      Originally posted by L_A_G View Post
                      A smaller process node also means that the price of producing a wafer goes up while yields go down. It's not so clear cut that a 14 nm process is going to make chips that much cheaper than a well-tested 28 nm process, where there are not only going to be fewer problems, but also a lot more time spent on ironing those bugs out.
                      Sure, but that's not what this is about. By the way, I don't think it would be cheaper to build the exact same, but die-shrunk, SoC in 14 nm at this time. Later, when the yields come closer to those of the planar 28 nm process, it will surely be cheaper.

                      Originally posted by L_A_G View Post
                      If you look at the TDPs of AMD's desktop processors from the Bulldozer family, you're generally going to see them getting higher and higher as time goes on. The reason Intel has been able to keep TDPs stable is that they haven't been stuck on the same node and have been able to continuously introduce new ones. You can also see this with AMD's desktop GPUs, which have also been getting hotter and hotter, culminating in the mini furnace known simply as the 390X.
                      Sorry, that's just plain wrong. You totally underestimate the impact of architectural changes on performance and power efficiency.
                      Have you never heard of Intel's tick-tock model? They release a new chip architecture (tock), then shrink it (tick), then a tock on the new node, then shrink it again (tick), and so on. The bigger steps in the recent past were always tocks (Skylake, Haswell, Sandy Bridge(!)), while ticks brought far smaller improvements (Ivy Bridge, Broadwell).
                      Also, the TDP of AMD's FX CPUs and APUs only rose when they raised the frequency a lot. And I mean a lot. You have to keep in mind that the FX CPUs only saw the first two of four Bulldozer iterations. And the clocks went higher even while the TDP remained stable.

                      Examples?
                      Zambezi (1st gen Bulldozer): FX-8150 w/ 3.6-4.2 GHz, 125 W
                      Vishera (2nd gen Bulldozer): FX-8370 w/ 4.0-4.3 GHz, 125 W; FX-9590 w/ 4.7-5.0 GHz, 220 W (see the worked numbers after this list)
                      Of course, the Visheras were also faster and still more power efficient at the same clock speeds than the Zambezis. All of these are on GlobalFoundries' 32 nm SOI, btw.
                      Same goes for the APUs:
                      Llano: 3 GHz, 100 W
                      Richland: 4.1-4.4 GHz, 100 W
                      [Kaveri: 3.9-4.1 GHz (of course still faster), 95 W] <- this one is on a 28 nm process instead of GF's 32 nm SOI.
                      [Bristol Ridge: ??? yet to come]
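                      And the worked numbers on the FX-8370 vs. FX-9590 comparison promised above, assuming the usual first-order P ∝ f·V² scaling and ignoring leakage:

                      ```latex
                      % TDP ratio and base clock ratio, FX-9590 vs. FX-8370:
                      220 / 125 \approx 1.76, \qquad 4.7 / 4.0 \approx 1.18
                      % Implied supply voltage ratio under P \propto f V^2:
                      \sqrt{1.76 / 1.18} \approx 1.22
                      ```

                      In other words, roughly 22 % more voltage alone would account for the entire TDP jump on the very same 32 nm node; that's the "raised the frequency (and voltage) a lot" case, not transistor count.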

                      As we are already way too off-topic, I'm not going to explain to you why Hawaii was and is actually an efficient GPU, and why it nevertheless has the reputation of being, as you call it, a "mini furnace".

