Radeon Pro SSG Packs 1TB Of SSD Storage On The Graphics Card

  • #11
    This seems to be intended as permanent storage directly on the graphics card, so it has to be programmed for, but once it has been, the advantage is really the permanence and the large capacity compared to RAM (and the fact that it's local, obviously).

    So, if you're working with a very large dataset it will still take as much time as before to load onto the GPU the first time, but once it has been loaded it stays available for future use, so subsequent runs of that particular program will load much faster. This is mainly of interest to the scientific and professional graphics communities, I believe.

    As for concerns about P/E cycles, there's nothing to worry about, since the graphics card won't actually come with the SSDs. You'll have to provide those yourself, and from what I've read they're housed in some kind of module that plugs into the card, which means they're replaceable. The only thing that separates this card from any other regular graphics card is the built-in M.2 interface, and I'm guessing that Polaris already includes support for this type of thing.
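
    To make the "load once, reuse later" point concrete, here is a minimal OpenCL host-side sketch of the staging path an ordinary card repeats on every run: read the dataset from host storage, then push it over PCIe into device memory. The file name and setup are made up for illustration, and the SSG's own programming interface isn't shown; the pitch is simply that data living on the card's flash would let later runs skip this host round trip.

    Code:
    /* Conventional staging path: host disk -> host RAM -> PCIe -> VRAM.
     * Illustrative sketch only; "dataset.bin" is a made-up file name. */
    #include <CL/cl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* 1. Read the dataset from host storage into host RAM. */
        FILE *f = fopen("dataset.bin", "rb");
        if (!f) { perror("dataset.bin"); return 1; }
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);
        void *host_buf = malloc((size_t)size);
        if (fread(host_buf, 1, (size_t)size, f) != (size_t)size) return 1;
        fclose(f);

        /* 2. Pick the first GPU and set up a context and queue. */
        cl_platform_id platform;
        cl_device_id device;
        cl_int err;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

        /* 3. Upload over PCIe into device memory.  On a regular card this
         *    copy (and the disk read above) repeats on every run; card-local
         *    flash is what would let repeat runs avoid the host trip.      */
        cl_mem dev_buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY, (size_t)size,
                                        NULL, &err);
        clEnqueueWriteBuffer(q, dev_buf, CL_TRUE, 0, (size_t)size, host_buf,
                             0, NULL, NULL);

        /* ... launch kernels against dev_buf here ... */

        clReleaseMemObject(dev_buf);
        clReleaseCommandQueue(q);
        clReleaseContext(ctx);
        free(host_buf);
        return 0;
    }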



    • #12
      Originally posted by phoronix View Post
      The GPU for the Radeon Pro SSG is, of course, Polaris based.

      http://www.phoronix.com/scan.php?pag...Radeon-Pro-SSG
      Actually, it's Fiji:
      [photo of the prototype card, not reproduced here]


      • #13
        WOW! This is a really fantastic story. Some time ago I worked on porting PGStrom to OpenCL (though I've stopped for now). This approach could be a big win for OLAP PostgreSQL workloads. I need this card.
        Are the AMD guys interested in the database space?

        Regards, postgres developer
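
        For readers wondering what a PG-Strom-style OLAP offload looks like on the device side, here is a minimal OpenCL kernel sketch: scan one integer column and collect the row indices that pass a predicate. The names, the int32 column layout, and the atomic output scheme are illustrative assumptions, not PG-Strom's actual code.

        Code:
        /* Illustrative device-side predicate scan (not PG-Strom's real kernels).
         * keys      : one int32 column, nrows entries
         * out_idx   : receives indices of rows where keys[i] > threshold
         * out_count : number of matches, bumped atomically                  */
        __kernel void scan_gt(__global const int *keys,
                              int threshold,
                              __global int *out_idx,
                              volatile __global int *out_count,
                              int nrows)
        {
            int i = get_global_id(0);
            if (i < nrows && keys[i] > threshold) {
                int pos = atomic_inc(out_count);
                out_idx[pos] = i;
            }
        }

        The SSG angle for this kind of workload is that a table resident on the card's own SSDs could be re-scanned without shipping it back across PCIe from host storage every time.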



        • #14
          Originally posted by stalkerg View Post
          WOW! This is a really fantastic story. Some time ago I worked on porting PGStrom to OpenCL (though I've stopped for now). This approach could be a big win for OLAP PostgreSQL workloads. I need this card.
          Are the AMD guys interested in the database space?

          Regards, postgres developer
          I'm sure they're interested in competing in the FEA/CFD space, where massive datasets from capturing and/or simulating everything from thermo/fluid dynamics to much more would benefit from such a product.



          • #15
            Add a USB and network controller, build a special OS compiled for execution on the G(C)PU, and there you have a computer you don't have to plug into another computer.



            • #16
              Originally posted by schmidtbag View Post
              This kind of makes sense, but how much performance could this really offer? If (assuming I understand this correctly) the GPU effectively operates independently by reading its own form of storage, the PCIe lanes would basically be empty, except for a few streams here and there to operate the display. So in the end, I figure this would only shave off, at best, maybe 1 second of rendering time for the average 10-minute video. I can't imagine there being that much of a PCIe bandwidth issue for rendering. Also don't forget - the original data being stored needs to be copied to and from the GPU. So unless the user has direct access to the GPU storage, this would result in an overall slower rendering time.

              But maybe there's something here I'm not quite understanding.
              Huge, huge performance boost. If you watched the livestream, they showed it both with and without the SSD enabled: without it, it was chugging along at under 20 fps; with it, it was over 90 fps. They were live-editing 8K content, and it was a really cool demo.

              This will also be huge in the oil & gas industry; they have billions of data points that can now sit on the SSD.
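
              Rough numbers on why raw 8K scrubbing is so demanding, assuming uncompressed 7680x4320 frames at 4 bytes per pixel (real production formats differ, so treat this as order-of-magnitude only):

              Code:
              /* Back-of-the-envelope data rates for uncompressed 8K playback.
               * Assumes 7680x4320 pixels at 4 bytes/pixel (8-bit RGBA).     */
              #include <stdio.h>

              int main(void)
              {
                  const double frame_bytes = 7680.0 * 4320.0 * 4.0; /* ~132.7 MB */
                  const int fps[] = { 24, 30, 90 };

                  printf("one frame: %.1f MB\n", frame_bytes / 1e6);
                  for (int i = 0; i < 3; i++)
                      printf("%2d fps   : %.1f GB/s sustained\n",
                             fps[i], frame_bytes * fps[i] / 1e9);
                  return 0;
              }

              At several GB/s sustained, a single host SATA SSD (roughly 0.55 GB/s) is nowhere close, and even fast host NVMe drives add a trip through system RAM and the OS storage stack; feeding the GPU straight from the card's own flash is presumably what the demo was built to show.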



              • #17
                Originally posted by schmidtbag View Post
                This kind of makes sense, but how much performance could this really offer? If (assuming I understand this correctly) the GPU effectively operates independently by reading its own form of storage, the PCIe lanes would basically be empty, except for a few streams here and there to operate the display. So in the end, I figure this would only shave off, at best, maybe 1 second of rendering time for the average 10-minute video. I can't imagine there being that much of a PCIe bandwidth issue for rendering. Also don't forget - the original data being stored needs to be copied to and from the GPU. So unless the user has direct access to the GPU storage, this would result in an overall slower rendering time.

                But maybe there's something here I'm not quite understanding.
                It is a HUGE, huge performance gain.
                Look at the video (https://twitter.com/RadeonPro/status...313566720?s=08): basically, it went from below 20 fps to over 90 fps while live-editing 8K content.

                It will also be HUGE for the oil & gas industry, with the billions of data points they have.


                Originally posted by juno View Post
                LOL, fudzilla. Why the hell anyone goes to that idiot... I have no idea.

                It is Polaris.

                In terms of hardware, the Polaris based card is outfitted with a PCIe bridge chip – the same PEX8747 bridge chip used on the Radeon Pro Duo, I’m told – with the bridge connecting the two PCIe x4 M.2 slots to the GPU, and allowing both cards to share the PCIe system connection. Architecturally the prototype card is essentially a PCIe SSD adapter and a video card on a single board, with no special connectivity in use beyond what the PCIe bridge chip provides.



                • #18
                  Originally posted by schmidtbag View Post
                  This kind of makes sense, but how much performance could this really offer? If (assuming I understand this correctly) the GPU effectively operates independently by reading its own form of storage, the PCIe lanes would basically be empty, except for a few streams here and there to operate the display. So in the end, I figure this would only shave off, at best, maybe 1 second of rendering time for the average 10-minute video. I can't imagine there being that much of a PCIe bandwidth issue for rendering. Also don't forget - the original data being stored needs to be copied to and from the GPU. So unless the user has direct access to the GPU storage, this would result in an overall slower rendering time.

                  But maybe there's something here I'm not quite understanding.
                  This is a HUGE, HUGE performance increase.
                  Check out their video: it basically shows it with and without the SSD (before, under 20 fps; after, above 90 fps).
                  More video here: https://twitter.com/RadeonPro/status...313566720?s=08

                  This is HUGE for the oil & gas industry (among others); they have datasets in the billions, and they need this.



                  Originally posted by juno View Post
                  LOL FUDzilla.
                  Why the heck anyone goes there, I have no idea. They are idiots.

                  http://www.anandtech.com/show/10518/...2-ssds-onboard
                  In terms of hardware, the Polaris based card is outfitted with a PCIe bridge chip – the same PEX8747 bridge chip used on the Radeon Pro Duo, I’m told – with the bridge connecting the two PCIe x4 M.2 slots to the GPU, and allowing both cards to share the PCIe system connection. Architecturally the prototype card is essentially a PCIe SSD adapter and a video card on a single board, with no special connectivity in use beyond what the PCIe bridge chip provides.
                  In any case, while AMD is selling dev kits now, expect some significant changes by the time we see the retail hardware in 2017. Given the timeframe I expect we’ll be looking at much more powerful Vega cards, where the overall GPU performance will be much greater, and the difference in performance between memory/storage tiers is even more pronounced.
                  So the final design (NOT the dev kit) will most likely be Vega.
                  Last edited by vortex; 26 July 2016, 02:17 PM.



                  • #19
                    The use case here is crunching through huge datasets that don't change super-frequently: oil and gas, database acceleration, perhaps offline rendering (storing huge uncompressed textures and huge geometry), and ray-tracing applications (which actually end up being not dissimilar to database acceleration in many ways).



                    • #20
                      Originally posted by vortex View Post
                      LOL FUDzilla.
                      Why the heck anyone goes there, I have no idea. They are idiots.

                      http://www.anandtech.com/show/10518/...2-ssds-onboard

                      So the final design (NOT the dev kit) will most likely be Vega.
                      Yeah, I was not talking about the final design (it's not even clear that will be produced if the devkit flops), but about the prototype that was shown.
                      Fudzilla is a rumor page, but nowhere near as bad as wccftech; they are not complete idiots, and you should think for a few seconds before insulting somebody like that. BTW: I was not quoting Fudzilla, I was just posting their photo. You can clearly see and deduce for yourself that it is not Polaris.

                      Confirmation by Robert Hallock himself: https://twitter.com/Thracks/status/757992332583067648

