
Open-Source AMD HSA Should Come To Fruition This Year


  • #16
    AMD needs

    AMD needs to improve its single-core performance a lot instead of putting more and more cores in its CPUs. What is the point of a 16-core CPU if apps use no more than 6 or 8 cores?


    • #17
      Originally posted by mmstick View Post
      Theoretically they could probably release a 16 core FX right now though.
      Look at the Opterons: there are some 16-core CPUs. They have two dies on the same package.


      • #18
        Originally posted by rikkinho View Post
        AMD needs to improve its single-core performance a lot instead of putting more and more cores in its CPUs. What is the point of a 16-core CPU if apps use no more than 6 or 8 cores?
        And just what real-world application do you need more single-thread performance for, as opposed to more cores?
        Games? Nope. Core 2s are fine in terms of single-thread performance for most games.
        Compiling? There's a lot more to be gained by going parallel.
        Spreadsheets? Nope.
        Video encode/decode? That's offloaded to the GPU.


        • #19
          Originally posted by Luke_Wolf View Post
          And just what real-world application do you need more single-thread performance for, as opposed to more cores?
          Every application benefits from higher single-thread performance.

          Originally posted by Luke_Wolf View Post
          Games? Nope. Core 2s are fine in terms of single-thread performance for most games.
          The benchmarks call you a liar.

          Originally posted by Luke_Wolf View Post
          Compiling? There's a lot more to be gained by going parallel
          Like the first point, but compiling is irrelevant for most users, much like weather simulations.

          Originally posted by Luke_Wolf View Post
          Spreadsheets? Nope
          Spreadsheets on the GPU are useless. Look at the calculations: you can construct a scenario that runs very nicely on the GPU, but that isn't the case for everything, or even for most workloads.

          Originally posted by Luke_Wolf View Post
          Video encode/decode? That's offloaded to the GPU.
          Decoding, yes, but only for playback (copying the frames back to system memory kills many of the benefits); encoding, no. A hardware encoder doesn't deliver the quality of a software encoder running on the CPU. You can maybe use the GPU's ALUs for some parts of the encode (x264 can do some of its lookahead with OpenCL), but that's only interesting for HSA.


          • #20
            Thanks for the responses!


            Thanks Bridgman! Interesting reading.


            • #21
              Originally posted by Nille View Post
              Every application benefits from higher single-thread performance.
              Not even close. Most applications are just sitting there spinning their wheels, or are one-shot tasks that finish so fast that the actual performance of a CPU core hasn't mattered for quite some time. It's generally a very small number of professional workloads that are demanding of today's CPUs, along with RTSes where a lot of units have to be simulated at once. Want to guess what's true of most of those? They're problems better solved by going parallel than serial.
              Originally posted by Nille View Post
              The benchmarks call you a liar.
              Oh really? Please do point to a game other than Skyrim where, under real-world conditions, a Haswell i7 has an advantage over the FX-8350 that can actually be perceived. I would also point out that the minimum requirements for Battlefield 4 and other demanding games say "Core 2 Duo", and their recommended or maximum requirements all say "a modern quad core".

              Originally posted by Nille View Post
              Like the first point, but compiling is irrelevant for most users, much like weather simulations.
              Compiling is one of very few modern workloads that cares about CPU performance, and I would like to see these benchmarks where a Haswell at -j1 beats a Core 2 Quad at -j4. The fact is it doesn't take a CPU particularly long to go through one file; what takes a long time is going through all of the files in a project. Calligra, for example, is 22,250 files, which means more cores matter more, because the build can do more at once.
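The arithmetic behind -jN is easy to sketch. A toy Python model (the per-file compile time is invented for illustration; only the 22,250-file count comes from the post):

```python
import math

def build_time(n_files, secs_per_file, jobs):
    """Idealized wall-clock time for a parallel build:
    files are compiled in batches of `jobs` at a time."""
    batches = math.ceil(n_files / jobs)
    return batches * secs_per_file

# Hypothetical project the size of Calligra: 22,250 files at 0.5 s each.
serial = build_time(22_250, 0.5, jobs=1)    # one core
parallel = build_time(22_250, 0.5, jobs=4)  # four cores

print(serial)    # 11125.0 seconds (just over 3 hours)
print(parallel)  # 2781.5 seconds (about 46 minutes)
```

Even a core twice as fast running -j1 (5562.5 s in this model) still loses to four slow cores at -j4, which is the point being made about builds scaling with width rather than clock speed.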

              Originally posted by Nille View Post
              Spreadsheets on the GPU are useless. Look at the calculations: you can construct a scenario that runs very nicely on the GPU, but that isn't the case for everything, or even for most workloads.
              Ah yes, spreadsheets on the GPU are useless, which is why AMD wasted all that money giving LibreOffice an OpenCL backend. Obviously spreadsheets that actually have significant CPU demands aren't massively parallel situations with a bunch of relatively simple operations. Further, if you're not in one of those situations where a spreadsheet is being abused, guess what? It doesn't have a real performance need and is just spinning its wheels pretty much the whole time.

              Originally posted by Nille View Post
              Decoding, yes, but only for playback (copying the frames back to system memory kills many of the benefits); encoding, no. A hardware encoder doesn't deliver the quality of a software encoder running on the CPU. You can maybe use the GPU's ALUs for some parts of the encode (x264 can do some of its lookahead with OpenCL), but that's only interesting for HSA.
              For most people's needs the hardware encoder is more than good enough, and if you're a professional you're going to have a render farm that you've hooked Maya up to.
              Last edited by Luke_Wolf; 08-25-2014, 03:32 AM.


              • #22
                Originally posted by xeekei View Post
                Doesn't the on-die GPU take up quite a bit of die space, though? Why not put, say, two extra CPU cores, or bigger cores, on the die instead? Enthusiasts don't need the on-die GPU, do they?
                From all I've gathered, there's apparently a pretty big hit to GPGPU performance with a dedicated GPU because of the latency involved, and that hit is mitigated by having the GPU on-die and sharing the same memory pool.

                Why a GPU on the CPU? All the heavy-lifting tasks most people do are multimedia-related, and GPUs are far better than CPUs at those tasks.

                As far as gaming goes, I can see it having a big impact with APU-assisted physics and AI calculations. You'd still want a dedicated GPU to render the actual frames, but the GPU portion of the APU would allow for a much more realistic world, and for enemies and allies that don't get stuck in corners but instead hunt you down with proper squad tactics until you rage-quit out of frustration, because every time you try something different they out-think you.


                • #23
                  Originally posted by Luke_Wolf View Post
                  For most people's needs the hardware encoder is more than good enough, and if you're a professional you're going to have a render farm that you've hooked Maya up to.
                  Sigh. "If you don't have a render farm, you're not allowed to care about quality"


                  • #24
                    Originally posted by Luke_Wolf View Post
                    Oh really? Please do point to a game other than Skyrim where, under real-world conditions, a Haswell i7 has an advantage over the FX-8350 that can actually be perceived. I would also point out that the minimum requirements for Battlefield 4 and other demanding games say "Core 2 Duo", and their recommended or maximum requirements all say "a modern quad core".
                    DayZ.


                    • #25
                      Originally posted by Luke_Wolf View Post
                      And just what real-world application do you need more single-thread performance for, as opposed to more cores?
                      GUI (GTK/Qt) applications: the GTK and Qt libraries have only limited support for threads and mostly don't scale to many cores.
                      Web browsers are multi-threaded, but a single tab doesn't scale to many cores.


                      • #26
                        Personally, I have a Kaveri APU, and the only time I tell myself I could use some more CPU power is when doing video encoding on the CPU. Which means that more cores would be as welcome as better single-thread performance, if not more so.
                        For the rest, I haven't seen a GUI application lag from lack of CPU power for quite a while. Rendering also likes multiple cores, and games will probably depend less on single-thread performance in the future because of new graphics APIs like Mantle, DX12 and OpenGL-Next.
                        So on the whole, for personal use, I also tend to believe that multiple cores are the way forward and that single-thread performance will become less and less relevant.


                        • #27
                          Originally posted by JS987 View Post
                          GUI (GTK/Qt) applications: the GTK and Qt libraries have only limited support for threads and mostly don't scale to many cores.
                          Web browsers are multi-threaded, but a single tab doesn't scale to many cores.
                          Oh please, the average GUI application spends almost all of its time listening on the event loop rather than actually doing anything, and I don't exactly need a Xeon to render QtCreator or Dolphin. Also, please link to this website you visit that demands so much performance that you need "a lot more single-thread performance" than AMD currently provides. Keep in mind that if you have to fall back on SunSpider or other web benchmarking sites, you'll be proving my point for me.
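That "listening on the event loop" claim can be illustrated with a toy loop. This is a minimal pure-Python sketch, not real GTK or Qt, and the event names are invented: the loop blocks on a queue (burning no CPU) and only spends CPU time while a handler runs.

```python
import queue
import time

def run_event_loop(events, handler):
    """Toy event loop: block until an event arrives, dispatch it,
    and track how long the handlers actually kept the CPU busy."""
    q = queue.Queue()
    for e in events:
        q.put(e)
    q.put(None)  # sentinel: no more events
    busy = 0.0
    while True:
        event = q.get()  # blocks idle (0% CPU) until an event arrives
        if event is None:
            break
        start = time.perf_counter()
        handler(event)   # the only span where core speed matters
        busy += time.perf_counter() - start
    return busy

handled = []
busy = run_event_loop(["click", "keypress"], handled.append)
print(handled)  # ['click', 'keypress']
```

For a typical desktop app the `busy` total is a tiny fraction of wall-clock time, which is why single-thread speed rarely shows up in GUI responsiveness.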

                          @curaga,
                          Oh, you're allowed to care about quality, but let me ask you a question: why is it that render farms have thousands of cores rather than a small number of cores clocked to the moon? After all, wouldn't that only benefit a workload that is massively multithreaded?


                          • #28
                            Originally posted by rvdboom View Post
                            Personally, I have a Kaveri APU, and the only time I tell myself I could use some more CPU power is when doing video encoding on the CPU. Which means that more cores would be as welcome as better single-thread performance, if not more so.
                            For the rest, I haven't seen a GUI application lag from lack of CPU power for quite a while. Rendering also likes multiple cores, and games will probably depend less on single-thread performance in the future because of new graphics APIs like Mantle, DX12 and OpenGL-Next.
                            So on the whole, for personal use, I also tend to believe that multiple cores are the way forward and that single-thread performance will become less and less relevant.
                            Multicore support in game engines was held up by the consoles, but now that the PS4 and Xbox One are based on 64-bit, 8-core AMD APUs with GCN graphics, all new game engines in the near future will have much better multi-core support.
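The way engines typically exploit those eight console cores is a job system: per-frame work is chopped into independent tasks and fanned out to a worker pool. A rough Python sketch of the idea (real engines do this in C++ with custom schedulers; the entity model and function names here are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def update_entity(entity):
    """One independent job: advance an entity by its velocity."""
    position, velocity = entity
    return (position + velocity, velocity)

def run_frame(entities, workers=8):
    """Fan the per-entity jobs out across the worker pool,
    then gather the results for the next frame."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(update_entity, entities))

world = [(0.0, 1.0), (10.0, -2.0), (5.0, 0.5)]
print(run_frame(world))  # [(1.0, 1.0), (8.0, -2.0), (5.5, 0.5)]
```

Because each job touches only its own entity, the work scales with core count, which is exactly the property that makes wide, modestly clocked APUs a good fit for game simulation.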


                            • #29
                              I wonder how HSA will change program execution. To me it looks like it could be just another ISA, a bit like when the FP co-processor was integrated. But since x86 code and HSAIL don't share a common pipeline, it complicates quite a lot of things.


                              • #30
                                Originally posted by Luke_Wolf View Post
                                @curaga,
                                Oh, you're allowed to care about quality, but let me ask you a question: why is it that render farms have thousands of cores rather than a small number of cores clocked to the moon? After all, wouldn't that only benefit a workload that is massively multithreaded?
                                Nothing has perfect scaling; even the atomic instructions used in lock-free methods have overhead. Thus a 50 GHz CPU would always beat fifty 1 GHz CPUs in performance.
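The trade-off here is Amdahl's law: if a fraction of the work is serial, adding cores can never push the speedup past the reciprocal of that serial fraction, while a faster clock speeds up everything. A quick sketch (the 90%-parallel workload is an arbitrary example, not a measurement):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Speedup of n cores over one core when only part of the
    work can run in parallel (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A workload that is 90% parallelizable:
print(amdahl_speedup(0.9, 50))     # ~8.47x from fifty cores
print(amdahl_speedup(0.9, 10**9))  # just under 10x, no matter how many cores
# A single core clocked 50x faster would be a flat 50x on the same workload.
```

So the 50 GHz CPU really does win whenever any serial fraction exists; render farms still choose many cores because physics caps clocks long before it caps core counts, and offline rendering has an almost negligible serial fraction.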
