Bridgman Is No Longer "The AMD Open-Source Guy"


  • #71
    Originally posted by mdk777 View Post
    This first point in Bridgman's post is what I was talking about.

    These steps made code optimized for the previous architecture run like crap on the new.
    I don't understand. I was talking about a new shader compiler for the new architecture in the open source graphics stack. You seem to be talking about proprietary OpenCL.


    • #72
      Yes, going from a 6850 to a 7970 yields a 3x improvement in LuxMark v2.0 running on OpenCL 1.2.

      However, FAH yields a regression, i.e. a 7970 does not work as well as the 6850 running the same (open version) OpenCL program.

      The difference is attributed by the programmers to a lack of robust driver compilers on the part of AMD, i.e. the functions supported on the previous compiler stack are not now supported/optimized to run on the new open source stack.

      FAH cores themselves are not open source. Mike Houston could give you some insight as he was actively participating at one time.

      But thanks for responding even though you have no immediate experience with the issue.

      Best regards,
      Last edited by mdk777; 22 September 2012, 06:24 PM.



      • #73
        Originally posted by mdk777 View Post
        The difference is attributed by the programmers to a lack of robust driver compilers on the part of AMD, i.e. the functions supported on the previous compiler stack are not now supported/optimized to run on the new open source stack.
        I'm surprised by that conclusion. The FAH team has always maintained differently optimized code paths for different hardware architectures (e.g. NVIDIA vs ATI/AMD), so I would not expect them to think that code paths optimized for a VLIW core would also be optimal for a scalar core.


        • #74
          Originally posted by tstrunk View Post
          The scientific community still swears by CUDA, because NVIDIA dumped a lot of marketing effort into the science community.
          I don't think that it's the marketing. The main reasons are:

          1) CUDA was there first, years ago, and they have lots of CUDA code around that would take quite an effort to convert to OpenCL.
          2) CUDA has a set of very nice, very fast libraries. OpenCL is more bare-bones. Even finding a good FFT is tricky -- there is the Apple one, but it is not as complete as the Nvidia one, and it's full of OSX-isms. If you know a good, drop-in FFT for OpenCL, let me know. This will improve with time, but once again, CUDA has a head start here.

          Many people choose to use OpenCL for new projects, but CUDA is already well entrenched in many places.
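          For comparison, here is a minimal sketch of what "drop-in" looks like on the CUDA side with cuFFT (the transform size and toy input are arbitrary placeholders; only the cuFFT and CUDA runtime calls are the real API). An OpenCL project has to bring its own FFT or adapt something like the Apple code mentioned above.

          Code:
          #include <stdio.h>
          #include <stdlib.h>
          #include <cuda_runtime.h>
          #include <cufft.h>

          int main(void)
          {
              const int N = 1 << 16;                            /* transform size, arbitrary for illustration */
              cufftComplex *h_sig = (cufftComplex *)malloc(sizeof(cufftComplex) * N);
              for (int i = 0; i < N; ++i) {                     /* toy input signal */
                  h_sig[i].x = (float)(i % 16);
                  h_sig[i].y = 0.0f;
              }

              cufftComplex *d_sig;
              cudaMalloc((void **)&d_sig, sizeof(cufftComplex) * N);
              cudaMemcpy(d_sig, h_sig, sizeof(cufftComplex) * N, cudaMemcpyHostToDevice);

              cufftHandle plan;
              cufftPlan1d(&plan, N, CUFFT_C2C, 1);              /* 1-D complex-to-complex plan, batch of 1 */
              cufftExecC2C(plan, d_sig, d_sig, CUFFT_FORWARD);  /* in-place forward FFT on the GPU */

              cudaMemcpy(h_sig, d_sig, sizeof(cufftComplex) * N, cudaMemcpyDeviceToHost);
              printf("bin 0: %f + %fi\n", h_sig[0].x, h_sig[0].y);

              cufftDestroy(plan);
              cudaFree(d_sig);
              free(h_sig);
              return 0;
          }

          Build with something like nvcc fft_demo.cu -lcufft; plan creation, execution and cleanup are three library calls, with no kernel code to write or tune yourself.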



          • #75
            so I would not expect them to think that code paths optimized for a VLIW core would also be optimal for a scalar core.

            The assertion is that the delay in optimization is due to the lack of a working compiler from AMD for the VLIW core.

            Perhaps it is all CYA. As I mentioned, I am just looking for an inside perspective. I am only an educated end user.



            • #76
              Actually, the VLIW core is the older one (Cayman and previous). Compiler issues there (whatever they might be) wouldn't affect work on the newer cores, would they?

              EDIT -- hmm... a quick skim of the F@H site suggests that the client is going in at the CAL level (essentially assembler) rather than OpenCL. If true, and if the IL has been optimized for VLIW GPUs, that would explain why the GCN performance isn't scaling the way it usually does. Will ask Mike.
              Last edited by bridgman; 22 September 2012, 07:38 PM.


              • #77
                FAH switched from CAL to OpenCL last year.


                Hence my hope that the new GCN would continue to function.

                Again, thanks for your interest and insight.



                • #78
                  I still have to understand why F@H takes 100% of my CPU with the OpenCL client (HD5870) while it takes nearly 0% on NVIDIA with CUDA.


                  • #79
                    I still have to understand why F@H takes 100% of my CPU with the OpenCL client (HD5870) while it takes nearly 0% on NVIDIA with CUDA.
                    Again, I don't have the experience of someone like Bridgman regarding the details of the architecture and software interface.

                    However, what I have been told is that the NVIDIA architecture did a better job of keeping the computations on the card, while the previous AMD architecture (optimized for graphics rather than GPU compute) required constant callbacks to the CPU to accomplish the same calculations.

                    Hence again the great irony... the AMD GCN was supposed to be an improvement in GPU compute, moving toward the NVIDIA design model.
                    However, there is always some communication required, regardless of how optimized the GPU compute capability is.

                    Hence, eliminating the PCIe latency entirely... well, that is the holy grail of the entire HSA project as I understand it.
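                    To make that concrete, here is a rough sketch in CUDA terms of the pattern a GPU compute client aims for; the step kernel is a made-up stand-in for one simulation iteration, not anything from the FAH cores. Data is uploaded once, many iterations run entirely in video memory, and results cross the PCIe bus only at the end, so every per-iteration readback the application or driver forces shows up as bus latency and host CPU time.

                    Code:
                    #include <stdlib.h>
                    #include <cuda_runtime.h>

                    /* Hypothetical iteration kernel -- a stand-in for one simulation step. */
                    __global__ void step(float *state, int n)
                    {
                        int i = blockIdx.x * blockDim.x + threadIdx.x;
                        if (i < n)
                            state[i] = 0.5f * (state[i] + 1.0f / (state[i] + 1.0f));
                    }

                    int main(void)
                    {
                        const int n = 1 << 20;
                        const int iters = 1000;
                        const size_t bytes = n * sizeof(float);

                        float *h_state = (float *)malloc(bytes);
                        for (int i = 0; i < n; ++i)
                            h_state[i] = 1.0f;

                        float *d_state;
                        cudaMalloc((void **)&d_state, bytes);
                        cudaMemcpy(d_state, h_state, bytes, cudaMemcpyHostToDevice);   /* one upload */

                        for (int it = 0; it < iters; ++it) {
                            /* All intermediate state stays on the card; nothing is read back per step. */
                            step<<<(n + 255) / 256, 256>>>(d_state, n);
                        }

                        cudaMemcpy(h_state, d_state, bytes, cudaMemcpyDeviceToHost);   /* one download at the end */

                        cudaFree(d_state);
                        free(h_state);
                        return 0;
                    }

                    Whether the 100% CPU figure above comes from genuine per-step readbacks or simply from the driver busy-waiting on the GPU is something only profiling would settle; the sketch just shows the data-residency side of the argument.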

                    Anyway, this is why I thought this discussion was appropriate to Bridgman being on the HSA team. The AMD GCN should kick@#$ on an AMD 7970 if the same architecture is ultimately going to be expected to perform on the APU/HSA.



                    • #80
                      Oh, well.
                      I saw the tweet early but the news post late, since I was away from the computer for a few days. I had fun in the forests.

                      Well, it was less horrible than I thought after reading the tweet. So nobody got fired or anything.
                      So it is just a relatively minor change. And if everybody inside is happy with it, fine.


                      Good luck with hacking on HSA stuff then, John, and thanks for answering all the questions. Thanks for communicating with us and thanks for bearing all the many flames that went towards you/ATI/AMD.

                      And welcome to Tim Writer. (I hope you're prepared for the rough tone some people use here from time to time.)
