Bridgman Is No Longer "The AMD Open-Source Guy"


  • #61
    Originally posted by entropy View Post
    Btw, does anybody know of an attempt to reverse engineer the UVD functionality?
    If there hasn't been such an attempt - why not?

    Maybe there are good technical arguments why this is practically impossible. (?)
    There isn't any reverse engineering effort that I know of. The reason is probably less technical and more that no one's bothered to do it.

    Comment


    • #62
      I'd like to thank John for his replies on this forum.

      I learned a lot from them, and was happy with the way he replied when I had a problem.
      No BS just because he worked for AMD.

      I am also happy he did not run away from the sometimes unfair attacks. (Unfair as in he was to blame for everything AMD ever did, or did not do.)

      Tim, welcome.

      I might be an optimist, but what I understand from this thread is that AMD has learned from earlier mistakes.
      More people are now working on the Linux part.

      Comment


      • #63
        Originally posted by Plombo View Post
        There isn't any reverse engineering effort that I know of. The reason is probably less technical and more that no one's bothered to do it.
        Thanks. I was thinking of a potential deep entanglement between the UVD block and DRM/encryption,
        and thus a major obstacle to reverse-engineering efforts, even for unprotected data streams.

        Comment


        • #64
          Well, speaking of compilers:

          Do you have any information regarding OpenCL support on AMD GCN?

          Folding on an AMD 7970 is worthless. PG claims that the AMD SDK does not sufficiently support OpenCL, hence OpenMM is not optimized, and hence the Folding cores are not optimized.

          I know it is a topic that is a bit of a stretch for this forum... but I do see in this case that robust compilers and OpenCL drivers will be paramount to the success of the HSA initiative.

          I have had my card since January, and it works well for gaming, but the compute potential of the new architecture is certainly not being utilized to its fullest across the software ecosystem.

          Blame is often directed back at AMD for failures of compilers and drivers.

          Just curious about your inside perspective and opinion: will this happen soon, or will the compute capability of discrete graphics be eclipsed / replaced by the HSA initiative?

          Comment


          • #65
            Originally posted by cube View Post
            How is that possible, that Intel has open source video acceleration for a long time (from the beginning?) on (AFAIK) nearly all graphics chips, and AMD can't do it ? And they still support DRM and HDCP on Windows - as we can see, it can be done...
            This question has been answered hundreds of times: Intel designed the decoding unit with open source in mind, while AMD didn't bother to separate out the DRM part.
            ## VGA ##
            AMD: X1950XTX, HD3870, HD5870
            Intel: GMA45, HD3000 (Core i5 2500K)

            Comment


            • #66
              Originally posted by mdk777 View Post
              Well, speaking of compilers:
              Do you have any information regarding OpenCL support on AMD GCN?
              Folding on an AMD 7970 is worthless. PG claims that the AMD SDK does not sufficiently support OpenCL, hence OpenMM is not optimized, and hence the Folding cores are not optimized.
              This is not true in my experience. Back in the Stream SDK < 2.4 days there were always some bugs, but OpenCL is fully supported now.
              The scientific community still swears by CUDA because NVIDIA poured a lot of marketing effort into the science community. Convincing them to try OpenCL instead of CUDA is like trying to convert a Fortran user to C/C++/...: "Fortran is on average 10% faster" (you hear that a lot from these guys).

              I don't know who PG is, but send him some current benchmarks of similar code on CUDA and OpenCL and tell him that the SDKs are ready.

              Comment


              • #67
                Originally posted by not.sure View Post
                He probably got tired of arguing with Q
                LMAO, now the guy taking bridgman's place may be next to fall victim to Q's trolling

                At any rate, congrats to bridgman!

                Comment


                • #68
                  Originally posted by Gps4l View Post
                  I'd like to thank John for his replies on this forum.
                  I learned a lot from them, and was happy with the way he replied when I had a problem.
                  No BS just because he worked for AMD.

                  I am also happy he did not run away from the sometimes unfair attacks. (Unfair as in he was to blame for everything AMD ever did, or did not do.)

                  Tim, welcome.

                  I might be an optimist, but what I understand from this thread is that AMD has learned from earlier mistakes.
                  More people are now working on the Linux part.
                  +1.
                  And... I long for the day when AMD's open-source Linux GPU driver is on par with Intel's.

                  Comment


                  • #69
                    Thanks Bridgman for all your work, patience and responses on this forum.

                    Tim, good luck in your new work!

                    Comment


                    • #70
                      1. Moving to a new shader architecture (GCN), new memory management (GPUVM) and new shader compiler (llvm) at the same time. This was kind-of necessary but it meant that we had far more work in process where you couldn't see an obvious benefit. Using llvm was partly to build a good foundation for an open source OpenCL stack, and partly to get a more capable shader compiler into the graphics stack.

                      This first point in Bridgman's post is what I was talking about.

                      These steps made code optimized for the previous architecture run like crap on the new one.

                      While your experience:

                      This is not true in my experience. Back in the Stream SDK < 2.4 days there were always some bugs, but OpenCL is fully supported now.
                      may be true of some software, for others the deficit can run to 70-90%.

                      I understand that HSA will benefit from eliminating the PCIe bus and the associated legacy overhead.

                      However, it will be difficult to develop a software ecosystem when you can't model an integrated GPU on the state-of-the-art discrete card (an AMD 7970 performing worse than an AMD 6850).

                      Anyway, I didn't come to complain, just to ask how they see the SDK development and whether they are achieving their goals.

                      Comment


                      • #71
                        Originally posted by mdk777 View Post
                        This first point in Bridgeman's post is what I was talking about.

                        These steps made code optimized for the previous architecture run like crap on the new.
                        I don't understand. I was talking about a new shader compiler for the new architecture in the open source graphics stack. You seem to be talking about proprietary OpenCL.

                        Comment


                        • #72
                          Yes, going from a 6850 to a 7970 yields a 3x improvement in LuxMark v2.0 running on OpenCL 1.2.

                          However, FAH yields a negative result, i.e. a 7970 does not work as well as the 6850, running the same (open version) OpenCL program.

                          The difference is attributed by the programmers to a lack of robust drivers/compilers on the part of AMD, i.e. the functions supported on the previous compiler stack are not now supported/optimized to run on the new open source stack.

                          FAH cores themselves are not open source. Mike Houston could give you some insight, as he was actively participating at one time.

                          But thanks for responding even though you have no immediate experience with the issue.

                          Best regards,
                          Last edited by mdk777; 09-22-2012, 06:24 PM.

                          Comment


                          • #73
                            Originally posted by mdk777 View Post
                            The difference is attributed by the programmers to a lack of robust drivers/compilers on the part of AMD, i.e. the functions supported on the previous compiler stack are not now supported/optimized to run on the new open source stack.
                            I'm surprised by that conclusion. The FAH team has always maintained differently optimized code paths for different hardware architectures (e.g. NVIDIA vs ATI/AMD), so I would not expect them to think that code paths optimized for a VLIW core would also be optimal for a scalar core.

                            Comment


                            • #74
                              Originally posted by tstrunk View Post
                              The scientific community still swears by CUDA, because nvidia dumped a lot of marketing effort into the science community.
                              I don't think that it's the marketing. The main reasons are:

                              1) CUDA was there first, years ago, and they have lots of CUDA code around that would take quite an effort to convert to OpenCL
                              2) CUDA has a set of very nice, very fast libraries. OpenCL is more bare-bones. Even finding a good FFT is tricky -- there is the Apple one, but it is not as complete as the Nvidia one, and it's full of OSX-isms. If you know a good, drop-in FFT for OpenCL, let me know. This will improve with time, but once again, CUDA has a head start here.

                              Many people choose to use OpenCL for new projects, but CUDA is already well entrenched in many places.

                              Comment


                              • #75
                                so I would not expect them to think that code paths optimized for a VLIW core would also be optimal for a scalar core.

                                The assertion is that the delay in optimization is due to the lack of a working compiler from AMD for the VLIW core.

                                Perhaps it is all CYA. As I mentioned, I'm just looking for an inside perspective; I am just an educated end user.

                                Comment
