HD 3870 already outdated for OpenCL. Am I naïve for expecting otherwise?


  • HD 3870 already outdated for OpenCL. Am I naïve for expecting otherwise?

    To my great surprise, after battling to get the AMD APP (OpenCL) SDK to work in Maverick with fglrx, and actually stopping to read the System Requirements, I noticed that my graphics card isn't supported. It seems the 4xxx series isn't even properly supported (beta level), so I fear that my card never will be.

    Is it unreasonable to expect a ~3 year old graphics card to be supported by new technologies? Are there technical difficulties in implementing OpenCL on older GPUs because of hardware differences? Or will support come for the 3xxx series at some point?

    I feel like I bought that card yesterday... Maybe I'm just getting old (too)?

  • #2
    The HD 38xx series was released in 2007; OpenCL 1.0 was released in 2008.

    How exactly is ATI supposed to add hardware features required for an API that wasn't available at the time of GPU design?



    • #3
      That was the part I was in doubt about: whether the various OpenCL functions are all implemented in dedicated hardware, or whether they run on the already-present vertex shaders and whatever else the graphics card has (I don't know much about that, as you can probably tell). But perhaps that is the exception rather than the rule, "emulating" such functions on the shaders? Like we will hopefully see with video decoding acceleration in Gallium.
      Of course, if the various OpenCL functions need to be implemented in hardware, it's no surprise that a card predating a standard is incompatible with that standard.



      • #4
        Newer versions of OpenCL expect hardware features such as double-precision floating point. That doesn't exist in old hardware, and it's not worth emulating in software (you would effectively be punting to the CPU anyway).

        Note that OpenCL (particularly AMD's implementation) handles the CPU too. So your GPU won't be supported, but your CPU will be.
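
        For what it's worth, here's a minimal host-side sketch (plain C against the standard OpenCL 1.x API, error handling omitted) of picking up a CPU device, which is what AMD's implementation gives you on unsupported GPUs:

        Code:
        /* Minimal sketch: enumerate OpenCL platforms and grab a CPU device.
         * Assumes an OpenCL 1.x ICD (e.g. the AMD APP/Stream SDK) is installed.
         * Build with something like: gcc find_cpu.c -lOpenCL */
        #include <stdio.h>
        #include <CL/cl.h>

        int main(void)
        {
            cl_platform_id platforms[8];
            cl_uint num_platforms = 0;
            clGetPlatformIDs(8, platforms, &num_platforms);

            for (cl_uint i = 0; i < num_platforms; ++i) {
                cl_device_id dev;
                cl_uint num_devs = 0;
                /* Ask this platform specifically for a CPU device. */
                if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_CPU,
                                   1, &dev, &num_devs) != CL_SUCCESS || num_devs == 0)
                    continue;

                char name[256];
                clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
                printf("CPU OpenCL device: %s\n", name);
            }
            return 0;
        }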



        • #5
          @runeks

          There are much older cards that support OpenCL, because OpenCL is largely a copy-paste of CUDA, and the reference card for CUDA is the old GeForce 8800.

          Yes, your HD 38xx is younger than the GeForce 8800, but the 8800 has two caches that were Nvidia-only at the time, and the OpenCL spec (read: the CUDA spec) is built around those two caches.

          The HD 4000 series added one of those caches; the second one is emulated in VRAM.

          An HD 5000 card is the first to support both of these GeForce 8800 caches, which means an HD 5000 is the first AMD card really built for OpenCL.

          Remember, OpenCL is a spec from Nvidia+Apple based on Nvidia hardware.

          If you search, you can find many posts about this topic on this forum.
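
          To make the hardware point a bit more concrete: one of the on-chip memories in question presumably corresponds to what OpenCL calls __local memory (CUDA's "shared memory"). A kernel like the following illustrative sketch depends on it, and on a card without real on-chip local memory that buffer has to be emulated in much slower VRAM. (The kernel is just a generic partial-sum example, assuming a power-of-two work-group size; it isn't from this thread.)

          Code:
          /* Illustrative OpenCL kernel: per-work-group partial sum using
           * __local (on-chip) memory. Assumes the work-group size is a
           * power of two. Without real local memory, 'scratch' ends up
           * emulated in off-chip VRAM and this barrier-heavy pattern is slow. */
          __kernel void partial_sum(__global const float *in,
                                    __global float *out,
                                    __local float *scratch)
          {
              size_t lid = get_local_id(0);
              size_t lsz = get_local_size(0);

              scratch[lid] = in[get_global_id(0)];
              barrier(CLK_LOCAL_MEM_FENCE);

              /* Tree reduction within the work-group. */
              for (size_t stride = lsz / 2; stride > 0; stride /= 2) {
                  if (lid < stride)
                      scratch[lid] += scratch[lid + stride];
                  barrier(CLK_LOCAL_MEM_FENCE);
              }

              if (lid == 0)
                  out[get_group_id(0)] = scratch[0];
          }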



          • #6
            @Qaridarium

            Interesting! I didn't know that Apple were the original authors, and that Nvidia were so heavily involved. But I guess it makes sense that Nvidia assisted them as much as they did, as they are the ones with the most experience regarding GPGPU.
            Kudos to Apple for releasing this as an open spec!



            • #7
              To the OP:

              I just found out the same with regards to my Radeon 4770 a few days ago. I've been developing an OpenCL-accelerated decoder for VP8/WebM video (not finished yet), and I've had to resort to CPU-based CL on my desktop (using the Stream SDK v2.3). My laptop (GF 9400M) works just fine using Nvidia's binary drivers.

              The problem with my card: While OpenCL 1.0 is supported, the card doesn't support the cl_khr_byte_addressable_store extension, which makes it useless for what I'm working on.
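
              For anyone hitting the same wall, the quickest check is to read the device's extension string before committing to a design. A rough host-side sketch (plain C, error handling omitted):

              Code:
              /* Rough sketch: does the first GPU device advertise
               * cl_khr_byte_addressable_store? Error handling omitted. */
              #include <stdio.h>
              #include <string.h>
              #include <CL/cl.h>

              int main(void)
              {
                  cl_platform_id plat;
                  cl_device_id dev;
                  char exts[8192];

                  clGetPlatformIDs(1, &plat, NULL);
                  clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
                  clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, sizeof(exts), exts, NULL);

                  printf("cl_khr_byte_addressable_store: %s\n",
                         strstr(exts, "cl_khr_byte_addressable_store") ? "yes" : "no");
                  return 0;
              }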

              *crosses fingers* I'm hoping my thesis advisor pulls through and lends me one of the GTX 580s he just bought for one of his clusters.



              • #8
                ^ I guess that's the price we pay for enjoying hardware acceleration: when the demands for what needs to be accelerated change, the hardware has to change too :\

                I'm really starting to look forward to OpenBenchmarking.org appearing. I'd really like to be able to just get a list of OpenCL compatible Linux graphics cards, and list them in ascending order of "OpenCL-power" per dollar. It's really a hassle to find out which graphics card to choose for Linux as things are now.



                • #9
                  Well, if you're looking for CL power per dollar, you're probably going to want to look at something like a GTX 560, but I don't really have benchmarks to make my case. As you said, when OB.org is ready, that should help. I've noticed a HUGE difference in CL execution speed in Linux vs Mac OS, at least when running on CPUs (my dual-core (C2D 2.53) MacBook outruns my 6-core Phenom on Linux 2.6.38-rc2 by a factor of 4).

                  Otherwise, I'd check whether any of the recent Phoronix articles have done AMD/Nvidia comparisons with an OpenCL test involved. Anandtech or TechReport might also include Windows-based CL testing in their video card articles.

                  Anandtech's GPU Bench also has a couple GPGPU tests, such as:
                  http://www.anandtech.com/bench/GPU11/222



                  • #10
                    Originally posted by runeks View Post
                    I'd really like to be able to just get a list of OpenCL compatible Linux graphics cards, and list them in ascending order of "OpenCL-power" per dollar.

                    The HD 5850 with 2 GB of VRAM has the best OpenCL-power-per-dollar ratio.

                    That's because that card has full 64-bit (double precision) support without any artificial slowdown, and it only costs about $200 today.

                    All GTX 480 and GTX 580 cards are much slower on 64-bit tasks, because Nvidia throttles them to sell Tesla cards.



                    • #11
                      Originally posted by Qaridarium View Post
                      The HD 5850 with 2 GB of VRAM has the best OpenCL-power-per-dollar ratio.

                      That's because that card has full 64-bit (double precision) support without any artificial slowdown, and it only costs about $200 today.

                      All GTX 480 and GTX 580 cards are much slower on 64-bit tasks, because Nvidia throttles them to sell Tesla cards.
                      The double-precision speed isn't as fast as the Tesla cards, but for single-precision and integer workloads, they're fine. Everything I'm working on is integer-based, so it doesn't affect me...



                      • #12
                        Originally posted by Veerappan View Post
                        The double-precision speed isn't as fast as the Tesla cards, but for single-precision and integer workloads, they're fine. Everything I'm working on is integer-based, so it doesn't affect me...
                        You're not getting the point: the DP speed PER DOLLAR is much higher with the HD 5850.
                        I wrote "OpenCL-power per dollar ratio".
                        PER DOLLAR means you can't beat a $200 card with a $2000 Tesla card that is only 50% faster.
                        50% faster would justify $300, and 100% faster would justify $400, but your Tesla costs $2000.
                        Your Tesla would need to be 10 times faster on DP just to break even.
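
                        Spelled out, the comparison is just throughput divided by price. A throwaway sketch in C (the GFLOPS and price figures are made-up placeholders, not benchmark results):

                        Code:
                        /* Purely illustrative perf-per-dollar comparison: the GFLOPS and
                         * price figures below are hypothetical placeholders, not benchmarks. */
                        #include <stdio.h>

                        int main(void)
                        {
                            double radeon_gflops = 400.0, radeon_price = 200.0;  /* hypothetical */
                            double tesla_gflops  = 600.0, tesla_price  = 2000.0; /* hypothetical */

                            /* Per dollar, the $2000 card only catches up if it is roughly
                             * 10x faster in absolute terms (here it is only 1.5x). */
                            printf("Radeon: %.2f GFLOPS per dollar\n", radeon_gflops / radeon_price);
                            printf("Tesla:  %.2f GFLOPS per dollar\n", tesla_gflops / tesla_price);
                            return 0;
                        }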



                        • #13
                          Originally posted by Qaridarium View Post
                          You're not getting the point: the DP speed PER DOLLAR is much higher with the HD 5850.
                          I wrote "OpenCL-power per dollar ratio".
                          PER DOLLAR means you can't beat a $200 card with a $2000 Tesla card that is only 50% faster.
                          50% faster would justify $300, and 100% faster would justify $400, but your Tesla costs $2000.
                          Your Tesla would need to be 10 times faster on DP just to break even.
                          http://blog.cudachess.org/2010/03/nv...ncl-benchmark/

                          GTX 580 versus a Radeon 5870 (also includes a 6870 and GTX 460/480):
                          http://www.geeks3d.com/20101125/test...eeks3d-labs/5/

                          A 4-part series which compares a Radeon 5870 against a GTX 280 (280, not 480). Note that the GTX 280 usually beats the 5870, which means the Fermi cards probably demolish it:
                          http://www.geeks3d.com/20100115/gpu-...l-test-part-1/

                          Another GTX 480 versus 5870:
                          http://www.geeks3d.com/20100330/gefo...rmance-tested/

                          As I said, the GTX 480 and GTX 580 kick the pants off the 5870 (and therefore the 5850). Looking at the second link above, the GTX 460 768MB is as fast as the 5870. The GTX 460 goes for $160-$200 on Newegg, and the 5850 is $185+.

                          Personally, even if it is a little slower, I'd probably still buy the GTX 460/560 because of Nvidia's Visual Profiler tool:
                          http://developer.nvidia.com/object/visual-profiler.html

                          If you're doing OpenCL development work, it will probably come in really handy.



                          • #14
                            Originally posted by Veerappan View Post

                            http://blog.cudachess.org/2010/03/nv...ncl-benchmark/

                            GTX 580 versus a Radeon 5870 (also includes a 6870 and GTX 460/480):
                            http://www.geeks3d.com/20101125/test...eeks3d-labs/5/

                            A 4-part series which compares a Radeon 5870 against a GTX 280 (280, not 480). Note that the GTX 280 usually beats the 5870, which means the Fermi cards probably demolish it:
                            http://www.geeks3d.com/20100115/gpu-...l-test-part-1/

                            Another GTX 480 versus 5870:
                            http://www.geeks3d.com/20100330/gefo...rmance-tested/

                            As I said, the GTX 480 and GTX 580 kick the pants off the 5870 (and therefore the 5850). Looking at the second link above, the GTX 460 768MB is as fast as the 5870. The GTX 460 goes for $160-$200 on Newegg, and the 5850 is $185+.

                            Personally, even if it is a little slower, I'd probably still buy the GTX 460/560 because of Nvidia's Visual Profiler tool:
                            http://developer.nvidia.com/object/visual-profiler.html

                            If you're doing OpenCL development work, it will probably come in really handy.

                            This all really depends on the GPGPU application being used. For most average end users, the Nvidia cards will smoke an AMD card on consumer-oriented applications, which typically don't rely on double precision (those benchmarks, for example, don't use DP).

                            Q is right, however, that on a pure double-precision benchmark or application, AMD's consumer cards would post better results than Nvidia's consumer cards. The flip side is that the AMD cards don't carry the same error correction, so how accurate those results are is anybody's guess, and for an application that uses double precision, accuracy is usually desired even if it comes at a monetary cost or some speed.

                            It really all depends on the market you are going after and the GPGPU application.



                            • #15
                              Originally posted by deanjo View Post

                              This all really depends on the GPGPU application being used. For most average end users, the Nvidia cards will smoke an AMD card on consumer-oriented applications, which typically don't rely on double precision (those benchmarks, for example, don't use DP).

                              Q is right, however, that on a pure double-precision benchmark or application, AMD's consumer cards would post better results than Nvidia's consumer cards. The flip side is that the AMD cards don't carry the same error correction, so how accurate those results are is anybody's guess, and for an application that uses double precision, accuracy is usually desired even if it comes at a monetary cost or some speed.

                              It really all depends on the market you are going after and the GPGPU application.
                              Yeah, you and Qaridarium are right that the double-precision floating-point performance of the Fermi-based GTX cards is crippled, but at the same time, I haven't seen any consumer-level benchmarks that use fp64 either.

                              Given that double-precision floating point wasn't even a requirement of the OpenCL 1.0 spec (it was an optional extension), I haven't placed much importance on it when weighing purchasing decisions.
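
                              For reference, since fp64 is an optional extension on that generation, a kernel that wants doubles has to opt in explicitly on devices that advertise cl_khr_fp64 (AMD also shipped a cl_amd_fp64 variant on some drivers). A minimal illustrative kernel:

                              Code:
                              /* Minimal illustrative kernel: doubles are only usable after
                               * enabling the optional fp64 extension on devices that expose it. */
                              #pragma OPENCL EXTENSION cl_khr_fp64 : enable

                              __kernel void axpy_fp64(double a,
                                                      __global const double *x,
                                                      __global double *y)
                              {
                                  size_t i = get_global_id(0);
                                  y[i] = a * x[i] + y[i];
                              }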
