new video card advice

  • #11
    Originally posted by droidhacker View Post
    You're picking on words without spending the time to understand the meaning/context/purpose.

    OpenCL and CUDA are both subsets (i.e. implementations/versions/etc.) of "USE GPU AS GENERAL-ish PURPOSE PROCESSOR".

    Is it that hard to understand?

    Or you can look at it from a different perspective.... CUDA is a version of OpenCL. That might make more sense in the context since CUDA is a restricted proprietary implementation of what should naturally be open. Using the word "version" not to mean "subset", but to mean "interpretation" or "take on".
    It's not a "version", "interpretation" or "take on" of Cuda either. It is simply an alternative.
    Just as openGL is not a "version", "interpretation" or "take on" of DirectX, and openAL is not a "version", "interpretation" or "take on" of DirectSound. You wouldn't say "linux is a version of windows" or "openSUSE is a version of Ubuntu", so why would you say openCL is a version of Cuda?

    You know how tiring it is dealing with you language nazis? Spend more time worrying about the intention. We're not writing contracts here.
    Intention is fine, but convey it in an accurate portrayal so that there isn't room for misinterpretation due to misrepresentation of facts.

    And FYI: the number of "CUDA APPLICATIONS" is irrelevant if most machines CAN'T RUN THEM!!! It is strictly MORONIC to write software that is restricted to a particular vendor's hardware! And more than that, to a particular set of drivers for that hardware!
    It's hardly moronic to write software that is restricted to one vendor when there is no real viable option yet to do otherwise. openCL is still having teething pains on all levels. Even with openCL you still have device-specific optimizations that have to be done in the code for it to strut its stuff.

    Last time I checked intel neither implements nor has any plans to implement openCL support for their products. That really leaves ATI cards as the odd man out when it comes to running parallel computing on the GPU, which is dominated by nvidia cards, so yes, Cuda is the most widely implemented GPU parallel computing solution.

    Looking into a possible future, openCL has a Gallium3D state tracker, which might hopefully one day mean openCL on intel, AMD, **AND** nvidia OPEN SOURCE drivers, as well as the current AMD blob implementation. That pretty much covers EVERYBODY EXCEPT nvidia blob users. Much more freeing, eh?
    Nvidia blob users wouldn't need Gallium, so they wouldn't be losing out on anything. Right now, if you want to be free, your only choice is to use openCL on the CPU, which really offers no advantage over current parallel implementations for the CPU. If you want to use openCL on a GPU right now you still have to use proprietary blobs. Now, given that there is higher demand in open drivers for tasks such as video acceleration, proper openCL support in them could be quite a while away.



    • #12
      Originally posted by deanjo View Post
      It's not a "version", "interpretation" or "take on" of Cuda either. It is simply an alternative.
      Just as openGL is not a "version", "interpretation" or "take on" of DirectX, and openAL is not a "version", "interpretation" or "take on" of DirectSound. You wouldn't say "linux is a version of windows" or "openSUSE is a version of Ubuntu", so why would you say openCL is a version of Cuda?

      Intention is fine, but convey it in an accurate portrayal so that there isn't room for misinterpretation due to misrepresentation of facts.
      You seem to have a real hard time with reading comprehension. I suggest grammar school.

      It's hardly moronic to write software that is restricted to one vendor when there is no real viable option yet to do otherwise. openCL is still having teething pains on all levels. Even with openCL you still have device-specific optimizations that have to be done in the code for it to strut its stuff.
      And when that is to be done on ONE piece of hardware for one implementation, and ALL hardware for the other implementation? Definitely moronic to limit yourself to the one. Especially when that ONE faces a very uncertain future. WHERE WILL NVIDIA BE when fusion chips become common? They don't make an x86 core!

      Last time I checked intel neither implements nor has any plans to implement openCL support for their products. That really leaves ATI cards as the odd man out when it comes to running parallel computing on the GPU, which is dominated by nvidia cards, so yes, Cuda is the most widely implemented GPU parallel computing solution.
      You clearly have NO CONCEPT of open source software. I suggest you go back to MS as it clearly suits you better.

      Nvidia blob users wouldn't need Gallium, so they wouldn't be losing out on anything.
      WHAT nvidia blob users?
      And NVIDIA users IN TOTAL are a MINORITY. Really -- count up all the INTEL GPUs and the AMD GPUs and compare that with the number of NVIDIA GPUs.

      Right now, if you want to be free, your only choice is to use openCL on the CPU, which really offers no advantage over current parallel implementations for the CPU.
      THERE IS ANOTHER ADVANTAGE!!!!
      For those who do NOT have a GPU with openCL support, openCL software is STILL USABLE!!! Slow? Sure. Functional? You bet! Works for all, fast for some. Sure beats fast for some, dead for most.
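
      For illustration, a minimal sketch of that point, assuming the standard OpenCL 1.x C API (the program is hypothetical, not taken from any application discussed here): it prefers a GPU device and falls back to the CPU, so the same binary still runs on machines without an OpenCL-capable GPU driver.
      Code:
      /* Prefer a GPU device, fall back to the CPU: slower, but still functional. */
      #include <stdio.h>
      #include <CL/cl.h>

      int main(void)
      {
          cl_platform_id platform;
          cl_device_id device;
          cl_int err;

          if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
              fprintf(stderr, "no OpenCL platform found\n");
              return 1;
          }

          /* Try a GPU first... */
          err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
          if (err != CL_SUCCESS)
              /* ...otherwise run on the CPU: works for all, fast for some. */
              err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
          if (err != CL_SUCCESS) {
              fprintf(stderr, "no OpenCL device available\n");
              return 1;
          }

          char name[256] = "unknown";
          clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
          printf("running OpenCL on: %s\n", name);
          return 0;
      }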

      If you want to use openCL on a GPU right now you still have to use proprietary blobs. Now, given that there is higher demand in open drivers for tasks such as video acceleration, proper openCL support in them could be quite a while away.
      Your short-sightedness is SICKENING.
      Complex systems don't get built in a day.
      RIGHT NOW, you have ACCELERATED OpenCL on AMD GPUs, and slow OpenCL on ALL OTHER GPUs (or if you insist on nit-picking, on all systems without AMD GPUs and AMD blob driver). That means it does SOMETHING for EVERYONE. CUDA? ACCELERATED on NVIDIA GPUs, and that's IT. In the future, OpenCL will expand to ALL GPUs, probably with the exception of those running nvidia blobs. CUDA isn't going anywhere. There will be fewer and fewer NVIDIA GPUs around (thanks to Fusion and intel's equivalent, and thanks to AMD's open source drivers), and as nouveau keeps getting better and better, there will be fewer and fewer of those with nvidia hardware willing to suffer with their blob.



      • #13
        I think that droidhacker used the term "version" in a general sense, not in the software sense, which led to confusion.

        Like saying that David Hasselhoff is a German version of Tom Cruise, or something. He didn't mean that one was literally derived from the other, just that they fill a similar role in different contexts.



        • #14
          @Deanjo

          I think you have misunderstood OpenCL and what it is trying to do. AFAIK nvidia has even been one of the lead developers of OpenCL, so it is actually something nvidia wants.

          OpenCL has the advantage over CUDA that it supports multiple devices running the same application. This means that you can exploit more than one GPU or CPU working together on the same stream of data. I have developed both CUDA and OpenCL applications, and the optimization part in OpenCL is not that complicated really. In CUDA you have to tune the number of threads per block and how big a grid of blocks you want to use. This fine-tuning might even be different for different types of nvidia cards.

          The exact same optimizations apply to OpenCL. The difference is that you now also have to tune for the individual vendors. But it's not that hard really.
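
          A sketch of that per-device tuning, assuming the standard OpenCL C API (the preferred-multiple query needs OpenCL 1.1 or later); the helper name is hypothetical. Instead of hard-coding a per-vendor work-group size, the host code asks each device for its limits, much like tuning threads per block in CUDA.
          Code:
          #include <CL/cl.h>

          /* Pick a work-group size by querying the kernel/device limits
           * rather than hard-coding a per-vendor value. */
          size_t pick_local_size(cl_kernel kernel, cl_device_id device)
          {
              size_t kernel_max = 1, device_max = 1, preferred = 1;

              /* Hard limit for this kernel on this device */
              clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE,
                                       sizeof(kernel_max), &kernel_max, NULL);
              /* Hard limit for the device as a whole */
              clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                              sizeof(device_max), &device_max, NULL);
              /* Vendor's preferred multiple (roughly the warp/wavefront size) */
              clGetKernelWorkGroupInfo(kernel, device,
                                       CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
                                       sizeof(preferred), &preferred, NULL);

              size_t local = kernel_max < device_max ? kernel_max : device_max;
              size_t rounded = (local / preferred) * preferred;
              return rounded ? rounded : local;   /* guard against rounding to zero */
          }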

          That said, it is true that CUDA might be a few percent faster than OpenCL, but that margin is eliminated when you are using the faster ATI cards anyway, due to the greater number of microprocessors :-) And the fact that you can use the same code no matter which vendor you are using justifies why you should use OpenCL over CUDA anytime.



          • #15
            Originally posted by droidhacker View Post
            You seem to have a real hard time with reading comprehension. I suggest grammar school.
            I'm not the one who doesn't know what the work version means. May I suggest you using one of the online dictionaries.

            And when that is to be done on ONE piece of hardware for one implementation, and ALL hardware for the other implementation? Definitely moronic to limit yourself to the one. Especially when that ONE faces a very uncertain future. WHERE WILL NVIDIA BE when fusion chips become common? They don't make an x86 core!
            I can tell you have not done any openCL development. It's hardly a matter of optimizing for just "one" while the rest are fine. To get peak performance, optimizations still have to be done for pretty much every series of hardware out there. Nvidia will still be long around strong and well once Fusion hit. The ATI soothsayers have been predicting Nvidia's death with pretty much every ATI product release, and yet Nvidia is still strong and thriving.

            You clearly have NO CONCEPT of open source software. I suggest you go back to MS as it clearly suits you better.
            I have a very strong knowledge of open source software. I'm just not blinded, and I see the advantages of both open and closed. The current company I happen to work for also does GPL and closed development.

            WHAT nvidia blob users?
            And NVIDIA users IN TOTAL are a MINORITY. Really -- count up all the INTEL GPUs and the AMD GPUs and compare that with the number of NVIDIA GPUs.
            Intel GPUs are not openCL capable, so take them out of the equation right now. So that leaves ATI, for which you still have to use closed drivers to utilize openCL on their GPUs.

            THERE IS ANOTHER ADVANTAGE!!!!
            For those who do NOT have a GPU with openCL support, openCL software is STILL USABLE!!! Slow? Sure. Functional? You bet! Works for all, fast for some. Sure beats fast for some, dead for most.
            How can you say a slower implementation is an advantage? You're sure showing your blind love now.

            Your short-sightedness is SICKENING.
            Complex systems don't get built in a day.
            RIGHT NOW, you have ACCELERATED OpenCL on AMD GPUs, and slow OpenCL on ALL OTHER GPUs (or if you insist on nit-picking, on all systems without AMD GPUs and AMD blob driver).
            WRONG. You have NO openCL on any GPUs other than ATI / NVIDIA cards running the blob RIGHT NOW. The only other option is running openCL on the CPU.

            That means it does SOMETHING for EVERYONE. CUDA? ACCELERATED on NVIDIA GPUs, and that's IT. In the future, OpenCL will expand to ALL GPUs, probably with the exception of those running nvidia blobs.
            Sigh, you do realize that the blobs run openCL just fine right?

            CUDA isn't going anywhere. There will be fewer and fewer NVIDIA GPUs around (thanks to Fusion and intel's equivalent, and thanks to AMD's open source drivers), and as nouveau keeps getting better and better, there will be fewer and fewer of those with nvidia hardware willing to suffer with their blob.
            All-in-one implementations are not going to replace discrete solutions anytime soon. Current Fusion demonstrations show that performance is slightly better than current IGP solutions. You're still limited to the bandwidth of the system RAM and plain old die real estate. Fusion is great for replacing current IGP solutions but is hardly a discrete solution killer, at least for quite some years to come.

            I don't know why you think I'm against openCL. I'm not. I was there at Apple during its development and it's a great standard. I'm just being realistic about its current and near-future state. Right now openCL shows great promise but has underdelivered in implementation and still has its weaknesses. Over time it may become as refined as Cuda but it is not at that state yet.



            • #16
              Originally posted by deanjo View Post
              I'm not the one who doesn't know what the work version means. May I suggest you using one of the online dictionaries.
              I really suggest that you stop this nonsense right now. Your very first sentence is completely incomprehensible.

              I can tell you have not done any openCL development. It's hardly a matter of optimizing for just "one" while the rest are fine. To get peak performance, optimizations still have to be done for pretty much every series of hardware out there.
              You seem confused. You are arguing about things that are NOT POINTS OF CONTENTION. This last statement of yours is entirely IRRELEVANT.

              Nvidia will still be long around strong and well once Fusion hit.
              GRAMMAR SCHOOL!
              And if you might, give at least one reason WHY; saying that people want powerful discrete graphics isn't an answer, because they are an EXTREMELY SMALL MINORITY of users.

              The ATI soothsayers have been predicting Nvidia's death with pretty much every ATI product release, and yet Nvidia is still strong and thriving.
              No, the Fusion chips will be the first time when NVIDIA *really* gets shut out. And intel is doing the same thing with their CPUs. WHY would a *regular* user (we're not talking about the odd game addict) want to pay extra for a discrete graphics card when they already have one with their CPU?

              I have a very strong knowledge of open source software. I'm just not blinded, and I see the advantages of both open and closed.
              No, you're blinded by the PRESENT. Not present as in "GIFT", present as in the CURRENT TIME.

              And FYI: this is another irrelevant point. We are NOT discussing the merits of open vs closed source. We are discussing MARKET SHARE of OPENCL vs CUDA. If all of the open source drivers and SOME of the closed source drivers ALL support openCL, this is MANY MANY MORE supported systems than having just some of the closed source.

              I.e. add up NUMBER OF INTEL OSS USERS + NUMBER OF AMD OSS USERS + NUMBER OF NVIDIA OSS USERS + NUMBER OF AMD BLOB USERS... this number will DWARF the number of nvidia blob users by a few orders of magnitude.

              The current company I happen to work for also does GPL and closed development.
              Again, open vs closed is irrelevant, except that being closed, nobody but NVIDIA can add opencl to their blob.

              Intel GPUs are not openCL capable, so take them out of the equation right now.
              You mean their DRIVERS do not implement openCL.
              They most definitely DO have hardware capable of openCL.

              So that leaves ATI, for which you still have to use closed drivers to utilize openCL on their GPUs.
              TEMPORARY.

              How can you say a slower implementation is an advantage? You're sure showing your blind love now.
              So I suppose your car must be an SSC Ultimate Aero? Because anything else is a slower implementation of *CAR*, and must, by your argument, be useless.

              Do you see how stupid your argument is?

              WRONG. You have NO openCL on any GPUs other than ATI / NVIDIA cards running the blob RIGHT NOW. The only other option is running openCL on the CPU.
              The key words being "RIGHT NOW".
              Let's think back 10 years. If we think back 10 years, there was no such thing as "GPGPU" at all. By your way of thinking, 10 years ago, we definitely would never have needed to think about it at all, because it didn't exist.

              Well THINGS CHANGE!
              And in this business, they change FAST.
              And though RIGHT NOW you have your choice of openCL on AMD blob or CUDA on nvidia blob, what will there be next year? openCL on INTEL OSS maybe? OpenCL on AMD OSS? How about openCL on NVIDIA OSS? One thing I can tell you FOR CERTAIN is that within one year, the number of systems capable of OpenCL will VASTLY DWARF the number of systems capable of CUDA. Of this, there is NO DOUBT.

              Sigh, you do realize that the blobs run openCL just fine right?
              That's nice. And in the future, there will ALSO be open source drivers capable of it.

              All-in-one implementations are not going to replace discrete solutions anytime soon. Current Fusion demonstrations show that performance is slightly better than current IGP solutions. You're still limited to the bandwidth of the system RAM and plain old die real estate. Fusion is great for replacing current IGP solutions but is hardly a discrete solution killer, at least for quite some years to come.
              You are making the mistake of thinking that discrete graphics cards matter. They don't. The VAST VAST majority of graphics processors are INTEGRATED.

              Very soon after AMD introduces FUSION, **BOTH** major x86 CPU manufacturers will control 100% of integrated graphics solutions. NVIDIA will be COMPLETELY WIPED OUT of the IGP market (which is the biggest part of graphics), and the reason for this is simple: If your CPU comes with a graphics core, WHY would you want to pay MORE for ANOTHER one of similar strength? Sure they can still sell discrete cards for the few people who are serious game players, but they'll STILL be competing with AMD in that market.

              I don't know why you think I'm against openCL. I'm not. I was there at Apple during its development and it's a great standard. I'm just being realistic about its current and near-future state.
              Be realistic then. About 50% of graphics devices with a GPGPU implementation are openCL, the other 50% is CUDA. THAT'S RIGHT NOW.

              In a VERY short time, this proportion WILL shift to the point that openCL DWARFS CUDA.

              You have a choice to make: openCL or CUDA. If you implement your product in CUDA, then as the market shifts to favor openCL, your target will get SMALLER AND SMALLER, and not only that, you'll have to start over from scratch when it comes time that you HAVE to convert.

              Right now openCL shows great promise but has underdelivered in implementation and still has its weaknesses. Over time it may become as refined as Cuda but it is not at that state yet.
              NVIDIA has NEVER released ANYTHING that I would classify as "refined". Their CUDA is extremely LIMITED in overall usefulness. And don't get confused by the word "useful", it encompasses more than just functionality.



              • #17
                Originally posted by droidhacker View Post
                I really suggest that you stop this nonsense right now. Your very first sentence is completely incomprehensible.
                Excuse my typo.

                You seem confused. You are arguing about things that are NOT POINTS OF CONTENTION. This last statement of yours is entirely IRRELEVANT.
                So nvidia doesn't have an x86 processor; that doesn't mean squat. There are thousands of other device manufacturers that don't have x86 licenses either.

                No, the Fusion chips will be the first time when NVIDIA *really* gets shut out. And intel is doing the same thing with their CPUs. WHY would a *regular* user (we're not talking about the odd game addict) want to pay extra for a discrete graphics card when they already have one with their CPU?
                How the hell do you figure they will be locked out? Nothing about fusion restricts the ability to use discrete solutions. We are talking about GPU computing here. Do you really think fusion is going to bring a parallel computing holy grail to the average joe? Average joe doesn't give a rat's ass about parallel computing unless he is one of the few who indulge in tasks such as video encoding. The average joe has few applications that can really utilize openCL to his advantage.

                No, you're blinded by the PRESENT. Not present as in "GIFT", present as in the CURRENT TIME.
                Yawn, heard that 25 years ago, still is the same.

                And FYI: this is another irrelevant point. We are NOT discussing the merits of open vs closed source. We are discussing MARKET SHARE of OPENCL vs CUDA. If all of the open source drivers and SOME of the closed source drivers ALL support openCL, this is MANY MANY MORE supported systems than having just some of the closed source.
                I agree, but there are areas where openCL is still barren, such as the high-level interfaces. openCL requires one to handle things like memory management on their own, and the current language bindings are lacking at this point. Generally, when you're talking about massive parallel computing you're dealing with profs and eggheads and not with code gurus. This is where Cuda still has an advantage.
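
                For illustration, a minimal sketch of the explicit, by-hand buffer management referred to above, assuming the standard OpenCL 1.x C API; the function name is hypothetical and the kernel launch is elided.
                Code:
                #include <CL/cl.h>

                /* 'ctx' and 'queue' are assumed to have been created earlier. */
                cl_int roundtrip(cl_context ctx, cl_command_queue queue,
                                 float *host, size_t n)
                {
                    cl_int err;
                    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                                n * sizeof(float), NULL, &err);
                    if (err != CL_SUCCESS)
                        return err;

                    /* Host -> device copy (blocking) */
                    err = clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0,
                                               n * sizeof(float), host, 0, NULL, NULL);

                    /* ... enqueue a kernel that works on 'buf' here ... */

                    /* Device -> host copy (blocking) */
                    if (err == CL_SUCCESS)
                        err = clEnqueueReadBuffer(queue, buf, CL_TRUE, 0,
                                                  n * sizeof(float), host, 0, NULL, NULL);

                    clReleaseMemObject(buf);
                    return err;
                }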

                I.e. add up NUMBER OF INTEL OSS USERS + NUMBER OF AMD OSS USERS + NUMBER OF NVIDIA OSS USERS + NUMBER OF AMD BLOB USERS... this number will DWARF the number of nvidia blob users by a few orders of magnitude.
                What don't you get about Intel not supporting openCL? Nvidia's OSS users are downright near non-existent. I ask you what is the #1 video solution in linux right now? Why do you think that is?

                Again, open vs closed is irrelevant, except that being closed, nobody but NVIDIA can add opencl to their blob.
                And they have done a great job of it. Who has added openCL to their drivers yet in the "free world"?

                You mean their DRIVERS do not implement openCL.
                They most definitely DO have hardware capable of openCL.
                With the design limitations on intel solutions right now you could, in theory, run openCL on them, granted. Whether it would give any meaningful performance advantage is highly doubtful.

                TEMPORARY.
                You hope. How's that open video acceleration working for ya?

                So I suppose your car must be an SSC Ultimate Aero? Because anything else is a slower implementation of *CAR*, and must, by your argument, be useless.


                Do you see how stupid your argument is?
                Hardly, I don't buy a honda civic to haul around the 5th wheel. The civic is there to go get groceries. I also don't buy a truck to make 3 trips back and forth to drive the kids to soccer practice.

                The key words being "RIGHT NOW".
                Let's think back 10 years. If we think back 10 years, there was no such thing as "GPGPU" at all. By your way of thinking, 10 years ago, we definitely would never have needed to think about it at all, because it didn't exist.
                People shouldn't buy things that are "MAYBE in the future". They have a need now and that's what they should invest in. By the time the "future" comes around you should be able to buy a solution that is exponentially greater in capability for the same price, instead of hoping that your investment may one day pay off so you can use it for the things you want to do now.

                Well THINGS CHANGE!
                And in this business, they change FAST.
                And though RIGHT NOW you have your choice of openCL on AMD blob or CUDA on nvidia blob, what will there be next year? openCL on INTEL OSS maybe? OpenCL on AMD OSS? How about openCL on NVIDIA OSS? One thing I can tell you FOR CERTAIN is that within one year, the number of systems capable of OpenCL will VASTLY DWARF the number of systems capable of CUDA. Of this, there is NO DOUBT.
                You seem to be ignoring the biggest threat to both: DirectCompute. Windows does enjoy quite a large, comfortable market share, especially in the home market. It would not be surprising to see DirectCompute take over. openCL may eventually take over the grand computing clusters, but even intel doesn't want to see that happen, and it still has a long way to go to catch up to current proprietary solutions.

                That's nice. And in the future, there will ALSO be open source drivers capable of it.
                Funny thing about the future is that nobody can prove anything about it. It's all speculation. Again, we have all heard about the future of things like video acceleration on free drivers, but nobody has stepped forward to actually do it and see it through to completion.

                You are making the mistake of thinking that discrete graphics cards matter. They don't. The VAST VAST majority of graphics processors are INTEGRATED.

                Very soon after AMD introduces FUSION, **BOTH** major x86 CPU manufacturers will control 100% of integrated graphics solutions. NVIDIA will be COMPLETELY WIPED OUT of the IGP market (which is the biggest part of graphics), and the reason for this is simple: If your CPU comes with a graphics core, WHY would you want to pay MORE for ANOTHER one of similar strength? Sure they can still sell discrete cards for the few people who are serious game players, but they'll STILL be competing with AMD in that market.
                Integrated graphics solutions have been around for well over a quarter of a century now and the discrete market is still there and profitable.

                Be realistic then. About 50% of graphics devices with a GPGPU implementation are openCL, the other 50% is CUDA. THAT'S RIGHT NOW.

                In a VERY short time, this proportion WILL shift to the point that openCL DWARFS CUDA.

                You have a choice to make: openCL or CUDA. If you implement your product in CUDA, then as the market shifts to favor openCL, your target will get SMALLER AND SMALLER, and not only that, you'll have to start over from scratch when it comes time that you HAVE to convert.
                It is very easy to port CUDA code to openCL. It's not an issue. When openCL matures enough then yes, you will perhaps see projects move over to openCL. Chances are, however, that you will see more of a shift to DirectCompute. Hate it if you want, but this is the most likely scenario, especially when it comes to the consumer market.
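
                As a hypothetical illustration of such a port (not code from any particular project): in a simple vector-add kernel the CUDA index expression blockIdx.x * blockDim.x + threadIdx.x becomes get_global_id(0), and __global__ becomes __kernel with __global pointer qualifiers.
                Code:
                /* OpenCL C source for the ported kernel, stored as a host-side string. */
                const char *vec_add_cl =
                    "__kernel void vec_add(__global const float *a,\n"
                    "                      __global const float *b,\n"
                    "                      __global float *c)\n"
                    "{\n"
                    "    int i = get_global_id(0);\n"
                    "    c[i] = a[i] + b[i];\n"
                    "}\n";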

                NVIDIA has NEVER released ANYTHING that I would classify as "refined". Their CUDA is extremely LIMITED in overall usefulness. And don't get confused by the word "useful", it encompasses more than just functionality.
                LMAO, compare the tools available for CUDA, now compare the tools available for openCL. Compare the performance and features of their blobs now compare them to their open source rivals (hell even the ATI blobs if you wish). Compare vdpau and then compare va-api on implementation and features. Hell even compare documentation for any of the above products.

                Bottom line is that you're waiting for someone to pick up the slack and start implementing; meanwhile, I've been enjoying the benefits of GPGPU computing for years now. Should the market switch, I still have a solution that works with all of the above.



                • #18
                  I don't get the point of this discussion. From the first question it is clear that you would need an nvidia card. vdpau works much better than xvba/vaapi, which does not seem to work with the ati hd 5 series (which is basically the only good series from ati when you compare the features). If somebody wants to use CUDA for whatever reason you cannot buy ATI; OpenCL, however, would work. A gt 220 is not really fast, usually just enough for normal Linux games, but will not fit the needs of a (win) hardcore gamer. If you are used to vdpau you do not want to miss it.



                  • #19
                    This is the last time I'm going to explain this to you. It is really simple. If you STILL don't understand, then you are obviously retarded.

                    Originally posted by deanjo View Post
                    So nvidia doesn't have an x86 processor; that doesn't mean squat. There are thousands of other device manufacturers that don't have x86 licenses either.
                    It means that EVERYONE WHO USES x86 WILL NOT USE NVIDIA BECAUSE THERE IS NO REASON TO.

                    How the hell do you figure they will be locked out? Nothing about fusion restricts the ability to use discrete solutions.
                    THEY WILL NOT BE SELLING ANY IGP ***INTEGRATED*** COMPONENTS FOR x86 AT ALL!!! ALMOST NOBODY WILL BUY DISCRETE BECAUSE IT IS REDUNDANT!

                    We are talking about GPU computing here. Do you really think fusion is going to bring a parallel computing holy grail to the average joe? Average joe doesn't give a rat's ass about parallel computing unless he is one of the few who indulge in tasks such as video encoding. The average joe has few applications that can really utilize openCL to his advantage.
                    AVERAGE JOE WILL HAVE INTEGRATED GPU WHICH WILL ***NOT EVER*** BE NVIDIA!

                    Yawn, heard that 25 years ago, still is the same.
                    IF YOU BELIEVE THAT EVERYTHING IS THE SAME AS 25 YEARS AGO, YOU ARE DISQUALIFYING YOURSELF FROM PARTICIPATING IN ANY PART OF THIS DISCUSSION.

                    I agree, but there are areas where openCL is still barren, such as the high-level interfaces. openCL requires one to handle things like memory management on their own, and the current language bindings are lacking at this point. Generally, when you're talking about massive parallel computing you're dealing with profs and eggheads and not with code gurus. This is where Cuda still has an advantage.
                    PROFS AND EGGHEADS?
                    FYI: THEY ARE THE ONES WHO USE OPEN SOURCE AND NOT CLOSED SOURCE.

                    What don't you get about Intel not supporting openCL?
                    YOU ARE CLEARLY RETARDED.

                    Nvidia's OSS users are downright near non-existent.
                    LOOK UP THE NOUVEAU DRIVER. IT EVEN PROVIDES 3D ACCELERATION. ITS DRM IS IN THE KERNEL. IT IS THE DEFAULT DRIVER FOR MAJOR DISTROS NOW. MOST PEOPLE AREN'T GOING TO GO WASTING THEIR TIME LOOKING FOR PROPRIETARY BLOBS THAT DON'T WORK RIGHT WHEN THEY HAVE A PERFECTLY WORKING DRIVER ALREADY THERE!

                    I ask you what is the #1 video solution in linux right now? Why do you think that is?
                    AMD.

                    And they have done a great job of it. Who has added openCL to their drivers yet in the "free world"?
                    [Two links to Phoronix articles]


                    READ THAT STUFF. DO YOU UNDERSTAND WHAT IT MEANS?

                    With the design limitations on intel solutions right now you could, in theory, run openCL on them, granted. Whether it would give any meaningful performance advantage is highly doubtful.
                    YOU DO NOT NEED EVERYTHING ACCELERATED TO BENEFIT.

                    You hope.
                    I ***KNOW***.
                    How's that open video acceleration working for ya?
                    JUST FINE.

                    Hardly, I don't buy a honda civic to haul around the 5th wheel. The civic is there to go get groceries. I also don't buy a truck to make 3 trips back and forth to drive the kids to soccer practice.
                    AND HERE YOU GO MAKING MY POINT FOR ME!!!!

                    People shouldn't buy things that are "MAYBE in the future". They have a need now and that's what they should invest in.
                    AND SINCE OPENCL ***RIGHT NOW*** IS SUPPORTED BY MORE HARDWARE THAN CUDA, THAT IS WHAT SHOULD BE USED!!!!!

                    By the time the "future" comes around you should be able to buy a solution that is exponentially greater in capability for the same price, instead of hoping that your investment may one day pay off so you can use it for the things you want to do now.
                    NOBODY TOLD YOU TO BUY ANYTHING. YOU ARE BEING TOLD TO DEVELOP SOFTWARE USING OPENCL INSTEAD OF THAT DEAD END PROPRIETARY CUDA.

                    You seem to be ignoring the biggest threat to both: DirectCompute. Windows does enjoy quite a large, comfortable market share, especially in the home market. It would not be surprising to see DirectCompute take over. openCL may eventually take over the grand computing clusters, but even intel doesn't want to see that happen, and it still has a long way to go to catch up to current proprietary solutions.
                    NOT RELEVANT OR RELATED.

                    Funny thing about the future is that nobody can prove anything about it. It's all speculation. Again, we have all heard about the future of things like video acceleration on free drivers, but nobody has stepped forward to actually do it and see it through to completion.
                    ARE YOU BLIND??!??

                    Integrated graphics solutions have been around for well over a quarter of a century now and the discrete market is still there and profitable.
                    THE MARKET FOR DISCRETE IS A FRINGE. DISCRETE GRAPHICS ARE MOVING ONTO THE CPU AND SO WILL BE DOMINATED 100% BY INTEL AND AMD.

                    It is very easy to port CUDA code to openCL. It's not an issue. When openCL matures enough then yes, you will perhaps see projects move over to openCL.
                    ANY WORK IS MORE THAN NO WORK.

                    Chances are, however, that you will see more of a shift to DirectCompute. Hate it if you want, but this is the most likely scenario, especially when it comes to the consumer market.
                    HIGHLY HIGHLY UNLIKELY

                    LMAO, compare the tools available for CUDA, now compare the tools available for openCL. Compare the performance and features of their blobs now compare them to their open source rivals (hell even the ATI blobs if you wish). Compare vdpau and then compare va-api on implementation and features. Hell even compare documentation for any of the above products.
                    GO AHEAD AND COMPARE ALL YOU LIKE. EVERYTHING SUPPORTS MY POSITION AND NOT YOURS.

                    Bottom line is that you're waiting for someone to pick up the slack and start implementing; meanwhile, I've been enjoying the benefits of GPGPU computing for years now. Should the market switch, I still have a solution that works with all of the above.
                    NO. THE SITUATION IS THIS: YOU ARE DIGGING YOUR OWN GRAVE.



                    • #20
                      deanjo, I applaud your patience here. I would've left when the first caps arrived..

