I didn't read every page of this thread, but this needs to be mentioned, and it's Arch-specific: don't buy an ATI card if you want to use the binary drivers. fglrx was dropped from the repositories a few months ago, and at one point fglrx lagged at least three months behind the X.org version Arch shipped, which essentially made it impossible to install.
Arch has been my main distribution since 2002, and I've had multiple video cards in that time. ATI cards have caused me nothing but trouble; I can't count the times I've had to build the fglrx drivers with some workaround or hack. Nvidia has always just worked on Arch. Save yourself a lot of grief.
Any Arch users' advice on choosing a graphics card?
-
Originally posted by pingufunkybeat: I've been trolled.
-
Every single post you've made in this thread has been self-aggrandizing BS, with an irrational focus on your one-eyed fan love for ATI hardware. You've pulled dollar figures out of your bum ($20 between a midrange and a high-end Radeon is not reality in any economy). You've attacked nvidia-glx by waving a FOSS ATI driver flag, yet you've also recommended that people use Windows to watch HD content, because neither of ATI's driver options on Linux can do it in hardware. And you've quite literally shat on the nature and existence of open-source software by telling people you see no point in using or developing something that already exists elsewhere, even if it's open source (your comment on OpenOffice vs MS Office was ridiculous).
Now you want the thread locked, because in your view the problem isn't your irrational commentary but the rational people who disagree with you, who are supposedly spreading FUD.
:/
-
Originally posted by IsawSparks: Yeah, your point was backed up by telling people to use Windows and closed-source apps, and to use both closed and open source drivers which don't perform properly, because you're a one-eyed ATI fan. Sure, you're the ulti…
Actually, this thread should have been locked after post #3. It's been FUD ever since.
-
Originally posted by deanjo: You don't think financial analysis on a real-time system is complex?
The k-means job is an example of something that only needs to be done a couple of times. It is not a financial analysis system that runs constantly and where time is crucial. Other times, I need a baseline comparison against a well-known method; then you take a well-known implementation, because you know that it works and how well it works.
I appreciate your optimisation sentiment, but it doesn't apply here.
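As an aside, the "take a well-known implementation" baseline described above amounts to roughly this much work. This is only an illustrative sketch, assuming scikit-learn is available; the data shape, cluster count, and batch size are invented for the example and were not posted in the thread.

Code:
# Illustrative only: a known-good CPU baseline using scikit-learn's
# MiniBatchKMeans (data shape and parameters are made up).
import numpy as np
from sklearn.cluster import MiniBatchKMeans

points = np.random.rand(2_000_000, 16)      # stand-in for millions of data points
km = MiniBatchKMeans(n_clusters=100, batch_size=10_000, random_state=0)
labels = km.fit_predict(points)             # cluster assignment per point
print(km.inertia_)                          # baseline score to compare against

The argument being made is that a trusted off-the-shelf implementation gives a dependable baseline in a few lines, even if a hand-tuned GPU version could run faster.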
-
Originally posted by pingufunkybeat: I do recommend you read through the FSF website in any case; it is quite instructive. You might understand my point of view better then.
Please. Telling users of a Linux forum, on a Linux website known for its informed technical analyses, to go read the FSF website is redundant.
Just stop, and stop talking up FOSS like you actually care at all. You're just happy to have an anti-NV sticking point, however lame.
-
Originally posted by pingufunkybeat: If it neatly fits in memory, yes.
You are paid to optimize your company's software.
I am not paid to implement every machine learning algorithm on the GPU just because it's faster. I need to run lots of complex algorithms on lots of data. I do not have the time to reimplement everything from scratch. I have a server farm that I can leave to do stuff while I actually do more important things.
If you know of a really fast k-means implementation for GPUs that can deal with millions of data points, let me know. In the meantime, I'm busy doing other stuff.
Do you reimplement GCC on the GPU before you compile software?
Did you reimplement ssh on the GPU or do you use the less optimal CPU version? What exactly are you arguing?
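For context, a GPU k-means of the kind being asked about does not have to be exotic. Below is a minimal sketch of Lloyd's algorithm in PyTorch, assuming a CUDA-capable card; the chunked assignment step exists because the full N-by-k distance matrix is exactly the "does it fit in memory" concern raised above. The sizes and iteration count are invented for illustration, and this is not code from the thread.

Code:
# Illustrative sketch of plain Lloyd's k-means on the GPU with PyTorch.
# Assumes a CUDA card; all sizes below are made up for the example.
import torch

def assign(x, centroids, chunk=262_144):
    # Label points in chunks so the full N x k distance matrix never
    # has to exist in memory at once.
    return torch.cat([torch.cdist(part, centroids).argmin(dim=1)
                      for part in x.split(chunk)])

def kmeans_gpu(x, k, iters=20):
    # Start from k randomly chosen points as the initial centroids.
    centroids = x[torch.randperm(x.shape[0], device=x.device)[:k]].clone()
    for _ in range(iters):
        labels = assign(x, centroids)
        # Recompute each centroid as the mean of its assigned points.
        sums = torch.zeros_like(centroids).index_add_(0, labels, x)
        counts = torch.bincount(labels, minlength=k).clamp(min=1)
        centroids = sums / counts.unsqueeze(1)
    return centroids, labels

device = "cuda" if torch.cuda.is_available() else "cpu"
points = torch.randn(2_000_000, 16, device=device)   # millions of points
centroids, labels = kmeans_gpu(points, k=100)

Whether writing and validating even a short routine like this is worth it compared to a trusted CPU implementation is exactly the disagreement in this thread.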
-
If I started reimplementing all the standard algorithms from university textbooks to run more efficiently on the GPU, I'd get fired before I'd even downloaded the OpenCL documentation.