Any Arch users' advice on choosing a graphics card?


  • nonfatmatt
    replied
    I didn't bother to read a few pages of this thread, but this needs to be mentioned, since it's Arch-specific: don't buy an ATI card if you want to use binary drivers. Arch dropped repository support for the fglrx driver a few months ago, and at one point fglrx lagged at least three months behind the X.org version Arch shipped, which essentially made it impossible to install.

    Arch has been my main distribution since 2002, and I've had multiple video cards. ATI cards have caused me nothing but trouble; I can't count the number of times I've had to build the fglrx driver with some workaround or hack. Nvidia has always just worked in Arch. Save yourself a lot of grief.

  • IsawSparks
    replied
    Originally posted by pingufunkybeat:
    I've been trolled.
    No, your posts are contradicting themselves.

  • pingufunkybeat
    replied
    I've been trolled.

  • IsawSparks
    replied
    Every single post you've made in this thread has been self-aggrandizing BS with a measurably irrational focus on your one-eyed fan love for ATI hardware. You've pulled dollar figures out of your bum ($20 between a midrange and high-end Radeon is not reality in any economy); you've attacked Nvidia-glx by waving a FOSS ATI driver flag while also recommending people use Windows to watch HD content, because neither of ATI's driver options on Linux can do so in hardware; and you've quite literally shat on the nature and existence of open-source software by telling people you see no point in using or developing something that already exists elsewhere, even if it's open source (your comment on OpenOffice vs MS Office was ridiculous).

    Now you want the thread locked because, in your view, the problem isn't your irrational commentary but the rational people who disagree with you, who are supposedly spreading FUD.

    :/

  • pingufunkybeat
    replied
    Originally posted by IsawSparks:
    Yeah, your point was backed up by telling people to use Windows and closed-source apps, and to use both closed- and open-source drivers which don't perform properly, because you're a one-eyed ATI fan. Sure, you're the ulti
    I tell people that they can use Windows if they want to run Windows games, and that GNU/Linux should remain open for people who want an open system. Which part don't you understand?

    Originally posted by IsawSparks:
    Please, telling Linux users to read the FSF website, when they use a Linux forum on a Linux website known for its informed technical analyses, is redundant.
    When people claim that RMS didn't care about openness when he wrote GNU, then they really do need a link to the FSF website.

    Actually, this thread should have been locked after post #3. It's been FUD ever since.

  • pingufunkybeat
    replied
    Originally posted by deanjo:
    You don't think financial analysis on a real-time system is complex?
    When did I say that? I was telling you that I need a strong CPU because I do lots of computation, and not all of it is parallelisable. I don't develop products; I use lots of tools. Some of them I have to write from scratch; sometimes I use off-the-shelf algorithms. I can't redo every machine learning and data mining algorithm out there from scratch.

    The kmeans job is an example of something that only needs to be run a couple of times. It's not a financial analysis system that runs constantly and where time is crucial. Other times I need a baseline comparison against a well-known method; then you take a well-known implementation, because you know that it works and how well it works.

    I appreciate your optimisation sentiment, but it doesn't apply here.

  • IsawSparks
    replied
    "Sure, you're the ultimate FSF aficionado."

  • IsawSparks
    replied
    Originally posted by pingufunkybeat:
    I do recommend you read through the FSF website in any case; it is quite instructive. You might understand my point of view better then.
    Yeah, your point was backed up by telling people to use Windows and closed-source apps, and to use both closed- and open-source drivers which don't perform properly, because you're a one-eyed ATI fan. Sure, you're the ulti

    Please, telling Linux users to read the FSF website, when they use a Linux forum on a Linux website known for its informed technical analyses, is redundant.

    Just stop, and stop talking up FOSS like you actually care at all. You're just happy to have an anti-NV sticking point, however lame.

  • deanjo
    replied
    Originally posted by pingufunkybeat:
    If it fits neatly in memory, yes.
    Dividing and swapping isn't that hard to do. Trust me, many of these data sets exceed the amount of GPU memory by a factor of 8 or more.

    Originally posted by pingufunkybeat:
    You are paid to optimize your company's software.
    I got hired because I could do so. Nobody paid me to acquire the knowledge of how to do it; that was a pure labor of love for efficiency.

    Originally posted by pingufunkybeat:
    I am not paid to implement every machine learning algorithm on the GPU just because it's faster. I need to run lots of complex algorithms on lots of data. I do not have the time to reimplement everything from scratch. I have a server farm that I can leave to do stuff while I actually do more important things.
    You don't think financial analysis on a real-time system is complex?

    Originally posted by pingufunkybeat:
    If you know a really fast kmeans implementation for GPUs that can deal with millions of data points, let me know. In the meantime, I'm really busy doing other stuff.
    I would love the challenge. It drives me nuts to see hardware that could be used far more efficiently get bogged down by a sloppy implementation, and it has since my days in the demoscene.

    Originally posted by pingufunkybeat:
    Do you reimplement GCC on the GPU before you compile software?
    I certainly don't compile my software for the plain i386 arch, and I hand-tweak my GMP libraries.

    Originally posted by pingufunkybeat:
    Did you reimplement ssh on the GPU or do you use the less optimal CPU version? What exactly are you arguing?
    SSH shows hardly any benefit from utilizing the GPU. When you're talking about the difference between a thousandth of a second and a ten-thousandth of a second, the benefit is moot. Your project, however, has massive GPU potential. If you run it enough, the time saved by moving it to the GPU would more than justify the cost of development.
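
    To make the "dividing and swapping" above concrete, here is a minimal CUDA sketch of the k-means assignment step that streams a dataset several times larger than the card's memory through the GPU one chunk at a time; the same idea carries over to OpenCL. The dimensions, cluster count, chunk size, and random data are assumptions for illustration only, not code from anyone in this thread, and error checking is omitted for brevity.

    // kmeans_chunk.cu -- illustrative only; build with: nvcc kmeans_chunk.cu
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    #define D 8    // dimensions per point (assumed for the sketch)
    #define K 16   // number of clusters   (assumed for the sketch)

    // Centroids are tiny compared to the data, so they live in constant memory.
    __constant__ float d_centroids[K * D];

    // Assign each point in the current chunk to its nearest centroid.
    __global__ void assign_points(const float *pts, int *labels, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float best = 1e30f;
        int best_k = 0;
        for (int k = 0; k < K; ++k) {
            float dist = 0.0f;
            for (int d = 0; d < D; ++d) {
                float diff = pts[i * D + d] - d_centroids[k * D + d];
                dist += diff * diff;
            }
            if (dist < best) { best = dist; best_k = k; }
        }
        labels[i] = best_k;
    }

    int main()
    {
        const long N     = 4L << 20;  // 4M points: "millions of data points"
        const int  CHUNK = 1 << 20;   // points per transfer; size to fit the card

        // The full dataset stays in host memory (or could be read from disk).
        float *h_pts    = (float *)malloc((size_t)N * D * sizeof(float));
        int   *h_labels = (int *)malloc((size_t)N * sizeof(int));
        float  h_cent[K * D];
        for (long i = 0; i < N * D; ++i) h_pts[i] = rand() / (float)RAND_MAX;
        for (int  i = 0; i < K * D; ++i) h_cent[i] = rand() / (float)RAND_MAX;

        // Device buffers hold only one chunk at a time.
        float *d_pts; int *d_labels;
        cudaMalloc(&d_pts,    (size_t)CHUNK * D * sizeof(float));
        cudaMalloc(&d_labels, (size_t)CHUNK * sizeof(int));
        cudaMemcpyToSymbol(d_centroids, h_cent, sizeof(h_cent));

        // One assignment pass: swap a chunk in, label it, swap the results out.
        for (long off = 0; off < N; off += CHUNK) {
            int n = (int)(N - off < CHUNK ? N - off : CHUNK);
            cudaMemcpy(d_pts, h_pts + off * D, (size_t)n * D * sizeof(float),
                       cudaMemcpyHostToDevice);
            assign_points<<<(n + 255) / 256, 256>>>(d_pts, d_labels, n);
            cudaMemcpy(h_labels + off, d_labels, (size_t)n * sizeof(int),
                       cudaMemcpyDeviceToHost);
        }
        printf("first point -> cluster %d\n", h_labels[0]);

        cudaFree(d_pts); cudaFree(d_labels);
        free(h_pts); free(h_labels);
        return 0;
    }

    A full k-means would accumulate per-cluster sums on the host between passes, recompute the centroids, and repeat until the assignments stop changing. The point is that the distance computation, which dominates the runtime, parallelises trivially per chunk while host RAM holds the full dataset.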

  • pingufunkybeat
    replied
    If I started reimplementing all the standard algorithms from university textbooks to run more efficiently on the GPU, I'd get fired before I finished downloading the OpenCL documentation.
