
AMD Catalyst 9.12 Hotfix released


  • #31
    So we have a choice between the NVIDIA Fermi, which doesn't exist, and ATI hardware, which doesn't have drivers. Drivers that work on Fedora, at least. Between the two, I think ATI has the easier task.


    • #32
      The only "FLOP" here is AMD Catalyst.


      • #33
        FLOPS don't always translate into 3D performance though. There are two paths Fermi can take:

        1. The thing ends up being nVidia's HD2xxx blunder (delayed, underperforming compared to the competition, power hungry), but with some nice features

        2. Fermi blows everything out of the water, but availability and price (and delay) hurts its competitive edge

        I'm betting it will be #2. No doubt it will be powerful and full of really nice features.
        Last edited by Melcar; 20 December 2009, 06:35 PM.


        • #34
          Here is some info - but of course that's from SemiAccurate.


          • #35


            • #36
              Originally posted by Qaridarium
              Yes Nvidia runs into a Death of Bankrupt.

              but this article is FUD! but there is another article with much more "True"
              You've got to be kidding me. If anything, you're spreading more FUD than the quoted articles are.

              nVidia does more than just consumer graphics. Keep in mind that they also have Tegra, the Ion platform and the upcoming GeForce line based on the Fermi architecture (read up on MIMD). On the market end, you should really compare the relative size of both companies: nVidia, with its graphics portfolio *alone*, has a market cap of $10.04 billion, compared to $6.64 billion for AMD's combined portfolio of CPU, GPU and chipset technologies. (Just to put things into perspective, Intel's market cap is a whopping $112.26 billion.)

              Now, the issue today is not about who has the higher theoretical GFLOPS (note the word "theoretical"). The actual throughput is all that matters. The stock ATI Radeon HD5870 has 2720 GFLOPS of theoretical single-precision floating-point performance; compare that to a similarly positioned nVidia product, the GeForce GTX295, at 1788 GFLOPS. If theoretical numbers were all that counted, I would expect the Radeon HD5870 to be about 52% faster in every benchmark. In actuality, this isn't the case: the Radeon HD5870 is only marginally better, and sometimes worse, than the GTX295, as the 3DMark Vantage benchmark shows. Remember, the current ATI cards (RV6xx - RV8xx) still use five-way VLIW units, an architecture that is not easy to optimize for. Not to mention that Fermi boosts its efficiency with parallel kernel support, which didn't exist in nVidia's previous architectures, plus a myriad of other architectural changes - the SFUs, for instance, are now decoupled from the SMs in a warp.
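              The 52% figure above is just the ratio of the two theoretical peaks quoted in the post; a quick sketch of the arithmetic (the GFLOPS numbers come straight from the post, nothing else is assumed):

              ```python
              # Theoretical single-precision peaks quoted in the post (GFLOPS).
              hd5870_gflops = 2720
              gtx295_gflops = 1788

              # Percentage advantage the HD5870 would have if benchmark scores
              # scaled linearly with theoretical FLOPS.
              advantage = (hd5870_gflops / gtx295_gflops - 1) * 100
              print(f"{advantage:.0f}%")  # prints 52%
              ```

              Real benchmarks don't scale this way, which is exactly the post's point: utilization of the VLIW units, not peak FLOPS, decides the outcome.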

              In addition, the yield problems with the 40 nm manufacturing process at TSMC affected *both* ATI and nVidia. If you haven't heard yet, the blog poster who claimed nVidia has a < 1.7% yield on the GT3xx has been denounced as an ATI fanboy. I shan't comment further on this.

              You should really read *real* sources, like Anandtech. Disclaimer: I am typing this on a laptop with an ATI Mobility Radeon HD4670, running the Catalyst 9.12 driver on Ubuntu.


              • #37
                Originally posted by Kano View Post

                Fermi will definitely be the first choice for Linux users. Most likely for Win too, if you have got enough money. Just because of the working features - not because of features that will work in the future. I gave up the idea that xvba will ever be fixed, and the gallium3d approach of simulating vdpau/vaapi or whatever using shaders will most likely not work on low-end cards - and buying medium/high-end cards for that is wasted money. Using the oss drivers on those cards is like paying for a Porsche and driving a Fiat. I definitely do NOT need kms; it does not matter if it flickers when it switches from vt to X. The 2-3s delay is not optimal, but how often do you need to do that?
                Are you paid for posting this bullshit?

                Fermi will be no choice for most people. No matter which OS they use. Because it is broken by design, fat and expensive. But dream on.


                • #38
                  Let's see when you can buy it; the cards for testing are already out there, and most likely soon for the press too.


                  • #39
                    Which cards? The pre-castrated or the castrated ones? The ones with the originally planned clock speed or the downclocked ones?


                    • #40
                      Oh, and don't forget - Fermi is huge and expensive. How many $300 computers will be sold with a $600 card? Or $400 computers? $500, $600, $700 computers? You know, the stuff that makes up the majority of sold systems? The mainstream is sub-$200. No way Fermi will ever reach that. What is nVidia's answer?
                      Oh wait! They renamed the G92 AGAIN.