The Fermi!


  • #31
    Originally posted by kyzz View Post
    I have one issue with the enormous amount of power this chip uses. If the unofficial benchmarks I have seen are even remotely accurate, all I can say is I expected more out of a card using 280ish watts. Of course, the amount of speculation and bias out there is pretty ridiculous, so the benchmarks could all be BS.

    Well, nvidia's spec sheet for the Fermi-based Tesla cards quotes a power consumption of <=225 Watts.

    http://www.nvidia.com/docs/IO/43395/...83-001_v01.pdf

    Given that it is a computing card, the load is going to be among the most stressful for the Fermi. Granted, the first Fermi Tesla cards have been cut down to 448 cores instead of 512 (but those specs were also based on A2 silicon). Bottom line is we all have to wait and see, but I honestly can't see it being a repeat of the FX.

    Comment


    • #32
      Originally posted by deanjo View Post
      Well, nvidia's spec sheet for the Fermi-based Tesla cards quotes a power consumption of <=225 Watts.

      http://www.nvidia.com/docs/IO/43395/...83-001_v01.pdf

      Given that it is a computing card, the load is going to be among the most stressful for the Fermi. Granted, the first Fermi Tesla cards have been cut down to 448 cores instead of 512 (but those specs were also based on A2 silicon). Bottom line is we all have to wait and see, but I honestly can't see it being a repeat of the FX.
      Yeah, the Tesla cards also run at lower clock speeds, I believe, so that would also help their power consumption vs. the desktop version. And I agree, I don't think it's going to be a repeat of the FX either; I don't know if people even remember how terrible the FX series was. This card would have to be a real stinker to even be compared to that.

      Do I think the card will meet expectations, though? No. I think it'll be about 10-15 FPS faster on average than a 5870. So I think it will be faster, but not by as much as it was hyped up to be. All ATI would have to do at that point is a refresh, and they're pretty much right back where they were.

      Comment


      • #33
        10-15fps faster than the 5870 means that it will get creamed by the 5970...

        Comment


        • #34
          Most of the benchmarks I've seen so far are "simulated", whatever that means. We have to wait until some actual reviews come out to draw conclusions, but until then we have stuff like this.

          Comment


          • #35
            Originally posted by Melcar View Post
            Most of the benchmarks I've seen so far are "simulated", whatever that means.
            I think that's just a euphemism for "pulled out of their asses", but maybe I'm just jaded.

            Comment


            • #36
              They probably have a high-level (compared to actual chip simulation) simulator that can run Fermi shader programs and report things like number of cache hits/misses, number of stalls due to shared memory contention, and so on. Nobody in their right mind would build a chip like Fermi without heavily analyzing the architecture first. They may also be talking about results from register-level simulations of the actual chip design (I don't want to even think about the kind of computing resources that a transistor-level simulation of the full chip would require).
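              As a purely illustrative sketch (my own toy code, nothing to do with NVidia's actual tooling), a high-level simulator at its simplest just replays a memory access stream against a cache model and counts hits and misses:

              ```python
              # Toy direct-mapped cache model, purely for illustration.
              # A real architectural simulator does this kind of bookkeeping
              # for caches, shared memory banks, scheduler stalls, etc.

              def simulate_cache(accesses, num_lines=64, line_size=128):
                  """Replay an address stream against a direct-mapped cache
                  and return the (hits, misses) counts."""
                  tags = [None] * num_lines          # tag stored per cache line
                  hits = misses = 0
                  for addr in accesses:
                      line = (addr // line_size) % num_lines
                      tag = addr // (line_size * num_lines)
                      if tags[line] == tag:
                          hits += 1
                      else:
                          misses += 1
                          tags[line] = tag           # fill the line on a miss
                  return hits, misses

              # One thread walking an array with a 4-byte stride:
              stream = [i * 4 for i in range(1024)]
              hits, misses = simulate_cache(stream)
              print(hits, misses)
              ```

              On that strided stream you get one miss per cache line touched and hits for everything else, which is exactly the kind of statistic a pre-silicon "simulated benchmark" would be built from.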

              Comment


              • #37
                Originally posted by Ex-Cyber View Post
                They probably have a high-level (compared to actual chip simulation) simulator that can run Fermi shader programs and report things like number of cache hits/misses, number of stalls due to shared memory contention, and so on. Nobody in their right mind would build a chip like Fermi without heavily analyzing the architecture first. They may also be talking about results from register-level simulations of the actual chip design (I don't want to even think about the kind of computing resources that a transistor-level simulation of the full chip would require).
                Not sure if it's still done, but they used to have large FPGA (or similar) devices to simulate video cards. Abysmal performance, but you could run and test the design logic from them. This was when chips were a good deal less complicated though, so maybe their design methods have changed since then.

                -- Additional: when I say "they", I mean companies in general, not specifically nvidia.

                Comment


                • #38
                  Are those 'simulated' tests released by nvidia? Google didn't help, any linkies?

                  Comment


                  • #39
                    The benchmarks were done by NVidia; no hardware review sites have cards for testing yet.

                    In the "simulated" benchmarks, some sites have attempted to reproduce the NVidia test setup in their own system and then install a Radeon 5870 for comparison benchmarks.

                    Comment


                    • #40
                      The Fermis the press will get for testing will be selected cards. Don't expect the same performance from the cards you will be able to buy.

                      Twice the size, and more than twice the cost, for not twice the performance of a 5870. Nvidia screwed up, and only fanboys refuse to see it.

                      Comment


                      • #41
                        Only ATI fanboys buy those cards for Linux usage.

                        Comment


                        • #42
                          Originally posted by energyman View Post
                          Twice the size, and more than twice the cost for not twice the performance of a 5870. Nvidia screwed up and only fanboys refuse to see it.
                          Well, how many fps do you get in Doom 3? Chances are my crusty old 6200 can whip your 5870's ass due to the current state of the free drivers; and the binary ones aren't even worth talking about. Thank you, but I'm not spending $300 to see Quake 3 almost work right.

                          Comment


                          • #43
                            Originally posted by L33F3R View Post
                            Well, how many fps do you get in Doom 3? Chances are my crusty old 6200 can whip your 5870's ass due to the current state of the free drivers; and the binary ones aren't even worth talking about. Thank you, but I'm not spending $300 to see Quake 3 almost work right.
                            If there's one thing that works well with fglrx, it's 3D. The other is power management.

                            Sorry, but your trolling attempt is laughable.

                            Comment


                            • #44
                              Originally posted by mirv View Post
                              Not sure if it's still done, but they used to have large FPGA (or similar) devices to simulate video cards. Abysmal performance, but you could run and test the design logic from them. This was when chips were a good deal less complicated though, so maybe their design methods have changed since then.

                              -- Additional: when I say "they", I mean companies in general, not specifically nvidia.
                              That could be, though I'm almost laughing trying to picture what it would look like. If my math is right, one of Fermi's 512 shader cores might fit in a high-end FPGA. I'm thinking that a full prototype would have to be a rack full of custom backplanes, with each card hosting one or two shader cores. It might even make sense to do it, given the cost of producing an actual chip.
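                              For what it's worth, here is the back-of-envelope version of that math. Every number below except the public ~3B-transistor and 512-core Fermi figures is a rough guess on my part, not a real spec for any FPGA:

                              ```python
                              # Back-of-envelope estimate only; the fractions and FPGA
                              # capacity are guesses, chosen just to show the arithmetic.
                              FERMI_TRANSISTORS = 3_000_000_000   # ~3B transistors (public figure)
                              SHADER_CORES = 512                  # public figure
                              SHADER_FRACTION = 0.5               # guess: half the die is shader cores

                              transistors_per_core = FERMI_TRANSISTORS * SHADER_FRACTION / SHADER_CORES

                              # Guess: a high-end FPGA maps on the order of 1M ASIC gates,
                              # and one gate is roughly 4 transistors.
                              FPGA_GATE_CAPACITY = 1_000_000
                              cores_per_fpga = FPGA_GATE_CAPACITY * 4 / transistors_per_core

                              print(f"~{transistors_per_core / 1e6:.1f}M transistors per core, "
                                    f"~{cores_per_fpga:.1f} cores per FPGA")
                              ```

                              Under those guesses each shader core is about 3M transistors and roughly one core fits per FPGA, which is where the "rack full of custom backplanes" picture comes from.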

                              Comment


                              • #45
                                The hardware emulator systems used to have boxes with hundreds of large FPGAs per box, with multiple boxes connected over a thick mat of ribbon cables. Imagine the innards of a Cray 1. These days I believe the FPGAs have been replaced with hundreds of custom RISC processors per box, but you still have to keep buying boxes until you have enough capacity to fit the GPU (or at least enough blocks to handle the specific testing you want to do at the moment).

                                There are two big benefits to these boxes - the first and most obvious is that you get to run "full chip" testing before taping out, and the second is that you have access to all of the internal logic nodes so when the chip isn't doing what you expect you have a much better chance of figuring out why.

                                Hardware emulation systems are godawful expensive, so they tend to run 24/7, with engineers booking time on the systems at all hours of the day. On HD4xxx I think our video BIOS team had the slot from 2AM to 5AM ;(
                                Last edited by bridgman; 01-23-2010, 11:21 AM.

                                Comment
