NVIDIA Prepares 195.xx Linux Driver, Carries Fermi Support

  • #11
    My guess would be that any Fermi chips shipping in 2009 will be on compute cards for a handful of deep-pocketed HPC customers. NVidia's marketing for Fermi has been extremely GPGPU-heavy, to the point that the only directly graphics-related thing on their Fermi page is a mention of raytracing as a possible application.

    Also, the white paper is hilarious:

    The graphics processing unit (GPU), first invented by NVIDIA in 1999

    • #12
      They were the first to start calling it that. All we had before were VPUs.

      • #13
        Originally posted by Melcar View Post
        They were the first to start calling it that. All we had before were VPUs.
        Where V = Voodoo? Virtual? Video?

        Voodoos, Mystiques, Verites, Rivas, Rages were all called "graphics accelerators" back in the day. It's possible that the "GPU" name didn't come into existence until 1999, but Nvidia certainly didn't invent graphics processors - this claim is completely laughable.

        I have a feeling that Fermi won't end well for Nvidia... (I hope to be proven wrong.)

        • #14
          Given the year, I assume what they're referring to is having transformation+lighting+rasterization all happening in a single chip. I'm pretty sure that 3DLabs had already been doing this for several years using multiple chips.

          • #15
            Originally posted by Ex-Cyber View Post
            My guess would be that any Fermi chips shipping in 2009 will be on compute cards for a handful of deep-pocketed HPC customers. NVidia's marketing for Fermi has been extremely GPGPU-heavy, to the point that the only directly graphics-related thing on their Fermi page is a mention of raytracing as a possible application.

            Also, the white paper is hilarious:
            The fact is that NVIDIA did invent the GPU in 1999. The term had never been used before, and NVIDIA defined it as "a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second."

            • #16
              The term had been used before, but it wasn't mainstream (check Google Scholar, for example). Either way, the common use of the term now is not ruled by the self-serving "definition" crafted by Nvidia's marketing department (philosophical question: if you downclock a GPU so that it can't do 10 million polygons per second, does it stop being a GPU?).

              And either way, this doesn't say much about Fermi, which doesn't look like it'll be shipping in an actual graphics card for a while, unless I've missed some big announcement or leak. Given Jen-Hsun Huang's statement that they're "ramping Fermi for [...] GeForce", I assume we'll see them at some point, but so far I've seen almost no talk about its launch or its features as a GPU, except that it will support DX11. All indications are that first-generation Fermi is going to be a top GPGPU performer but also brutally expensive. I wouldn't be surprised if they end up going exclusively for the super-high-end HPC and gaming markets and ceding the mainstream gaming market to ATI for a little while.

              • #17
                Originally posted by Ex-Cyber View Post
                Fermi is going to be a top GPGPU performer but also brutally expensive.
                AMD's Hemlock cards will perform at over 5 TFLOPS. Fermi is projected to do about 1.5 TFLOPS given its shader count (512) and architecture, on a similar die size. FLOPS isn't everything, but as a measure of raw compute power Fermi will be behind.
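
                To put rough numbers on that: peak single-precision throughput is just cores x FLOPs per core per cycle x clock. Here's a quick Python sketch; the clock speeds are my assumptions for illustration, not official specs:

                # Back-of-the-envelope peak single-precision throughput.
                # Clock speeds are assumed for illustration, not official specs.
                def peak_gflops(cores, flops_per_cycle, clock_ghz):
                    """Theoretical peak = cores x FLOPs/core/cycle x clock (GHz), in GFLOPS."""
                    return cores * flops_per_cycle * clock_ghz

                # Fermi (GF100): 512 cores, FMA = 2 FLOPs/cycle, ~1.5 GHz shader clock (assumed)
                fermi = peak_gflops(512, 2, 1.5)      # ~1536 GFLOPS, i.e. ~1.5 TFLOPS
                # Hemlock (2x Cypress): 3200 stream processors, 2 FLOPs/cycle, ~0.8 GHz (assumed)
                hemlock = peak_gflops(3200, 2, 0.8)   # ~5120 GFLOPS, i.e. ~5.1 TFLOPS
                print(f"Fermi:   {fermi / 1000:.2f} TFLOPS")
                print(f"Hemlock: {hemlock / 1000:.2f} TFLOPS")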

                With its dedicated caches and simpler shader hierarchy (AMD packs 5 shaders to a cluster), Fermi will perform better in the real world, but anything you could do on Fermi should run faster on AMD if you optimise for it. And the price you pay for those two features is a much larger die, since neither helps in games but both take up area.
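
                To make the utilisation point concrete, here's a toy Python sketch (peak figures from above; the lane-fill levels are made up for illustration): a 5-wide VLIW cluster only approaches its peak when the compiler can pack all five lanes, while Fermi's scalar cores are comparatively easy to keep busy.

                # Toy model: achieved throughput vs. how many of AMD's 5 VLIW lanes get packed.
                # Peak figures from the sketch above; lane-fill levels are illustrative only.
                fermi_peak = 1.5    # TFLOPS; scalar cores, easier to keep fully utilised
                hemlock_peak = 5.1  # TFLOPS, but only with all 5 VLIW lanes filled

                for lanes in range(1, 6):
                    achieved = hemlock_peak * lanes / 5
                    leader = "AMD" if achieved > fermi_peak else "Fermi"
                    print(f"{lanes}/5 lanes filled: {achieved:.1f} TFLOPS -> {leader} ahead")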

                • #18
                  Originally posted by phoronix View Post
                  Phoronix: NVIDIA Prepares 195.xx Linux Driver, Carries Fermi Support

                  It was just last week that NVIDIA had finally released a stable 190.xx Linux driver after this driver series had been in beta for months. The 190.xx driver series brought new hardware support, OpenGL 3.2 support, VDPAU improvements, and a fair amount of other changes...

                  http://www.phoronix.com/vr.php?view=NzY4MA

