
OpenGL Performance & Performance-Per-Watt For NVIDIA GPUs From The Past 10 Years


  • #11
    If you're going to benchmark everything at 2560 x 1600, why not call it what it is: "Fill Rates From The Past 10 Years"? It's definitely NOT general OpenGL performance.

    Michael, you of all people should have realized that when every single page of benchmarks looks identical, you're not measuring general performance, only a very specific metric. This article would have been much more interesting if you had included a resolution that cards from 10 years ago were designed to run at, such as 1280x1024. That way we could see how geometry rate scales and how well old cards hold up at normal resolutions.
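
    Just to put rough numbers on the fill-rate point (my own arithmetic, not from the article): 2560x1600 is 4,096,000 pixels per frame versus 1,310,720 at 1280x1024, so every frame pushes about 3.1x as many pixels. With the vertex workload unchanged, the tested resolution shifts the bottleneck heavily toward fill rate and away from geometry throughput.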
    Last edited by slacka; 20 January 2016, 12:27 AM.



    • #12
      Is there a problem with Firefox + vBulletin? Some of the buttons aren't working and I got an error message when I tried to post. Switched to Chrome and it's working OK now.
      Last edited by slacka; 20 January 2016, 12:24 AM.



      • #13
        GeForce 8500 GTS ...
        Are you sure?

        And why not a GK208?
        Last edited by drSeehas; 20 January 2016, 04:52 AM.



        • #14
          Originally posted by schmidtbag View Post
          It is unfortunate, though not all that surprising. The demand for GPU performance keeps increasing dramatically, while the demand for CPU performance (for the average user) pretty much reached its peak 5 years ago. The only reason to get a new CPU is for energy efficiency, but GPUs still have a long way to go for performance.
          IMO the main problem with CPUs is that both frequencies and process nodes are coming close to the physical limits of silicon, so basically only optimization or adding more cores can be done now.

          GPUs being (to simplify) a huge number of specialized mini-CPU units, they can still be multiplied for better performance... as long as you have space on the card
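
          To illustrate the "many mini-CPUs" idea, here is a toy CUDA sketch of my own (nothing from the article; the kernel name and sizes are made up): each lightweight thread handles one element, and the hardware spreads the blocks across however many cores the card physically has, which is why simply adding more units keeps scaling.

          // Toy CUDA example: one thread per array element.
          #include <cstdio>
          #include <cuda_runtime.h>

          __global__ void scale(float *data, float factor, int n)
          {
              int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
              if (i < n)
                  data[i] *= factor;  // each "mini-CPU" does one tiny piece of work
          }

          int main()
          {
              const int n = 1 << 20;  // 1M elements
              float *d;
              cudaMalloc(&d, n * sizeof(float));
              cudaMemset(d, 0, n * sizeof(float));

              // Launch enough 256-thread blocks to cover all n elements; the
              // hardware scheduler distributes them over every available core,
              // so a card with twice the units finishes roughly twice as fast.
              scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
              cudaDeviceSynchronize();

              cudaFree(d);
              puts("done");
              return 0;
          }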



          • #15
            Originally posted by slacka View Post
            This article would have been much more interesting if you had included a resolution that cards from 10 years ago were designed to run at, such as 1280x1024. That way we could see how geometry rate scales and how well old cards hold up at normal resolutions.
            I agree a more "classical" resolution like 1080p would let older cards fare better, but changing the resolution for each group of cards... definitely not.

            If Michael did what you propose, this forum would be filled with people whining that the comparisons make no sense at all because the resolution/details differ from one benchmark to another...



            • #16
              Originally posted by siavashserver
              It would be great if you could also throw in a GTX 275 or GTX 285 as the strongest DX10/OGL3-generation cards, Michael, with a lower display/texture resolution of course.
              The reason I didn't do that is... I don't have those cards. It wasn't until the GTX 400 series that NVIDIA began sending me more cards.
              Michael Larabel
              https://www.michaellarabel.com/



              • #17
                Originally posted by drSeehas View Post
                And why not one GK208?
                I used the cards I have.
                Michael Larabel
                https://www.michaellarabel.com/



                • #18
                  Originally posted by Jumbotron View Post
                  I am still amazed at how the good ol' GTX 680 is still a relative beast.
                  Exactly!



                  • #19
                    Originally posted by Passso View Post
                    IMO the main problem with CPUs is that both frequencies and process nodes are coming close to the physical limits of silicon, so basically only optimization or adding more cores can be done now.

                    GPUs being (to simplify) a huge number of specialized mini-CPU units, they can still be multiplied for better performance... as long as you have space on the card
                    I agree. I'm not really sure why companies like Intel are wasting so much time and money trying to do die shrinks when they should be looking into other elements or types of transistors.

                    And yes, GPUs have a lot of freedom of form to be the best they can be. I'm sure there's a much more efficient approach to them, but it just hasn't been discovered yet.



                    • #20
                      Originally posted by schmidtbag View Post
                      I agree. I'm not really sure why companies like Intel are wasting so much time and money trying to do die shrinks when they should be looking into other elements or types of transistors.
                      Do not worry about Intel; they, like plenty of research laboratories, are actually working hard on it. The main issue is that only a few materials can theoretically surpass silicon in terms of performance and stability. Add to this that silicon is very (very) cheap compared to the alternatives... at the moment it is impossible to sell anything else!

                      A new-generation die material will arrive for sure, but it will need years or decades to beat silicon... wait, see, and trust R&D

