AMD vs NVIDIA drivers, that big difference?


  • #11
    Originally posted by Glaucous View Post
    Well, you were sort of right; the performance in Unigine Heaven did give results that make more sense.

    NVIDIA GTX 260: FPS 36.1, Score 910, Min FPS 19.6, Max FPS 72.8

    ATI HD 4870: FPS 38.0, Score 957, Min FPS 12.9, Max FPS 78.5

    Unigine Heaven settings:
    Code:
    export LD_LIBRARY_PATH=./bin:$LD_LIBRARY_PATH
    ./bin/Heaven_x64    -video_app opengl \
                        -sound_app openal \
                        -extern_define RELEASE \
                        -system_script heaven/unigine.cpp \
                        -engine_config ../data/heaven_2.1.cfg \
                        -console_command "gl_render_use_arb_tessellation_shader 0 && render_restart" \
                        -data_path ../ \
                        -video_fullscreen 1 \
                        -video_mode -1 \
                        -video_width 1680 \
                        -video_height 1050
    I'm about to do some more testing on Wine now.

    Actually, I ran a few tests to see about performance on Linux with a Radeon 5770... here's the script I used to launch the program:

    Code:
    #!/bin/sh
    
    export LD_LIBRARY_PATH=./bin:$LD_LIBRARY_PATH
    ./bin/Heaven_x64    -video_app opengl \
                        -sound_app openal \
                        -extern_define RELEASE \
                        -system_script heaven/unigine.cpp \
                        -engine_config ../data/heaven_2.1.cfg \
                        -console_command "gl_render_use_arb_tessellation_shader 1 && render_hdr 8 && render_srgb 1 && render_restart" \
                        -data_path ../ \
                        -video_fullscreen 1 \
                        -video_mode -1 \
                        -video_width 1920 \
                        -video_height 1200 \
                        -video_multisample 0
    Note: the video_multisample param controls anti-aliasing, where 0 = off, 1 = 2xAA, 2 = 4xAA, 3 = 8xAA.
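
    If you want to sweep those anti-aliasing levels in one go, a loop along these lines should do it (just a sketch reusing the launcher above; the heaven_ms*.log file names are only an illustration):

    Code:
    #!/bin/sh

    # Sketch: run the same Heaven launch once per multisample level
    # (0 = off, 1 = 2xAA, 2 = 4xAA, 3 = 8xAA) and keep the console output.
    export LD_LIBRARY_PATH=./bin:$LD_LIBRARY_PATH

    for ms in 0 1 2 3; do
        echo "=== video_multisample $ms ==="
        ./bin/Heaven_x64 -video_app opengl \
                         -sound_app openal \
                         -extern_define RELEASE \
                         -system_script heaven/unigine.cpp \
                         -engine_config ../data/heaven_2.1.cfg \
                         -data_path ../ \
                         -video_fullscreen 1 \
                         -video_mode -1 \
                         -video_width 1920 \
                         -video_height 1200 \
                         -video_multisample $ms > heaven_ms$ms.log 2>&1
    done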

    Test Run with Tessellation off and 0xAA:


    Test Run with Normal Tessellation and 0xAA to 8xAA:



    Comment


    • #12
      Originally posted by kernelOfTruth View Post
      afaik the only distro I saw or experienced this on was Ubuntu
      Same on Fedora.

      Comment


      • #13
        Code:
        [nanonyme@confusion ~]$ glxgears
        Running synchronized to the vertical refresh.  The framerate should be
        approximately the same as the monitor refresh rate.
        302 frames in 5.0 seconds = 60.202 FPS
        300 frames in 5.0 seconds = 59.992 FPS
        301 frames in 5.0 seconds = 60.008 FPS
        That's what glxgears is supposed to look like when the display is synced to vblank, as glxgears.c says.
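
        For what it's worth, if you want glxgears to run unsynced instead, the usual environment-variable toggles are below (driver-specific, so take it as a sketch):

        Code:
        # Mesa/DRI drivers: disable sync-to-vblank for this one run
        vblank_mode=0 glxgears

        # NVIDIA proprietary driver: the equivalent switch
        __GL_SYNC_TO_VBLANK=0 glxgears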

        Comment


        • #14
          ps. Display is 60Hz

          Comment


          • #15
            @Glaucous

            remember for the future: any benchmark result higher than 60 FPS (or 120 FPS for stereoscopic 3D) is an invalid benchmark,

            you get zero positive effect from having 10000000 FPS in any kind of program.

            so you can't benchmark a modern GPU with Quake 3 or glxgears,

            and in my point of view you can't benchmark an OpenGL 4.1 card with an OpenGL 2.1 or OpenGL 1.5 program.

            and you cannot test your hardware with a benchmark-only program like the Unigine tests in the Phoronix Test Suite,

            only real apps/games can be the basis for a REAL test result.

            in CPU terms it's like running an SSE1 test on an SSE4 CPU: you never get a valid result for the real speed of your hardware...

            same for DirectX 9 on DX10+ hardware.

            Comment


            • #16
              Originally posted by LinuxID10T View Post
              Well, if you are interested in the new Unigine games coming out, you might want to go for a Radeon HD 5xxx series card. Many people don't know this, but ATI has had hardware tessellation since the HD 2xxx series, and it has been specced in OpenGL since the early 2000s. Anyway, the 5xxx series has MUCH faster tessellation. I have a Radeon HD 5750 (way overclocked, though) and I have been very happy with the performance.
              The tessellation unit in the HD 2000-4000 series isn't OpenGL 4 compatible, and that old-style TruForm tessellation unit in the 2000-4000 is just broken for Unigine in general.

              Your argument isn't so smart, because in Unigine 2 NVIDIA is the clear winner: the benchmark is 100% NVIDIA-focused.

              the HD 6870 will have a very fine tessellation unit ;-)

              Comment


              • #17
                Originally posted by Qaridarium View Post
                The tessellation unit in the HD 2000-4000 series isn't OpenGL 4 compatible, and that old-style TruForm tessellation unit in the 2000-4000 is just broken for Unigine in general.

                Your argument isn't so smart, because in Unigine 2 NVIDIA is the clear winner: the benchmark is 100% NVIDIA-focused.

                the HD 6870 will have a very fine tessellation unit ;-)
                hopefully

                and that will blow Fermi out of the water with even less power consumption

                Comment


                • #18
                  Originally posted by kernelOfTruth View Post
                  hopefully

                  and that will blow Fermi out of the water with even less power consumption
                  you can handle a 6870 with a passive cooling solution ;-)

                  because it's an HD 5770-class chip ;-)

                  i think the price of a 6870 will be 200 at most.

                  AMD makes the big money with GPUs in the 50-150 watt and 80-200 range.

                  a GTX 485 or an HD 5970 is just penis enlargement without any clue.

                  Comment


                  • #19
                    Power consumption? How well does power management work on ATI cards? Is it close to what you get on Windows? I believe it's not so great when using the open source drivers, correct?

                    My point is, you're making compromises left, right and center, aren't you? So when you bash Nvidia on power, I'm wondering whether it at least works for you, since you're using the dreaded blob?

                    I think power consumption is pretty important, just as much as some of the other features you'd want, and that's especially true for laptop users. Nowadays video card manufacturers are all trying to boast about having the latest and greatest, so you want good power efficiency from your card and for it to actually work.

                    If it works with the ATI binary driver but you then want to use the open source driver, what do you do?
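
                    For reference, the open source radeon KMS driver does expose power management through sysfs; a rough sketch (assuming your GPU is card0, run as root):

                    Code:
                    # Profile-based power management on the radeon KMS driver
                    echo profile > /sys/class/drm/card0/device/power_method
                    # Force the low-power profile (other values: default, auto, mid, high)
                    echo low > /sys/class/drm/card0/device/power_profile

                    # Or let the driver reclock dynamically based on load
                    echo dynpm > /sys/class/drm/card0/device/power_method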

                    Comment


                    • #20
                      Too late for me to edit? I just went to the Radeon Feature Matrix page.

                      So, under Power Saving the entries show it as supported, so I guess that means it works, right? You Evergreen owners figure it's working well?

                      Looks like 3D is in need of the most progress, then? Also HA and video decoding, which look like they may never get there.

                      Comment
