
Thread: AMD vs NVIDIA drivers, that big difference?

  1. #11

    Quote Originally Posted by Glaucous
    Well, you were sort of right: the performance in Unigine Heaven did give more sensible results.

    NVIDIA GTX 260: FPS 36.1, Score 910, Min FPS 19.6, Max FPS 72.8
    ATI HD 4870: FPS 38.0, Score 957, Min FPS 12.9, Max FPS 78.5

    Unigine Heaven settings:
    Code:
    export LD_LIBRARY_PATH=./bin:$LD_LIBRARY_PATH
    ./bin/Heaven_x64    -video_app opengl \
                        -sound_app openal \
                        -extern_define RELEASE \
                        -system_script heaven/unigine.cpp \
                        -engine_config ../data/heaven_2.1.cfg \
                        -console_command "gl_render_use_arb_tessellation_shader 0 && render_restart" \
                        -data_path ../ \
                        -video_fullscreen 1 \
                        -video_mode -1 \
                        -video_width 1680 \
                        -video_height 1050
    I'm about to do some more testing on Wine now.

    Actually, I ran a few tests to see how a Radeon 5770 performs on Linux. Here's the script I used to launch the program:

    Code:
    #!/bin/sh

    export LD_LIBRARY_PATH=./bin:$LD_LIBRARY_PATH
    ./bin/Heaven_x64    -video_app opengl \
                        -sound_app openal \
                        -extern_define RELEASE \
                        -system_script heaven/unigine.cpp \
                        -engine_config ../data/heaven_2.1.cfg \
                        -console_command "gl_render_use_arb_tessellation_shader 1 && render_hdr 8 && render_srgb 1 && render_restart" \
                        -data_path ../ \
                        -video_fullscreen 1 \
                        -video_mode -1 \
                        -video_width 1920 \
                        -video_height 1200 \
                        -video_multisample 0
    Note: the video_multisample parameter controls anti-aliasing, where 0 = off, 1 = 2xAA, 2 = 4xAA, 3 = 8xAA.
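
    Since only the video_multisample value changes between AA runs, the sweep can be scripted. This is just a sketch: the aa_label helper and the HEAVEN path are my own naming, not part of the benchmark, so adjust to taste.

    ```shell
    #!/bin/sh
    # Map a -video_multisample value to its AA label (per the note above).
    aa_label() {
        case $1 in
            0) echo "AA off" ;;
            1) echo "2xAA"   ;;
            2) echo "4xAA"   ;;
            3) echo "8xAA"   ;;
            *) echo "unknown"; return 1 ;;
        esac
    }

    HEAVEN=./bin/Heaven_x64   # adjust to wherever Heaven_x64 lives
    for ms in 0 1 2 3; do
        echo "=== Run with -video_multisample $ms ($(aa_label $ms)) ==="
        # Uncomment to actually launch each run:
        # "$HEAVEN" -video_app opengl -data_path ../ -video_fullscreen 1 \
        #           -video_width 1920 -video_height 1200 -video_multisample "$ms"
    done
    ```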

    Test Run with Tessellation off and 0xAA: [screenshot]

    Test Run with Normal Tessellation and 0xAA to 8xAA: [screenshot]


  2. #12

    Quote Originally Posted by kernelOfTruth
    AFAIK, the only distro I saw or experienced this on was Ubuntu.
    Same on Fedora.

  3. #13

    Code:
    [nanonyme@confusion ~]$ glxgears
    Running synchronized to the vertical refresh.  The framerate should be
    approximately the same as the monitor refresh rate.
    302 frames in 5.0 seconds = 60.202 FPS
    300 frames in 5.0 seconds = 59.992 FPS
    301 frames in 5.0 seconds = 60.008 FPS
    That's what glxgears output is supposed to look like when the "display is synced to vblank", as glxgears.c says.

  4. #14

    PS: the display is 60 Hz.

  5. #15

    @Glaucous

    remember for the future: any benchmark result higher than 60 FPS (or 120 FPS for stereoscopic 3D) is an invalid benchmark;

    you get zero positive effect from 10,000,000 FPS in any kind of program.

    So you can't benchmark a modern GPU with Quake 3 or glxgears,

    and in my view you can't benchmark an OpenGL 4.1 card with an OpenGL 2.1 or OpenGL 1.5 program.

    And you cannot test your hardware with a benchmark-only program like the Unigine tests in the Phoronix suite;

    only real apps/games can be the basis for a REAL test result.

    In CPU terms it's like testing an SSE1 program on an SSE4 CPU: you never get a valid result for the real speed of your hardware.

    Same for DirectX 9 on DX10+ hardware.
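
    For what the over-60-FPS argument is worth, converting FPS to frame time makes the diminishing returns easier to see. A quick awk sketch (frame_ms is my own helper name):

    ```shell
    #!/bin/sh
    # Convert an FPS figure into milliseconds per frame: going from 60 to
    # 120 FPS only shaves off about 8 ms, and anything past that saves less.
    frame_ms() {
        awk -v f="$1" 'BEGIN { printf "%.2f", 1000 / f }'
    }

    for fps in 30 60 120 1000; do
        echo "$fps FPS = $(frame_ms $fps) ms/frame"
    done
    ```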

  6. #16

    Quote Originally Posted by LinuxID10T
    Well, if you are interested in the new Unigine games coming out, you might want to go for a Radeon HD 5xxx series card. Many people don't know this, but ATI has had hardware tessellation since the HD 2xxx series, and it has been specced in OpenGL since the early 2000s. Anyway, the 5xxx series has MUCH faster tessellation. I have a Radeon HD 5750 (way overclocked, though) and I have been very happy with the performance.
    The tessellation unit in the HD 2000-4000 series isn't OpenGL 4 compatible, and that old-style TruForm tessellation unit is just broken for Unigine in general.

    Your argument isn't so smart, because in Unigine 2 NVIDIA is the clear winner: the benchmark is 100% NVIDIA-focused.

    The HD 6870 will have a very fine tessellation unit ;-)

  7. #17

    Quote Originally Posted by Qaridarium
    The tessellation unit in the HD 2000-4000 series isn't OpenGL 4 compatible, and that old-style TruForm tessellation unit is just broken for Unigine in general.

    Your argument isn't so smart, because in Unigine 2 NVIDIA is the clear winner: the benchmark is 100% NVIDIA-focused.

    The HD 6870 will have a very fine tessellation unit ;-)
    Hopefully,

    and that will blow Fermi out of the water with even less power consumption.

  8. #18

    Quote Originally Posted by kernelOfTruth
    Hopefully,

    and that will blow Fermi out of the water with even less power consumption.
    You could handle a 6870 with a passive cooling solution ;-)

    because it's an HD 5770-class chip ;-)

    I think the price of a 6870 will be 200 at most.

    AMD makes the big money with GPUs in the 50-150 watt and 80-200 range.

    A GTX 485 or an HD 5970 is just penis enlargement without any clue.

  9. #19

    Power consumption? How well does power management work on ATI cards? Is it close to the same as on Windows? I believe it's not so great when using the open source drivers, correct?

    My point is, you're making compromises left, right and center, aren't you? So when you bash NVIDIA on power, I'm wondering whether it at least works, given that you're using the dreaded blob?

    I think power consumption is pretty important, as much as any other feature you'd want, and this is especially true for laptop users. Nowadays video card manufacturers are all trying to boast the latest and greatest, so you want good power efficiency from your card, and for it to actually work.

    If it works with the ATI binary driver but you then want to use the open source driver, what do you do?
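
    On the open source side, the radeon KMS driver of that era did expose its power-saving knobs through sysfs. A sketch, assuming the usual card0 path (the set_radeon_profile helper is my own naming, and writing these files needs root):

    ```shell
    #!/bin/sh
    # Switch the radeon KMS driver to a fixed power profile via sysfs.
    set_radeon_profile() {
        profile=$1                          # low / mid / high / auto / default
        card=${2:-/sys/class/drm/card0/device}
        if [ ! -e "$card/power_method" ]; then
            echo "no radeon power interface at $card" >&2
            return 1
        fi
        echo profile > "$card/power_method"     # static profiles, not dynpm
        echo "$profile" > "$card/power_profile"
    }

    # Example (as root): set_radeon_profile low
    ```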

  10. #20

    Too late for me to edit? I just went to the Radeon Feature Matrix page.

    Under Power Saving, those cards show it's supported, so I guess that indicates support, right? So, you Evergreen owners figure it's working well?

    Looks like 3D is in need of the most progress, then? Also hardware acceleration and video decoding, which look like they may never get there.
