Actually, I ran a few tests to see how performance is on Linux with a Radeon 5770. Here's the script I used to launch the program:
Note: the video_multisample param controls anti-aliasing, where 0 = off, 1 = 2xAA, 2 = 4xAA, 3 = 8xAA.

Code:
#!/bin/sh
export LD_LIBRARY_PATH=./bin:$LD_LIBRARY_PATH
./bin/Heaven_x64 -video_app opengl \
    -sound_app openal \
    -extern_define RELEASE \
    -system_script heaven/unigine.cpp \
    -engine_config ../data/heaven_2.1.cfg \
    -console_command "gl_render_use_arb_tessellation_shader 1 && render_hdr 8 && render_srgb 1 && render_restart" \
    -data_path ../ \
    -video_fullscreen 1 \
    -video_mode -1 \
    -video_width 1920 \
    -video_height 1200 \
    -video_multisample 0
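Since testing each AA level means editing the script and relaunching by hand, here's a small sketch (my own, not part of Heaven) that sweeps the four video_multisample values in a loop. The actual Heaven invocation is left commented out; you'd paste in the same argument list as the launcher above:

```shell
#!/bin/sh
# Sweep all four -video_multisample settings in one go.
for ms in 0 1 2 3; do
    case "$ms" in
        0) label="off"  ;;
        1) label="2xAA" ;;
        2) label="4xAA" ;;
        3) label="8xAA" ;;
    esac
    echo "=== run with -video_multisample $ms (AA: $label) ==="
    # ./bin/Heaven_x64 ... -video_multisample "$ms"   # same args as the launcher above
done
```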
Test Run with Tessellation off and 0xAA:
Test Run with Normal Tessellation and 0xAA to 8xAA:
What it's supposed to look like when glxgears' "display is synced to vblank", as glxgears.c says.

Code:
[nanonyme@confusion ~]$ glxgears
Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
302 frames in 5.0 seconds = 60.202 FPS
300 frames in 5.0 seconds = 59.992 FPS
301 frames in 5.0 seconds = 60.008 FPS
P.S. The display is 60 Hz.
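For reference: with Mesa's DRI drivers, the vblank sync behaviour can be overridden per-process via the vblank_mode environment variable, so glxgears can also be run uncapped as a sanity check (this applies to the open source drivers, not the NVIDIA blob):

```shell
# Mesa DRI drivers read vblank_mode from the environment:
# 0 = never sync to vblank, 3 = always sync to vblank.
vblank_mode=0 glxgears   # uncapped: FPS reflects raw draw rate
vblank_mode=3 glxgears   # capped to the monitor refresh (~60 FPS here)
```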
Remember for the future: any benchmark result higher than 60 FPS (or 120 FPS for stereoscopic 3D) is an invalid benchmark.
You get zero positive effect from having 10,000,000 FPS in any kind of program.
So you can't benchmark a modern GPU with Quake 3 or glxgears.
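To put numbers on that: a display only shows one frame per refresh cycle, so anything the GPU renders faster is thrown away. A quick awk calculation (my own illustration, not from any benchmark):

```shell
#!/bin/sh
# Frame budget per refresh cycle = 1000 ms / refresh rate.
awk 'BEGIN { printf "60 Hz  -> %.2f ms per displayed frame\n", 1000/60 }'
awk 'BEGIN { printf "120 Hz -> %.2f ms per displayed frame\n", 1000/120 }'
# A GPU reporting 1000 FPS draws a frame every 1 ms,
# but a 60 Hz display still shows only one every ~16.7 ms.
```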
And in my view, you can't benchmark an OpenGL 4.1 card with an OpenGL 2.1 or OpenGL 1.5 program.
And you cannot test your hardware with a benchmark-only program like the Unigine tests in the Phoronix tests;
only real apps/games can be the basis for a REAL test result.
In CPU terms, it's like running an SSE1 test program on an SSE4 CPU: you never get a valid result for the real speed of your hardware.
Same for DirectX 9 on DX10+ hardware.
Your argument isn't so smart, because in Unigine 2 NVIDIA is the clear winner; the benchmark is 100% NVIDIA-focused.
The HD 6870 will have a very fine tessellation unit ;-)
Power consumption? How well does power management work on ATI cards? Is it close to what you get on Windows? I believe it's not so great when using the open source drivers, correct?
My point is, you're making compromises left, right and center, aren't you? So, when you bash Nvidia for power, I am wondering if it at least works since you're using the dreaded blob?
I think power consumption is pretty important, just as much as the other features you want, and this is especially true for laptop users. Nowadays, video card manufacturers are all trying to boast having the latest and greatest, so you want a card that is power-efficient and that actually works.
If it works with the ATI binary driver but you then want to use the open source driver, what do you do?
Too late for me to edit? I just went to the Radeon Feature Matrix page.
So, under Power Saving, those show as supported, so I guess that indicates support, right? You Evergreen owners, do you find it works well?
Looks like 3D is in need of the most progress, then? Also HA and video decoding, which look like they may never get there.