That's the advantage of git: the data isn't lost. As long as someone has a single clone of the repo, it's all good. I'm sure agd5f has it stored on his computer and can easily push it back up to fd.o if he needs to.
Git is not like CVS/SVN where everything is stored on the server.
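The point above can be sketched end to end. This is a minimal, self-contained illustration using local temp directories as stand-ins for the fd.o server and a developer's machine (the paths and commit are hypothetical); because every clone carries the full history, the "server" can be rebuilt from any clone:

```shell
set -e
tmp=$(mktemp -d)

git init --bare "$tmp/server.git"          # stand-in for the hosted repo
git clone "$tmp/server.git" "$tmp/work"    # a developer's clone
git -C "$tmp/work" -c user.name=dev -c user.email=dev@example.com \
    commit --allow-empty -m "initial commit"
git -C "$tmp/work" push origin HEAD        # publish to the "server"

rm -rf "$tmp/server.git"                   # simulate the server being lost

git init --bare "$tmp/server.git"          # recreate an empty remote
git -C "$tmp/work" push origin HEAD        # push the full history back up
git -C "$tmp/work" log --oneline           # history is intact in the clone
```

This is exactly what makes git different from CVS/SVN: in those systems the history lives only on the server, so losing the server loses the project.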
Thanks for the clarification. I somewhat thought I remembered git working like that, but I've only used git to fetch source so far. Most of my day to day stuff is in SVN.
After I rebuilt drm and mesa again, it finally works fine. OpenArena has no glitches; Quake 3 still uses indirect rendering only, but it has no glitches either. I have drawprim disabled with the patch posted here earlier.
Is anyone playing things using the free drivers on r6xx+ ?
What I'm observing on my HD4550 is frame rates between 40 and 80 in OpenArena, with very little relation to detail/complexity level or resolution.
I've tried many different combinations, and basically I get similar frame rates using vertex lighting and minimum detail at 800x600 and full-blown everything on at 1920x1080. There is some difference, but I can't hit 125fps, whatever I do.
What does this mean? It's hard to imagine that the card is not powerful enough (at low res), the engine is 10 years old. I guess that the fill rate is not the issue, as the resolution doesn't matter much. Where is the bottleneck? Is there hope for improvement?
Another thing I've noticed is that the frame rate is very unstable and fluctuates wildly, even when you're staring at a wall. On top of this, there are sticky points every 2 seconds or so (the screen freezes for a split second), which get worse the longer X has been running. Upgrading to the latest drm-next kernel has improved this significantly, but it's still there.
This is not moaning, just a discussion, and I do appreciate the efforts that have gone into the drivers. We've had a few trolls here recently, so I wanted to make that clear.
Yeah, the drivers are written for real-world apps that do a lot of drawing in each frame, i.e. the drawing code gets more attention than the "once per frame" code.
The glxgears program draws so little each frame that you end up measuring the performance of the "once per frame" code rather than the drawing code -- so the correlation between glxgears performance and real world application performance is weak at best.
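That split can be put into a toy model: each frame pays a fixed per-frame overhead (swap/flush/state setup) plus a cost proportional to the drawing work submitted. The numbers below are illustrative guesses, not measurements, but they show why a glxgears-style score mostly measures overhead while a game's frame rate barely reacts to it:

```python
# Toy frame-time model (illustrative numbers, not measured).
def fps(overhead_ms: float, draw_ms: float) -> float:
    """Frames per second given per-frame overhead and per-frame draw cost."""
    return 1000.0 / (overhead_ms + draw_ms)

# glxgears-like case: almost no drawing, so FPS just measures overhead.
print(round(fps(overhead_ms=0.5, draw_ms=0.05)))   # -> 1818

# Game-like case: drawing dominates, so overhead barely matters.
print(round(fps(overhead_ms=0.5, draw_ms=12.0)))   # -> 80
print(round(fps(overhead_ms=0.1, draw_ms=12.0)))   # -> 83, halving overhead barely helps
```

The same model also fits the HD4550 observation above: a frame rate that stays in the same 40-80 range regardless of resolution points at a roughly fixed per-frame cost rather than a fill-rate limit.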
That's why the devs make rude comments about glxgears being used as a benchmark for anything other than buffer-flipping. Any frame rate faster than your display refresh rate is largely wasted anyway.