Open-Source ATI R600/700 Mesa 3D Performance


  • BlackStar
    replied
    Originally posted by squirrl
    I could be wrong.
    You are.

    Can you run a state-of-the-art desktop Linux distribution on a 386 with 4 MB RAM? No. Is that because Intel (or AMD) is giving kickbacks to distro developers?

    Nah, it's called progress.

    You can't get better graphics without better hardware. If you don't care about cutting-edge graphics (and many gamers obviously don't), there's a huge market out there waiting for you. You just need to look beyond id, Crytek and the rest of the AAA $50M-budget developers.

    Has open-source ever released a fully functional 3D driver?
    The original open-source R200 driver was said to be faster than fglrx at 3D. R300 isn't too shabby either (e.g. better Compiz performance than Catalyst 9.3). Finally, the 3D drivers for Intel IGPs are significantly better than what you get on Windows.

    Of course, recent hardware is much more programmable and thus much more difficult to write drivers for. Personally, I don't care about raw 3D performance so much - even 50% of the closed-source drivers' performance would be fine, as long as the drivers expose the necessary features (e.g. GLSL) in a stable fashion.
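    For what it's worth, you don't have to guess what a driver exposes: a few lines of GL are enough to print the advertised GL and GLSL versions. A rough sketch, assuming plain SDL 1.2 plus OpenGL (nothing tied to any particular driver; the window size is arbitrary):

    Code:
    // Print what the current GL driver advertises (renderer, GL version, GLSL version).
    #include <SDL/SDL.h>
    #include <GL/gl.h>
    #include <cstdio>

    #ifndef GL_SHADING_LANGUAGE_VERSION
    #define GL_SHADING_LANGUAGE_VERSION 0x8B8C  // GL 2.0 enum, in case the headers are old
    #endif

    int main(int argc, char **argv)
    {
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;
        if (!SDL_SetVideoMode(320, 240, 0, SDL_OPENGL)) {  // creates a GL context
            SDL_Quit();
            return 1;
        }

        std::printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
        std::printf("GL_VERSION : %s\n", (const char *)glGetString(GL_VERSION));
        const GLubyte *glsl = glGetString(GL_SHADING_LANGUAGE_VERSION);
        std::printf("GLSL       : %s\n", glsl ? (const char *)glsl : "not exposed");

        SDL_Quit();
        return 0;
    }

    If the last line says "not exposed", the driver simply isn't advertising GL 2.0-level shading yet.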



  • squirrl
    replied
    On topic,

    Has open-source ever released a fully functional 3D driver?

    I've been around the works since 1996, and it seems like every year we only get 1/2 to 3/4 of a card's potential. My S3 had some decent capabilities I never saw working after the XFree86 fork to X.Org.
    VESA and VGA are not an option. No acceleration.

    To be constructive, I hope that guy from the other article who is cleaning up the graphics driver stack realizes that there are people still using very old hardware.

    I have a 3dfx Voodoo2. It's a classic. The funny thing is I tried booting Slackware 12 on the machine it's in, a Pentium 233 MMX. It crawled at the console.

    What I'm getting at is that it seems we're leaving behind some quality hardware. The throw-it-away generation: it's too old.



  • squirrl
    replied
    hmm actually

    Tell that to the Crytek/Crysis developers... If anything, they are probably the most skilled people. They not only went with "Hey let's...
    Let us rephrase that statement.

    CryEngine, along with id Tech and Unreal, all scam us by forcing hardware upgrades. Skilled? It doesn't take skill to copy-paste from the Graphics Gems books. Gamasutra? Look, this stuff isn't just invented one day out of the blue. It's been known since SceneGraph 99 that you could do X and Y. Papers have been written; people have been paid.

    Remember back when the old Xbox (a Celeron 700 w/ 64 MB RAM shared with an Nvidia GF3) could run Doom 3 with no problems?

    Well, it could, if you don't remember. At the time I had a 1 GHz Celeron (1 GB of memory) strapped to a Radeon 9800 Pro (128 MB RAM).

    Forget Doom 3 running on that setup. Why?

    Because these game companies get kickbacks from Nvidia and ATI. It's like handing a kid a huge pair of boots and saying, "Well, someday, son, you can fill my shoes."

    I could be wrong.



  • V!NCENT
    replied
    Originally posted by elanthis View Post
    The problem here is that you're someone without a clue, and I _do this for a freaking living_.
    Which makes you automatically right? So when Linux wasn't corporate-backed, its developers should have listened to Microsoft kernel developers because they had a subscription to righteousness?

    If your monitor is 75 Hz, then the game will run at 75 FPS with vsync. Simple as that.
    Never ran into a game that did that, so no...

    The physics will run at 60hz because it freaking has to stay at a constant rate, and that rate has to be the same for all players. If you don't understand why, don't argue the point. If they run at different rates, the simulation would be inconsistent across clients.
    Yes, I understand that for MP it is required by the netcode. But Doom 3 was about the SP, and the physics update rate wasn't variable.

    In most games the physics rate can be changed (most Counter-Strike: Source servers run at a tickrate of 100, i.e. 100 updates per second, at a time when no single card could render 100 fps).

    And then in most games physics are simply disabled server-side, because they suck for MP. Throwing an object (like a grenade) at another player's avatar in Counter-Strike: Source results in -1 HP of damage, but even then the physics are client-side and it is nothing but netcode where the server decides the yes/no.
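    Just to illustrate what a tickrate actually is (this is NOT Source engine code, just a generic sketch; simulate_world and send_snapshots are made-up placeholders): the server advances the world on its own fixed clock, 100 times per second, no matter what fps any client renders at.

    Code:
    // Generic fixed-tickrate server loop: tickrate 100 -> 10 ms of simulated time per tick.
    #include <chrono>
    #include <thread>
    #include <cstdio>

    static void simulate_world(double dt) { (void)dt; /* placeholder: advance physics/game logic by dt */ }
    static void send_snapshots()          { /* placeholder: push world state to the clients */ }

    int main()
    {
        using clock = std::chrono::steady_clock;
        const int tickrate = 100;                               // ticks per second
        const std::chrono::duration<double> tick(1.0 / tickrate);

        auto next = clock::now();
        for (int i = 0; i < 1000; ++i) {                        // 1000 ticks = 10 seconds, then exit
            simulate_world(tick.count());                       // constant-rate simulation
            send_snapshots();                                   // clients interpolate between snapshots
            next += std::chrono::duration_cast<clock::duration>(tick);
            std::this_thread::sleep_until(next);                // hold the 100 Hz pace
        }
        std::printf("server loop done\n");
        return 0;
    }

    The client's renderer can run at 30, 60 or 300 fps; the server still decides the yes/no at its own 10 ms intervals.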

    I don't care for FPS in general. You shoot shit, move, shoot shit, move. /yawn
    Tactics, puzzles... While point-and-click FPSes are not really my thing anymore, I take a look at GoldenEye: Source and realise that people can still make fun and addictive FPS games that are not Quake-clone pixel shooters...

    Yes, there are games that do without and sync with frames, and you can tell very easily. If the frame rate drops, then not only does the game look a little choppy, the entire simulation becomes choppy, and more and more errors are introduced. The likelihood of tunneling increases and simple object interactions become highly unstable.
    That problem is simply caused by a computer that is too slow, game studios that did not think about performance optimisation, or both.

    I'd rather have my monitor telling me that my PC is too slow than walk around smoothly, thinking I am making headshots when I am not, and getting constantly corrected: "No! You are standing HERE!" AaaaaaaaAAAArRGGG!

    Said games are poorly coded and the bugs become apparent. It works for something like Asteroids; it does not work for today's physics-intensive games.
    Tell that to the Crytek/Crysis developers... If anything, they are probably the most skilled people. They not only went with "Hey, let's make this game horribly slow by implementing sick graphics" but went ahead and literally leveraged what could be done with triangle rendering and shadowing on today's HW.

    Because complex scenes end up taking more than 16.666ms to perform all physics, AI, sound, and graphics processing, causing it to miss the vsync and hence drop below the required number of frames. Doesn't matter how good you are as a dev, if you throw too much data to be processed at an engine, it is going to take longer than you want. Simple as that.
    Yup... ?

    So I have. And hence I have seen first hand all the shit that goes wrong when you half-ass your engine, have half-assed physics, half-assed graphics renderers, half-assed AI.
    Good for you...

    ... so your system _does_ represent your penis?
    You honestly think that I am going to respond to that half-assed troll attempt of yours?



  • mendieta
    replied
    Is it me, or has this thread been hijacked?

    I think we should all relax a bit; it is amazing how much talent there is around here, posting very interesting questions and answers. Insulting others, though, is not a very good way of portraying ourselves.

    Cheers!



  • elanthis
    replied
    We'll see about that after I've responded...
    The problem here is that you're someone without a clue, and I _do this for a freaking living_.


    The monitor? What monitor? Even my flatscreen does 75 fps... Let me go ahead and facepalm you back already... and even then, it updates the frame every 1/60th of a second.
    If your monitor is 75 Hz, then the game will run at 75 FPS with vsync. Simple as that.

    The physics will run at 60hz because it freaking has to stay at a constant rate, and that rate has to be the same for all players. If you don't understand why, don't argue the point. If they run at different rates, the simulation would be inconsistent across clients.


    You never played it, yet you do seem to know whether it is fun or not. Secondly, there is no jitteriness, only a state of movement that is constantly restricted to time intervals of 1/60th of a second.
    I don't care for FPS in general. You shoot shit, move, shoot shit, move. /yawn

    And alright, no jitteriness. Yes, physics are updated at a fixed rate. Turning off vsync isn't going to change that, though, so your argument is still invalid.

    Game physics are just part of a game loop, if done right, which limits them to the frame rate. Game physics can be implemented in an endless number of ways, actually, so there is nothing to understand here.
    There are many ways, but only one generally correct approach that works on today's breed of hardware. In some theoretical future where we can use true continuous physics simulation, physics wouldn't be bound to a fixed rate. They are today.
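    Roughly, that one approach is the classic fixed-step-plus-accumulator loop. Bare-bones sketch (not any particular engine's code; World, step and render are made-up placeholders):

    Code:
    // Fixed-timestep game loop: rendering runs at whatever rate vsync allows,
    // physics always advances in constant 1/60 s steps via a time accumulator.
    #include <chrono>

    struct World {
        double pos = 0.0, vel = 1.0;
        void step(double dt) { pos += vel * dt; }   // placeholder physics: constant velocity
    };

    static void render(const World &) { /* placeholder: draw the world, swap buffers here */ }

    int main()
    {
        using clock = std::chrono::steady_clock;
        const double dt = 1.0 / 60.0;               // fixed physics step (16.666 ms)
        double accumulator = 0.0;

        World world;
        auto previous = clock::now();

        for (int frame = 0; frame < 600; ++frame) { // a few seconds' worth of frames
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previous).count();
            previous = now;

            // However long the frame took, physics only advances in dt-sized steps,
            // so the simulation behaves identically at 30, 60 or 75 rendered fps.
            while (accumulator >= dt) {
                world.step(dt);
                accumulator -= dt;
            }
            render(world);
        }
        return 0;
    }

    In a real engine you also interpolate the rendered state by accumulator/dt so the picture stays smooth between physics steps, but the point stands: the physics rate is fixed, the frame rate is not.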

    Yes, there are games that do without and sync with frames, and you can tell very easily. If the frame rate drops, then not only does the game look a little choppy, the entire simulation becomes choppy, and more and more errors are introduced. The likelihood of tunneling increases and simple object interactions become highly unstable.

    Knowing that there are games that do not limit themselves to individual physics frames, but just look at how much time has passed since the last rendered frame, makes you wrong. Kaboom, baby, in your face(palm)!
    Said games are poorly coded and the bugs become apparent. It works for something like Asteroids; it does not work for today's physics-intensive games.


    But if the frames are limited, then explain to me how complex scenes can totally fsck up the fps. Oh wait, I know, because it got made by 'experienced devs'? Rofl...
    Because complex scenes end up taking more than 16.666ms to perform all physics, AI, sound, and graphics processing, causing it to miss the vsync and hence drop below the required number of frames. Doesn't matter how good you are as a dev, if you throw too much data to be processed at an engine, it is going to take longer than you want. Simple as that.

    I've been playing games since Pong and Pac-Man in black and white. Believe me... I do play other games than just Quake clones...
    So I have. And hence I have seen first hand all the shit that goes wrong when you half-ass your engine, have half-assed physics, half-assed graphics renderers, half-assed AI.


    No, the trolling and ignorance and idiotic assumptions did. If I were to care about penis size, I would have put my system specs in my signature (if it is possible to have one on this forum)...
    ... so your system _does_ represent your penis?



  • V!NCENT
    replied
    Originally posted by perpetualrabbit View Post
    Also I find it hard to believe that gamers have such enormously fast responses that a lag of 1/120th or even 1/60th of a second makes a difference.
    If you know your ping, then you can calculate the true milliseconds you lag behind, which lets you predict the next move.

    But let's not forget the following: lag on top of lag. Did you ever watch sports? Sometimes 1/1000th of a second makes the difference.



  • V!NCENT
    replied
    Originally posted by BlackStar View Post
    There, fixed that for ya.
    Ehm... Why? There aren't any other games out there with competitive multiplayer gameplay that relies on response time?

    Personally, I find tearing extremely distracting. My solution to the issue you describe? Fire up my trusty CRT which does 1024x768@120Hz, modify the game to process at 120fps (if it offers such an option) and enable vsync. Same result, only without visual artifacts from viewing two frames on screen at the same time.
    Errrr... right... That doesn't have anything to do with updating the framebuffer when a frame has only been half rendered yet...

    Originally posted by elanthis View Post
    /facepalm
    We'll see about that after I've responded...

    Nope. The monitor itself is physically incapable of displaying faster than 60 FPS. If you have a monitor with a higher refresh rate, the sync-to-vblank will scale up to it, of course.
    The monitor? What monitor? Even my flatscreen does 75 fps... Let me go ahead and facepalm you back already... and even then, it updates the frame every 1/60th of a second. Half of that finished frame could be there, or a newer one. It is just time intervals. There is no frame limiter in a game engine that embeds itself into the monitor, idiot...

    Doom 3 likely had other issues resulting in its jitteriness (I never played it, because I like games that are fun, personally).
    You never played it, yet you do seem to know whether it is fun or not. Secondly, there is no jitteriness, only a state of movement that is constantly restricted to time intervals of 1/60th of a second.

    You probably didn't know that, which is not surprising, because very few people other than experienced game devs actually understand how game physics work.
    Game physics are just part of a game loop, if done right, which limits them to the frame rate. Game physics can be implemented in an endless number of ways, actually, so there is nothing to understand here. Knowing that there are games that do not limit themselves to individual physics frames, but just look at how much time has passed since the last rendered frame, makes you wrong. Kaboom, baby, in your face(palm)!

    Suffice to say, you have to keep physics at a fixed time increment or the simulation starts to do very crazy things when the frame rate varies often in complex scenes.
    Lol, no... It all depends on how one constructs the game loop. 'Experienced devs' had better have good experience. You also have experienced idiots. Experience doesn't count; skill does. But if the frames are limited, then explain to me how complex scenes can totally fsck up the fps. Oh wait, I know, because it got made by 'experienced devs'? Rofl...
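    To be clear, the kind of loop I am talking about is basically this (just a sketch with made-up names, not any specific game): no fixed tick, the update simply gets however much time the last frame took.

    Code:
    // Variable-timestep loop: feed the elapsed frame time straight into the update.
    #include <chrono>

    struct World {
        double pos = 0.0, vel = 1.0;
        void update(double dt) { pos += vel * dt; }  // placeholder physics
    };

    static void render(const World &) { /* placeholder: draw the world */ }

    int main()
    {
        using clock = std::chrono::steady_clock;
        World world;
        auto previous = clock::now();

        for (int frame = 0; frame < 600; ++frame) {
            auto now = clock::now();
            const double dt = std::chrono::duration<double>(now - previous).count();
            previous = now;

            world.update(dt);   // how much is simulated depends on how long the frame took
            render(world);
        }
        return 0;
    }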

    It might show some portion of the screen sooner,
    Which is the entire point, smartass... And once again, the game engine's frame limiter isn't embedded in 'the' monitor...

    Sounds like you're playing the wrong games. But then, it also sounds like you don't consider anything other than Quake and its infinite number of clones and spin-offs to be a game. Good for you, I guess.
    I've been playing games since Pong and Pac-Man in black and white. Believe me... I do play other games than just Quake clones...

    The small penis remark got your goat, didn't it? Sorry for your troubles, dude.
    No, the trolling and ignorance and idiotic assumptions did. If I were to care about penis size, I would have put my system specs in my signature (if it is possible to have one on this forum)...



  • RobbieAB
    replied
    Originally posted by perpetualrabbit View Post
    Also I find it hard to believe that gamers have such enormously fast responses that a lag of 1/120th or even 1/60th of a second makes a difference. Gaming relies on normal cognitive reaction times, which are not faster than about 0.1 seconds for visual input. Wikipedia says the average minimum is 0.180 seconds. Some gamers may be pretty fast, maybe twice as fast, or four times as fast (0.045). But I find it really hard to believe that there are gamers who can respond to a visual stimulus reproducibly within the refresh time of a normal monitor (0.017s). See http://www.humanbenchmark.com/tests/...time/stats.php

    So I fail to see why a maximum lag of 1/60th of a second, and even that can be partially mediated by extrapolation techniques, matters.

    But no, I don't know much about temporal antialiasing. I do know about math and physics.
    "I don't need to run faster than the tiger, I just need to run faster than you"

    It's not a reaction time of 1/60th of a second that's at stake; it's the fact that you start reacting 1/120th of a second before your opponent. Assuming we both have the same reaction time, whoever starts reacting first wins.
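    Concretely, with made-up but plausible numbers: say we both need 180 ms to react, and your display shows the new frame 8.3 ms (one 120 Hz refresh) before mine does. You start your 180 ms at t = 0 and fire at t = 180 ms; I start at t = 8.3 ms and fire at t = 188.3 ms. Identical reflexes, and you still win every single time.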



  • perpetualrabbit
    replied
    Originally posted by V!NCENT View Post
    It cannot. You still have to render two frames so you can overlap them, which is true motion blur.
    Real-life blur is the sum of an infinite number of images with infinitesimally small time steps. In other words: an integral over time.
    But integration can be made sufficiently precise with, for instance, Runge-Kutta methods, splines, or many other techniques. Extrapolation is also possible; it should be possible to extrapolate 1/60th of a second.
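    To make that concrete, here is a toy sketch (made-up numbers, a single 1-D "object" standing in for the whole image) of both ideas at once: approximating the blur integral over one frame by a finite sum, and extrapolating the position 1/60th of a second ahead from the current velocity.

    Code:
    // Box-filter approximation of the motion-blur integral over one 1/60 s frame,
    // plus a linear extrapolation one frame into the future.
    #include <cstdio>

    int main()
    {
        const double dt  = 1.0 / 60.0;   // one display frame
        const double pos = 10.0;         // current position (arbitrary units)
        const double vel = 3.0;          // current velocity (units per second)

        // Extrapolation: predict where the object will be one frame from now.
        const double predicted = pos + vel * dt;

        // Blur: (1/N) * sum of the image sampled at N sub-instants of the frame,
        // a finite-sum (midpoint rule) stand-in for the integral over [t, t + dt].
        const int N = 8;
        double blurred_centre = 0.0;
        for (int i = 0; i < N; ++i) {
            const double t = (i + 0.5) / N * dt;     // midpoint of each sub-interval
            blurred_centre += (pos + vel * t) / N;   // sample, weighted by 1/N
        }

        std::printf("predicted = %.4f, blurred centre = %.4f\n", predicted, blurred_centre);
        return 0;
    }

    With real rendering you would accumulate whole shaded frames instead of a single coordinate, and higher-order schemes (Runge-Kutta, splines) replace the straight line when the motion curves.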

    Also I find it hard to believe that gamers have such enormously fast responses that a lag of 1/120th or even 1/60th of a second makes a difference. Gaming relies on normal cognitive reaction times, which are not faster than about 0.1 seconds for visual input. Wikipedia says the average minimum is 0.180 seconds. Some gamers may be pretty fast, maybe twice as fast, or four times as fast (0.045). But I find it really hard to believe that there are gamers who can respond to a visual stimulus reproducibly within the refresh time of a normal monitor (0.017s). See http://www.humanbenchmark.com/tests/...time/stats.php

    So I fail to see why a maximum lag of 1/60th of a second, and even that can be partially mediated by extrapolation techniques, matters.

    But no, I don't know much about temporal antialiasing. I do know about math and physics.

