We need a Campaign for OpenGL Patent Exemptions for OSS, namely Mesa, and Linux


  • mirv
    replied
    Take a chill pill guys and/or girls.
    Minimum framerate does matter - it is important. That's just plain old engineering, statistics be damned. If it's getting low, they're going to want to know where and why, because there's no point working on a section of the driver that is already nice (stable) and speedy.

    @artivision:
    Generally, some of the more high-end developers might like lower-level, close-to-the-metal access, but they definitely don't want to have to program separate code paths for separate devices, so you'll still need some kind of API layer. If you're going by the words of a certain AMD employee, he clarified his position later on.

  • deanjo
    replied
    Here are two more runs, Q. Same system, same drivers, same version of the app.
    Does this now mean that Windows 7 / DX11 / 32-bit blobs are superior?

    Because according to your criteria that would be the case. And if that is the case,
    how did you ever conclude that Linux 64 / OpenGL were superior?




  • deanjo
    replied
    Originally posted by Qaridarium
    there is no "LOL" and yes it is! for sure!

    all that doesn't matter. if level loading makes the system stutter, then your harddrive/ssd/ramdisk is bad; if a background app makes the system stutter, the system is more than bad for gaming; and if there is an I/O bottleneck, the system is just bad for gaming at all. all that stuff just says one thing: only min-fps matters.
    if a system has the best of the best hardware, the min-fps is high; if a system is crap, the min-fps is low; and if the hardware is the same, then only the software is good or bad.

    means windows is bad.

    and again, only min-fps matters. all other stuff is bullshit!

    no, the openGL implementation is better because of the higher result on the only thing that matters, the min-fps.

    there is no more to talk about.
    No, it doesn't mean anything of the sort. It means that on one run there was some reason for a low. The mere fact that runs on the same system, on the same OS, produce non-identical results is a testament to that. It is sad that you can't understand the obvious.

  • deanjo
    replied
    Originally posted by artivision View Post
    I just said before, developers want direct access to hardware, not APIs like DX11!!!
    Some developers do, but not even close to the majority. Most developers would cringe at the idea of going back to writing device-specific code.

  • artivision
    replied
    The Truth

    I just said before, developers want direct access to hardware, not APIs like DX11!!! And I mean even to the ASIC-specific or fixed functions!!! OpenCL has the capability to utilise these functions, even on an FPGA!!! Then they could produce, for example, the Unreal Engine 4 with all the FX made of custom designs and custom algorithms - not shaders and triangles but rays and spheres, for example!!!

  • deanjo
    replied
    There is also the fact that my system uses some pretty damn old, separate drives for the Windows install, which may or may not impact results slightly, or any other variable (who knows; for all we know Adobe or MS or something else may have been running an update checker in the background). The differences are far too minute to say Linux 64 is faster than Win 32 or Linux 32. Hell, even a rogue Flash ad in the browser that was open could account for the low.

  • deanjo
    replied
    Originally posted by crazycheese View Post
    To neutralize the effect of the min or max outliers, the test is run and sampled a multitude of times, then the calculated weighted mean (instead of the arithmetic mean) is taken as trustworthy.

    Which means, simplified, deanjo should run the tests several times and record the min values each time, then do a weighted mean calculation as well as provide the highest min, and call that the end result.
    Exactly; all these runs show is that at some point, for whatever reason, the min and max differed. I could run the same bench on the same OS, on the exact same machine, a hundred times, and no two runs would be exactly the same, which negates being able to say that A is greater than B. I can, however, say with a great degree of comfort that there is virtually no accountable difference in performance between them; they are on par given the variance of the system configurations.
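
    To make the run-to-run noise argument concrete, here is a minimal sketch with entirely hypothetical min-FPS numbers (not deanjo's actual results); the "difference must exceed the combined spread" rule of thumb is also just an assumption for illustration, not a rigorous test.

    Code:
    # Hypothetical min-FPS readings from repeated runs of the same benchmark.
    # The numbers are made up for illustration; only the method matters.
    from statistics import mean, stdev

    linux64_min_fps = [60, 58, 61, 59, 57, 60]
    win32_min_fps   = [59, 61, 58, 60, 56, 60]

    def summarize(samples):
        """Return (mean, standard deviation) of a list of min-FPS readings."""
        return mean(samples), stdev(samples)

    m_a, s_a = summarize(linux64_min_fps)
    m_b, s_b = summarize(win32_min_fps)

    # Crude rule of thumb: only claim a winner when the gap between the means
    # is larger than the run-to-run scatter of both configurations combined.
    if abs(m_a - m_b) > (s_a + s_b):
        print("Measurable difference: %.1f vs %.1f min-FPS" % (m_a, m_b))
    else:
        print("Within run-to-run noise: %.1f +/- %.1f vs %.1f +/- %.1f"
              % (m_a, s_a, m_b, s_b))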

  • crazycheese
    replied
    To neutralize the effect of the min or max outliers, the test is run and sampled a multitude of times, then the calculated weighted mean (instead of the arithmetic mean) is taken as trustworthy.

    Which means, simplified, deanjo should run the tests several times and record the min values each time, then do a weighted mean calculation as well as provide the highest min, and call that the end result.
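
    A minimal sketch of that procedure, assuming the weights are simply each run's length in seconds (the post does not say what the weights should be, so that choice and all the numbers are hypothetical):

    Code:
    # Weighted mean of per-run min-FPS values, as described above.
    # Run durations and FPS figures are made up; the weighting scheme
    # (weight = run duration) is an assumption, since the post leaves it open.
    runs = [
        # (min_fps, run_duration_seconds)
        (57, 60),
        (60, 60),
        (61, 120),
        (59, 120),
    ]

    weighted_mean_min = (sum(fps * dur for fps, dur in runs)
                         / sum(dur for _, dur in runs))
    highest_min = max(fps for fps, _ in runs)

    print("weighted mean of min-FPS: %.1f" % weighted_mean_min)
    print("highest recorded min-FPS: %d" % highest_min)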

    I don't think it will change anything, though. The API is on par with DX, unlike what elanthis said.

    His other points - whether DX is way easier to program for, or whether OpenGL offers no multithreaded rendering - are still not disproved.

    Meanwhile Microsoft gets hate++. Paying money to a company (without choice, mind you) that uses it to sabotage cross-platform standards and improve its own (so you have even less choice) is plain retarded. Way to go, Bill & Steve.

  • deanjo
    replied
    Originally posted by Qaridarium
    the only true point is that the min-fps rate is the only point that matters.
    Lol, again, it isn't. Do you know if that minimum happened during level loading? Do you know if that minimum was caused by an unrelated background app? Maybe it was an I/O bottleneck on another subsystem at that point in time.

    Min / max mean nothing of value, as there is absolutely no indication of how long the dip lasted or what was happening on the rest of the system.


    and this fact makes openGL/linux a winner.
    The only fact you can take from this is that Nvidia's OpenGL implementation is just as good as their DX one.
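
    To illustrate why a bare minimum says nothing about duration, here is a small sketch with entirely made-up frame-time traces: two runs with the same minimum FPS, but very different amounts of time spent below a 60 FPS threshold.

    Code:
    # Two hypothetical per-frame FPS traces with the same minimum value.
    # In one, the dip is a single frame (e.g. a level load); in the other,
    # the run sits at the minimum for a long stretch.
    single_dip    = [30] + [100] * 999         # one slow frame
    sustained_low = [30] * 300 + [100] * 700   # 300 slow frames

    def seconds_below(trace_fps, threshold=60):
        """Total time (in seconds) spent on frames rendered below `threshold` FPS."""
        return sum(1.0 / fps for fps in trace_fps if fps < threshold)

    for name, trace in (("single dip", single_dip), ("sustained low", sustained_low)):
        print("%-13s min=%d fps, time below 60 fps: %.2f s"
              % (name, min(trace), seconds_below(trace)))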

  • deanjo
    replied
    Originally posted by Qaridarium
    on your example the average of 108 fps makes it look like the game runs well in the real world... but in the real world the game runs like shit!

    so now we do a real good min-fps example; the min-fps should be 60 fps.

    60 fps
    100 fps
    100 fps
    100 fps
    100 fps
    100 fps
    100 fps
    100 fps
    100 fps
    140 fps
    ----------
    1000 frames / 10 seconds = 100 fps average

    the average fps is lower than in your example, but because of the min-fps the game runs very, very well in all situations.

    means your average fps is a lie and only min-fps matters.
    lol, you really have no clue


    60 fps
    60 fps
    60 fps
    100 fps
    100 fps
    100 fps
    100 fps
    100 fps
    100 fps
    140 fps
    ----------
    920 frames / 10 seconds = 92 fps average. The min and max are the same as in your example, so what number reflects that your example is giving better performance?
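
    As a quick check of the arithmetic in both examples (the per-second FPS figures are exactly the ones quoted above), a few lines like these reproduce the totals and show that min and max alone cannot distinguish the two runs:

    Code:
    # Per-second FPS samples from the two examples quoted above
    # (each entry is the frame count for one second of a 10-second run).
    qaridarium_run = [60] + [100] * 8 + [140]        # 1000 frames total
    deanjo_run     = [60] * 3 + [100] * 6 + [140]    # 920 frames total

    for name, run in (("Qaridarium", qaridarium_run), ("deanjo", deanjo_run)):
        total_frames = sum(run)
        avg_fps = total_frames / len(run)
        print("%-10s min=%d max=%d frames=%d avg=%.0f fps"
              % (name, min(run), max(run), total_frames, avg_fps))

    # Both runs share min=60 and max=140; only the average (100 vs 92 fps)
    # separates them, which is the point being made above.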
