
We need a Campaign for OpenGL Patent Exemptions for OSS, namely Mesa, and Linux


  • #61
    To neutralize the effect of outlying min or max values, the test is run and sampled a multitude of times, and then the calculated weighted mean (instead of the arithmetic mean) is taken as trustworthy.

    Which means, simplified, that deanjo should run the tests several times, record the min value each time, then calculate the weighted mean, also provide the highest min, and call that the end result.

    I don't think it will change the outcome, though. The API is on par with DX, unlike what elanthis said.

    His other points - whether DX is way easier to program, or whether OpenGL offers no multithreaded rendering - are still not disproved.

    Meanwhile Microsoft earns ever more hate. Paying money (without any choice, mind you) to a company that uses it to sabotage cross-platform standards and improve its own (so you have even less choice) is plain foolish. Way to go, Bill & Steve.
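
    A minimal sketch of the weighted-mean aggregation described above, assuming each run reports a single min-fps value; the inverse-distance-from-the-median weighting is an assumption, since the post does not say which weights to use:

        # Aggregate min-fps samples from repeated runs with a weighted
        # mean instead of a plain arithmetic mean; the weighting and the
        # sample values are illustrative assumptions.
        from statistics import median

        def weighted_min_fps(samples):
            med = median(samples)
            # one-off outliers far from the median get less weight
            w = [1.0 / (1.0 + abs(s - med)) for s in samples]
            return sum(wi * s for wi, s in zip(w, samples)) / sum(w)

        runs = [41.2, 39.8, 12.3, 40.5, 38.9]     # min fps of five runs
        print(weighted_min_fps(runs), max(runs))  # weighted mean, highest min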

    • #62
      Originally posted by crazycheese View Post
      To neutralize the effect of outlying min or max values, the test is run and sampled a multitude of times, and then the calculated weighted mean (instead of the arithmetic mean) is taken as trustworthy.

      Which means, simplified, that deanjo should run the tests several times, record the min value each time, then calculate the weighted mean, also provide the highest min, and call that the end result.
      Exactly; all these runs show is that at some point, for whatever reason, the min and max differed. I could run the same bench on the same OS, on the same exact machine, a hundred times, and no two runs would be exactly the same, which rules out saying that A is greater than B. I can, however, say with a great degree of comfort that there is virtually no accountable difference in performance between them, and that they are on par given the variance of the system configurations.
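
      A small sketch of that "on par given the variance" reasoning: treat two configurations as indistinguishable unless their mean fps differ by more than the combined run-to-run spread. The numbers and the one-sigma threshold are assumptions:

          # Compare two sets of benchmark runs; call them "on par"
          # when the gap is smaller than the run-to-run noise.
          from statistics import mean, stdev

          win32_fps   = [57.1, 58.4, 56.9, 59.0, 57.7]   # made-up runs
          linux64_fps = [58.0, 57.2, 59.1, 56.8, 58.3]

          gap   = abs(mean(win32_fps) - mean(linux64_fps))
          noise = stdev(win32_fps) + stdev(linux64_fps)
          print("on par" if gap < noise else "measurably different")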

      • #63
        There is also the fact that my system uses some pretty damn old separate drives for the Windows install, which may or may not impact results slightly, or any other variable (who knows, for all we know Adobe or MS or something else may have been running an update checker in the background). The differences are far too minute to say Linux 64 is faster than Win 32 or Linux 32. Hell, even a rogue Flash ad in the browser that was open could account for the low.

        • #64
          The Truth

          As I said before, developers want direct access to the hardware, not APIs like DX11!!! And I mean even to the ASIC's specific or fixed functions!!! OpenCL has the capability to utilize those functions even on an FPGA!!! Then they can produce, for example, the Unreal Engine 4 with all the FX made of custom designs and custom algorithms - not shaders and triangles, but rays and spheres, for example!!!
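
          To make the "rays and spheres" point concrete, here is a minimal sketch of the per-ray sphere intersection test that a custom compute kernel (OpenCL or otherwise) would evaluate instead of rasterizing triangles; the function and numbers are purely illustrative and not tied to any real engine:

              # Ray-sphere hit test: the kind of custom algorithm a
              # compute kernel could run instead of triangle rasterization.
              import numpy as np

              def ray_hits_sphere(origin, direction, center, radius):
                  # direction is assumed to be unit length
                  oc = origin - center
                  b = 2.0 * np.dot(direction, oc)
                  c = np.dot(oc, oc) - radius * radius
                  return b * b - 4.0 * c >= 0.0   # discriminant test

              o = np.array([0.0, 0.0, 0.0])
              d = np.array([0.0, 0.0, -1.0])
              print(ray_hits_sphere(o, d, np.array([0.0, 0.0, -5.0]), 1.0))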

          • #65
            Originally posted by artivision View Post
            As I said before, developers want direct access to the hardware, not APIs like DX11!!!
            Some developers do, but not even close to a majority. Most developers would cringe at the idea of going back to writing device-specific code.

            • #66
              Originally posted by Qaridarium
              there is no "LOL", and yes it is! For sure!

              All of that doesn't matter: if level loading makes the system stutter, then your hard drive/SSD/ramdisk is bad; if a background app makes the system stutter, the system is more than bad for gaming; and if there is an I/O bottleneck, the system is just bad for gaming at all. All that stuff says just one thing: only min fps matters.
              If a system has the best of the best hardware, the low fps is high; if a system is crap, the low fps is low; and if the hardware is the same, then only the software is good or bad.

              Which means Windows is bad.

              And again, only min fps matters. All the other stuff is bullshit!

              No, the OpenGL implementation is better because it scores higher on the only thing that matters, the low fps.

              There is nothing more to talk about.
              No, it doesn't mean anything of the sort. It means that on one run there was some reason for a low. The mere fact that runs on the same system, on the same OS, produce non-identical results is a testament to that. It is sad that you can't understand the obvious.

              • #67
                Here are two more runs, Q. Same system, same drivers, same version of the app.
                Does this now mean that Windows 7 / DX11 / 32-bit blobs are superior?

                Because according to your criteria, that would be the case. And if that is the case,
                how did you ever conclude that Linux 64 / OpenGL was superior?




                • #68
                  Take a chill pill, guys and/or girls.
                  Minimum framerate does matter - it is important. That's just plain old engineering, statistics be damned. If it's getting low, driver developers are going to want to know where and why, because there's no point working on a driver section that is already nice (stable) and speedy.

                  @artivision:
                  Generally, some of the more high-end developers might like lower-level, close-to-the-metal access, but they definitely don't want to have to program separate code paths for separate devices, so you'll still need some type of API layer. If you're going by the words of a certain AMD employee, he clarified his position later on.

                  • #69
                    Originally posted by mirv View Post
                    Take a chill pill, guys and/or girls.
                    Minimum framerate does matter - it is important. That's just plain old engineering, statistics be damned. If it's getting low, driver developers are going to want to know where and why, because there's no point working on a driver section that is already nice (stable) and speedy.
                    Only if it is replicable on a fairly consistent basis and noticeably affects gameplay. That is something that reporting a single low extreme does not tell you. You have to look at how frequently those low frame rates occur, and that will usually be reflected in the overall fps.
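
                    A rough sketch of that idea - reporting how often lows occur as a "1% low" figure next to the average and the absolute minimum; the frame-time numbers are made up for illustration:

                        # Report avg, absolute min and "1% low" fps from a
                        # list of frame times. The data here is made up.
                        frame_times_ms = [16.7] * 990 + [40.0] * 10
                        fps = sorted(1000.0 / t for t in frame_times_ms)
                        n = len(fps)
                        print("avg fps:", sum(fps) / n)
                        print("min fps:", fps[0])
                        print("1% low :", fps[max(0, n // 100 - 1)])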

                    • #70
                      Originally posted by deanjo View Post
                      Only if it is replicable on a fairly consistent basis and noticeably affects gameplay. That is something that reporting a single low extreme does not tell you. You have to look at how frequently those low frame rates occur, and that will usually be reflected in the overall fps.
                      Repeated occurrence at a particular point, gameplay or not, is important. What affects one game slightly may affect another rather more - or worse for these companies, it could be something affecting their workstation software targets.
                      I'd also say that consistency of framerate matters more for gameplay than anything else, but that's getting too much into a tangent (much like the whole thread, I suppose).
                      So to bring things back a little - what if that minimum framerate only occurred because they couldn't use floating-point buffers, or S3TC, or something similar?
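
                      A small sketch of that consistency point: flag where frame times spike past twice the median, so a repeated spike at the same spot stands out. The 2x threshold and the numbers are assumptions:

                          # Flag frames whose time exceeds 2x the median
                          # frame time; the values here are made up.
                          from statistics import median

                          t_ms = [16.7, 16.9, 16.6, 50.2, 16.8, 49.9]
                          cut = 2.0 * median(t_ms)
                          spikes = [i for i, t in enumerate(t_ms) if t > cut]
                          print("stutter at frames:", spikes)  # [3, 5]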
