We need a Campaign for OpenGL Patent Exemptions for OSS, namely Mesa, and Linux


  • deanjo
    replied
    In fact, here are some updated results using Heaven 2.5:

    Total deviation between the 4 tests (Win 7 DX11, Win 7 OpenGL, Linux 32-bit, and Linux 64-bit) is 0.5%, which falls well within negligible-difference territory: no two results will ever be identical because of things like background apps/services, spread-spectrum deviation, clock drift, etc.



  • deanjo
    replied
    Originally posted by Qaridarium
    you cannot compare a 32-bit shitty app against a 64-bit native app.

    And FYI, there is virtually no difference with Unigine Heaven whether you run the 32-bit binary or the 64-bit binary, or whether you run it in a 64-bit or 32-bit environment.

    All in all, Q, your comments are meaningless and full of bullshit as well.



  • deanjo
    replied
    Originally posted by Qaridarium
    ok, I was wrong in my reading about the 32-bit Windows, but "The windows version is only a 32-bit binary." means the same shit the other way around.

    you cannot compare a 32-bit shitty app against a 64-bit native app.
    READ AGAIN!

    There are two Windows runs: one with the DX renderer and one OpenGL run. Compare those two if you think running the benchmark in 64-bit vs 32-bit makes such a big difference. Ignore the Linux run! The comparison is perfectly valid. There is no "Unigine's Heaven demo runs at 60% of the speed in OpenGL mode compared to Direct3D 11." Again, the statement is bullshit.
    Last edited by deanjo; 10 April 2011, 09:59 AM.



  • mirv
    replied
    Originally posted by deanjo View Post
    It is not benched with Windows 32-bit. Read, man, read, or get your eyes checked: it is run on Windows 7 64-bit. The Windows version is only a 32-bit binary; there is no 64-bit Windows binary of the Heaven benchmark. Furthermore, look at the two Windows runs: same OS, both running the same binary, with one DX run and one GL run. So, as usual, Q, you have no idea what you are talking about.
    I think Q was referring to a 32-bit build being used on Windows vs a 64-bit build being used under Linux. He's right that you can't compare those particular two.
    You can, of course, compare 32-bit Windows DX11 against 32-bit Windows OpenGL. Performance-wise, the two should be fairly similar (there's practically no difference in performance between the two APIs on a desktop system today), assuming each code path has been written sanely.
    Note that I said desktop; I'm leaving areas such as smartphones, consoles, and embedded systems out of this.



  • deanjo
    replied
    Originally posted by Qaridarium
    but why do you benchmark Windows 32-bit vs Linux 64-bit?
    overall, 64-bit wins in 80% of cases ...
    so your benchmark is useless.

    and your "bullshit" writing is just fake, because you cheat in your own benchmark and then argue on that faked basis.
    It is not benched with Windows 32-bit. Read, man, read, or get your eyes checked: it is run on Windows 7 64-bit. The Windows version is only a 32-bit binary; there is no 64-bit Windows binary of the Heaven benchmark. Furthermore, look at the two Windows runs: same OS, both running the same binary, with one DX run and one GL run. So, as usual, Q, you have no idea what you are talking about.
    Last edited by deanjo; 09 April 2011, 10:24 PM.



  • XorEaxEax
    replied
    Originally posted by elanthis View Post
    Let's just say I'm totally wrong and filled with crack and you're all right and are geniuses who know everything,
    I can TOTALLY buy that

    And yes, for all the fun in arguing, you are right that it's quite fruitless in this particular case.

    Still, if companies like NVidia and ATI are having problems creating stable OpenGL drivers, then they themselves are to blame, since they are actually part of Khronos and thus in a position to change the situation: they help define the API which they later have to implement (at least in theory, given that they could be outvoted). So if you are right, then they (NVidia, ATI, etc.) are either incompetent or being blocked by incompetent/malicious members?



  • deanjo
    replied
    Originally posted by elanthis View Post
    Unigine's Heaven demo runs at 60% of the speed in OpenGL mode compared to Direct3D 11.
    Bullshit.



  • elanthis
    replied
    Ugh. I can't believe how much time I've spent arguing this. Something is wrong with me that I feel the need to argue on the Internet with people who have no impact on anything I actually do.

    Let's just say I'm totally wrong and filled with crack and you're all right and are geniuses who know everything, and I can get back to writing real code and you guys can get back to whatever it is you do.

    Out.



  • mirv
    replied
    The multi-threaded nature of D3D is entirely software based. It's hidden behind the implementation, sure, but it's still software based. Interaction with the video card is serial in nature, whichever API you're using.
    This isn't an argument for or against anything; I just wanted to make sure that point was understood.
    Hmm, this thread has gone quite off-topic. I suppose that happens whenever OpenGL is mentioned (or <insert desktop environment> too).



  • elanthis
    replied
    Originally posted by XorEaxEax View Post
    Khronos is, as far as I know, nothing but a consortium of industry players (including NVidia and ATI) who submit API suggestions for consideration/voting; as for an 'official' test suite, that would be Mesa afaik.
    So? Why does that mean they can't write code and test suites?

    Again, OS X, Linux, and mobile devices (apart from Microsoft's offerings) rely on OpenGL; if it were as bad as you describe, we would have seen a cross-platform replacement by now.
    Bullshit. It's taken Linux years to get a usable OpenGL implementation working. OS X has an even worse one.

    Writing a new API takes effort and requires a level of knowledge and direct vendor access that some random hodgepodge of Open Source coders simply do not have.

    And yes, Linux is suffering under OpenGL. You can't even get Compiz or Firefox to reliably use it, even on the proprietary drivers.

    You're acting like everything is working just fine when this very site has posted countless articles about how every single fucking app that's tried to use OpenGL has run into numerous problems doing so, to the point of just disabling it half the damn time!

    All these sweeping statements with nothing to back them up; please point me to some objective comparisons regarding bugs in NVidia OpenGL vs Direct3D drivers.
    Why? You've ignored every link and example I've already given. Go do your own research and stop wasting my time.

    Originally posted by mirv
    The core OpenGL functionality of the big players (NVIDIA and ATI/AMD) is stable. It has to be. Most of the bugs that occur with desktop use are in handling outside the core (typically some X integration). By core, of course, I mean the defined spec. Both companies have had outlying issues on occasion, but they're about as rare as D3D issues now.
    Sorry, this simply isn't true.

    NVIDIA's own test programs can trigger bugs that cause mis-rendering with features as core (and essential to modern graphics) as FBOs. It's that bad.

    No, despite Xor's idiotic binary-logic arguments, that doesn't imply that it's impossible to use FBOs at all, period. It just means that you can, and often will, run into totally weird bugs that sap away hours or days of your time while you try to sort out whether the platform is actually behaving sanely or if it's just a bug in your app, and then you spend even more time trying to figure out workarounds that maintain acceptable performance and don't trigger the bugs.
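
    For reference, here is roughly what that "core" path looks like. This is only a minimal sketch (assuming GLFW and GLEW for context creation and extension loading, with error handling trimmed) of creating an FBO and checking its completeness; passing the spec-defined completeness check is exactly the kind of thing that should guarantee correct rendering, and the complaint above is that in practice it sometimes doesn't.

    Code:
    // Minimal FBO setup and completeness check (sketch; GLFW + GLEW assumed).
    #include <GL/glew.h>
    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main() {
        if (!glfwInit()) return 1;
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);        // offscreen context is enough here
        GLFWwindow* win = glfwCreateWindow(64, 64, "fbo-check", nullptr, nullptr);
        if (!win) return 1;
        glfwMakeContextCurrent(win);
        if (glewInit() != GLEW_OK) return 1;

        // Color attachment: a plain RGBA8 texture.
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        // The framebuffer object itself, with the texture as its color attachment.
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        // Spec-defined completeness check: "complete" means the driver claims
        // it can render into this configuration.
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        std::printf("FBO status: 0x%x (%s)\n", status,
                    status == GL_FRAMEBUFFER_COMPLETE ? "complete" : "incomplete");

        glDeleteFramebuffers(1, &fbo);
        glDeleteTextures(1, &tex);
        glfwTerminate();
        return 0;
    }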

    And that's stupid. And that's why many developers don't even bother with OpenGL support anymore, because it's simply easier to use D3D and get 95% of the market for 5% of the engineering cost.

    I'm also going to note something else: neither one has hardware accelerated multi-threaded rendering. D3D does not have it - it's done in software.
    This isn't true, at least for D3D 11, although it's possibly just stating something different from what is meant.

    With D3D, what "multi-threaded rendering" means is that you can create and manage buffers on other threads, and that you can compose rendering commands to be submitted to the GPU in those threads. You can let your rendering code build up independent batches in each thread efficiently and then those can be submitted to the GPU (serially) by the main thread. OpenGL doesn't allow this because every OpenGL call uses a hidden magic context and because of the limitations that imposes on buffer mapping.
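
    As a rough illustration of that model, here is a minimal sketch using D3D11's deferred contexts: a worker thread records commands into an ID3D11CommandList, and the main thread's immediate context plays the list back serially. The tiny texture/clear setup exists only to give the worker something to record and isn't part of the discussion above.

    Code:
    // Sketch: recording D3D11 commands on a worker thread via a deferred context.
    #include <d3d11.h>
    #include <thread>
    #pragma comment(lib, "d3d11.lib")

    int main() {
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* immediate = nullptr;
        // No swap chain or window needed for this sketch.
        if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                     nullptr, 0, D3D11_SDK_VERSION,
                                     &device, nullptr, &immediate)))
            return 1;

        // A small render target for the worker thread to record a command against.
        D3D11_TEXTURE2D_DESC td = {};
        td.Width = 256; td.Height = 256; td.MipLevels = 1; td.ArraySize = 1;
        td.Format = DXGI_FORMAT_R8G8B8A8_UNORM; td.SampleDesc.Count = 1;
        td.Usage = D3D11_USAGE_DEFAULT; td.BindFlags = D3D11_BIND_RENDER_TARGET;
        ID3D11Texture2D* tex = nullptr;
        device->CreateTexture2D(&td, nullptr, &tex);
        ID3D11RenderTargetView* rtv = nullptr;
        device->CreateRenderTargetView(tex, nullptr, &rtv);

        // Worker thread: build up a command list without touching the GPU.
        ID3D11CommandList* commands = nullptr;
        std::thread worker([&] {
            ID3D11DeviceContext* deferred = nullptr;
            device->CreateDeferredContext(0, &deferred);
            const float green[4] = { 0.0f, 1.0f, 0.0f, 1.0f };
            deferred->ClearRenderTargetView(rtv, green);      // recorded, not executed
            deferred->FinishCommandList(FALSE, &commands);
            deferred->Release();
        });
        worker.join();

        // Only the immediate context talks to the GPU; playback is serial.
        immediate->ExecuteCommandList(commands, FALSE);

        commands->Release(); rtv->Release(); tex->Release();
        immediate->Release(); device->Release();
        return 0;
    }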

    In metaphor speech, D3D is kinda like Linux today and OpenGL is like Linux 1.3 when the Big Kernel Lock was introduced. You simply can't do multi-threaded rendering in OpenGL without locking every GL call with a single global mutex, while D3D allows you to do a lot of the work completely independently.

    The actual draw calls are pretty minor. Submitting those independently on each thread is unimportant because the hardware serializes them anyway. The vast majority of the work in a modern renderer is filling up buffers with data, which is very time-consuming in a complex renderer, and you have to completely serialize that in OpenGL.

    Originally posted by artvision
    First of all, developers don't like APIs like DX11 or OGL4. They want to get rid of them. They want direct access to hardware with a C-like language. If they get that, then their games will have much better graphics.
    Academics who sit around fantasizing about what hardware could someday be like may wish for what you describe. Those of us who actually write real code today want APIs that reflect what actual, real hardware already in consumers' hands can do.

    GPUs still have a lot of "fixed function" core logic built in. The polygon rasterizer (as just one example) is entirely fixed function, and there's no possible way to write an equivalent in OpenCL that performs anywhere near as fast. You write a vertex shader to feed vertices to the polygon rasterizer and a separate fragment program to process the individual fragments; trying to implement that middle step yourself just results in a massive drop in performance for absolutely no gain. Then there are things like the fixed-function hierarchical Z buffer, fixed-function alpha test, fixed-function scissor test, etc. No hardware implements those in a programmable fashion, and it's unlikely that any hardware is going to stop doing them fixed-function any time soon.

    As far as the API for D3D11, or what OpenGL _should_ be, goes, there really isn't much of an API anymore.

    Your ideal API basically consists of:

    (1) Allocate buffers in GPU memory
    (2) Upload compiled programs to GPU memory
    (3) Create input/output stream configurations for GPU programs
    (4) Run GPU programs with a particular stream configuration

    There are a number of finer details, of course (particularly around textures, which are a bit more complex than other kinds of memory buffers due to mipmapping, tiling, etc.), but that's pretty much it.
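
    Spelled out as an interface, that four-step API might look something like the sketch below. Every name in it (GpuDevice, Buffer, Program, StreamConfig, and so on) is hypothetical; this is not any real library, just the shape of the thing.

    Code:
    // Hypothetical minimal GPU API: buffers, programs, stream configs, dispatch.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Buffer       { std::uint64_t handle; };   // (1) a region of GPU memory
    struct Program      { std::uint64_t handle; };   // (2) a compiled GPU program
    struct StreamConfig { std::uint64_t handle; };   // (3) input/output bindings for a program

    class GpuDevice {
    public:
        // (1) Allocate buffers in GPU memory and fill them with data.
        Buffer allocate(std::size_t bytes);
        void   upload(Buffer dst, const void* src, std::size_t bytes);

        // (2) Upload a compiled program (shader/kernel binary) to GPU memory.
        Program loadProgram(const void* binary, std::size_t bytes);

        // (3) Describe which buffers feed the program and which receive its output.
        StreamConfig configure(const std::vector<Buffer>& inputs,
                               const std::vector<Buffer>& outputs);

        // (4) Run a GPU program with a particular stream configuration.
        void dispatch(Program program, StreamConfig config, std::uint32_t workItems);
    };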

