Any news about OpenGL 3.0?

  • #11
    Originally posted by bogdanbiv View Post
    Even their contact email address has gone missing from the site. http://www.khronos.org/about/contact/
    Yes, I know how it is; these days many companies do not offer any direct support contact unless, of course, you fork over large amounts of money.

    The situation is pretty disturbing... I recall, at the time the transfer of ownership to Khronos was announced, many people speaking out and asking "who is Khronos, and why should they be trusted?".

    I suppose whether or not the information can be obtained depends on how determined a person is. Of course, it would help if someone of a "high profile" were to ask about it. (It should not need to be done that way, but sadly that is often the case!)

    Some ideas, if you're still interested.

    Have you tried to get a response out of this address?:
    Khronos Group Managing Director
    Elizabeth Riegel
    elizabeth (at) goldstandardgroup.com
    What about posting on their message boards?:
    Khronos Forums

    Have you tried to contact another "industry leader"? They may have information that is not already public (or be able to get it more easily). I would ask someone at SGI - they answered my noob email before, but that was back in the "olden days", so I have no idea who will answer the door there. Now you've gotten me pretty interested in this.

    Perhaps someone from another "friendly graphics firm" who patrols these boards would care to *please* enlighten us?



    • #12
      Try this thread over on the OpenGL forums.



      • #13
        Originally posted by hmmm View Post
        Try this thread over on the OpenGL forums.

        http://www.opengl.org/discussion_boa...229374&fpart=1

        Thanks.

        Here is my post in that thread, should you want to follow up on the answers: http://www.opengl.org/discussion_boa...861#Post236861



        • #14
          Originally posted by bogdanbiv View Post
          Here is my post in that thread, should you want to follow up on the answers
          A bit OT, but I found some other people's posts and links regarding ray-tracing on GPUs, and considering the raw power of current GPUs themselves it should already be cake (if the GPUs were designed to properly do RT in hardware).

          What irks me about it is when companies claim "it's only now that we have the power on silicon to do it".

          It was already proven doable on a 90 MHz FPGA FOUR YEARS AGO!
          See here:


          *bleh*

          /OTrant



          • #15
            Originally posted by edged View Post
            What irks me about it is when companies claim "it's only now that we have the power on silicon to do it".
            That's because it's true. In fact, I doubt they do have the power even now.

            It was already proven doable on a 90 MHz FPGA FOUR YEARS AGO!
            If 10fps in Quake 3 at 512x384 with a precomputed polygon hierarchy counts as 'doable' (read the actual papers, not some third-hand articles). Back then, a typical GPU was getting 300+fps in Quake 3 with no precomputation at 1024x768 or higher.

            Hardware realtime ray-tracing is pretty neat, and may well be required in the future as shadows and other effects become more important; but it's a long way from being easy, or even viable.



            • #16
              Originally posted by movieman View Post
              That's because it's true. In fact, I doubt they do have the power even now.

              If 10fps in Quake 3 at 512x384 with a precomputed polygon hierarchy counts as 'doable' (read the actual papers, not some third-hand articles). Back then, a typical GPU was getting 300+fps in Quake 3 with no precomputation at 1024x768 or higher.

              Hardware realtime ray-tracing is pretty neat, and may well be required in the future as shadows and other effects become more important; but it's a long way from being easy, or even viable.
              That's not quite true, because you're using a rather poor example that does not take the whole picture into account.

              First of all, current GPUs are not designed to do RT in hardware; if they were, it would be an entirely different scenario. Just as when anti-aliasing (AA) and anisotropic filtering (AF) were introduced to consumer graphics processors, it takes time to 'evolve' in many areas: the silicon (GPU), the software (drivers), and the applications (programs). High levels of AA were not feasible when it was introduced, but today 8x AA is done easily, and even 24x+ is available on mainstream GPUs.

              The point I made is very clear-cut: a 90 MHz processor performing RT in hardware four years ago.
              As to performance, you use Q3A as an example, but you don't give any reference. Here's an example of the Q3A engine doing nearly 30 fps at 512x384.
              The first prototype was doing 2-6 million rays per second in a modified UT2003: 512x384 was nearly 30 fps.
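
              A quick back-of-the-envelope check of those numbers (my own arithmetic, not taken from the papers, and assuming exactly one primary ray per pixel): 512x384 at roughly 30 fps works out to about 5.9 million rays per second, right at the top of the quoted 2-6 million range.

                  #include <cstdio>

                  int main() {
                      const long pixelsPerFrame = 512L * 384L;  // 196,608 pixels
                      const long fps            = 30;           // the quoted frame rate
                      // One primary ray per pixel, no shadow or reflection rays.
                      std::printf("primary rays per second: %ld\n", pixelsPerFrame * fps);  // 5,898,240
                      return 0;
                  }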

              That fact is even more impressive when you realize it is a technology demo. That hardware is not a "multi-million-dollar-funded" commercial project, unlike current GPUs, whose development is heavily funded.

              The last thing I wanted to point out is that I believe you expect everything in a game or program to be replaced by RT, with traditional lighting dropped the moment the feature is introduced. That is not evolution, and you cannot expect an "all-at-once" approach. That was never the case that I can recall; more likely it would at first be used for specific effects or instances - just as with AA or AF, where only low levels were originally usable.

              Anyway, the entire gist of what I was trying to convey here seems to have been lost (no offense). If a fraction of those years and millions of dollars of investment had gone into RT, we'd already be seeing it used.



              • #17
                Originally posted by edged View Post
                That's not quite true, because you're using a rather poor example that does not take the whole picture into account.
                No, I'm not. Read their papers.

                Their second-generation ray-tracer was getting 10fps in Quake 3 at 512x384. A typical GPU was getting 300+fps at 1024x768, over a hundred times faster per pixel.
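
                For anyone who wants to check the "over a hundred times faster per pixel" figure, here's the rough arithmetic (using exactly the frame rates and resolutions quoted above, nothing re-measured):

                    #include <cstdio>

                    int main() {
                        const double rtPixelsPerSec  = 512.0 * 384.0 * 10.0;    // ray tracer: 10 fps at 512x384
                        const double gpuPixelsPerSec = 1024.0 * 768.0 * 300.0;  // GPU: 300 fps at 1024x768
                        std::printf("per-pixel ratio: %.0fx\n", gpuPixelsPerSec / rtPixelsPerSec);  // ~120x
                        return 0;
                    }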

                And they were only using primary rays, which is a much simplified version of what people regard as 'ray-tracing'. 'Real' ray-tracing would probably be an order of magnitude slower still.
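
                To make the "primary rays only" point concrete, here's a rough sketch of the difference (my own illustration, not code from their papers; the types, the intersect() stub and the shading are all made up): primary-ray tracing does one visibility test per pixel, while 'real' ray-tracing fires shadow and reflection rays at every hit, multiplying the intersection work.

                    #include <optional>

                    struct Vec3 { float x, y, z; };
                    struct Ray  { Vec3 origin, dir; };
                    struct Hit  { Vec3 point, normal; };

                    // Stand-ins for a real scene intersection test and local shading model.
                    std::optional<Hit> intersect(const Ray&) { return std::nullopt; }
                    Vec3 shadeLocal(const Hit&)              { return {0.5f, 0.5f, 0.5f}; }
                    Vec3 add(Vec3 a, Vec3 b)                 { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

                    // Primary rays only: one intersection and one local shade per pixel.
                    Vec3 tracePrimary(const Ray& ray) {
                        if (auto hit = intersect(ray)) return shadeLocal(*hit);
                        return {0, 0, 0};  // background
                    }

                    // "Real" ray-tracing: every hit fires extra rays back into the scene.
                    Vec3 traceFull(const Ray& ray, int depth) {
                        auto hit = intersect(ray);
                        if (!hit || depth == 0) return {0, 0, 0};
                        Vec3 colour = shadeLocal(*hit);
                        Ray shadowRay     = {hit->point, hit->normal};  // placeholder: direction toward a light
                        Ray reflectionRay = {hit->point, ray.dir};      // placeholder: mirrored direction
                        if (intersect(shadowRay)) colour = {0, 0, 0};   // hit point is in shadow
                        return add(colour, traceFull(reflectionRay, depth - 1));
                    }

                    int main() {
                        Ray ray{{0, 0, 0}, {0, 0, 1}};
                        tracePrimary(ray);
                        traceFull(ray, 2);
                        return 0;
                    }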

                First of all, current GPUs are not designed to do RT in hardware.
                No, they're not. Because the performance would suck.

                The point I made is very clear-cut: a 90 MHz processor performing RT in hardware four years ago.
                A hundred times slower than the traditional rendering hardware of that period, using a cut-down ray-tracing algorithm and a precomputed polygon hierarchy that made their job easy.

                The most time-consuming part of ray-tracing is determining which polygons your ray intersects. If you can offload much of that into a precomputed hierarchy, it's much easier... but that moves a lot of processing onto the CPU, and the performance cost rises substantially once you add the kind of dynamic content that is common in games and have to keep updating that hierarchy.
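
                As a rough illustration of what that precomputed hierarchy buys you (again my own sketch, not the structure from their papers; the box and triangle tests are stubbed out), a ray only descends into the boxes it actually hits, so most of the scene is rejected with a handful of cheap tests. The flip side is that whenever geometry moves, the boxes have to be refitted or the tree rebuilt, which is exactly the CPU cost mentioned above.

                    #include <vector>

                    struct AABB { float min[3], max[3]; };
                    struct Ray  { float origin[3], dir[3]; };

                    struct Node {
                        AABB bounds;
                        int  left = -1, right = -1;   // child indices; -1 means this node is a leaf
                        std::vector<int> triangles;   // triangle indices stored at leaves
                    };

                    // Stand-ins for the usual slab test and ray/triangle intersection test.
                    bool rayHitsBox(const Ray&, const AABB&) { return true; }
                    bool rayHitsTriangle(const Ray&, int)    { return false; }

                    // Depth-first traversal: whole subtrees are skipped when their box is missed.
                    bool traverse(const std::vector<Node>& tree, int nodeIdx, const Ray& ray) {
                        const Node& n = tree[nodeIdx];
                        if (!rayHitsBox(ray, n.bounds)) return false;  // prune this whole branch
                        if (n.left < 0) {                              // leaf: test its triangles
                            for (int tri : n.triangles)
                                if (rayHitsTriangle(ray, tri)) return true;
                            return false;
                        }
                        return traverse(tree, n.left, ray) || traverse(tree, n.right, ray);
                    }

                    int main() {
                        std::vector<Node> tree(1);
                        tree[0].triangles = {0, 1, 2};
                        Ray ray{{0, 0, 0}, {0, 0, 1}};
                        return traverse(tree, 0, ray) ? 0 : 1;
                    }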

                As to performance, you use Q3A as an example, but you don't give any reference.
                I did. Their papers, which give detailed performance figures for different scenes. Have you actually read them?

                The first prototype was doing 2-6 million rays per second in a modified UT2003:
                And their second prototype ran at 20-50% of that speed. It's not really a surprise that the performance dropped as they added more functionality... that's what happens when you start with a cut-down implementation and try to improve it.

                That fact is even more impressive when you realize it is a technology demo.
                Not really. Given that a PC CPU could already do real-time ray-tracing at a similar frame-rate, it's not a huge surprise that custom hardware could do it too.

                Anyway, the entire gist of what I was trying to convey here seems to have been lost (no offense). If a fraction of those years and millions of dollars of investment had gone into RT, we'd already be seeing it used.
                No-one would have bought a chip that ray-traced Quake 3 at 10fps at 512x384; even if it got 30fps at 1024x768 (scaling up the clock speed to something closer to an equivalent GPU), no-one would have bought it.

                I've known several graphics chip designers, and they're among the smartest people I've met. We got today's GPUs because they're the best way we have of rendering 3D graphics with current and past hardware; true ray-traced games in hardware are still a pipe-dream, and will be for some years to come.

                Seriously, you should read the actual papers about what these people actually did with the hardware they actually used, because what they did is neat, but a long way from what you think it is.

