Mesa's CPU-Based Vulkan Driver Now Supports Ray-Tracing


  • #41
    Originally posted by Anux View Post
    ... ID Software already did it on the 286 in 1992. https://en.wikipedia.org/wiki/Wolfenstein_3D
    Look how I pwnd you all!!11!!1!
    Pshhhhhaw! I was fooling with PovRay on a uVAX in 1990. Since it ran under VAX/VMS, it would certainly run on the likes of an 8350, for 'multi-core' goodness.



    • #42
      Originally posted by Anux View Post
      As we can see in many benchmarks on Phoronix, AVX* is a perfect fit for ray tracing (a ray is a vector).
      No, that would be wasteful. If all you had was SSEn, then it would make sense to treat SSE operands as each representing a vector. Even the 4th component (assuming single-precision) wouldn't be a waste, if one were either using homogeneous coordinates or quaternions.

      However, once you move up to 256-bit or 512-bit, you'd be much better off using a SIMD programming model. That's how GPUs work, after all!
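
      To make the distinction concrete, here's a rough sketch of what "one ray per lane" looks like with AVX (assuming AVX2+FMA; the struct layout and names are made up for illustration, not from any particular renderer):

      Code:
      #include <immintrin.h>

      /* Structure-of-arrays: eight rays per AVX register, one ray per lane. */
      struct Rays8 {
          __m256 ox, oy, oz;   /* ray origins                          */
          __m256 dx, dy, dz;   /* ray directions (assumed unit length) */
      };

      /* Test eight rays against one sphere at (cx, cy, cz) with radius r.
         Returns a per-lane mask of which rays intersect (discriminant >= 0);
         the t > 0 check is left out for brevity. */
      static __m256 hit_sphere8(const struct Rays8 *rays,
                                float cx, float cy, float cz, float r)
      {
          __m256 ocx = _mm256_sub_ps(rays->ox, _mm256_set1_ps(cx));
          __m256 ocy = _mm256_sub_ps(rays->oy, _mm256_set1_ps(cy));
          __m256 ocz = _mm256_sub_ps(rays->oz, _mm256_set1_ps(cz));

          /* b = dot(dir, oc); c = dot(oc, oc) - r*r  (a == 1 for unit dirs) */
          __m256 b = _mm256_fmadd_ps(rays->dx, ocx,
                     _mm256_fmadd_ps(rays->dy, ocy,
                     _mm256_mul_ps  (rays->dz, ocz)));
          __m256 c = _mm256_fmadd_ps(ocx, ocx,
                     _mm256_fmadd_ps(ocy, ocy,
                     _mm256_fmsub_ps(ocz, ocz, _mm256_set1_ps(r * r))));

          __m256 disc = _mm256_fmsub_ps(b, b, c);   /* b*b - c */
          return _mm256_cmp_ps(disc, _mm256_setzero_ps(), _CMP_GE_OQ);
      }

      Every lane does useful work on every instruction. The "one geometric vector per register" style instead burns a whole register on a single ray's xyz and spends extra instructions on shuffles and horizontal adds.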

      Originally posted by Anux View Post
      But even a 96 core CPU is much to slow for real time RT.
      Intel famously demonstrated realtime ray-traced Quake at 768x768 @ 90 fps on a 3 GHz Yorkfield, back in 2007.

      Graphics coming back to the CPU? Not necessarily in a GPGPU format either! The Inquirer caught a Ray Tracing demo from Intel that used the 45nm ...


      Six months later, they supposedly had optimized it to the point that a 1.2 GHz Ultra Mobile CPU (probably a 45 nm Core 2, I'd guess) could render 512x256 at 25-45 fps.

      Intel demonstrates ray tracing on ultra-mobile PCs: As frequent readers of PC Perspective know, we have been very interested in the work of Daniel Pohl, now ...



      • #43
        Originally posted by Anux View Post
        You're funny, or did you mean this seriously? Following that "logic" we wouldn't even need a multi-core CPU for ray tracing because ID Software already did it on the 286 in 1992. https://en.wikipedia.org/wiki/Wolfenstein_3D
        Look how I pwnd you all!!11!!1!
        I'm not sure how serious you're being, but that's ray-casting. It was purely 2D and nothing remotely like what we know as ray tracing... not even then.



        • #44
          Originally posted by coder View Post
          No, that would be wasteful.
          What are you talking about? Have a look at any Phoronix benchmark that compares AVX to non-AVX with a raytracer; nearly 100% improvements in some cases.

          However, once you move up to 256-bit or 512-bit, you'd be much better off using a SIMD programming model.
          You seem to be confused; AVX, as well as SSE, is SIMD.

          Intel famously demonstrated realtime ray-traced Quake at 768x768 ...
          Yes, and the same argument applies as for ET: Quake Wars: it will run on any modern CPU, but do you really think that looks anything like modern raytracing?

          But yes, if you want to make sacrifices, modern real-time RT is surely possible on the 96-core Threadripper: just reduce the resolution to 256x144 and it will run like a charm. 4K, not so much.

          Originally posted by coder View Post
          I'm not sure how serious you're being, but that's ray-casting.
          I was somewhat serious. Can you explain the difference between raycasting and raytracing? Just a tip: the first thing you do in a modern raytracer is cast rays into your scene, and at zero bounces you end up with raycasting.

          It was purely 2D and nothing remotely like what we know as ray tracing... not even then.
          Your Intel examples are also nothing remotely like what we know as ray tracing; that was my argument. Edit: And the majority of the rays are also only cast, just in 3D.
          Last edited by Anux; 08 March 2024, 10:21 AM.



          • #45
            Originally posted by Anux View Post
            What are you talking about? Have a look at any phoronix benchmark that compares AVX to non AVX with a raytracer, nearly 100% improvements in some cases.
            I didn't say not to use AVX, just not to waste an entire 256-bit operand on representing a single vector (or limit yourself to 128-bit).

            Originally posted by Anux View Post
            You seem to be confused; AVX, as well as SSE, is SIMD.
            If you don't know the difference between vector arithmetic and SIMD, then it sounds like you've got some reading to do.

            Originally posted by Anux View Post
            Yes, and the same argument applies as for ET: Quake Wars: it will run on any modern CPU, but do you really think that looks anything like modern raytracing?
            With a 96-core CPU, I'd imagine you could afford to make some quality upgrades.

            Originally posted by Anux View Post
            I was somewhat serious. Can you explain the difference between raycasting and raytracing?


            Wolfenstein-style raycasters cast one ray per vertical column of the screen. If you played at 320x240 resolution, each frame was computed by casting only 320 rays.
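
            To put the difference in concrete terms, here's a toy sketch of the idea (not Wolfenstein's actual code; the map layout and names are made up): one ray per column, marched through a flat 2D grid, no bounces, no per-pixel work:

            Code:
            #include <math.h>

            #define MAP_W    8
            #define MAP_H    8
            #define SCREEN_W 320

            /* Toy 2D grid: nonzero = wall (fill in your own level data). */
            static const int level[MAP_H][MAP_W] = {{0}};

            /* One ray per screen column, stepped through the grid until it
               hits a wall; the wall slice drawn for that column is scaled
               by ~1/distance. (Naive fixed-step march; Wolfenstein used a
               smarter grid traversal, but the idea is the same.) */
            void cast_columns(float px, float py, float view_angle, float fov,
                              float dist_out[SCREEN_W])
            {
                for (int col = 0; col < SCREEN_W; col++) {
                    float a = view_angle - fov * 0.5f + fov * col / (float)SCREEN_W;
                    float dx = cosf(a), dy = sinf(a);
                    float t = 0.0f;
                    while (t < 64.0f) {
                        int mx = (int)(px + dx * t);
                        int my = (int)(py + dy * t);
                        if (mx < 0 || my < 0 || mx >= MAP_W || my >= MAP_H ||
                            level[my][mx] != 0)
                            break;
                        t += 0.05f;
                    }
                    dist_out[col] = t;
                }
            }

            Contrast that with a ray tracer, which shoots at least one ray per pixel and then traces further rays for shadows, reflections, and so on.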

            Originally posted by Anux View Post
            Your Intel examples are also nothing remotely like what we know as ray tracing; that was my argument.
            To be honest, I don't know anything about the technical underpinnings of Intel's demos.



            • #46
              Originally posted by coder View Post
              I didn't say not to use AVX, just not to waste an entire 256-bit operand on representing a single vector (or limit yourself to 128-bit).
              I don't know what you're trying to say here, but I guess it comes down to you not understanding what SIMD is and how to use it.

              If you don't know the difference between vector arithmetic and SIMD, then it sounds like you've got some reading to do.
              I'm currently using SIMD in my hobby raytracer and have already read all I need to know about it. Michael's benchmarks prove that I'm right, and 10 seconds on Wikipedia would have spared you the disgrace: https://en.wikipedia.org/wiki/Single..._multiple_data

              With a 96-core CPU, I'd imagine you could afford to make some quality upgrades.
              Just try it on your own CPU; raytracing scales linearly with core count ... there are a million open-source CPU raytracers available.
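
              The reason it scales is that every pixel is independent, so there is essentially nothing to synchronize. A rough sketch of the usual scanline split with OpenMP (not my actual code; trace_pixel is just a stand-in):

              Code:
              /* Stand-in for whatever your renderer does per pixel. */
              static float trace_pixel(int x, int y) { return (float)(x ^ y); }

              void render(float *image, int width, int height)
              {
                  /* Scanlines are handed out to cores with no shared state,
                     which is why throughput scales almost linearly with
                     core count. Compile with -fopenmp. */
                  #pragma omp parallel for schedule(dynamic, 1)
                  for (int y = 0; y < height; y++)
                      for (int x = 0; x < width; x++)
                          image[y * width + x] = trace_pixel(x, y);
              }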

              Wolfenstein-style raycasters cast one ray per vertical column of the screen. If you played at 320x240 resolution, each frame was computed by casting only 320 rays.
              Exactly, do you get my argument now?

              To be honest, I don't know anything about the technical underpinnings of Intel's demos
              That's obvious. You could have watched the video I linked earlier or carefully read what I wrote about it ...



              • #47
                Originally posted by Phoronos View Post
                Any CPU-based graphics is a bad design from the start and should be avoided.
                Software OpenGL is great for legacy applications.
                At work we used Flatpak to archive old in-house software releases that didn't support more recent versions of the proprietary framework.

                People can install the old binaries as needed and run the old software. As the old software doesn't contain recent GPU drivers, software rendering is used, and it is plenty fast.



                • #48
                  Phoronix users try to make modern ray-tracing somehow viable on CPUs: https://www.youtube.com/watch?v=7haqnQvrYfI



                  • #49
                    Originally posted by Anux View Post
                    I don't know what you're trying to say here, but I guess it comes down to you not understanding what SIMD is and how to use it.

                    I'm currently using SIMD in my hobby raytracer and have already read all I need to know about it. Michael's benchmarks prove that I'm right, and 10 seconds on Wikipedia would have spared you the disgrace: https://en.wikipedia.org/wiki/Single..._multiple_data
                    Here's the part where you decide it's better to try and look like an asshole than a dumbass.

                    These ISA extensions support vector-oriented programming models, like horizontal sums, dot products, shuffles, etc. They also support SIMD programming, in which the vector register components are treated as scalar registers of independent sets of program state.

                    There's a reason Nvidia calls each SIMD lane a "thread". That's because their programming model is pure SIMD. If you want good scaling from AVX or AVX-512, it's generally better if you can follow a pure SIMD approach.
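
                    To spell that out with a quick sketch (SSE4.1; the function name is made up, not from anyone's actual renderer), in the vector-arithmetic style a whole register holds one xyz vector, and a horizontal op still yields only a single scalar result:

                    Code:
                    #include <smmintrin.h>  /* SSE4.1 */

                    /* "Vector arithmetic" style: one xyz vector per 128-bit register.
                       _mm_dp_ps multiplies and horizontally sums in one instruction,
                       but a 4-lane register still produces only ONE dot product. */
                    static float dot3(__m128 a, __m128 b)
                    {
                        /* 0x71: multiply lanes 0..2, write the sum into lane 0 */
                        return _mm_cvtss_f32(_mm_dp_ps(a, b, 0x71));
                    }

                    Compare that with the structure-of-arrays sketch I posted in #42, where the x components of eight different rays share one register and every lane produces a useful result on every instruction. That lane-equals-thread model is exactly what Nvidia's hardware is built around, and it's what lets AVX and AVX-512 scale.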

                    Originally posted by Anux View Post
                    Exactly, do you get my argument now?
                    No. The fact that the original Wolfenstein game wasn't actually ray tracing says nothing about anything else. If you want to make a point about whether something else was or wasn't ray tracing, then you need to provide details about how it was implemented. You can't just argue that by analogy.

                    Originally posted by Anux View Post
                    That's obvious. You could have watched the video I linked earlier or carefully read what I wrote about it ...
                    I don't watch YouTube videos, and your statements have been shown to be too unreliable for me to put any stock in them. Try linking to an authoritative source and I might have a look.
                    Last edited by coder; 08 March 2024, 07:03 PM.



                    • #50
                      Originally posted by TemplarGR View Post

                      The "dedicated ray tracing hardware" is mostly just programmable shaders.... There is no "fixed function" ray tracing hardware. It is not some kind of "special instructions only RT uses".
                      Nvidia has dedicated RT cores in its RTX series.
                      For the rest I agree with you.

