Lavapipe CPU-Based Vulkan Driver Implements Ray-Tracing Pipelines

  • Developer12
    replied
    Originally posted by Mitch View Post
    This is totally academic and beyond the use case of the CPU solution, but I can't help but wonder how many CPU cores and/or what clock speed they'd have to run at to get GPU levels of performance using this solution. I imagine it might be in the hundreds or thousands of cores.
    Having a software implementation like this is extremely useful for doing single-frame renders when you aren't guaranteed to have a GPU. There's plenty of CAD software out there that can fall back to software OpenGL when necessary.

    As RT moves towards being a standard API that everyone just uses for everything, these fallbacks are going to become more and more useful. I can see software in the near future making use of RT to render fancy GUI elements, weird skeuomorphs, and other random stuff that isn't particularly demanding but will completely crash the application if the RT APIs aren't available.
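    A minimal sketch of what such a fallback probe could look like against the Vulkan C API (hypothetical example, assuming the standard vulkan/vulkan.h header; error handling and the actual fallback path are trimmed). The application enumerates every physical device, including lavapipe's CPU device, and checks whether VK_KHR_ray_tracing_pipeline is exposed before deciding how to render:
    Code:
    /* Probe each Vulkan device for ray-tracing pipeline support so the
     * application can pick a software fallback (e.g. lavapipe) instead
     * of crashing when no RT-capable GPU is present. */
    #include <stdio.h>
    #include <string.h>
    #include <vulkan/vulkan.h>

    static int has_rt_pipeline(VkPhysicalDevice dev) {
        uint32_t n = 0;
        vkEnumerateDeviceExtensionProperties(dev, NULL, &n, NULL);
        VkExtensionProperties exts[512];
        if (n > 512) n = 512;
        vkEnumerateDeviceExtensionProperties(dev, NULL, &n, exts);
        for (uint32_t i = 0; i < n; i++)
            if (!strcmp(exts[i].extensionName,
                        VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME))
                return 1;
        return 0;
    }

    int main(void) {
        VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
        VkInstance inst;
        if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS)
            return 1;

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(inst, &count, NULL);
        VkPhysicalDevice devs[16];
        if (count > 16) count = 16;
        vkEnumeratePhysicalDevices(inst, &count, devs);

        for (uint32_t i = 0; i < count; i++) {
            VkPhysicalDeviceProperties p;
            vkGetPhysicalDeviceProperties(devs[i], &p);
            /* Coarse label: lavapipe reports VK_PHYSICAL_DEVICE_TYPE_CPU. */
            printf("%s (%s): RT pipelines %s\n", p.deviceName,
                   p.deviceType == VK_PHYSICAL_DEVICE_TYPE_CPU ? "CPU" : "GPU",
                   has_rt_pipeline(devs[i]) ? "available" : "missing");
        }
        vkDestroyInstance(inst, NULL);
        return 0;
    }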

  • Anux
    replied
    Originally posted by byteabit View Post
    Smartphones and game consoles have RayTracing too.
    Yes, but they have dedicated hardware for the RT part, which is 100 to 1000 times faster than your typical CPU. If you compare the above 128-thread CPU to an 8-thread smartphone CPU (let's assume the threads are equal), it's 16 times slower than the 15 FPS.

    And yes, the software renderers do make progress (AVX optimizations), but so do GPU drivers. I myself am a fan of efficient resource usage, so I'd like to be proven wrong. And maybe there will be an edge case in the future that profits from CPU rendering.
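    As a quick sanity check on that arithmetic (hypothetical, assuming per-thread throughput is equal and scaling is perfectly linear, which it never is in practice):
    Code:
    #include <stdio.h>

    int main(void) {
        double server_fps     = 15.0;  /* 128-thread server CPU from the linked review */
        double server_threads = 128.0;
        double phone_threads  = 8.0;   /* typical smartphone CPU */

        /* Linear scaling by thread count: 128 / 8 = 16x fewer threads. */
        double phone_fps = server_fps * (phone_threads / server_threads);
        printf("~%.1f FPS\n", phone_fps); /* ~0.9 FPS, i.e. 16 times slower */
        return 0;
    }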

  • byteabit
    replied
    Originally posted by Anux View Post
    I think you don't understand why we even have graphics cards.
    Have a look at a 128-thread server CPU vs an integrated GPU (at higher resolution)
    I understand why we have graphics cards. My point was not to replace the GPU with the CPU for gaming, but rather to have the CPU "support" the GPU when there is time for it. Whether this is feasible, I don't know, but that was the idea. I mean, look at smartphones, which have no dedicated graphics card, or at game consoles. Smartphones and game consoles have RayTracing too.

  • Anux
    replied
    Originally posted by byteabit View Post
    There are plenty of games (and configurations) that are bottlenecked by the GPU, where the CPU has to wait for each frame.
    As I tried to show in the code block, it doesn't matter if the CPU is totally idle: it can only start to render once the world state for the frame is ready, and it's of no use if it takes longer than the GPU would.

    And you are comparing today's workloads and tools, not a game that is optimized to take advantage of
    What do you want to optimize or take advantage of? The time limit is there, and you have to fit the workload into it.

    plus with AVX-512 (or even something better in the future) the CPU could help in some areas.
    I think you don't understand why we even have graphics cards. They are much faster (the 100-times figure was already generous towards CPUs). Even if AVX gives us a 4x improvement, it would be a drop in the ocean.
    Have a look at a 128-thread server CPU vs an integrated GPU (at higher resolution): https://www.phoronix.com/review/mete...raphics#page-4 https://www.phoronix.com/news/LLVMpipe-Mesa-19.0-Performance

    And that is not even accounting for new features like ray tracing.

  • byteabit
    replied
    Originally posted by Anux View Post
    your GPU is at least 100 times faster than your CPU for rendering, so my scenario is still too favorable for typical CPUs.
    There are plenty of games (and configurations) that are bottlenecked by the GPU, where the CPU has to wait for each frame. And you are comparing today's workloads and tools, not a game that is optimized to take advantage of this; plus with AVX-512 (or even something better in the future) the CPU could help in some areas. But the frame split would probably make perfect synchronization impossible. Well, it was just an idea.

  • Anux
    replied
    Of course you would only render a very small part of the full frame, but you only have (depending on your FPS) a short amount of time.
    Imagine a single frame at 60 FPS (~16 ms):
    Code:
    GPU:  | ... display of prev frame      | render 95% of frame |             | fuse CPU + GPU frame and display |
    CPU:  | game world state | send to GPU | render 5% of frame  | send to GPU | begin frame 2 ...
    Time: | 4 ms             | 1 ms        | 10 ms               | 1 ms        |
    Those are totally made-up times, but your GPU is at least 100 times faster than your CPU for rendering, so my scenario is still too favorable for typical CPUs. Rendering 5% of the frame on the CPU takes 10 ms, while it would only take 0.5 ms on the GPU, which is less than the time needed to send the data over PCIe and merge the frame.

    And then you have to think of all the things that could go wrong, for example Windows doing updates in the background, forcing you to delay the frame even further. The potential gain would be minimal compared to the work and risks involved. Remember all the stutter problems back in the days of dual-GPU setups (SLI)? And those dual GPUs were identical in speed.
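    Plugging the same made-up numbers into a small calculation shows how little of the 60 FPS budget is left over (all figures hypothetical, as above):
    Code:
    #include <stdio.h>

    int main(void) {
        double frame_budget_ms = 1000.0 / 60.0; /* ~16.7 ms per frame at 60 FPS */
        double gpu_tile_ms     = 0.5;           /* GPU time for 5% of the frame */
        double cpu_slowdown    = 20.0;          /* CPU assumed only 20x slower here;
                                                   100x is closer to reality */
        double cpu_tile_ms     = gpu_tile_ms * cpu_slowdown; /* 10 ms */
        double transfer_ms     = 1.0;           /* PCIe transfer back to the GPU */

        printf("budget %.1f ms, CPU tile %.1f ms + %.1f ms transfer = %.1f ms\n",
               frame_budget_ms, cpu_tile_ms, transfer_ms, cpu_tile_ms + transfer_ms);
        /* 11.0 of ~16.7 ms spent on work the GPU finishes in 0.5 ms. */
        return 0;
    }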
    Last edited by Anux; 11 April 2024, 08:10 AM.

  • byteabit
    replied
    Originally posted by Anux View Post

    It's not about idling; it's about delivering your partly rendered frame from the CPU to the GPU in time to sync with the right frame, without delaying it further than what would have happened if everything had been done on the GPU.
    It doesn't need to render a full image, right? Only part of what is required. Doesn't the GPU break the image down into different parts? I'm not talking about current toolkits that are not aware of this advantage. Of course, games and toolkits would need to support this. Just in theory, of course; maybe it's not possible, as you suggest.

  • Anux
    replied
    Originally posted by byteabit View Post

    That's less of an issue if the CPU already has to wait for the GPU and is just idling. If the game/application is programmed with this in mind, it could have some workload queued for the GPU that the CPU could help out with.
    It's not about idling; it's about delivering your partly rendered frame from the CPU to the GPU in time to sync with the right frame, without delaying it further than what would have happened if everything had been done on the GPU.

  • byteabit
    replied
    Originally posted by Anux View Post
    I doubt that's feasible; remember, the CPU and GPU need to communicate over a slow PCIe connection.
    That's less of an issue if the CPU already has to wait for the GPU and is just idling. If the game/application is programmed with this in mind, it could have some workload queued for the GPU that the CPU could help out with.

  • Anux
    replied
    Originally posted by byteabit View Post
    For games, it does not need to calculate RayTracing on its own, just support the GPU when it has some time left. So this is an excellent use case for when the game is GPU-bottlenecked and the CPU has to wait for the GPU, or when the game does not otherwise utilize all CPU cores. I mean, I am talking about the CPUs in most end users' PCs.
    I doubt that's feasible; remember, the CPU and GPU need to communicate over a slow PCIe connection.
