Nouveau Driver Picks Up SVM Support Via HMM


  • coder
    replied
    Originally posted by Electric-Gecko:
    I asked the Blender devs if they could start using OpenCL 2.x & shared virtual memory for Blender Cycles, but I was told it's not worth the effort.
    The natural follow-up is then to ask what benefit they'd expect and how much effort it would require. They might be right: if the communication time is small enough relative to the total rendering time, eliminating it wouldn't yield a significant performance improvement.

    The other argument for SVM might be to render from main memory, to enable rendering support for larger scenes on smaller GPUs. However, they might feel that would hurt performance too much, relative to staying entirely in GPU memory.

    I don't know - I'm just speculating. So, go ahead and ask them what they mean and why they think so.
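    For concreteness, here's a rough sketch of what the host side of OpenCL 2.0 coarse-grained SVM looks like (my own minimal illustration, not anything from Cycles): one allocation is visible to both CPU and GPU through the same pointer, so there are no explicit clEnqueueWriteBuffer/ReadBuffer copies.
    Code:
    /* Minimal OpenCL 2.0 coarse-grained SVM sketch (hypothetical example,
     * not Cycles code). Assumes an OpenCL 2.0+ GPU; error checks omitted. */
    #define CL_TARGET_OPENCL_VERSION 200
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *src =
        "__kernel void scale(__global float *data, float f) {"
        "    size_t i = get_global_id(0);"
        "    data[i] *= f;"
        "}";

    int main(void) {
        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "scale", NULL);

        const size_t n = 1 << 20;
        /* One allocation addressed by both host and device (coarse-grained SVM). */
        float *data = clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

        /* Host writes through the pointer directly; map/unmap marks the hand-off. */
        clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float), 0, NULL, NULL);
        for (size_t i = 0; i < n; i++) data[i] = (float)i;
        clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

        /* The kernel gets the same pointer; no separate cl_mem staging object. */
        float factor = 2.0f;
        clSetKernelArgSVMPointer(k, 0, data);
        clSetKernelArg(k, 1, sizeof(float), &factor);
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);

        clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, data, n * sizeof(float), 0, NULL, NULL);
        printf("data[1] = %f\n", data[1]);  /* expect 2.0 */
        clEnqueueSVMUnmap(q, data, 0, NULL, NULL);
        clFinish(q);

        clSVMFree(ctx, data);
        return 0;
    }
    With fine-grained SVM (where the hardware supports it), even the map/unmap calls go away, which is the "render straight out of main memory" scenario mentioned above.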

  • Electric-Gecko
    replied
    I just want to see more software take advantage of OpenCL 2.0, but I don't know of any that does. Despite all the effort put into driver support, there's a severe shortage of OpenCL 2.0 software.
    I asked the Blender devs if they could start using OpenCL 2.x & shared virtual memory for Blender Cycles, but I was told it's not worth the effort.

  • cybertraveler
    replied
    Originally posted by coder:
    Well, consoles certainly have gone this direction.

    But the cooling argument is rather weak. I think higher-end GPUs tend to have direct-die cooling, which is not very consumer-friendly. Also, memory DIMMs can pose a challenge for larger CPU heatsinks. So, I wouldn't say that cooling is a benefit of powerful APUs, but rather a challenge. Now, going back to our console example, we see that a free hand in system design can work around these issues and still result in a price/performance advantage for APUs.

    Also, because consoles' memory is soldered on board, they can use graphics memory, with tighter timing and power requirements than you could afford if the memory had to go on DIMMs.

    As for PCs, @aht0 has good points. Also, consider that the performance improvement curve on GPUs has tended to be much steeper than on CPUs. I think gamers typically upgrade their GPUs about twice as often as their CPUs; that's true for me, at least. And part of that improvement comes from faster memory, which would mean replacing your RAM and mobo if your GPU is integrated with your CPU.
    All very good points! Thanks for sharing.

  • coder
    replied
    Originally posted by starshipeleven:
    I think it's more likely that "someone" has already found a way to extract the firmware from the binary driver and is secretly sponsoring driver development so they and their other illuminati friends can use NVIDIA without the proprietary driver.
    Maybe they're just trying to enable out-of-the-box OpenCL support for all platforms. Even if it's not super-fast, it would still be better than running such tasks on the CPU. And that would be a big enabler for using OpenCL in more ways and places.
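    As a rough illustration of what "out of the box" buys you (hypothetical snippet, not from any particular project): applications typically enumerate the OpenCL platforms, prefer a GPU device, and only fall back to the CPU when no GPU driver is exposed.
    Code:
    /* Hypothetical device-selection sketch: prefer any GPU OpenCL device,
     * fall back to a CPU device otherwise. Error checks omitted. */
    #define CL_TARGET_OPENCL_VERSION 200
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void) {
        cl_uint nplat = 0;
        clGetPlatformIDs(0, NULL, &nplat);
        if (nplat > 16) nplat = 16;
        cl_platform_id plats[16];
        clGetPlatformIDs(nplat, plats, NULL);

        cl_device_id dev = NULL;
        /* First pass: look for a GPU on any platform. */
        for (cl_uint i = 0; i < nplat && !dev; i++)
            clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        /* Second pass: settle for a CPU device if no GPU driver is installed. */
        for (cl_uint i = 0; i < nplat && !dev; i++)
            clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

        if (!dev) { fprintf(stderr, "no OpenCL device found\n"); return 1; }

        char name[256];
        clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("using: %s\n", name);
        return 0;
    }
    The more platforms that expose a working GPU device by default, the more often that first pass succeeds without the user having to install anything.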

  • coder
    replied
    Originally posted by cybertraveler:
    I wonder if there is a market for powerful integrated GPUs. Presumably there could be a large cost benefit from sharing one big memory pool between the CPU and GPU. You can also have more efficient shared cooling. You'd also need less circuitry, since there's no entirely separate board with tons of circuitry purely for handling the CPU<->GPU<->mainboard buses.
    Well, consoles certainly have gone this direction.

    But the cooling argument is rather weak. I think higher-end GPUs tend to have direct-die cooling, which is not very consumer-friendly. Also, memory DIMMs can pose a challenge for larger CPU heatsinks. So, I wouldn't say that cooling is a benefit of powerful APUs, but rather a challenge. Now, going back to our console example, we see that a free hand in system design can work around these issues and still result in a price/performance advantage for APUs.

    Also, because consoles' memory is soldered on board, they can use graphics memory, with tighter timing and power requirements than you could afford if the memory had to go on DIMMs.

    As for PCs, @aht0 has good points. Also, consider that the performance improvement curve on GPUs has tended to be much steeper than on CPUs. I think gamers typically upgrade their GPUs about twice as often as their CPUs; that's true for me, at least. And part of that improvement comes from faster memory, which would mean replacing your RAM and mobo if your GPU is integrated with your CPU.

  • aht0
    replied
    Originally posted by cybertraveler:
    I wonder if there is a market for powerful integrated GPUs. Presumably there could be a large cost benefit from sharing one big memory pool between the CPU and GPU. You can also have more efficient shared cooling. You'd also need less circuitry, since there's no entirely separate board with tons of circuitry purely for handling the CPU<->GPU<->mainboard buses.

    Check out these benchmarks for the Ryzen 5 2400G (a high-end desktop CPU with an integrated GPU). The performance is pretty good. Notice that the GPU is the bottleneck in most of the games.
    I would never dare call an integrated GPU 'powerful'. Discrete GPUs have up to a dozen times higher memory bandwidth (compared to the old, common dual-channel DDR3 setups). Even with DDR4 you might not reach 50 GB/s (an i7-9700K maxes out at 41.6 GB/s with dual-channel DDR4-2666).

    For example, a GTX 1080 has 8 GB of GDDR5X video RAM with a total bandwidth of 320 GB/s(!). That's why iGPUs suck compared to dGPUs in gaming, and why faster RAM is immediately reflected in frames per second when you use an iGPU for gaming.
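    For what it's worth, here's the rough math behind those numbers (my own back-of-the-envelope figures; I'm assuming a 256-bit bus at 10 Gb/s per pin for the GDDR5X card):
    Code:
    /* Back-of-the-envelope memory bandwidth comparison (illustrative only). */
    #include <stdio.h>

    int main(void) {
        /* DDR4-2666: 2666 MT/s * 8 bytes per transfer * 2 channels */
        double ddr4  = 2666e6 * 8 * 2 / 1e9;    /* ~42.7 GB/s theoretical peak */
        /* GDDR5X on a GTX 1080: 10 Gb/s per pin * 256-bit bus / 8 bits per byte */
        double gddr5 = 10e9 * 256 / 8 / 1e9;     /* 320 GB/s */
        printf("DDR4-2666 dual channel:     %.1f GB/s\n", ddr4);
        printf("GDDR5X, 256-bit @ 10 Gb/s:  %.1f GB/s\n", gddr5);
        printf("ratio: ~%.1fx\n", gddr5 / ddr4); /* roughly 7.5x */
        return 0;
    }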

  • pal666
    replied
    Originally posted by bemerk:
    Maybe all the groundwork needs to be done and put into place first, before NVIDIA unlocks the true potential with the necessary firmware?
    The ability to install arbitrary firmware was used by devs as a substitute for missing documentation (to do reverse engineering). If NVIDIA provides only the "necessary firmware", then it also needs to provide hardware documentation to facilitate driver development; otherwise it will suck just like it does now.

  • starshipeleven
    replied
    Originally posted by bemerk:
    Maybe all the groundwork needs to be done and put into place first, before NVIDIA unlocks the true potential with the necessary firmware?
    I think it's more likely that "someone" has already found a way to extract the firmware from the binary driver and is secretly sponsoring driver development so they and their other illuminati friends can use NVIDIA without the proprietary driver.

  • cybertraveler
    replied
    I wonder if there is a market for powerful integrated GPUs. Presumably there could be a large cost benefit from sharing one big memory pool between the CPU and GPU. You can also have more efficient shared cooling. You'd also need less circuitry, since there's no entirely separate board with tons of circuitry purely for handling the CPU<->GPU<->mainboard buses.

    Check out these benchmarks for the Ryzen 5 2400G (a high-end desktop CPU with an integrated GPU). The performance is pretty good. Notice that the GPU is the bottleneck in most of the games.

  • bemerk
    replied
    Maybe all the groundwork needs to be done and put into place first, before NVIDIA unlocks the true potential with the necessary firmware?
