Nouveau Driver Picks Up SVM Support Via HMM


  • #1

    Phoronix: Nouveau Driver Picks Up SVM Support Via HMM

    The Nouveau kernel driver has queued patches for introducing Shared Virtual Memory (SVM) support for this open-source NVIDIA driver as a step forward to its OpenCL/compute opportunities...


  • #2
    Maybe all the groundwork needs to be done and put into place first before NVIDIA unlocks the true potential with the necessary firmware?



    • #3
      I wonder if there is a market for powerful integrated GPUs. Presumably there could be a large cost benefit due to the ability to share one big memory pool with both CPU and GPU. You can also have more efficient shared cooling. You will also need less circuitry as you don't have this entirely separate board which has to have tons of circuitry purely for handling the CPU<->GPU<->mainboard buses.

      Check out these benchmarks for the Ryzen 5 2400G (a high-end desktop CPU with an integrated GPU). The performance is pretty good. Notice that the GPU is the bottleneck in most of the games.



      • #4
        Originally posted by bemerk View Post
        Maybe all the groundwork needs to be done and put into place first before NVIDIA unlocks the true potential with the necessary firmware?
        I think it's more likely that "someone" has already found a way to extract the firmware from the binary driver and is secretly sponsoring driver development so they and their other illuminati friends can use NVIDIA without the proprietary driver.



        • #5
          Originally posted by bemerk View Post
          Maybe all the groundwork needs to be done and put into place first before NVIDIA unlocks the true potential with the necessary firmware?
          The ability to install arbitrary firmware was used by devs as a substitute for missing documentation (to enable reverse engineering). If NVIDIA provides only the "necessary firmware", then it also needs to provide hardware documentation to facilitate driver development; otherwise the situation will remain as bad as it is now.



          • #6
            Originally posted by cybertraveler View Post
            I wonder if there is a market for powerful integrated GPUs. Presumably there could be a large cost benefit due to the ability to share one big memory pool with both CPU and GPU. You can also have more efficient shared cooling. You will also need less circuitry as you don't have this entirely separate board which has to have tons of circuitry purely for handling the CPU<->GPU<->mainboard buses.

            Check out these benchmarks for the Ryzen 5 2400G (a high-end desktop CPU with an integrated GPU). The performance is pretty good. Notice that the GPU is the bottleneck in most of the games.
            I would never dare call an integrated GPU 'powerful'. Discrete GPUs have up to a dozen times higher memory bandwidth (compared to common older dual-channel DDR3 setups). Even with DDR4, an iGPU might not reach 50 GB/s (the i7-9700K maxes out at 41.6 GB/s with dual-channel DDR4-2666).

            For example, the GTX 1080 has 8 GB of GDDR5X video RAM with a total bandwidth of 320 GB/s(!). That's why iGPUs suck at gaming compared to dGPUs, and why using faster RAM is instantly reflected in frames per second when you game on an iGPU.



            • #7
              Originally posted by cybertraveler View Post
              I wonder if there is a market for powerful integrated GPUs. Presumably there could be a large cost benefit due to the ability to share one big memory pool with both CPU and GPU. You can also have more efficient shared cooling. You will also need less circuitry as you don't have this entirely separate board which has to have tons of circuitry purely for handling the CPU<->GPU<->mainboard buses.
              Well, consoles certainly have gone this direction.

              But the cooling argument is rather weak. I think higher-end GPUs tend to have direct-die cooling, which is not very consumer-friendly. Also, memory DIMMs can pose a challenge for larger CPU heatsinks. So, I wouldn't say that cooling is a benefit of powerful APUs, but rather a challenge. Now, going back to our console example, we see that a free hand in system design can work around these issues and still result in a price/performance advantage for APUs.

              Also, because consoles' memory is soldered on board, they can use graphics memory, with tighter timing and power requirements than you can afford if you need to put the memory on DIMMs.

              As for PCs, @ath0 has good points. Also, consider that the performance improvement curve on GPUs has tended to be much steeper than CPUs. I think gamers would typically upgrade their GPUs about twice as often as their CPUs. That's true for me, at least. And part of that improvement comes from faster memory, which would mean replacing your RAM and mobo, if your GPU is integrated with your CPU.



              • #8
                Originally posted by starshipeleven View Post
                I think it's more likely that "someone" has already found a way to extract the firmware from the binary driver and is secretly sponsoring driver development so they and their other illuminati friends can use NVIDIA without the proprietary driver.
                Maybe they're just trying to enable out-of-the-box OpenCL support for all platforms. Even if it's not super-fast, it would still be better than running such tasks on the CPU. And that would be a big enabler for using OpenCL in more ways and places.



                • #9
                  Originally posted by coder View Post
                  Well, consoles certainly have gone this direction.

                  But the cooling argument is rather weak. I think higher-end GPUs tend to have direct-die cooling, which is not very consumer-friendly. Also, memory DIMMs can pose a challenge for larger CPU heatsinks. So, I wouldn't say that cooling is a benefit of powerful APUs, but rather a challenge. Now, going back to our console example, we see that a free hand in system design can work around these issues and still result in a price/performance advantage for APUs.

                  Also, because consoles' memory is soldered on board, they can use graphics memory, with tighter timing and power requirements than you can afford if you need to put the memory on DIMMs.

                  As for PCs, @ath0 has good points. Also, consider that the performance improvement curve on GPUs has tended to be much steeper than CPUs. I think gamers would typically upgrade their GPUs about twice as often as their CPUs. That's true for me, at least. And part of that improvement comes from faster memory, which would mean replacing your RAM and mobo, if your GPU is integrated with your CPU.
                  All, very good points! Thanks for sharing.



                  • #10
                    I just want to see more software take advantage of OpenCL 2.0, but I don't know of any that does. Despite all the effort to support it in drivers, there's a severe shortage of OpenCL 2.0 software.

                    I asked the Blender devs if they could start using OpenCL 2.x and shared virtual memory for Blender Cycles, but I was told it's not worth the effort.
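For context on what this asks of applications: OpenCL 2.0's shared virtual memory lets the host and device dereference the same pointers, which is what HMM lets Nouveau back in the kernel. A hypothetical sketch of coarse-grained SVM using the standard OpenCL 2.0 API calls (error checking and the kernel itself are omitted for brevity; this assumes an OpenCL 2.0 runtime and a GPU device are available):

```c
#include <CL/cl.h>

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    /* One allocation visible to both CPU and GPU through the same pointer:
       no clEnqueueWriteBuffer/clEnqueueReadBuffer copies needed. */
    size_t n = 1024;
    float *data = clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

    /* Coarse-grained SVM: map before touching the memory on the host... */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float),
                    0, NULL, NULL);
    for (size_t i = 0; i < n; i++)
        data[i] = (float)i;
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

    /* ...then pass the raw pointer to a kernel with
       clSetKernelArgSVMPointer() instead of a cl_mem buffer. */

    clSVMFree(ctx, data);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
```

Fine-grained SVM (CL_MEM_SVM_FINE_GRAIN_BUFFER) drops even the map/unmap step, but requires exactly the kind of page-fault handling that HMM provides.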
