New Heterogeneous Memory Management For Linux, Will Be Supported By NVIDIA/Nouveau


  • #21
    Originally posted by waxhead View Post
    The world may not be black and white, but it's for sure binary. As a C programmer I know that casting malloc is a very bad idea. You have yourself linked to the reasons for it. If you compare for and against, I think you will find that the disadvantages are far worse than the advantages. But then again, you may be programming in C++ for all I know.
    I'm using both, and many other languages as well, so you're not wrong. What the wiki page suggests, and what I wanted to show you, is that while casting malloc has disadvantages, you may need one of the advantages and can live with the disadvantages. And these are valid use cases.
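    For anyone skimming, a minimal sketch of the two styles being argued about (plain C, nothing HMM-specific; the variable names are made up):

    #include <stdlib.h>

    int main(void)
    {
        /* Idiomatic C: malloc returns void *, which converts implicitly,
           so no cast is needed; the classic argument is that the cast can
           hide a missing #include <stdlib.h> on old C89-era compilers. */
        int *a = malloc(100 * sizeof *a);

        /* Cast form: required if this exact file must also compile as C++,
           which is one of the "advantages" the wiki page lists. */
        int *b = (int *)malloc(100 * sizeof *b);

        free(a);
        free(b);
        return 0;
    }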



    • #22
      Originally posted by robclark View Post

      fair 'nuf :-P

      to c117152:
      There is a time and a place for optimizing sw explicitly for a single purpose on a single hw platform. And yeah, you can extract out that last bit of perf by pinning all your pages and explicitly controlling what data lives in what memory when, using hand written asm, etc.

      But there are a lot of cases where you want some sw to run on a lot of different hw platforms and it's not economically feasible to hand-tune for each one, yet you still want to get the benefit of gpu offload when possible; while that might be 10% slower (made up number) than something specifically tuned for some particular hw, it is still a lot faster than the alternative ;-)
      Don't you lot have AI Driven hot spot optimising automatic code compilers to do all that sort of stuff for you!? #ideas #tongue #cheek
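      To make robclark's point above concrete, here's a rough sketch of the portable-offload idea in C with OpenMP target offload; the pragma usage is generic, and any build flags or speedup figures are assumptions rather than anything from the article:

      #include <stdlib.h>

      /* One loop that can be offloaded to a GPU when a device is present
         and falls back to the host otherwise; nothing is hand-tuned for a
         particular piece of hw. */
      void saxpy(const float *x, float *y, float a, size_t n)
      {
          /* The map() clauses spell out the copies an explicit runtime
             would make; with HMM-style unified memory the driver can
             instead migrate pages on fault, so plain malloc'd buffers
             become usable from the device without extra bookkeeping. */
          #pragma omp target teams distribute parallel for \
                  map(to: x[0:n]) map(tofrom: y[0:n])
          for (size_t i = 0; i < n; i++)
              y[i] = a * x[i] + y[i];
      }

      int main(void)
      {
          size_t n = 1 << 20;
          float *x = malloc(n * sizeof *x);
          float *y = malloc(n * sizeof *y);
          for (size_t i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

          saxpy(x, y, 3.0f, n);   /* offloaded if a device exists,
                                     otherwise runs on the CPU */
          free(x);
          free(y);
          return 0;
      }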



      • #23
        Originally posted by pal666 View Post
        compile-time malloc? are you on drugs?
        GPUs are DSPs... lazy mallocs when libc already manages it all is... lazy. REALLY lazy.
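        For anyone lost in the shorthand: a "lazy malloc" on Linux just hands back virtual address space, and physical pages are only attached when each page is first touched; HMM extends roughly the same fault-driven idea to migrating pages toward a device. A toy illustration (Linux-only; the exact RSS values reported depend on your kernel and libc):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        /* Resident set size in kB, read from /proc/self/statm. */
        static long resident_kb(void)
        {
            long pages_total, pages_resident;
            FILE *f = fopen("/proc/self/statm", "r");
            if (!f || fscanf(f, "%ld %ld", &pages_total, &pages_resident) != 2) {
                if (f)
                    fclose(f);
                return -1;
            }
            fclose(f);
            return pages_resident * (sysconf(_SC_PAGESIZE) / 1024);
        }

        int main(void)
        {
            size_t n = 256UL << 20;             /* ask for 256 MiB */
            char *p = malloc(n);

            printf("after malloc: %ld kB resident\n", resident_kb());
            memset(p, 1, n / 4);                /* touch only the first 64 MiB */
            printf("after memset: %ld kB resident\n", resident_kb());

            free(p);
            return 0;
        }

        On a typical kernel with overcommit enabled you'd expect the first figure to be small and the second to be roughly 64 MiB higher, since only the touched pages ever get physical backing.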



        • #24
          Originally posted by c117152 View Post
          GPUs are DSPs... lazy mallocs when libc already manages it all is... lazy. REALLY lazy.
          what are you trying to say again?

