Intel Announces Iris Xe Desktop Graphics For OEMs

  • #51
    The implications for VDI hosts are tantalizing. The scaled-up version of this could deliver reasonably accelerated desktop experiences without having to shell out big bucks for Nvidia GRID licenses and high-end hardware. This one could already be a kick-ass part of a small office's VDI or terminal server.

  • #52
    Originally posted by Danielsan View Post

    Rendering actually. OpenCL is not available for Ryzen APUs with Mesa; I found a workaround but never had time to test it.
    And somehow that is up to Blender developers to fix... right. Instead maybe point your finger at, I don't know, AMD?

  • #53
    Originally posted by kpedersen View Post
    And if you can only do that using blobs.. then why not go the full way and just use Windows or Maya?
    Wow, what a black and white world view. Must suck to be you.

  • #54
    Originally posted by Kemosabe View Post
    Why is it so damn hard to build a high-performing GPU? They have been trying for so long, with all the money it could possibly require. Are there only 5 experts on this planet who can do this? Is all the knowledge proprietary? What is it?
    Creating any architecture that is original and minimizes the cost of using others' IP is very difficult. It requires people who are good at deconstructing new methods and reconstructing them without patent violations. Apple, for example, which has billions to spend on GPU research, still had to poach talent from Imagination. Intel had to poach a lead designer from AMD for Xe. Intel's prior "waste" in GPU development was the fact that they were trying to avoid Nvidia IP as much as possible. Before that they were too cheap and tried to build a GPU with existing in-house IP. It flopped.

  • #55
    Originally posted by numasan View Post

    And somehow that is up to Blender developers to fix... right. Instead maybe point your finger at, I don't know, AMD?
    Didn't know that was about AMD... But I also found the workaround: https://linuxconfig.org/install-open...ian-and-ubuntu
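    For anyone going down that route, a quick way to check whether the workaround actually exposes the APU is to list the OpenCL platforms from Python. This is only a rough sketch, assuming pyopencl is installed (e.g. pip install pyopencl), and is not taken from the linked guide:

      # Rough sanity check: print whatever OpenCL platforms/devices the
      # installed ICDs expose. Assumes pyopencl is available.
      import pyopencl as cl

      try:
          platforms = cl.get_platforms()
      except cl.Error as err:
          platforms = []
          print("clGetPlatformIDs failed - no OpenCL ICD seems to be installed:", err)

      for platform in platforms:
          print("Platform:", platform.name, platform.version)
          for device in platform.get_devices():
              print("  Device:", device.name,
                    "| type:", cl.device_type.to_string(device.type))

    If the APU shows up under an AMD/ROCm or Clover platform, Cycles should at least be able to see it as a compute device.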

  • #56
    Originally posted by numasan View Post

    Wow, what a black and white world view. Must suck to be you.
    Heh, perhaps. But it is more the fact that there is no need to compromise in this day and age when decent Intel and AMD hardware exists.

    So yes, possibly a little black and white, but in this case there really is no benefit to fiddling around in a gray area.
    My approach to hardware is to rip the crap out and replace it with something more appropriate. The cost of second-hand hardware is almost free. I feel that if everyone did this, NVIDIA would either be open-source by now or bankrupt, opening up room for a better company. Both are good outcomes.

    And my suggestion of Windows and Maya was actually me being honest (even though it probably came across as a little sharp); in a professional setting you will probably be using everything anyway: open-source, proprietary, cloud, etc.
    Last edited by kpedersen; 28 January 2021, 10:04 AM.

  • #57
    Originally posted by kpedersen View Post
    Heh, perhaps. But it is more the fact that there is no need to compromise in this day and age when decent Intel and AMD hardware exists.
    It is not a "fact", and "decent" is relative. You can't claim that OpenCL support is complete and pain-free across all Intel and AMD hardware compared to CUDA on Nvidia hardware (see the sketch at the end of this post). Whether you need it or not is irrelevant, just as my dream of ditching Nvidia is.

    So yes, possibly a little black and white, but in this case there really is no benefit to fiddling around in a gray area.
    My approach to hardware is to rip the crap out and replace it with something more appropriate. The cost of second-hand hardware is almost free. I feel that if everyone did this, NVIDIA would either be open-source by now or bankrupt, opening up room for a better company. Both are good outcomes.
    What does this have to do with anything Blender developers and users do?

    And my suggestion of Windows and Maya was actually me being honest (even though it probably came across as a little sharp); in a professional setting you will probably be using everything anyway: open-source, proprietary, cloud, etc.
    In this context, why even mention Windows? Linux has been the de facto OS in the animation and VFX industry for the last two decades. If you've watched any Pixar or AAA blockbusters since 2000, you've literally watched movies created on Linux workstations and rendered on Linux render farms. For that reason Maya has been on Linux for just as long, together with other high-end DCC software. The choice of Linux comes down to the same reasons many of us love it: stability, flexibility and performance. Nvidia also played a huge role in that choice for the industry, and continues to do so. In recent years Blender has been getting noticed more and more by the "big boys" and is even getting contributions from them, both money and code. So honestly, your suggestion is both useless and baseless.

    Nothing personal, I'm just tired of reading comments from people who for some reason want to voice their opinion on subjects they are clearly ignorant about.
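    To make the backend point concrete, this is roughly what picking a Cycles compute device looks like from Blender's Python console. A rough sketch only, assuming a Blender 2.9x build where Cycles still ships the OpenCL backend:

      # Rough sketch (Blender 2.9x): select a Cycles compute backend and enable GPUs.
      # "OPENCL" is the AMD/Intel path here; on Nvidia hardware it would be "CUDA" or "OPTIX".
      import bpy

      prefs = bpy.context.preferences.addons["cycles"].preferences
      prefs.compute_device_type = "OPENCL"  # backend choice is per-vendor
      prefs.get_devices()                   # refresh the detected device list
      for dev in prefs.devices:
          print(dev.name, dev.type, dev.use)
          dev.use = True                    # enable every detected device

      bpy.context.scene.cycles.device = "GPU"

    The fact that the backend string differs per vendor, and that what each backend supports differs too, is exactly the "not pain-free" part.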

  • #58
    Originally posted by edwaleni View Post
    Creating any architecture that is original and minimizes the cost of using others' IP is very difficult. It requires people who are good at deconstructing new methods and reconstructing them without patent violations. Apple, for example, which has billions to spend on GPU research, still had to poach talent from Imagination. Intel had to poach a lead designer from AMD for Xe. Intel's prior "waste" in GPU development was the fact that they were trying to avoid Nvidia IP as much as possible. Before that they were too cheap and tried to build a GPU with existing in-house IP. It flopped.
    Would you point me in the direction of some more reading about Intel trying to avoid nVidia IP in their GPU dev? I know about Larrabee being a mess, so that covers the "before that" section. Thanks.

  • #59
    Originally posted by Paradigm Shifter View Post
    Would you point me in the direction of some more reading about Intel trying to avoid nVidia IP in their GPU dev? I know about Larrabee being a mess, so that covers the "before that" section. Thanks.

  • #60
    Cheers mate!
