
Thread: Is Xeon Phi every OSS enthusiast's wet dream

  1. #1
    Join Date
    Sep 2011
    Location
    Rio de Janeiro
    Posts
    192

    Default Is Xeon Phi every OSS enthusiast's wet dream

    The first time I read this article http://semiaccurate.com/2012/11/12/w...or-a-xeon-phi/ I wondered whether this card would be the best way to get high-end graphics with OSS drivers. It should be pretty straightforward to run LLVM on top of it, right?

    Obviously these cards are aimed at HPC, but the prices are in line with Teslas, so Intel should be able to sell these chips at similar prices to consumers.

    Can anyone with a better understanding of the driver stack tell me whether I'm wrong or right?

  2. #2
    Join Date
    Mar 2012
    Posts
    106

    Default

    In short, the answer is "It IS the best way, but it will take a little longer than you thought."

    Pricing aside, the most outstanding problem is that LLVM doesn't have a backend for Knights Corner. SSE instructions also won't work on KNC; it has its own 512-bit vector ISA.
    We also need a (virtual?) DRM driver to handle all the mess, such as memory management and interaction with i915. You won't want a pure rendering card.
    Finally, either Intel pushes the KNC kernel driver into the Linux mainline, or we're stuck with RHEL6/CentOS6/SL6/SuSE and their ancient software.

    Nevertheless, it is still the most promising path towards high-performance rendering with OSS.

    P.S.
    http://software.intel.com/en-us/arti...ck-start-guide
    and
    http://registrationcenter.intel.com/.../readme-en.txt
    give a nicer picture of the stack. Better than I thought, but not perfect.
    Last edited by zxy_thf; 11-13-2012 at 05:54 AM.

  3. #3
    Join Date
    Sep 2011
    Location
    Rio de Janeiro
    Posts
    192

    Default

    Thanks for the reply. Some version of KC will probably trickle down to the consumer market eventually. The days of mixing and matching parts from different vendors are coming to an end, so if Intel wants to stay relevant for graphically intensive applications they must develop a high-performance GPU. I remember reading rumors that future integrated graphics might even be based on it. It will probably take some time, though...

    What I find most interesting is that, if I understood correctly, since KC is "easier" to program, developing a driver for it would probably be "easier" than for other GPUs. That would reduce the amount of effort a complete GPU driver requires, improving our chances of having open drivers. If such a trend catches on with other GPU manufacturers, that should be great for consumers, right?

  4. #4
    Join Date
    Mar 2012
    Posts
    106

    Default

    Quote Originally Posted by Figueiredo View Post
    Thanks for the reply. Some version of KC will probably trickle down to the consumer market eventually. The days of mixing and matching parts from different vendors are coming to an end, so if Intel wants to stay relevant for graphically intensive applications they must develop a high-performance GPU. I remember reading rumors that future integrated graphics might even be based on it. It will probably take some time, though...

    What I find most interesting is that, if I understood correctly, since KC is "easier" to program, developing a driver for it would probably be "easier" than for other GPUs. That would reduce the amount of effort a complete GPU driver requires, improving our chances of having open drivers. If such a trend catches on with other GPU manufacturers, that should be great for consumers, right?
    (I only skimmed part of KC's ISA, so sorry for any mistakes.)
    OpenGL developers wouldn't need to write a "driver" but an "OpenGL server". That's what developers were doing in the "good old days" :P, when SGI dominated workstations.
    Unfortunately, implementing one is still a lot of work, but this time people wouldn't need to work with two ISAs for each card.

    As for an open-source GPU driver, I'm still not optimistic. It would take a long time before the community has a real threat to NV/AMD's solutions. For example, the state tracker of Mesa can only handle OpenGL 3.1, rather than 4.x.

  5. #5
    Join Date
    Sep 2011
    Location
    Rio de Janeiro
    Posts
    192

    Default

    Quote Originally Posted by zxy_thf View Post
    As for an open-source GPU driver, I'm still not optimistic. It would take a long time before the community has a real threat to NV/AMD's solutions. For example, the state tracker of Mesa can only handle OpenGL 3.1, rather than 4.x.
    I guess what I was expecting is that if a GPU can run sufficiently general code, the same driver would work for every GPU that is equally general. Bear with me for a moment.

    Obviously I'm going out on a limb here, but let's imagine Xeon Phi crushes the competition in the HPC space and enters the consumer market (probably integrated as the GPU of some future Intel SoC).

    AMD and nVidia would be pressed to put out more "general" GPUs, probably accepting the ARM instruction set. Maybe they would even decide to push this "programmability" into future OpenGL revisions.

    In this scenario, if a driver similar to llvmpipe ran on every GPU out there, the effort of building and maintaining a driver would be much less than it is now. Right? This could potentially be a huge win for open source: a single driver to rule them all, much like Linux itself. One can only dream...

    I'm sorry if I made any mistakes with the terms or concepts; I'm not a programmer.

  6. #6
    Join Date
    Mar 2012
    Posts
    106

    Default

    Quote Originally Posted by Figueiredo View Post
    I guess what I was expecting is that if a GPU can run sufficiently general code, the same driver would work for every GPU that is equally general. Bear with me for a moment.

    Obviously I'm going out on a limb here, but let's imagine Xeon Phi crushes the competition in the HPC space and enters the consumer market (probably integrated as the GPU of some future Intel SoC).

    AMD and nVidia would be pressed to put out more "general" GPUs, probably accepting the ARM instruction set. Maybe they would even decide to push this "programmability" into future OpenGL revisions.

    In this scenario, if a driver similar to llvmpipe ran on every GPU out there, the effort of building and maintaining a driver would be much less than it is now. Right? This could potentially be a huge win for open source: a single driver to rule them all, much like Linux itself. One can only dream...

    I'm sorry if I made any mistakes with the terms or concepts; I'm not a programmer.
    It's possible, but first AMD and NV would have to agree on an ISA, or at least on some features of their processors.
    Just as one can't easily develop an OS for two CPUs, one with an MMU and interrupts and one without, we currently can't develop such a general driver for all GPUs because they have too many differences. For example, Intel, AMD and NVIDIA handle context switching in three different ways.

    BTW, maybe the funniest thing is: if a GPU is "general enough", why do we still call it a GPU? Just because it can output video?

  7. #7
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,285

    Default

    I guess the two big challenges will be texture processing (Larrabee had dedicated texture units) and scaling to a larger number of threads.

    My recollection was that recent llvmpipe versions scaled pretty well to 3 cores but hit diminishing returns after that (see Michael's test below, but ignore the 12-thread result because there you're running "hyper-threads" instead of more cores):

    http://www.phoronix.com/scan.php?pag...llvmpipe&num=1

    I think the scaling issue should be manageable (GPUs manage it today with the equivalent of 20+ cores) -- I'm less sure about texturing, simply because there's a lot of processing power hidden in the texture filtering.

    Quote Originally Posted by zxy_thf View Post
    BTW, maybe the funniest thing is: if a GPU is "general enough", why do we still call it a GPU? Just because it can output video?
    Have you no faith in Marketing? GPU will just become "General-purpose Processing Unit"
    Last edited by bridgman; 11-13-2012 at 11:22 AM.

  8. #8
    Join Date
    Sep 2011
    Location
    Rio de Janeiro
    Posts
    192

    Default

    bridgman,

    Due to my ignorance of the subject, I couldn't grasp from AMD's roadmap whether such "programmability" is also expected in the AMD camp. Obviously you can only share what's been made public already, but if you could be so kind as to briefly clarify how the HSA improvements differ from a Xeon+XeonPhi chip, I'm sure we layman users would greatly appreciate it.

  9. #9
    Join Date
    Jan 2012
    Posts
    18

    Default

    Quote Originally Posted by zxy_thf View Post
    It's possible, but first AMD and NV would have to agree on an ISA, or at least on some features of their processors.
    Just as one can't easily develop an OS for two CPUs, one with an MMU and interrupts and one without, we currently can't develop such a general driver for all GPUs because they have too many differences. For example, Intel, AMD and NVIDIA handle context switching in three different ways.
    I think the keyword here is LLVM. Yes, you can easily develop an OS for two CPUs utilizing LLVM. Of course, if one CPU lacks some feature like an MMU, the OS must be able to cope with the lack of that component. However, you are talking about a quite small difference here that LLVM should have no problem handling.

    Code that targets LLVM does not target a specific ISA...

    Quote Originally Posted by zxy_thf View Post
    BTW, maybe the funniest thing is: if a GPU is "general enough", why do we still call it a GPU? Just because it can output video?
    Why are we still calling something a "sound card" when it's most often part of the chipset? For historical reasons. We already have OpenCL and other standards that make a GPU a lot more than a GPU.

  10. #10
    Join Date
    Jan 2012
    Posts
    18

    Default

    Quote Originally Posted by zxy_thf View Post
    (I only skimmed part of KC's ISA, so sorry for any mistakes.)
    OpenGL developers wouldn't need to write a "driver" but an "OpenGL server". That's what developers were doing in the "good old days" :P, when SGI dominated workstations.
    Unfortunately, implementing one is still a lot of work, but this time people wouldn't need to work with two ISAs for each card.

    As for an open-source GPU driver, I'm still not optimistic. It would take a long time before the community has a real threat to NV/AMD's solutions. For example, the state tracker of Mesa can only handle OpenGL 3.1, rather than 4.x.
    Actually they would need to write a Mesa driver, as Mesa already has an OpenGL server.

    What do you mean? The Mesa drivers for NV/AMD are at 3.1 too. The Xeon Phi with a good Mesa driver has a fair chance of giving us performance that neither NV nor AMD can currently match.

    Yes, I know that the proprietary drivers have more features and performance, but that's totally irrelevant. For a bunch of reasons I need FOSS drivers and have to judge a device based on how it performs with FOSS drivers. And I know I'm not alone with such use cases.
