Intel Opens Up nGraph Source Code For DNN Model Compiler


  • Intel Opens Up nGraph Source Code For DNN Model Compiler

    Phoronix: Intel Opens Up nGraph Source Code For DNN Model Compiler

    Intel tonight announced they are open-sourcing their nGraph compiler code, which serves as a framework-neutral deep neural network model compiler...
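    In practice the idea is that nGraph slots in behind existing frameworks rather than replacing them. As a rough illustration of what the TensorFlow bridge usage looks like (the module name and import-only activation below are assumptions based on how the bridge has been packaged, and the details vary between releases):

    Code:
    # Illustrative only: "ngraph_bridge" is the module name the nGraph
    # TensorFlow bridge has shipped under; treat it as an assumption.
    import tensorflow as tf
    import ngraph_bridge  # importing registers nGraph as a graph rewriter/backend

    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
    c = tf.matmul(a, b)  # eligible ops get compiled and run through nGraph

    with tf.Session() as sess:
        print(sess.run(c))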


  • #2
    So, getting a GPU with tensor cores like Nvidia's Volta range will give a step up in TensorFlow performance, and using Intel's nGraph on top of that will help further?

    I wonder if there are any benchmarks for this type of workload that Michael could run with the Phoronix Test Suite? It's something I've been meaning to dive into at some point, and benchmarks would be pretty handy when training some models can take days. (A rough sketch of the kind of workload I have in mind is at the end of this post.)

    Does Michael have a Titan V, or could he get one sponsored by Nvidia for this kind of testing? I'm not sure what his relationship with Nvidia is. I know someone who gets sponsored hardware sometimes; maybe I could reach out and ask his contact about sponsoring a Titan V for Phoronix if it'd help (no promises it'd be approved).
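    To be concrete about the workload I mean: even timing a small training loop would be a useful starting point. A throwaway TensorFlow 1.x sketch with synthetic data (nothing tuned, just to show the shape of what a PTS test profile could wrap):

    Code:
    import time
    import numpy as np
    import tensorflow as tf

    # Tiny synthetic classification problem: 1000 samples, 784 features, 10 classes.
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.int64, [None])
    hidden = tf.layers.dense(x, 256, activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, 10)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

    data = np.random.rand(1000, 784).astype(np.float32)
    labels = np.random.randint(0, 10, size=1000)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        start = time.time()
        for _ in range(100):
            sess.run(train_op, feed_dict={x: data, y: labels})
        print("steps/sec: %.1f" % (100 / (time.time() - start)))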



    • #3
      Originally posted by polarathene
      Does Michael have a Titan V, or could he get one sponsored by Nvidia for this kind of testing? I'm not sure what his relationship with Nvidia is. I know someone who gets sponsored hardware sometimes; maybe I could reach out and ask his contact about sponsoring a Titan V for Phoronix if it'd help (no promises it'd be approved).
      The thing is: they don't need any help selling them. Titan V's already sell themselves, by virtue of nothing comparable being on the market.



      • #4
        Originally posted by coder
        The thing is: they don't need any help selling them. Titan V's already sell themselves, by virtue of nothing comparable being on the market.
        Uhh... so I guess we shouldn't have benchmarks for any AMD GPUs either, since those are selling so well to miners and whatnot? Same with Threadripper and EPYC, because those sold well too. I remember wanting to get an EPYC, but it was out of stock in my entire country and on Amazon and Newegg, with nobody knowing when more would be available. I got a Threadripper instead; it was the last one at the store, I think.

        I don't need benchmarks to know the top-tier product is good. I might like to know how much better it is than the others, and in this case: how much of a difference do the tensor cores make, how much does the Volta architecture contribute (it has some performance enhancements only exposed through Volta-specific CUDA features), and how much of an effect does Intel's nGraph have on top of all that? If you don't think that information is useful, why do we benchmark GPUs with different games? The games often sell themselves too, so by that logic you wouldn't need to know how well a game performs on each GPU, or where each GPU's strengths and weaknesses lie across workloads.



        • #5
          Originally posted by polarathene
          Uhh... so I guess we shouldn't have benchmarks for any AMD GPUs either, since those are selling so well to miners and whatnot? Same with Threadripper and EPYC, because those sold well too. I remember wanting to get an EPYC, but it was out of stock in my entire country and on Amazon and Newegg, with nobody knowing when more would be available. I got a Threadripper instead; it was the last one at the store, I think.
          Since you seem to care so much, maybe you can buy a Titan V for Michael to test. I just said why I think it's unlikely for Nvidia to give him one. All of your examples are products which *do* have legit competition (so, comparisons like Michael's are relevant) or have some other sort of supply-shortage that might be skewing demand (and let's not forget that Michael's GPU review samples nearly all preceded the recent cryptomining boom).

          They don't need him to run a bunch of random-ass Linux benchmarks on it, on the off chance that a handful of Linux gamers will buy one.

          Originally posted by polarathene
          I don't need benchmarks to know the top-tier product is good. I might like to know how much better it is than the others, and in this case: how much of a difference do the tensor cores make, how much does the Volta architecture contribute (it has some performance enhancements only exposed through Volta-specific CUDA features), and how much of an effect does Intel's nGraph have on top of all that?
          They do give them away, but they give them to deep learning researchers. Volta is in a class where they don't really need to prove how fast it is, nor are they likely to entrust that task to someone who lacks a background in the field.



          • #6
            Originally posted by coder
            Since you seem to care so much, maybe you can buy a Titan V for Michael to test.
            My earlier post stated that I could try to reach out to Nvidia through a contact of mine who has received sponsored hardware from Nvidia in the past. But if Michael already has direct contact with Nvidia and has tried to get a Titan V (or any other Nvidia hardware) sponsored for testing, then I doubt that will make much of a difference. If he doesn't, I don't mind trying to help reach Nvidia.


            > They don't need him to run a bunch of random-ass Linux benchmarks on it, on the off chance that a handful of Linux gamers will buy one.

            Titan V's are useful for a lot more than just gaming. The contact I referred to uses them for heavy compute work. Not everyone reads Phoronix articles and benchmarks for gaming purposes; surely that is evident.

            > Volta is in a class where they don't really need to prove how fast it is

            That's not the point I was trying to make. I understand it's top tier, but I don't know how it compares to a Titan Xp (or 1080 Ti), what the cost/performance ratio is between the Titan V and the others for certain workloads, how Intel's nGraph scales across these models and what additional advantage it offers, how much of a difference the Volta-only CUDA improvements make (a bit difficult to isolate since the Titan V is currently the only Volta card available?), or how much of a difference the dedicated tensor cores make versus general compute (same issue as before).

            If I just want to throw money around without that information and buy Titan V's for the sole reason that they're currently capable of the best performance, then sure, benchmarks aren't too useful. But when you need to properly justify the gains, especially in relation to the cost of such purchases, that's not so easy to do without data to back it up.

            > nor are they likely to entrust that task to someone who lacks a background in the field.

            Phoronix has been around a long time, is well known for what it focuses on, and has a reasonable audience, does it not? Nvidia would surely be able to advise Michael on how to go about it or give feedback to improve his testing. With the Phoronix Test Suite, it could cater further towards that.



            • #7
              Originally posted by polarathene
              My earlier post stated that I could try to reach out to Nvidia through a contact of mine who has received sponsored hardware from Nvidia in the past. But if Michael already has direct contact with Nvidia and has tried to get a Titan V (or any other Nvidia hardware) sponsored for testing, then I doubt that will make much of a difference. If he doesn't, I don't mind trying to help reach Nvidia.
              If Michael can get one, that's great. I wouldn't mind seeing how it runs his gauntlet, either. I'm just looking at it from Nvidia's perspective and it's not so compelling. You have to remember they're selling a $10k GPU for $3k, so the actual cost probably isn't that much lower.

              It's a distinct market segment, with very little crossover between it and other GPUs. There's almost no one who's in the market for another GPU that would decide to go with a Titan V on the basis of any of his benchmarks. So, the upside for them just isn't there.

              Originally posted by polarathene
              That's not the point I was trying to make. I understand it's top tier, but I don't know how it compares to a Titan Xp (or 1080 Ti), what the cost/performance ratio is between the Titan V and the others for certain workloads,
              It's pretty simple. If you're training models that fit in 12 GB and can't afford multi-GPU training, then get a Titan V. If you need good double-precision GPU compute on a budget, get a Titan V. If you need the best gaming card money can buy, get a Titan V (or, actually, now the Quadro GV100 version).

              For fp32 GPU-compute, GTX 1080 Ti (or Titan Xp, given current 1080 Ti pricing) is still the way to go.

              Originally posted by polarathene
              how Intel's nGraph scales across these models and what additional advantage it offers,
              The limiting factor is probably going to be the benchmarks. Finding realistic and interesting benchmarks and getting it all to work is no small undertaking. Your time and energy are much better spent working on such contributions to PTS than arguing with me. For a proper comparison, you'll want to look at TensorRT, which also does network optimization.

              Originally posted by polarathene
              how much of a difference the Volta-only CUDA improvements make (a bit difficult to isolate since the Titan V is currently the only Volta card available?),
              Not nearly enough to justify the price difference.

              Originally posted by polarathene
              how much of a difference the dedicated tensor cores make versus general compute (same issue as before).
              They don't. They're special-purpose units that are accessible via special intrinsics (or, better yet, via optimized libraries like cuDNN or TensorRT). The compiler isn't randomly inserting them into your code, although some people have dabbled in using them for purposes other than deep learning.
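              To make the opt-in concrete, here's a rough sketch (TensorFlow purely as an illustration): the code never "calls" a tensor core, it just hands fp16 operands to an op, and on Volta the backing library (cuBLAS/cuDNN) may then pick a tensor-core kernel. Whether it actually does is up to the library and driver, so treat this as showing the opt-in, not a guarantee that tensor cores get used:

              Code:
              import numpy as np
              import tensorflow as tf

              # fp16 operands make this matmul *eligible* for tensor-core kernels via
              # cuBLAS on Volta; in fp32, or on other hardware, the same code just runs
              # on the regular CUDA cores.
              a = tf.placeholder(tf.float16, shape=[4096, 4096])
              b = tf.placeholder(tf.float16, shape=[4096, 4096])
              c = tf.matmul(a, b)

              with tf.Session() as sess:
                  a_val = np.random.rand(4096, 4096).astype(np.float16)
                  b_val = np.random.rand(4096, 4096).astype(np.float16)
                  sess.run(c, feed_dict={a: a_val, b: b_val})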

              Originally posted by polarathene
              > nor are they likely to entrust that task to someone who lacks a background in the field.

              Phoronix has been around a long time, is well known for what it focuses on, and has a reasonable audience, does it not? Nvidia would surely be able to advise Michael on how to go about it or give feedback to improve his testing. With the Phoronix Test Suite, it could cater further towards that.
              I'm not sure PTS is the best vehicle for deep learning benchmarks. Besides, there are already others out there:



              You might also find this a worthwhile read:

              https://www.anandtech.com/show/12170...ew-titanomachy
              Last edited by coder; 22 April 2018, 05:00 PM.

