Vega 12/20 Added To AMDGPU LLVM, Confirms New GCN Deep Learning Instructions For Vega 20


  • #21
    Originally posted by olesalscheider View Post

    Well, that might be true for some models. And of course it does not make sense to throw away prior knowledge if you have it.
    But there are also models where you can't really model everything. Take image processing, for example. Even here, the models can be greatly simplified by using a bit of prior knowledge (e.g. that objects in an image are usually translation invariant to some degree, so you can use a CNN instead of a fully-connected NN and have to learn far fewer parameters). And still, these models need days to weeks on ten(s) of high-performance GPUs...
    I generally agree 100% with what you wrote. I mostly deal with problems that can almost be reduced to outcome = frequency x severity. So images, voice, etc. are things I really know nothing about, and you're probably right. But recently, during a breakfast with colleagues, the subject of an image-type problem came up. I know nothing at all about images. The problem was to identify structures that suddenly appeared on satellite images, and then determine whether the new structures were natural or man-made. A super hard problem, supposedly. Well, apparently no one in that team had much mathematical knowledge, because to me the heart of the difficulty, determining whether a shape is natural or man-made, was solved at least 100 years ago. A few hours in the library will save weeks and months of high-performance GPU time for many problems. People working with voice or images would save a lot of time if they took a class in harmonic analysis and another in measure theory. This is all my own faltering opinion.
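
    To put the parameter-count argument from the quoted post in concrete numbers, here is a toy sketch. PyTorch is assumed purely as notation and the layer sizes are made up; it just compares one convolutional layer against one fully-connected layer over the same image size:

    import torch.nn as nn

    # Map a 3x64x64 image to 16 feature maps of the same spatial size.
    conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # weights shared across positions
    dense = nn.Linear(3 * 64 * 64, 16 * 64 * 64)        # every pixel wired to every output

    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(conv))   # 448 (3*16*3*3 weights + 16 biases)
    print(count(dense))  # 805,371,904 (12288*65536 weights + 65536 biases)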



    • #22
      Originally posted by AndyChow View Post
      The current Tensorflow git fails.

      git+https://github.com/tensorflow/tensorflow

      NCCL problem, which is Nvidia tech. I have AMD tech.
      So, why aren't you using the ROCm fork I linked above?



      • #23
        Originally posted by AndyChow View Post
        A few hours in the library will save weeks and months of high-performance GPU time for many problems. People working with voice or images would save a lot of time if they took a class in harmonic analysis and another in measure theory. This is all my own faltering opinion.
        Pretty much across the board, deep learning has reached and exceeded the best classical solutions to problems like speech and object recognition. Moreover, it can work without burning valuable development time on hand-coded models built around assumptions that might or might not hold. Then, you have all the fascinating work on generative applications.


        Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive at the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high-quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets.


        I'm all for a good, closed-form solution, where one exists. But a lot of interesting, real-world problems don't have one. Furthermore, the potential for deep learning to solve these without requiring practitioners to become world-class domain experts shouldn't be underestimated.

        I'm not saying it's a panacea, nor do I mean to trivialize the difficulties in applying it. There used to be a lot of skepticism around neural networks, but complaints of the sort you raised have scarcely been heard in the past few years.
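
        For what it's worth, the "shared-latent space" idea in that abstract comes down to weight sharing between the per-domain networks. Here is only a toy sketch of that idea, with PyTorch assumed as notation and made-up layer sizes, not the authors' actual Coupled-GAN architecture:

        import torch
        import torch.nn as nn

        # Two tiny decoders, one per image domain. The high-level layers are the
        # same module object, so both domains decode from one shared latent code;
        # only the final layers are domain-specific.
        shared = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        decode_a = nn.Sequential(shared, nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())
        decode_b = nn.Sequential(shared, nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())

        z = torch.randn(1, 64, 8, 8)                # one latent code for "the same content"
        img_a, img_b = decode_a(z), decode_b(z)     # rendered in domain A and in domain B
        print(img_a.shape, img_b.shape)             # torch.Size([1, 3, 32, 32]) each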



        • #24
          Originally posted by coder View Post
          Pretty much across the board, deep learning has reached and exceeded the best classical solutions to problems like speech and object recognition. Moreover, it can work without burning valuable development time on hand-coded models built around assumptions that might or might not hold. Then, you have all the fascinating work on generative applications.


          Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive at the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high-quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets.


          I'm all for a good, closed-form solution, where one exists. But a lot of interesting, real-world problems don't have one. Furthermore, the potential for deep learning to solve these without requiring practitioners to become world-class domain experts shouldn't be underestimated.

          I'm not saying it's a panacea, nor do I mean to trivialize the difficulties in applying it. There used to be a lot of skepticism around neural networks, but complaints of the sort you raised have scarcely been heard in the past few years.
          You've pointed out excellent examples of unmatched applications for deep learning. Predictive models aren't one of them. We might have to agree to disagree, but the predictive success of deep learning is currently mediocre, imo. I can only judge on what I see. Other things, like deciding "this is a cat, this is a dog", yeah, it's truly impressive.

          I will try the ROCm fork, because why not. The git page says that it now builds under Linux, which has changed since I wrote my original comment. At that time, only the macOS build was listed as working, and compiling on my Mac would take a week. First, I'd have to find it; it's in a box somewhere.



          • #25
            Originally posted by AndyChow View Post
            The current Tensorflow git fails. git+https://github.com/tensorflow/tensorflow

            NCCL problem, which is Nvidia tech. I have AMD tech.
            Yep... on AMD/ROCm you would use RCCL instead.

            ROCm Communication Collectives Library (RCCL): https://github.com/ROCm/rccl


            That said, I don't think the latest Tensorflow git includes HIP/ROCm support yet.

            Originally posted by AndyChow View Post
            I will try the ROCm fork, because why not. The git page says that it now builds under Linux, which has changed since I wrote my original comment. At that time, only the macOS build was listed as working, and compiling on my Mac would take a week. First, I'd have to find it; it's in a box somewhere.
            I don't understand this comment - the ROCm fork has never included or mentioned macOS support as far as I know. Can you check?
            Last edited by bridgman; 03 May 2018, 01:41 AM.



            • #26
              Originally posted by AndyChow View Post

              You've pointed out excellent examples of unmatched applications for deep learning. Predictive models aren't one of them. We might have to agree to disagree, but the predictive success of deep learning is currently mediocre, imo. I can only judge on what I see. Other things, like deciding "this is a cat, this is a dog", yeah, it's truly impressive.
              Well, there's this, which I thought was interesting: https://www.quantamagazine.org/machi...haos-20180418/

              But the Nvidia toolchain is currently a huge pain to deal with. Not to mention it's not portable, of course.



              • #27
                Originally posted by coder View Post
                Old news. We have received clarification that the name indicates nothing more than the sequence in which the chip was designed. So, no clue as to how big it is.

                We also know it's not Vega M, since that actually appears to be Polaris-based.


                No, why? AFAIK, AMD doesn't even use the word "Vega" in the names of their workstation cards.

                Okay, there's Vega Frontier, but that was a weird sort of prosumer hybrid. Their main Vega 64 workstation card is the AMD Radeon Pro WX 9100. To be honest, it feels like the biggest difference is probably ECC support in the Pro.

                Anyway, AMD doesn't normally make completely new chips just for workstations. They follow the industry standard practice of using the same GPUs in both consumer and workstation products. It's only the very largest GPUs that fail to reach the mass market, such as Nvidia's P100 and V100 chips. Vega 20 looks to follow this path.
                That was my point.



                • #28
                  Originally posted by AndyChow View Post
                  The current Tensorflow git fails.

                  git+https://github.com/tensorflow/tensorflow

                  NCCL problem, which is Nvidia tech. I have AMD tech.
                  Tensorflow 1.8 with ROCm support is live:
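
                  If anyone wants a quick sanity check that the ROCm build actually sees the AMD GPU, something like the following should do; it only uses the standard TensorFlow 1.x API and assumes the device simply shows up as a regular GPU:

                  import tensorflow as tf
                  from tensorflow.python.client import device_lib

                  print(tf.__version__)                                     # e.g. 1.8.0
                  print([d.name for d in device_lib.list_local_devices()])  # expect a '/device:GPU:0' entry

                  # Run a trivial op on the GPU and log where it was placed.
                  with tf.device('/device:GPU:0'):
                      c = tf.constant([1.0, 2.0, 3.0]) + tf.constant([4.0, 5.0, 6.0])

                  with tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                                        log_device_placement=True)) as sess:
                      print(sess.run(c))  # [5. 7. 9.]; the placement log shows which device ran it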


