Mozilla Releases DeepSpeech 0.6 With Better Performance, Leaner Speech-To-Text Engine


  • #11
    Originally posted by oleid View Post

    No, not OpenCL. They have HCC and HIP (for CUDA support) these days. Their TensorFlow port is NOT OpenCL-based. The very first version was, but that was slow.
    Cool, I didn't know about HCC. Do you know how custom-tuned CUDA code compares to HCC code (on GPUs of comparable raw capability)? Also, if you know, what's the overhead of writing HIP and compiling for Nvidia GPUs vs. writing a classical CUDA program?

    Comment


    • #12
      Originally posted by Espionage724 View Post

      Completely ignoring the fact that there's CPU acceleration support that works fine regardless of hardware, does AMD have something comparable to CUDA and applicable to this project?
      I'm not ignoring the CPU acceleration, but I am really curious why Mozilla isn't focusing on open standards and open source for the GPU side.
      If they also work on GPU acceleration, why do they use Nvidia's products?
      Why can't they use AMD's alternatives like OpenCL or ROCm?
      I no longer own any Nvidia products, and I don't intend ever to own one again because of their anti-open-source attitude.

      Comment


      • #13
        Originally posted by Espionage724 View Post

        Completely ignoring the fact that there's CPU acceleration support that works fine regardless of hardware, does AMD have something comparable to CUDA and applicable to this project?
        Have you heard of our lord and saviour OpenCL? Or Vulkan Compute?

        Comment


        • #14
          Originally posted by oleid View Post

          No, not OpenCL. They have HCC and HIP (for CUDA support) these days. Their TensorFlow port is NOT OpenCL-based. The very first version was, but that was slow.
          SYCL, most probably.

          Comment


          • #15
            Originally posted by sandy8925 View Post

            Have you heard of our lord and saviour OpenCL? Or Vulkan Compute?
            That is beside the point, because Mozilla uses TensorFlow, which in turn uses whatever backend you tell it to: either the CPU, CUDA, or AMD's ROCm if you use the tensorflow-rocm port. If you want TensorFlow to support Vulkan Compute for some strange reason, you are free to contribute a port...
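
            For what it's worth, a minimal sketch of what "telling it" looks like from the Python side is below. It assumes the TensorFlow 2.x configuration API (tf.config / tf.device), so the exact calls differ on the 1.x line DeepSpeech is built against, and none of this is taken from DeepSpeech's own code:

                import tensorflow as tf

                # Whatever build is installed (CPU-only, CUDA, or the tensorflow-rocm
                # port) shows up through the same API; the script doesn't care which.
                gpus = tf.config.list_physical_devices("GPU")
                print("Visible GPUs:", gpus)

                # Explicit placement: pin the op to the first GPU if there is one,
                # otherwise fall back to the CPU backend.
                device = "/GPU:0" if gpus else "/CPU:0"
                with tf.device(device):
                    x = tf.random.normal([1024, 1024])
                    y = tf.matmul(x, tf.transpose(x))
                print("matmul ran on:", y.device)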

            Comment


            • #16
              Originally posted by sabian2008 View Post

              Also, if you know, what's the overhead of writing HIP and compiling for Nvidia GPUs vs. writing a classical CUDA program?
              I've never used CUDA or HIP, but as far as I've read, HIP compiles down to the equivalent CUDA code on Nvidia hardware, so it's more like a preprocessor wrapper: write HIP once and you get CUDA for Nvidia and native code for AMD.

              I think AMD's TensorFlow code is a HIPified variant of the CUDA backend, but I don't know how much tuning was needed after the hipification.
              Last edited by oleid; 08 December 2019, 05:49 PM.

              Comment


              • #17
                Originally posted by Danny3 View Post

                I'm not ignoring the CPU acceleration, but I am really curious why Mozilla isn't focusing on open standards and open source for the GPU side.
                If they also work on GPU acceleration, why do they use Nvidia's products?
                Why can't they use AMD's alternatives like OpenCL or ROCm?
                They don't write CUDA code. They use a popular library (TensorFlow), which has a CUDA backend as well as a CPU backend. AMD has ported that library to ROCm, and it seems to work on a stock kernel as long as you compile AMD's code for your distribution (or use their Debian or RHEL repo).

                AFAIK there is no Vulkan Compute backend for TensorFlow. There used to be an OpenCL backend by AMD, but it wasn't competitive; hence the new approach.
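
                To illustrate the point that it is a port rather than a rewrite: AMD ships ROCm support as a separate wheel, and the user-facing script stays the same. A small sanity check might look like this (TensorFlow 2.x-style call; package names as commonly published, treat the details as assumptions):

                    # The same script runs unchanged on either backend; only the wheel differs:
                    #   pip install tensorflow-gpu    (CUDA backend)
                    #   pip install tensorflow-rocm   (AMD's ROCm port)
                    import tensorflow as tf

                    print("TensorFlow", tf.__version__)
                    gpus = tf.config.list_physical_devices("GPU")
                    print("Visible accelerators:", [g.name for g in gpus] or "none (CPU only)")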

                Comment


                • #18
                  Originally posted by Danny3 View Post
                  "NVIDIA CUDA acceleration" ?
                  No thanks, either AMD or nothing!
                  I don't care about anything that relies on the "middle finger" company.
                  The following is my personal take on this -- not representing any company here:

                  The choice of NVIDIA here for something as sensitive as speech is curious. I for one won't use this, for privacy reasons: the large, binary-only NVIDIA stack (running in the kernel and as root) is required just to use the cards at all, due to the signed firmware (and obviously CUDA itself).

                  Plus, while NVIDIA did revamp parts of their EULA, there still seem to be weasel words to the effect of "you're responsible for the hardware if it breaks or NVIDIA renders it useless -- no refunds and no recourse. And we can stop you from using that hardware at any time by cancelling your driver license, especially if you try to legally enforce any of your other rights against NVIDIA (e.g. defective hardware, violated contracts, or unlawful restrictions on end use of sold NVIDIA products)."

                  Who needs any of that as an ongoing business risk?

                  The SOFTWARE is not sold, and instead is only licensed for use, strictly in accordance with this document. The hardware is protected by various patents, and is sold, but this LICENSE does not cover that sale.
                  If Customer commences or participates in any legal proceeding against NVIDIA, then NVIDIA may, in its sole discretion, suspend or terminate all license grants and any other rights provided under this LICENSE during the pendency of such legal proceedings.

                  Comment


                  • #19
                    With this getting so fast and small, maybe we can have wake-on-voice as a standard feature soon.
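
                    For the curious, below is a rough sketch of a naive wake-word check on top of the DeepSpeech Python bindings. The Model constructor and stt() signature follow the 0.6-era API as I remember it, and the model path and wake word are made up, so treat every name here as an assumption and check the release docs:

                        import numpy as np
                        from deepspeech import Model  # pip install deepspeech

                        MODEL_PATH = "deepspeech-0.6.0-models/output_graph.pbmm"  # hypothetical local path
                        WAKE_WORD = "firefox"                                     # hypothetical wake word

                        # The 0.6-era constructor took the model path plus a beam width (assumption).
                        model = Model(MODEL_PATH, 500)

                        def heard_wake_word(audio_chunk: np.ndarray) -> bool:
                            # audio_chunk: mono 16-bit PCM at 16 kHz as an int16 array.
                            # (Older releases also passed a sample-rate argument here.)
                            return WAKE_WORD in model.stt(audio_chunk)

                        # Feed a short rolling window captured from the microphone, e.g.:
                        # if heard_wake_word(window): start_full_recognition()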

                    Comment


                    • #20
                      As much as I support AMD and want Radeon support for everything, as long as they don't manage to get official support upstream in TensorFlow, I can't blame anyone for not (officially) supporting AMD.

                      Comment
