Mozilla Releases DeepSpeech 0.6 With Better Performance, Leaner Speech-To-Text Engine


    Phoronix: Mozilla Releases DeepSpeech 0.6 With Better Performance, Leaner Speech-To-Text Engine

    One of the side projects Mozilla continues to develop is DeepSpeech, a speech-to-text engine derived from research by Baidu and built atop TensorFlow with both CPU and NVIDIA CUDA acceleration. This week marked the release of Mozilla DeepSpeech 0.6 with performance optimizations, Windows builds, lighter language models, and other changes...

    http://www.phoronix.com/scan.php?pag...h-0.6-Released
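For anyone wanting to try it, a minimal sketch of feeding audio to the Python bindings might look like the following. The model filename, beam width, and exact `Model` constructor arguments are assumptions based on the 0.6 release clients and may differ between 0.x versions; DeepSpeech expects 16 kHz, 16-bit mono PCM input.

```python
import array
import wave

def load_pcm16(path):
    """Read a 16-bit mono WAV file and return (sample_rate, int16 samples)."""
    with wave.open(path, "rb") as wav:
        if wav.getnchannels() != 1 or wav.getsampwidth() != 2:
            raise ValueError("DeepSpeech expects 16-bit mono PCM audio")
        rate = wav.getframerate()
        frames = wav.readframes(wav.getnframes())
    samples = array.array("h")  # signed 16-bit integers
    samples.frombytes(frames)
    return rate, samples

# Feeding the samples to DeepSpeech 0.6 (paths and beam width are examples;
# the model files come from the release downloads, and the release clients
# pass the samples as a NumPy int16 array):
#   import deepspeech
#   model = deepspeech.Model("deepspeech-0.6.0-models/output_graph.pbmm", 500)
#   rate, audio = load_pcm16("hello_16k.wav")
#   print(model.stt(audio))
```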

  • #2
    And to make DeepSpeech and other Speech-to-text engines perform even better, you can contribute your voice to the Common Voice training data set. Many languages besides English are represented:
    https://voice.mozilla.org/

    Please contribute!



    • #3
      I wasn't aware of this project. Pretty interesting.



      • #4
        Originally posted by jf33 View Post
        And to make DeepSpeech and other Speech-to-text engines perform even better, you can contribute your voice to the Common Voice training data set. Many languages besides English are represented:
        https://voice.mozilla.org/

        Please contribute!
        Wasn't aware of this project either, but I'll make it part of my daily to-do list to listen/speak some for it; thanks!



        • #5
          "NVIDIA CUDA acceleration" ?
          No thanks, either AMD or nothing!
          I don't care about anything that relies on the "middle finger" company.



          • #6
            wait a second, shouldn't mozilla be busy rewriting tensorflow in rust?



            • #7
              Originally posted by Danny3 View Post
              "NVIDIA CUDA acceleration" ?
              No thanks, either AMD or nothing!
              I don't care about anything that relies on the "middle finger" company.
              Completely ignoring the part that there's CPU acceleration support that works fine regardless of hardware, does AMD have something that is comparable to CUDA and applicable for this project?



              • #8
                Originally posted by Espionage724 View Post

                Completely ignoring the part that there's CPU acceleration support that works fine regardless of hardware, does AMD have something that is comparable to CUDA and applicable for this project?
                There is tensorflow for AMD. To quote users:

                The RADEON VII's performance is crazy with tensorflow 2.0a.
                In our tests, we reached close to the same speed as our 2080 Ti (about 10-15% less)! But the Radeon VII has more memory, which was a bottleneck in our case. At this price, we think this video card has the best value for machine learning in our company!

                We are glad to have opened our eyes to AMD products; we are buying our first configuration, which is 40% cheaper and, as we measured, performs better in our scenario than our well-optimised server configuration.

                Thank you for all the work!
                https://github.com/ROCmSoftwarePlatf...eam/issues/362


                EDIT:


                I'm not really sure how easy it is to get ROCm going on a mainline kernel (or if a custom amdgpu module is still needed). But AFAIR the plan is to get everything needed for ROCm into mainline. Last time I checked ROCm (about 1.5 years ago) I could compile all the stuff for Arch Linux.
                Last edited by oleid; 12-08-2019, 12:08 PM.
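For reference, the ROCm port discussed above is distributed as the `tensorflow-rocm` pip package (same API as stock TensorFlow). A quick sketch to check whether that build actually sees a GPU; on a machine without a working ROCm stack it simply reports CPU-only, and the `experimental` namespace matches the TF 2.0-era API mentioned in the quote:

```python
# Sketch: check whether TensorFlow (e.g. the tensorflow-rocm build) sees a GPU.
# On a machine without a working ROCm/CUDA stack this reports CPU-only.
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    print("GPU(s) visible:", [gpu.name for gpu in gpus])
else:
    print("No GPU visible; TensorFlow will fall back to the CPU.")
```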



                • #9
                  Originally posted by Espionage724 View Post

                  Completely ignoring the part that there's CPU acceleration support that works fine regardless of hardware, does AMD have something that is comparable to CUDA and applicable for this project?
                  OpenCL. However it's way behind in capability and ease of use. I do think, however, that if AMD GPUs manage to win a few more calls for bids for DOE supercomputers, we might start seeing considerable funds going the OpenCL way (or an equivalent new open standard) and significant pressure for NVidia to support recent specifications of OpenCL (the last one they support is 1.1 I think, like from 2011).



                  • #10
                    Originally posted by sabian2008 View Post

                    OpenCL. However it's way behind in capability and ease of use. I do think, however, that if AMD GPUs manage to win a few more calls for bids for DOE supercomputers, we might start seeing considerable funds going the OpenCL way (or an equivalent new open standard) and significant pressure for NVidia to support recent specifications of OpenCL (the last one they support is 1.1 I think, like from 2011).
                    No, not OpenCL. They have HCC and HIP (for CUDA compatibility) these days. Their TensorFlow port is NOT OpenCL based. The very first version was, but that was slow.

