KDE Working On "Plasma Bigscreen" As TV Interface With AI Voice Assistant

  • #1

    Phoronix: KDE Working On "Plasma Bigscreen" As TV Interface With AI Voice Assistant

    Plasma Bigscreen is a new KDE project aiming to provide a user interface for television screens.

    http://www.phoronix.com/scan.php?pag...asma-Bigscreen

  • #2
    As someone who has used a big screen TV as my monitor for over 20 years and Plasma for the past 7 years, I'll give y'all a review as soon as it makes its way to my system.

    • #3
      To be honest I'd rather see them further optimizing their stack for lower latency, a Vulkan backend for KWin etc.

      • #4
        Originally posted by ms178:
        To be honest I'd rather see them further optimizing their stack for lower latency, a Vulkan backend for KWin etc.
        While I don't disagree, I'm not gonna suggest other things when their new feature caters directly to my usage.

        I hope the CEC works between my RokuTV's remote and my RX 580. Never even looked into that before.

        There's plasma-bigscreen.org with some RPi4 images for folks who have one of those and want to give this a whirl. It also has a video of it in action. Reminds me of a combination of Steam's Big Picture Mode, Android, and GNOME, all using Qt and Breeze Dark.

        • #5
          Advantages of Plasma Bigscreen: Free (as in Freedom) and Open Source

          Plasma Bigscreen powers the interface on a Single Board Computer and uses the Mycroft AI voice assistant to provide a Smart TV platform
          -- https://dot.kde.org/2020/03/26/plasm...asma-bigscreen

          1. If Mycroft AI is not libre how can Plasma Bigscreen be libre?

          Mycroft AI voice assistant is NOT free as in freedom. There are forks trying to make Mycroft AI's stack completely Open Source, but AFAIK that work is not merged and it's certainly not enabled by default.

          2. Mycroft AI is not bad, but transparency could be improved.

          I have nothing against Mycroft AI; on the contrary, I think they are doing amazing work to provide a framework that does not add to vendor lock-in like Siri/Cortana/Alexa. The thing is, Mycroft AI planned to move from mimic1(-full), their TTS module, to Mozilla DeepSpeech more than a year ago. I'm guessing they have not committed to it because DeepSpeech is not a very practical choice at this point in time. There is conflicting information even on the official website: older articles state that they are moving, while newer ones state that they have not moved. This is the most reliable query I have found for fact-checking: https://github.com/search?q=org%3AMy...ch&type=Issues

          3. Mycroft AI might become bad.

          It looks like they are collecting their own dataset that they will use in combination with Mozilla's DeepSpeech: https://mycroft.ai/voice-mycroft-ai/#an-open-dataset . Their argument is that they want to protect their users' data, let people opt in to submitting "training data" (everyone calls it that today, though I find the term hilarious), and let them request to have their data removed (since it's not in the public domain). This seems like a very noble cause, but I'm still not sure whether they will end up doing what most companies with valuable personal information do.

          4. PS

          It pisses me off when people deliberately or ignorantly hide the truth. I really am a fan of both KDE and Mycroft, yet I'm critical of software regardless.

          I found this useful when researching models a few months ago: https://github.com/Picovoice/speech-to-text-benchmark
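Benchmarks like the one linked above typically rank engines by word error rate (WER). As a rough illustration (my own sketch, not code from that benchmark), WER is a word-level edit distance divided by the reference length:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edits needed to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One dropped word out of six: WER = 1/6
print(word_error_rate("turn on the living room lights",
                      "turn on the living lights"))
```

Lower is better; a perfect transcription scores 0.0, and WER can exceed 1.0 when the engine inserts many spurious words.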

          • #6
            Originally posted by ms178:
            To be honest I'd rather see them further optimizing their stack for lower latency, a Vulkan backend for KWin etc.
            That was my first thought as well, but judging by a quick glance at the devs' names and their posts, it's not that the KDE community has diluted its efforts on the truly important stuff (Wayland, i18n, general usability and performance, etc.) by spreading its devs thin; it's just that there are two "extra" guys doing this thing in parallel with, and not in spite of, that other stuff. That's good, not bad. There are use cases out there beyond the desktop that also need to be catered to if we want Linux to be truly competitive, not only in marketing terms but in reality as well; as in, useful for its variety of users.

            Also, when it comes to KDE specifically, "accusing" them of neglecting the important stuff is clueless at best; they've been doing a tremendous job on that front these past few years.

            • #7
              Originally posted by Jabberwocky:
              It pisses me off when people deliberately or ignorantly hide the truth. I really am a fan of both KDE and Mycroft yet I'm critical of software regardless.
              In the very same link, there is a whole section about voice control which addresses some of these concerns:
              "For the current beta img, the team connects to Mycroft's Home server, which by default uses Google's STT (Speech to text) which sends anonymized utterances to Google. This, of course, is not ideal, but being Open Source, you can switch out the back end and use whatever you want, even self-hosted systems like Mozilla Deepspeech. Or you can de-activate voice recognition altogether. Your choice."
              So hardly "deliberately or ignorantly hiding the truth".

              Also, which part of Mycroft AI is not libre?
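The quoted point about being able to "switch out the back end" can be illustrated with a tiny pluggable-backend pattern. This is purely a hypothetical sketch; the names (`STTBackend`, `make_backend`, the `stt_backend` config key) are made up for illustration and are not Mycroft's actual API:

```python
from abc import ABC, abstractmethod

class STTBackend(ABC):
    """Hypothetical interface any speech-to-text engine could implement."""
    @abstractmethod
    def transcribe(self, audio: bytes) -> str: ...

class GoogleSTT(STTBackend):
    def transcribe(self, audio: bytes) -> str:
        return "<sent to remote service>"  # stand-in for a network call

class SelfHostedSTT(STTBackend):
    def transcribe(self, audio: bytes) -> str:
        return "<decoded locally>"  # stand-in for e.g. a local DeepSpeech model

BACKENDS = {"google": GoogleSTT, "self_hosted": SelfHostedSTT}

def make_backend(config: dict) -> STTBackend:
    """Pick the engine named in configuration, defaulting to the remote one."""
    return BACKENDS[config.get("stt_backend", "google")]()

# Switching engines is a one-line configuration change, no code changes needed.
engine = make_backend({"stt_backend": "self_hosted"})
print(engine.transcribe(b"\x00\x01"))
```

Deactivating voice recognition altogether, as the quote mentions, would just mean never constructing a backend at all.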

              • #8
                Originally posted by Nocifer:

                That was my first thought as well, but judging by a quick glance at the devs' names and their posts, it's not that the KDE community has diluted its efforts on the truly important stuff (Wayland, i18n, general usability and performance, etc.) by spreading its devs thin; it's just that there are two "extra" guys doing this thing in parallel with, and not in spite of, that other stuff. That's good, not bad. There are use cases out there beyond the desktop that also need to be catered to if we want Linux to be truly competitive, not only in marketing terms but in reality as well; as in, useful for its variety of users.

                Also, when it comes to KDE specifically, "accusing" them of neglecting the important stuff is clueless at best; they've been doing a tremendous job on that front these past few years.
                Don't get me wrong: they certainly have their own reasons for doing this work, and I cannot judge whether it is a distraction at all; I would be glad to hear that it is not, as I do not follow KDE development as closely as other upstream projects. But from my limited perspective the KWin Vulkan work seems to have stalled (and wasn't a high priority to begin with; Martin Gräßlin was even openly hostile to the idea when it first came up in 2015 and argued for implementing newer OpenGL features instead, but here we are in 2020 and we are still at the OpenGL 3.1 level). I have noticed Roman Gilg's work on optimizing KWin, but it hasn't trickled down to distributions yet, and as of now I prefer kwin-lowlatency, which gives me a better desktop experience. Therefore I'd just love to see more upstream improvements in that area, which are more important to me personally.

                • #9
                  Originally posted by Jabberwocky:

                  I have nothing against Mycroft AI; on the contrary, I think they are doing amazing work to provide a framework that does not add to vendor lock-in like Siri/Cortana/Alexa. The thing is, Mycroft AI planned to move from mimic1(-full), their TTS module, to Mozilla DeepSpeech more than a year ago. I'm guessing they have not committed to it because DeepSpeech is not a very practical choice at this point in time. There is conflicting information even on the official website: older articles state that they are moving, while newer ones state that they have not moved. This is the most reliable query I have found for fact-checking: https://github.com/search?q=org%3AMy...ch&type=Issues
                  It's likely they're not intentionally screwing this up; they just haven't done a great job of keeping their documentation up to date as their software evolves. That happens to a lot of projects.

                  Originally posted by Jabberwocky:
                  3. Mycroft AI might become bad.

                  It looks like they are collecting their own dataset that they will use in combination with Mozilla's DeepSpeech: https://mycroft.ai/voice-mycroft-ai/#an-open-dataset . Their argument is that they want to protect their users' data, let people opt in to submitting "training data" (everyone calls it that today, though I find the term hilarious), and let them request to have their data removed (since it's not in the public domain). This seems like a very noble cause, but I'm still not sure whether they will end up doing what most companies with valuable personal information do.
                  Your concern is completely valid, but their reason for not contributing user data directly to DeepSpeech is valid too. I think it's fair to wait and see what they do here before passing judgement.

                  • #10
                    Meh... just use something like this: https://www.logitech.com/it-it/produ...oard-k400-plus (you can find it cheaper, or similar ones, elsewhere)
                    and focus on fixing bugs instead, please.
