Schaller On Linux In 2018: Rust Rules, Apple Declines, Linux Graphics Compete


  • #31
    Originally posted by Leopard View Post

    Flathub is available. On Linux Mint 18.3, the software center has Flatpak support and is linked to Flathub.
    Did not know that; I don't use that much software, though. Still good to know, thanks.



    • #32
      H.265 will be considered a failure & Apple will decline:

      The first is hardly controversial: HEVC IP owners are snatching defeat from the jaws of victory.

      The emergence of XVC, a stripped-down version of HEVC, is I think a testament to that.

      The industry is accepting that HEVC is a license-crippled bastard of a format. And by now a mature one at that: five years of non-adoption (outside broadcast) have given products time to mature. It's not that anyone doubts it from a technical standpoint.

      Too bad you have to be a free software user to grasp how destructive patent royalties are to any format (if you want adoption, anyway): royalty-bearing is royalty-bearing to the free software community. As such, H.264 on the web was "an outrageous disaster" (if anyone can find that blog post from 2007). It only happened because Nokia and Apple complained when Theora was suggested as mandatory to implement for the video element: Theora wasn't good enough, and besides, the format war was already lost because H.264 was already in use through Adobe Flash. Or was it Macromedia? Point being: royalty-free video on the web was meant to be, and the circumstances are different now.

      The second is coupled with the first: Apple revealed their bet on HEVC after everyone (outside broadcast) had fled to AOM. Apple has always been trendy, and liked to do things their own way. But a format war is a format war. Seriously, what were Apple TV customers thinking this year?

      As for AV1, a lot of work will remain after bitstream freeze before it is practical. People who don't get this will be disappointed. Such as making an encoder that isn't 400 times slower than realtime. That number sounds wild, but it's just a practicality, for two reasons:
      1. Making a fast encoder is trivial – just drop features. The art is in making heuristics that can reach reasonably good decisions fast. This work should be at the bottom of the priority list for a reference encoder before bitstream freeze.
      2. Encoder slowness is multiplicative (many features = giant search space); decoder slowness is additive. Not comparable, and also explains the wild number.
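Point 2 can be sketched with a toy calculation. The tool and option counts below are invented purely for illustration, not taken from any real codec: if each coding tool multiplies the encoder's per-block search space while the decoder only executes the one chosen setting per tool, the gap between the two grows exponentially with the number of tools.

```rust
// Toy model of encoder vs. decoder cost; the tool/option counts are
// invented for illustration, not taken from any real codec.
fn main() {
    let tools: u32 = 8;            // hypothetical number of coding tools
    let options_per_tool: u64 = 4; // hypothetical settings the encoder may try

    // An exhaustive encoder searches the cross product of all decisions:
    // its cost is multiplicative in the number of tools.
    let encoder_candidates = options_per_tool.pow(tools);

    // A decoder just applies the one chosen setting per tool:
    // its cost is additive in the number of tools.
    let decoder_steps = tools as u64;

    println!("encoder candidates per block: {}", encoder_candidates); // 65536
    println!("decoder steps per block: {}", decoder_steps);           // 8
}
```

This is exactly why encoder heuristics (pruning that search space) are the hard, late-stage work, while decoders can be made fast much earlier.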
      Last edited by andreano; 16 December 2017, 08:30 PM.



      • #33
        Originally posted by LoneVVolf View Post
        - Rust puts itself on a trajectory to replace C/C++ for low-level programming.
        Where can I find the Linux kernel, userspace drivers, or D-Bus written in Rust?
        (maybe I misunderstand what low-level programming means)
        That, my friends, is a prime example of missing the point. The more lines an existing project has, the less likely it is to be ported to a totally new language. Even if a new language were a 100% silver bullet that solved every problem, the kernel would be the last project to adopt it, simply because there's way too much legacy code and too many developers to communicate changes with. Besides, Rust is not C: by default it comes with a standard library and a small runtime, so it's obvious to any developer that it's targeting the markets of end-user applications and libraries.

        User space drivers are another niche market. I run fairly standard Linux systems and can come up with almost none: Mesa, ntfs-3g, SANE, CUPS... any others? The other 99% of my 20G root volume is something quite different.



        • #34
          HEVC is used every day by millions of people. It isn't going away. If by PCs you mean desktops and laptops, sure. But that isn't what computers will look like for long, and Apple knows this.



          • #35
            Originally posted by andreano View Post
            H.265 will be considered a failure & Apple will decline:
            Too bad you have to be a free software user to grasp how destructive patent royalties are to any format (if you want adoption, anyway): royalty-bearing is royalty-bearing to the free software community. As such, H.264 on the web was "an outrageous disaster" (if anyone can find that blog post from 2007). It only happened because Nokia and Apple complained when Theora was suggested as mandatory to implement for the video element: Theora wasn't good enough, and besides, the format war was already lost because H.264 was already in use through Adobe Flash. Or was it Macromedia? Point being: royalty-free video on the web was meant to be, and the circumstances are different now.
            That was a debate that flared up during the creation of HTML5, mainly with regard to the <video> tag and the default format to be used with it.
            It was quite the dust-up at the W3C at the time, since the WHATWG was suggesting Theora while Apple/Netflix/Adobe already had an established library of H.264 video and usage. Things have changed a lot since then.

            As for AV1, a lot of work will remain after bitstream freeze before it is practical. People who don't get this will be disappointed. Such as making an encoder that isn't 400 times slower than realtime. That number sounds wild, but it's just a practicality, for two reasons:
            1. Making a fast encoder is trivial – just drop features. The art is in making heuristics that can reach reasonably good decisions fast. This work should be at the bottom of the priority list for a reference encoder before bitstream freeze.
            2. Encoder slowness is multiplicative (many features = giant search space); decoder slowness is additive. Not comparable, and also explains the wild number.
            Well, keep in mind that AV1's encoder/decoder will likely be a reference implementation. At the end of the day, I doubt it will be the popular encoder/decoder used by the public once the bitstream freezes; there will likely be other implementations of AV1 going forward that clearly overtake it, as has happened with other formats.



            • #36
              Originally posted by trivialfis
              I don't see the benefits of Flatpak over Guix or Nix as a packaging scheme.
              Well, the main one is that people actually take Flatpak seriously, to the point where the desktops and distros are working to integrate support for it.

              By contrast, systems like Guix have never really attracted widespread interest... and honestly, that's not much of a surprise when the ability to run it in Emacs is considered one of the most important features for them to talk about on their website. I'm sure that's useful to some people, but it's not exactly a selling point for the masses.



              • #37
                Originally posted by LoneVVolf View Post
                - Meson becomes the de facto build system in the Linux community.

                Number of applications requiring Meson in the Arch Linux repos that are not GNOME-related: very low.
                Xorg-server and Mesa are both working on building with Meson, but even if they do switch in 2018, there are still Qt applications.

                - Rust puts itself on a trajectory to replace C/C++ for low-level programming.
                Where can I find the Linux kernel, userspace drivers, or D-Bus written in Rust?
                (maybe I misunderstand what low-level programming means)

                - Traditional Linux distribution packaging for desktop applications will start fading in favor of Flatpak.
                If that happens, I expect a new prediction in the near future. 202x: Linux systems banned from the internet due to an increasing number of viruses and malware.
                Flatpak does have some good points, but it's not suited to replace more than a small part of the applications.

                The others i don't know enough about.
                Flatpak is a great idea, but poorly designed regarding UX, because access to the system is defined by the package and can't be altered by the user during or after installation.

                So Flatpak isn't good yet. I think it's worth the effort; it just needs more thought about UX and the practical aspects of security. The user must be in control, not the package! The package should only indicate what it wants to access, and the user must explicitly grant the requested permissions (individually or all at once).



                • #38
                  Originally posted by kravemir View Post

                  Flatpak is a great idea, but poorly designed regarding UX, because access to the system is defined by the package and can't be altered by the user during or after installation.
                  Completely false. Permissions can be overridden per application, e.g.:

                  flatpak override --nofilesystem=home <app-id> or flatpak override --filesystem=~/something <app-id>



                  • #39
                    I think a lot of the people replying are reading it wrong. "Rust puts itself on a trajectory to replace C/C++ for low-level programming" does not mean "will replace C/C++ in all existing low-level code bases", nor does it mean "will become the default choice for a new low-level code base". It means that Rust puts itself on a trajectory to become the default for new low-level code bases.

                    I really don't get all the Rust hating. It seems largely without argument. I agree that defaulting to "let's rewrite the world in Rust" is silly. But the rationale behind Rust seems solid. It simply is: C has been the de facto choice for low-level programming for decades, in fact for most of the history of programming, and it has served us well. But during all that usage, weaknesses A, B and C have been identified that lead to bugs and security issues (particularly around parallel code, which is becoming more and more relevant all the time). So let's see if we can design something new that retains the mean, number-crunching, compilers-can-optimize-the-hell-out-of-it performance, but which has abstractions (that compile into nothing, or very little performance penalty) that solve A, B and C. It seems like a good idea to make that experiment, and it needs real-world testing and usage to be able to continue. No doubt, maturing Rust to the level where it can solve that problem will take time, but what the hell is everyone's problem with "let's see if we can do better"?
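The parallel-code point is the concrete one. As a minimal sketch of the kind of compile-time guarantee meant here, using the standard library's Arc and Mutex types: shared mutable state across threads must go through a synchronization wrapper, or the program simply does not compile, so the data race becomes a type error rather than a latent runtime bug.

```rust
// Sketch: four threads increment one shared counter. Removing the Mutex
// (or the Arc) makes this a compile error, not a race at runtime.
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();

    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // The lock is the only way to reach the shared integer.
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    println!("count = {}", *counter.lock().unwrap()); // count = 4
}
```

The abstraction compiles down to an ordinary atomic refcount and a mutex; the safety checking itself costs nothing at runtime, which is the "abstractions that compile into nothing" part of the argument.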



                    • #40
                      And BTW, with regard to search statistics, it is hardly relevant to compare Meson to CMake or Rust to C++ at this point. What matters is what the Meson and Rust graphs look like when you zoom in. 😉 I imagine similar comments being made a few years ago about Git, by people who learned SVN and couldn't be bothered to learn something new: "Git hardly even registers in search interest compared to SVN." And look where that "let's rethink the fundamental problem" project is today.

