Adobe Is Finally Ending Flash Support... In 2020


  • #51
    Originally posted by caligula View Post

    Doom used to work on pretty low end machines (386SX). Even Raspberry Pi is probably thousands of times faster than that.
    Say, modern video codecs use fairly complicated algos, and overall there is a considerable amount of math and many layers of clever tweaks and tricks. It all has to be applied to a large amount of data, and real time can't wait: if you can't do it in time, the user readily notices stutter, frame drops and other fancy things. All this had to be done to reduce bandwidth while still retaining a good-looking picture. Network bandwidth has proven to evolve much more slowly than CPUs, memory and so on, so it became preferable to increase processing complexity wherever that reduces bandwidth at the same visual quality. It's not like a 386 stands a chance of decoding a modern video codec, with all its options and tricks, at a reasonable frame rate; it's just way too much time-critical math, on the purely algorithmic side. If you think you could do better than that... er, feel free to design such a video codec. Or join the AV1 effort. Or something like that.

    The i386 was so slow that even vintage codecs like MPEG-1 and MPEG-2 had to be played on dedicated HW decoder cards. And those are fairly simple and straightforward codecs, to the degree that their focus on low-end hardware kills their bitrate-to-quality ratio, which is what made them obsolete in the first place. Huge movies at crappy quality aren't something people want, even if they are easier to decode on low-end HW. Back in the day one stood a chance of squeezing a bit more out of the old MPEGs, e.g. by using a large VBV and so on; ironically, the stream would then no longer be compliant and would not play on many HW devices like decoder cards and players.

    Btw, stuff like this can still be an issue, even in the modern world. Say, turn on all the H.264 features... and only a few things will be able to play THAT in HW. And if someone resorted to HW playback, there is a decent chance they lack a CPU capable of doing it in SW. Since that hurts codec adoption, it still puts some back-pressure on keeping decoding reasonably fast.

    Btw, as for the Pi, its RAM speed isn't what one would consider fast (though much faster than a 386), and it sucks when it comes to IO, since it utterly lacks reasonable storage and networking. So it can crunch numbers, sure, but if you need IO it won't be thousands of times faster. It might even be slower than a 386 reading/writing an IDE drive, lol.
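    Regarding the "turn on all the H.264 features" point above: the usual workaround is to constrain the encode to a profile that weak decoders can handle. A hypothetical ffmpeg invocation (file names are made up; the flags are standard libx264 ones):

        ffmpeg -i input.mkv -c:v libx264 -profile:v baseline -level 3.0 \
               -b:v 1000k -c:a aac -b:a 128k output.mp4

    Baseline profile gives up CABAC and B-frames, so it compresses worse, but just about any HW decoder can play the result.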

    p.s. As for Flash: "good riddance". Whatever; this technology is obviously unmaintained/abandoned, and these days it misbehaves and backfires all the time, not to mention that many devices just do not support it at all, and since it's proprietary that's not going to change.



    • #52
      Originally posted by Hi-Angel View Post
      Actually, not VLI4/VLI5, but any GPU supporting VAAPI, i.e. Intel ones, plus Gallium, which covers r300, r600, radeonsi, and NVidia GPUs via Nouveau (I don't know the generations, but I think that list is big too). That is the majority of GPUs in use.
      Well, this other guy put it nicely: 'XXX isn't either, unless you tweak some settings. Even then, it's very unstable and unsuitable for everyday use.' Nouveau is unstable and unsuitable for everyday use. On top of that, it won't support decoding on more recent hardware (GeForce 9x0 and 10x0). Only the latest Intel generations have any power to decode videos. Still, I'm not sure what kind of tuning it would require. I've tested firefox and chromium on Intel hardware and they sure as hell won't hw accelerate any video decoding unless you rip the video and play with mpv.
      On the other hand, are you sure Flash still has video acceleration? I'm asking because, as we know, they've updated it (this is the officially recommended version at https://get.adobe.com/flashplayer/ ), and they didn't implement some things. And I could find neither confirmation nor rebuttal of whether video acceleration still works.
      I use Flash & Firefox on my Atom HTPC and it sure does hw acceleration. The CPU couldn't even decode the video on its own without serious frame skipping.
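      A quick way to double-check HW decoding outside the browser on a box like that is mpv with VAAPI forced on (the file name is just an example); the verbose log reports whether hardware decoding was actually used:

          mpv -v --hwdec=vaapi video.mp4

      If that works but the browser still burns CPU, the problem is the browser, not the driver.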



      • #53
        Originally posted by SystemCrasher View Post
        Say, modern video codecs use fairly complicated algos, and ...
        I don't see how that's related to Doom or gaming on Flash. Yes, video decoding is serious business, but video != games. I was just trying to say that smoothly running a game such as Doom is a very bad way of demonstrating the power of some high-level runtime. Those games are not very demanding. Any toy language written in 30 minutes can do that, even with an artificially slowed-down interpreter.

        Btw, as for the Pi, its RAM speed isn't what one would consider fast (though much faster than a 386)
        That's not quite true. Raspberry Pi (3) has both L1 and L2 caches. The memory is 900 MHz LPDDR2. The 386 had a 32-bit memory bus with bus speed equal to the CPU frequency (e.g. 33 MHz), and no L1 or L2 cache. Just try using an RPi3 without any caches; it's a HUGE speed difference. IIRC it's ~3 cycles for an L1 hit on the RPi versus ~4 cycles for a slow 386 DRAM access, at vastly different clock speeds. That's something like 50 times better random access speed. On top of that there is other stuff to worry about, such as feeding the CPU with instructions, which is really efficient with an L1 I-cache and branch predictors. It's basically so fast that any toy interpreter running bytecode outperforms a real 386.
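        Rough numbers behind that ~50x figure, assuming the RPi3's 1.2 GHz clock:

            386 DRAM access: 4 cycles / 33 MHz  ≈ 121 ns
            RPi3 L1 hit:     3 cycles / 1.2 GHz = 2.5 ns
            121 ns / 2.5 ns ≈ 48x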

        it sucks when it comes to IO, since it utterly lacks reasonable storage and networking. So it can crunch numbers, sure, but if you need IO it won't be thousands of times faster. It might even be slower than a 386 reading/writing an IDE drive, lol.
        Well, games like Doom wouldn't need to use the slow I/O. The RPi has 512 MB of memory while Doom requires only a few megabytes. Besides, you don't need all the data at the same time; you only care about the current level (out of ~30).



        • #54
          Originally posted by grok View Post
          Thin clients and VMs are another environment where 3D is a rare, high-end feature. Flash games ran/run on thin clients. I never got 3D working in VirtualBox. (There is a workaround, as llvmpipe is a software OpenGL implementation tuned for speed; it might be good if you have an eight-core or at least eight-thread CPU. Perhaps someone with a 3+ GHz Zen or socket 2011/2066 CPU or an i7 2600K can test whether WebGL games are playable.)
          I don't know about the other hypervisors (although I know they have some 2D/3D acceleration when using their tools/plugin support), but QEMU/KVM can give you either direct hardware access or GL forwarding.
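          For the GL forwarding case, a minimal QEMU sketch using the virtio GPU with virgl (assumes a QEMU build with virglrenderer and a guest with virtio GPU drivers; the disk image name is a placeholder):

              qemu-system-x86_64 -enable-kvm -m 4096 \
                  -drive file=guest.qcow2,if=virtio \
                  -vga virtio -display gtk,gl=on

          Guest GL calls then get forwarded to the host GPU instead of being software-rendered.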



          • #55
            Years ago there was Doom in javascript, until it was removed due to a copyright concern, even though it was the shareware version, I think.
            It was a slide show, never mind the >2 GHz, almost 3 GHz CPU I ran it on. Perhaps Firefox's javascript was quite slow back then, perhaps it was an unoptimized "transpiling" job, perhaps the simple scaling also took its toll. I also ran a simple Tetris clone which was high-res and smooth (while looking like 1991 Windows graphics), except it stuttered severely, as if a garbage collection pause happened every second or so.

            Which is to say, javascript gaming is not trivial. Web browsers specifically don't use javascript to decode video, although that may be tried as a curiosity. There are javascript mp3 etc. decoders, also something of a curiosity, but they do work in real time without skipping. You technically can use one as a heavy-handed way to serve flac or opus music in a browser if you wish.
            Surely javascript Doom with newer "transpiler" software and e.g. Firefox 52 ESR (to keep things "conservative") would work a lot better.
            I'm sure I'm not writing an especially ground-breaking post here, having said all that.

            Flash was much better than HTML5/javascript in that it ran the same no matter what your browser or browser version was (and, to some limited extent, regardless of OS and CPU architecture; e.g. it was available for Solaris on SPARC, but for e.g. Linux on PowerPC you were out of luck).
            It took YEARS for javascript/HTML5 to get somewhat usable, and the road to better javascript/HTML5 features and speed may not have ended yet (and, very recently, WebAssembly). Not to mention catering to whatever is available on Android 4.1, Android 4.4 or the various incarnations of Safari.
            Last edited by grok; 27 July 2017, 05:52 AM.



            • #56
              Originally posted by polarathene View Post
              I don't know about the other hypervisors (although I know they have some 2D/3D acceleration when using their tools/plugin support), but QEMU/KVM can give you either direct hardware access or GL forwarding.
              Indeed QEMU/KVM should be very much worth trying. There is a bit of naming confusion, with QEMU being known as an emulator and KVM known for more than one thing.
              But at least now I understand what I'm looking for. Should I maybe try "gnome-boxes"? This is also mainly about the laziness of devising and launching from a terminal a very long command that looks like "kvm --switch --this --that --dev0 --garble --gargle (...) -blah (...) --/path/to/diskimage.xyz --force --warping --forward".
              Virtualbox has been incredibly simple to use for years, and very cross-platform, with just a bit of silliness around the "extension pack", updating, and bashing on the USB 2.0 passthrough feature until it works.

              That's off-topic, maybe, but your post makes me confident I can at least get GL forwarding from a Linux guest to a Linux host running with KVM. (Heck, getting OpenGL working and nothing else in XP or 98 would be awesome, if that's possible.)
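              If you do try it, an easy check inside the guest is glxinfo (from mesa-utils or similar); with working forwarding the renderer string mentions virgl rather than llvmpipe:

                  glxinfo | grep "OpenGL renderer"
                  # e.g. "OpenGL renderer string: virgl" when forwarding works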



              • #57
                Originally posted by caligula View Post
                Well, this other guy put it nicely: 'XXX isn't either, unless you tweak some settings. Even then, it's very unstable and unsuitable for everyday use.'
                Why? Unless this sentence refers to the next one: agreed, NVidia with their signed-firmware nonsense are holding back VAAPI support for their newest products very well.
                Originally posted by caligula View Post
                Nouveau is unstable and unsuitable for everyday use. On top of that, it won't support decoding on more recent hardware (GeForce 9x0 and 10x0). Only the latest Intel generations have any power to decode videos. Still, I'm not sure what kind of tuning it would require.
                If the support for VDPAU (which Flash uses) among Intel cards differs from the support for VAAPI, it's surely not in favor of VDPAU: VAAPI came first on Intel, so it covers at least as many cards as VDPAU, maybe more.
                Originally posted by caligula View Post
                I've tested firefox and chromium on Intel hardware and they sure as hell won't hw accelerate any video decoding unless you rip the video and play with mpv.
                That means the browser you tested did not use VAAPI. As I said, I've been using chromium-vaapi from the AUR. Although given recent news, it's possible chromium will start working without the AUR build.
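                For anyone wanting to check what their VAAPI driver actually exposes, vainfo from libva-utils lists the supported profiles and entrypoints:

                    vainfo
                    # look for decode entrypoints, e.g.:
                    #   VAProfileH264Main : VAEntrypointVLD

                If the relevant profile is missing there, no browser build will be able to accelerate it.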



                • #58
                  Originally posted by caligula View Post
                  I don't see how that's related to Doom or gaming on Flash. Yes, video decoding is serious business, but video != games.
                  Flash is sometimes still used to stream video on old/abandoned/unmaintained web sites. Users eventually see that as an advantage, since it tends to use "simple" codecs (VP6, simple MP4 profiles, MP3) and low bitrates. Picture quality is crap, but it plays on really old computers (like a PIII). Also some Nokias had Flash 7 or so, and vintage Androids often had Flash 8/9. So I've seen some people using it "because it works better with old/weak hardware". Though the best option would be to fetch the URL directly with VLC or ffmpeg or something, as sketched below.
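                  E.g. something along these lines, with a made-up URL (ffplay plays the stream directly; the ffmpeg line saves a copy without re-encoding):

                      ffplay http://example.com/old_clip.flv
                      ffmpeg -i http://example.com/old_clip.flv -c copy saved.mkv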

                  As for games, most Flash games I've seen were trivial/undemanding, had low-quality/oversimplified gfx, or were laggy. So I do not get why Flash is exciting as a gaming platform; I've never seen anything I would consider a sane game engine. I also do not get why e.g. HTML5 is supposed to be fundamentally worse at it. Same laggy crap, different flavour. The only advantage of Flash I can imagine is that it also used a really compact storage format to deliver both code and resources.

                  On a purely GFX/performance level, the Flash things I've met would not beat even the prehistoric Wing Commander, where one could have a decent-looking cockpit, heated, intense 3D space dogfights, exciting dangerous missions and sound effects without major/unpredictable lags. All of that on a mere 80286 @ 12 MHz with 1 MB of RAM. That's how The Real Pros were doing their magic. It implies rapid viewport updates and computed GFX.

                  I was just trying to say that smoothly running a game such as Doom is a very bad way of demonstrating the power of some high-level runtime.
                  Agreed. Furthermore, bulk performance is one thing; latency is another. There was a Doom II port to HTML5 (now offline due to copyright issues). It was playable on modern computers, but it would eventually hit lag spikes. So bulk performance isn't enough for gaming. DOS was an almost perfect platform: the program had exclusive control over all resources, which simplified matters a lot and allowed decent timings. Just to give an idea: I once did an unorthodox thing, carefully arranging the timing and firing 22050 Hz PCM WAV data to a SoundBlaster straight from a memory buffer. Good luck doing that in a multitasking environment; it's just not going to work, except maybe from the kernel or a realtime thread completely hogging the CPU. Though paying attention to the sound card 22050 times per second involves considerable overhead anyway.

                  Those games are not very demanding. Any toy language written in 30 minutes can do that, even with an artificially slowed-down interpreter.
                  Still, interpreters tend to have big overhead, and modern/portable Doom engines are probably far less optimized. In the DOS and Win9x age it was the norm to write e.g. drivers in assembly. That's one reason even crappy HW felt unexpectedly fast (compared to what one faces running modern SW on HW like that).

                  That's not quite true. Raspberry Pi (3) has both L1 and L2 caches. The memory is 900 MHz LPDDR2.
                  Still, DDR2 tech is obsolete; even old Chinese boards have used faster (LP)DDR3 since the single-core sub-GHz ages. Though it's also the number of channels that matters. The best boards (and tablets) go for 4x16-bit interleaved channels, i.e. a 64-bit bus, at PC-like clock rates. Still not as fast as PCs, which use even higher clocks and wider buses. Yet it takes four 16-bit ICs (more expensive) and a larger board that is harder to route. Some crappy SBCs opt for a single 16-bit IC, at which point it hardly matters whether it's DDR3. It can get as bad as the display controller, firing 1920x1080 at 60 FPS to the display, eating almost all RAM bandwidth just to refresh the screen; everything else nearly comes to a halt unless you hit cache all the time. ARMs show quite wild variance in performance.
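                  Quick math on that scanout claim, assuming 32-bit pixels and a single 16-bit DDR3-800 channel:

                      scanout: 1920 x 1080 x 4 bytes x 60 Hz ≈ 498 MB/s
                      channel peak: 800 MT/s x 2 bytes = 1.6 GB/s

                  So display refresh alone eats roughly a third of the theoretical peak, and real-world usable bandwidth is well below peak.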

                  The 386 had a 32-bit memory bus with bus speed equal to the CPU frequency (e.g. 33 MHz), and no L1 or L2 cache. Just try using an RPi3 without any caches; it's a HUGE speed difference.
                  I do not have any Pis, though I have a load of other SBCs, and nope, I wouldn't disable the caches :P. Still, caches are somewhat of a corner case: there are always workloads that won't fit, and then you face raw RAM performance. Say, decompressing a fairly large file: you care about speed, and the intense data flow mostly kills the cache. I once stumbled on a corner case: decompressing small data to the same location a few times in a row scores wild cache hits, speeding things up dramatically. So if someone evaluates a compression algo's speed like that, they're doing it really wrong, because real-world use will always be far worse.
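                  A sketch of that benchmarking pitfall (file names are placeholders): compare per-MB throughput of one pass over a file much larger than the cache versus repeated passes over a tiny one that stays cache-hot; the tiny case will look unrealistically fast (process startup overhead aside):

                      time gzip -dc big.gz > /dev/null
                      time sh -c 'for i in $(seq 1 100); do gzip -dc tiny.gz; done > /dev/null'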

                  IIRC it's ~3 cycles for an L1 hit on the RPi versus ~4 cycles for a slow 386 DRAM access, at vastly different clock speeds. That's something like 50 times better random access speed. On top of that there is other stuff to worry about, such as feeding the CPU with instructions, which is really efficient with an L1 I-cache and branch predictors. It's basically so fast that any toy interpreter running bytecode outperforms a real 386.
                  Somehow, the i386 was running native code, which executes quite fast. Most of the time it was carefully optimized assembly; even device drivers like VxDs in Windows were mostly written in assembly.

                  Well, games like Doom wouldn't need to use the slow I/O.
                  They would need some IO to load levels, etc. Though since the whole Doom II WAD was about 14 MB IIRC, that isn't a major issue for Doom. But, erm, if one wants better GFX quality, improving resolution usually scales as O(N^2) in both size and computation: doubling both dimensions quadruples the pixels.

                  The RPi has 512 MB of memory while Doom requires only a few megabytes. Besides, you don't need all the data at the same time; you only care about the current level (out of ~30).
                  Sure, Doom ran on far smaller configurations. Ports have been launched on MP3 players and cameras, which had far less RAM and weaker CPUs, yet even those are faster than a 386. For reference, I've had a Cortex-A8 playing MP3 in software at a mere 20 MHz. That was its lowest DVFS frequency; it didn't even bother to upclock.



                  • #59
                    Originally posted by SystemCrasher View Post
                    For reference, I've had a Cortex-A8 playing MP3 in software at a mere 20 MHz. That was its lowest DVFS frequency; it didn't even bother to upclock.
                    My very first computer was a 25 MHz 486SX, no FPU, and it was not able to play mp3. My next computer after that was a 166 MHz 486DX, with FPU, and it played everything including MPEG and MPEG-2. It was the first machine I could watch video on. Later on I bought an ATi TV Wonder card and could even watch cable TV on it.



                    • #60
                      Originally posted by grok View Post
                      Indeed QEMU/KVM should be very much worth trying. There is a bit of naming confusion, with QEMU being known as an emulator and KVM known for more than one thing.
                      QEMU is an emulator, but thanks to all the acceleration these days it provides fast paths for almost everything. So when it's backed by KVM and uses virtio, it's hard to distinguish from a real HW machine in most regards.

                      command that looks like "kvm --switch --this --that --dev0 --garble --gargle (...) -blah (...) --/path/to/diskimage.xyz --force --warping --forward"
                      There are things like aqemu for those who need it. It can be as simple as apt-get install aqemu (or installing it via GNOME Software, Synaptic, or whatever). That's how you can create KVM VMs really fast, and it's how I learned the basic ideas of launching VMs with qemu.

                      Still, calling "qemu-system-x86_64 ..." directly (the "kvm" syntax has been obsolete for a long time, lol) is far more flexible and convenient if you need to run a VM in unsupervised batch mode (KVM hostings would want that for sure) or want something "unusual" (e.g. PCI device passthrough, coordinating it with the host kernel so it knows to release the device, etc.). After all, qemu is pretty much like ffmpeg: so many options and features that one would struggle to create a reasonable UI exposing all of them and their valid combinations. So at most you get a dumbed-down UI exposing like 5% of the features. If those 5% are fine for you, there's your aqemu and so on.

                      But most of Linux's power comes from its command line and batch-processing abilities. GUIs are great, but e.g. arranging for these 3 VMs to boot once my OS has booted isn't straightforward, and even less straightforward if I want to start them 5 minutes after boot completes to reduce system contention. Nope, I do not want to click buttons each and every time my OS boots; I'm too lazy. Actually I end up running non-interactive VMs, lacking a GUI but still accessible over a (virtual) serial console or ssh, as sketched below. Not sure how one does something like that with a GUI.
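                      For the record, a minimal headless invocation of the kind I mean (image name and sizes are placeholders):

                          qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
                              -drive file=vm0.qcow2,if=virtio \
                              -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
                              -display none -serial mon:stdio

                      -display none plus the serial console gives exactly the non-interactive, ssh/serial-only VM described above, and the whole line drops into a systemd unit or a cron @reboot job trivially.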

                      Virtualbox has been incredibly simple to use for years, very cross-platform.
                      It depends!
                      1) Installing a 3rd-party kernel module isn't simple, and there is always a chance things go nuts upon a kernel or module update, up to the point where the OS fails to boot, or it boots but you lack virtualization. VBox is WAY TOO INTRUSIVE. That's something to consider.
                      2) Hey, can one pass a USB device through to a VM in VBox? At least now? Without all the fucking proprietary code and other stupid shit? Well, I never managed to get there with VBox. KVM? Alright, it works, without 3rd-party modules, proprietary code or anything (see below). Oracle can go fsck themselves if they want to.
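                      For comparison, USB passthrough in plain QEMU/KVM is a single device flag; grab the vendor:product ID from lsusb first (the IDs below are placeholders):

                          lsusb
                          # e.g. "Bus 003 Device 007: ID 0951:1666 Kingston Technology ..."
                          qemu-system-x86_64 ... -usb -device usb-host,vendorid=0x0951,productid=0x1666

                      No extension pack, no out-of-tree kernel module.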

