Mozilla Firefox 53.0 Released, Drops Old Linux CPU Support


  • #51
    Originally posted by s_j_newbury:
    A system that draws 100W idle is misconfigured or really poorly designed! CPU halt has been around a long time!
    My Pentium D 945 on a normal board with integrated graphics and an HDD draws like 80W. I never managed to decrease that in any significant way through BIOS settings like undervolting. I blame the VIA chipset on the mobo.



    • #52
      Hi all, let me just say that if you think the lack of SSE2 in the machine code is holding back your Core i-something (Nehalem or later), I think that's silly, since you will most likely be running 64-bit Linux, where everything uses SSE2 anyway.

      As for suggesting Dillo, well, that's stupid. You can't do anything with it except browse Wikipedia, some website from 1997, or some open-source software site written in vi. There are still some forums or forum-like sites that work, or can work, in a "web 1.0" way, but Dillo is unable to log in to them.
      There are ways to play YouTube outside a browser, and that works. HTML5 video sucks, and it sucks even with SSE2 (not that Flash was any better; Flash sucked differently). Anyway, 15 years ago I was playing full-screen videos and movies on a low-end PC. Wow!

      Still, people might be using a Core 2 Duo, an Athlon II or Athlon 64, or a Pentium 4 with 3GB, 2GB or less memory. 32-bit is useful to save a bit of memory, constrain hungry processes to 2GB max, and save some more memory when you don't need multiarch to run Wine or other things. I'm even sure there's some ***hole somewhere running an i7 920 with 3GB RAM and a 32-bit OS and refusing to upgrade, ruining it all for us who would like to keep some i586 or i686 around :D


      But well

      Originally posted by hsivonen:

      Hi,

      I'm the developer who made the CFLAGS change. I highly doubt that weird non-IEEE 387 floating-point math is going to make a comeback as part of a trendy future architecture.

      Some reasons to drop non-SSE2 x86 as a tier-1 architecture (not necessarily in the order of importance):

      * When a Microsoft compiler bug practically forced Mozilla to drop non-SSE support on Windows, Mozilla dropped non-SSE2 support on Windows. (It didn't make sense to drop non-SSE support without dropping non-SSE2 support.) Since SSE2 is part of x86_64, and macOS and Android have never shipped on non-SSE2 x86 hardware, this left 32-bit x86 Linux as the only tier-1 platform that still used non-IEEE 387 floating-point math. By making 387 floating-point math tier-3, Mozilla gives itself permission to no longer spend time tracking down 32-bit-Linux-only issues arising from floating-point behavior that differs from standard IEEE behavior.

      * Mozilla's Web Assembly

      * Rust

      * For the near future, Rust
      Many thanks for your explanation. The 387 is supposed to support IEEE 754-1985, but if there's a newer standard or some corner cases I know nothing about, this still makes a lot of sense.


      Ummm, can old CPUs run this code through full software emulation of SSE2 on the integer pipeline? Can Linux do that, even just for Firefox??
      This is not a very serious question, but if you only end up running YouTube at 0.1 fps instead of 2 fps, who cares? :eek:



      • #53
        starshipeleven, I agree with most of what you're saying, just a couple of things:

        There is a culture of designing hardware, and implicitly software, to push the limits instead of for long-term service. With software this comes from the apparent long-term increase in computing capacity: software just becomes less efficient with CPU and especially memory as more resources become available on the target hardware. This further reinforces the hardware upgrade cycle. This might well be coming to an end barring a major breakthrough; we'll see.

        Supporting new hardware and instruction set extensions can be done without losing the ability to write portable code, or even code with runtime code selection [although as a Gentoo guy I always patch that away!].
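
        To make that concrete, here is a minimal sketch of the kind of runtime code selection I mean (my toy example, assuming GCC 6+ on x86 Linux/glibc; dot() is just an illustrative name, nothing from Firefox): GCC's function multi-versioning compiles an SSE2 clone and a baseline clone of the same source and lets an ifunc resolver pick one when the program starts.

        Code:
        #include <stdio.h>

        /* GCC emits one clone per listed target plus a resolver that
           selects the best clone for the running CPU at startup. */
        __attribute__((target_clones("sse2", "default")))
        double dot(const double *a, const double *b, int n)
        {
            double s = 0.0;
            for (int i = 0; i < n; i++)
                s += a[i] * b[i];
            return s;
        }

        int main(void)
        {
            double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
            printf("%f\n", dot(a, b, 4));  /* SSE2 clone used only if the CPU has it */
            return 0;
        }

        [And yes, this dispatch is exactly what I patch out on Gentoo in favour of a plain -march= build.]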

        The ability to compile code for any architecture supported by the toolchain is one of the greatest strengths of F/OSS software. Being able to bring up a complete OS on a new CPU design or refined but incompatible implementation is a remarkable achievement. To lose that because it means extra work to maintain the generic code paths is really sad and regressive. It can only hurt in the long run.
        Last edited by s_j_newbury; 21 April 2017, 10:47 AM.



        • #54
          Originally posted by starshipeleven:
          My Pentium D 945 on a normal board with integrated graphics and an HDD draws like 80W. I never managed to decrease that in any significant way through BIOS settings like undervolting. I blame the VIA chipset on the mobo.
          That, and not only does your rig have a Pentium 4, it has TWO of them!
          Just kidding - since it's kind of the latest, finest pair of Pentium 4 cores you have there, I bet it's no worse than a single Prescott gas guzzler.

          But that's still supported by Firefox 53, Windows 8 and 10 (32- and 64-bit), etc. In fact you even have SSE3 in there...



          • #55
            Originally posted by s_j_newbury:
            There is a culture of designing hardware, and implicitly software, to push the limits instead of for long-term service. With software this comes from the apparent long-term increase in computing capacity: software just becomes less efficient with CPU and especially memory as more resources become available on the target hardware.
            The main reason for this is that hardware is usually a cheaper and less risky "solution" to software issues than additional development time.

            I mean, if your stereotypical software maker builds a program in a stereotypical language like Java, you know it will run like crap on anything less than a stereotypical i5, but the point is that buying only i5s or better is still far cheaper than making better software.

            Because "making better software" isn't flipping a switch, it may happen, it may not, it may cost more, it may cost a fuckton more, it may sink your businness too with bugs and additional issues. Meanwhile a hardware upgrade is simple and sure investment that is highly unlikely to add additional risks.

            Supporting new hardware and instruction set extensions can be done without losing the ability to write portable code, or even code with runtime code selection [although as a Gentoo guy I always patch that away!].
            No, it cannot. The point here is that your code isn't portable anyway unless you write in ANSI C or whatever (and that usually limits your performance, so not a lot of serious programs do it). You can have multiple code paths for different hardware (for example, Firefox on ARM uses NEON instructions that don't exist on x86, and Firefox would need an SSE2 and an SSE2-less codepath), but someone has to maintain those different code paths. So it is not truly "portable" (the same code runs everywhere) but more like "multi-platform" (it can run on other hardware by swapping some modules).
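
            To make it concrete, here is a minimal sketch (mine, not Mozilla's actual code) of what "an SSE2 and an SSE2-less codepath" looks like in C with GCC builtins: one SIMD path guarded at compile time, one generic fallback, and a runtime check to choose between them.

            Code:
            #include <stdio.h>

            #ifdef __SSE2__
            #include <emmintrin.h>

            /* SSE2 codepath: adds four floats per instruction. */
            static void add_sse2(float *dst, const float *a, const float *b, int n)
            {
                int i = 0;
                for (; i + 4 <= n; i += 4)
                    _mm_storeu_ps(dst + i,
                                  _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
                for (; i < n; i++)
                    dst[i] = a[i] + b[i];
            }
            #endif

            /* Generic codepath: works on anything the compiler targets. */
            static void add_generic(float *dst, const float *a, const float *b, int n)
            {
                for (int i = 0; i < n; i++)
                    dst[i] = a[i] + b[i];
            }

            int main(void)
            {
                float a[8] = {1,2,3,4,5,6,7,8}, b[8] = {8,7,6,5,4,3,2,1}, dst[8];

                __builtin_cpu_init();               /* GCC's runtime CPU detection */
            #ifdef __SSE2__
                if (__builtin_cpu_supports("sse2"))
                    add_sse2(dst, a, b, 8);
                else
            #endif
                    add_generic(dst, a, b, 8);

                printf("%.1f\n", dst[0]);
                return 0;
            }

            Now multiply that by every hot loop in a browser, keep both halves compiled, tested and benchmarked forever, and you see why nobody wants to do it for 0.5% of users.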

            So if the code path for some specific hardware ain't worth maintaining anymore, why should it still be maintained? Consider that it's all time not spent on stuff that matters more to actual users. There is a dev here on a blog claiming that, from their telemetry, PCs that lacked SSE2 were roughly 0.5% of their userbase, which imho makes the switch justified.


            The ability to compile code for any architecture supported by the toolchain is one of the greatest strengths of F/OSS software. Being able to bring up a complete OS on a new CPU design or refined but incompatible implementation is a remarkable achievement. To lose that because it means extra work to maintain the generic code paths is really sad and regressive. It can only hurt in the long run.
            Meh, don't take me wrong, but this is something veeeery niche. It will be done by a handful of people worldwide at most.



            • #56
              Originally posted by grok:

              That, and not only does your rig have a Pentium 4, it has TWO of them!
              Just kidding - since it's kind of the latest, finest pair of Pentium 4 cores you have there, I bet it's no worse than a single Prescott gas guzzler.

              But that's still supported by Firefox 53, Windows 8 and 10 (32- and 64-bit), etc. In fact you even have SSE3 in there...
              I was just pointing out the low computing-power-to-heat ratio of CPUs of that era; the processor was still good enough for general desktop usage though, and it worked fine on Linux Mint MATE.

              It's now retired because the mainboard sucked in a big way (AGP slot, 2GB max RAM, a BIOS that kept erasing itself mysteriously).



              • #57
                Originally posted by s_j_newbury:

                A system that draws 100W idle is misconfigured or really poorly designed! CPU halt has been around a long time!
                I'm not even sure where to start. Ever heard of 10k RPM SCSI drives? Well, have 3-4 of those. 80+ efficiency standards for PSUs? They weren't that common 10 years ago; more like 60+. Discrete GPUs? Idle GPU consumption has improved a lot: current-gen mid-range/high-end GPUs use around 10-15W when idle, and it used to be a lot worse. Add 3-4 extra cards for RAID, a second NIC, 4 memory slots, and so on. The world has changed a lot.



                • #58
                  Originally posted by grok:
                  Many thanks for your explanation. The 387 is supposed to support IEEE 754-1985, but if there's a newer standard or some corner cases I know nothing about, this still makes a lot of sense.
                  There's a new version of IEEE 754, but the differences hardly affect anything.
                  Ummm, can old CPUs run this code through full software emulation of SSE2 on the integer pipeline? Can Linux do that, even just for Firefox??
                  This is not a very serious question, but if you only end up running YouTube at 0.1 fps instead of 2 fps, who cares? :eek:
                  The main difference is that the old x87 FPU works in 80-bit precision while the new double precision is 64-bit. Also, the old unit is stack-based, while the new ones use flat registers. So it's kind of possible to emulate, and even to achieve higher precision in the process, but some tasks fail when the precision level differs.
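
                  A tiny, hypothetical example of such a corner case (my own sketch; the exact outcome depends on compiler and flags): the 80-bit x87 registers also have a much wider exponent range, so an intermediate result that overflows a 64-bit double can survive on the FPU stack.

                  Code:
                  #include <stdio.h>

                  int main(void)
                  {
                      volatile double big = 1e308;   /* near the top of double's range */
                      double x = big * 10.0 / 10.0;  /* 1e309 overflows a 64-bit double,
                                                        but fits in an 80-bit x87 register */
                      printf("%g\n", x);             /* SSE2 math: "inf"; x87 math: typically "1e+308" */
                      return 0;
                  }

                  Built as 32-bit with -mfpmath=387 the intermediate usually stays on the x87 stack and this prints 1e+308; built with -msse2 -mfpmath=sse (or on x86_64) every step rounds to 64 bits and it prints inf. Same source, different answers, which is exactly the kind of thing Mozilla no longer wants to chase on one platform only.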



                  • #59
                    Originally posted by s_j_newbury:
                    starshipeleven, I agree with most of what you're saying, just a couple of things:

                    There is a culture of designing hardware, and implicitly software, to push the limits instead of for long-term service. With software this comes from the apparent long-term increase in computing capacity: software just becomes less efficient with CPU and especially memory as more resources become available on the target hardware. This further reinforces the hardware upgrade cycle. This might well be coming to an end barring a major breakthrough; we'll see.
                    Luckily there are different trends and developers with different mindsets at work. For instance, Alpine Linux is a great example of something that requires very little in the way of resources. Even if you install systemd, pulseaudio, avahi, and all the fancy modern crap, your system boots up nicely with < 64 MB of RAM; < 32 MB is doable with the right software, without even trying any exotic compiler switches. Let's be honest, the smallest DDR3/DDR4 RAM chips you can find are 256 MB. Even RPi clones come with 2 gigs now, and the cheapest Linux-based routers have around 64 megs. When Linux started, not everyone had an 80386 with 4 megs of RAM; that was considered a high-end machine. Now typical desktop machines support up to 32 000 or 64 000 megs of RAM, and instead of a 0.1 gig hard drive you have a 10 000 gig one. The software requirements haven't increased anywhere near that fast.

                    CPU load has increased more, but there are good reasons for that. In 1991, smooth 320x200x8-bit graphics was OK; now people expect dual-monitor 4K HDR 60 fps H.265. If Firefox supported proper HW acceleration on Linux (compositing included), we could easily offload most of the work to the GPU and use a cheap quad-core ARM Cortex-A9 for day-to-day stuff. Back then, people didn't play music on computers; maybe MIDI, but not MP3. Now digital audio is ubiquitous. I'd say we've pretty much reached the saturation point with audio and 2D video now. We can also offload much of the work, which improves power efficiency tremendously.

                    Currently, the real problem, barring any major breakthrough, is the mindset that assumes fast single cores. CPU tech doesn't work that way anymore, and it hasn't for 10 years. You get 100% speedups from multicore every 1-2 years and 5% from single core; this has been true for 10 years. You also get huge speedups from heterogeneous cores: type A for task 1, type B for task 2 (e.g. H.264, H.265, AES). It's not hard to see where this is going. Back to the old times and pre-SSE2-era instruction sets? Definitely not.

                    Also, we are not running out of hardware. Over 4 billion PCs have shipped since SSE2 appeared, and on top of that something like 1+ billion smartphones and tablets ship each year.



                    • #60
                      caligula, the tablets and smartphones do not have SSE2 though; being predominantly ARM, they usually have NEON instead. This supports what I'm trying to get at: being able to leverage a different technology, because the source code and tooling are flexible enough, creates new opportunities. I ported Gentoo, including Firefox, to ARM EABI when it first appeared in 2005, for the PXA270 SoC with iWMMXt, so I have some personal experience, and this was only possible because the code was written to be portable and generic, with optimizations for specific fast paths. Yet I used iWMMXt (integer) SIMD where I could.

