
Linux Might Better Plan Its Code/Hardware Obsolescence From The Kernel


  • #11
    Originally posted by oiaohm View Post

    Yes, it's a 20-year-old card. The problem here is that particular server motherboard vendors licensed the right to make the Matrox G200 chip, right down to the complete silicon design. There are quite a few server motherboards with a VGA port that has a Matrox G200 behind it, and those are new server motherboards released this year. Some of them even sit behind a PCI-to-AGP bridge chip, despite the Matrox G200 being able to do PCI directly. This leads to a wacky stack of bridges: AMD EPYC CPU PCIe, then a PCIe-to-PCI bridge chip, then a PCI-to-AGP bridge chip, then the Matrox G200. There is a real AMD EPYC motherboard with that wacky stack, and of course the UEFI default output is the Matrox G200, which ignores other inserted graphics cards unless told otherwise, so you still need a VGA-capable monitor to set up a 2020 motherboard.

    Really, I think the Matrox G200 may be the oldest graphics chip design you can still buy new on anything. Mind you, the Matrox G200 is something of an upgrade over about four years ago, when some new server boards had a PCIe-to-PCI bridge, then PCI-to-ISA, and then an ISA VGA card, all consolidated into a single SoC for board construction. Some of the stuff you find on server boards is basically obeying the rule: if it's not broke, don't fix it.


    I hope these vendors will finally get the G200 out of their designs once AMD releases server APUs. Since they could save some money, I am sure they will be on board then. In the meantime, I wouldn't mind if such a kernel initiative pushed them harder to license newer tech, such as a low-cost ARM SoC that provides the same functionality (and newer display connectivity). Otherwise they will milk that Matrox license forever, until you can't get a VGA display anymore...
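As a side note, the bridge stack described in the quote shows up in `lspci` on Linux. A minimal sketch of spotting the onboard Matrox part, using hypothetical `lspci -nn` output (the bus addresses, bridge name, and device ID here are illustrative, not taken from a real board; only the `102b` Matrox vendor ID is a known constant):

```shell
#!/bin/sh
# Hypothetical `lspci -nn` excerpt from such a server; real output
# will differ per board, so this is canned text for illustration.
lspci_output='02:00.0 PCI bridge [0604]: Example PCIe-to-PCI bridge [ffff:ffff]
03:00.0 VGA compatible controller [0300]: Matrox Electronics Systems Ltd. MGA G200eW WPCM450 [102b:0532]'

# 102b is the PCI vendor ID assigned to Matrox, so any [102b:....]
# device is a Matrox part; on a real box you would pipe lspci itself.
if printf '%s\n' "$lspci_output" | grep -q '\[102b:'; then
    echo "Onboard Matrox G200-family VGA detected"
fi
```

On a real machine you would replace the canned variable with `lspci -nn` directly; `lspci -t` additionally draws the bridge topology as a tree.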



    • #12
      Get rid of as much as you can!

      32-bit, useless architectures, useless filesystems, prehistoric hardware drivers...

      Programs need a shower from time to time too.

      People who want to use very obsolete hardware shouldn't be given the privilege of getting the very latest software... efforts go in both directions.



      • #13
        Originally posted by mercuriete View Post
        I have an Acer laptop with a 32-bit Intel Celeron and i915 integrated graphics.

        The i915 driver works fine; that card even has two options, the classic driver or the Gallium driver.
        The driver exposes OpenGL 2.1 if you drop some configuration into .drirc.

        The kernel part, I think, is DRI1, and they are proposing to remove it.

        I wanted to know what I have to do to keep Mesa and the kernel supporting that legacy hardware.

        Of course I have more laptops and more systems at home, but that one refuses to break. I have run Gentoo on that laptop for many years, and it can spend a week at 100% CPU non-stop and still works like a boss.

        I would like more people to share their experiences with not-too-old hardware that refuses to break and works perfectly with a >= 5.4.x kernel.
        If you want to keep running ancient hardware, then at some point you're going to find yourself running ancient software on it as well.
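For reference, the .drirc tweak mentioned in the quote is Mesa's driconf XML. A minimal sketch of what such a file might look like, assuming the classic i915 driver's `fragment_shader` and `stub_occlusion_query` options (which are what historically gated its OpenGL 2.1 support; check the Mesa documentation for the options your build actually honors):

```xml
<!-- ~/.drirc: hedged example; option names assume the classic i915 Mesa driver -->
<driconf>
    <device driver="i915">
        <application name="Default">
            <!-- advertise GLSL fragment shaders and stub out occlusion
                 queries so the driver can expose OpenGL 2.1 -->
            <option name="fragment_shader" value="true"/>
            <option name="stub_occlusion_query" value="true"/>
        </application>
    </device>
</driconf>
```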



        • #14
          Originally posted by uid313 View Post
          Is 32-bit support getting dropped for any architectures?
          No. Also, 32-bit is only a problem on x86.
          Maybe that can clean up some code and legacy 32-bit workarounds and quirks?
          There is no such thing. Workarounds and quirks are for specific hardware devices, not for a general CPU architecture.



          • #15
            Originally posted by schmidtbag View Post
            I said it before and I'll say it again:
            What would make for a fantastic Linux distro is one built around the idea of supporting obsolete devices.
            rene's distro T2 SDE https://t2sde.org/ seems to fit the bill so far. He is always showcasing how it runs on ancient, weird, crappy PC systems.



            • #16
              Originally posted by ms178 View Post

              I hope these vendors will finally get the G200 out of their designs once AMD releases server APUs. Since they could save some money, I am sure they will be on board then. In the meantime, I wouldn't mind if such a kernel initiative pushed them harder to license newer tech, such as a low-cost ARM SoC that provides the same functionality (and newer display connectivity). Otherwise they will milk that Matrox license forever, until you can't get a VGA display anymore...
              A few issues with this. The servers you'd find G200s in won't migrate to server APUs. These servers tend to have 16+ cores, with 2-4 sockets per motherboard, and server APUs will probably top out at 8 cores and probably not support multiple sockets at all. The server APUs would also still require an out-of-band management chip, which is where these G200 graphics components live anyway. So you would still need /something/ to manage remote power-on/power-off, console-over-IP redirection, etc., and thus wouldn't really save any money on fewer components.

              Newer tech also wouldn't necessarily mean new display connectivity. The G200 itself can handle DVI output just fine, and it was one of the first chips that could use such a connector. In the server space, though, most devices continue using VGA connectors because of the huge inertia in the server management space. KVM switches and the like have VGA connectors almost exclusively, so putting in other connectors would also mean replacing all of the other perfectly functional management gear that lives in the datacenter. Thus, any replacement chip will almost certainly also have a VGA connector.

              Economics is also an important consideration here. These servers live in datacenters, with racks and racks of other servers, where even under Windows the actual console hardware may only be used for 20-30 minutes: just long enough to do basic configuration and make sure the box is talking to the PXE server before everything else is handled through SSH/RDP/etc. Putting in a more fully featured framebuffer would just waste the server vendor's, and customers', money.

              The G200 was chosen for a few very specific reasons. It's a reasonably priced, licensable core with reasonable performance, and it is well supported by modern operating systems. Any replacement would have to live in the same space, where it can be expected to Just Work whether the box is running Linux, Windows, Solaris, or any number of other OSes. Any replacement will also have the very same problems the G200 has now, where most of the people who "use" it only ever interact with it a handful of times per year.



              • #17
                Originally posted by starshipeleven View Post
                rene's distro T2 SDE https://t2sde.org/ seems to fit the bill so far. He is always showcasing how it runs on ancient, weird, crappy PC systems.
                Of course, that's the trick. Do you really think SDE stands for System Development Edition? It's really a T2 Skynet Distribution Edition.



                Anyway, that's the joke that plays in my head whenever that OS is mentioned.



                • #18
                  Originally posted by oiaohm View Post

                  Yes, it's a 20-year-old card. The problem here is that particular server motherboard vendors licensed the right to make the Matrox G200 chip, right down to the complete silicon design. There are quite a few server motherboards with a VGA port that has a Matrox G200 behind it, and those are new server motherboards released this year. Some of them even sit behind a PCI-to-AGP bridge chip, despite the Matrox G200 being able to do PCI directly. This leads to a wacky stack of bridges: AMD EPYC CPU PCIe, then a PCIe-to-PCI bridge chip, then a PCI-to-AGP bridge chip, then the Matrox G200. There is a real AMD EPYC motherboard with that wacky stack, and of course the UEFI default output is the Matrox G200, which ignores other inserted graphics cards unless told otherwise, so you still need a VGA-capable monitor to set up a 2020 motherboard.

                  Really, I think the Matrox G200 may be the oldest graphics chip design you can still buy new on anything. Mind you, the Matrox G200 is something of an upgrade over about four years ago, when some new server boards had a PCIe-to-PCI bridge, then PCI-to-ISA, and then an ISA VGA card, all consolidated into a single SoC for board construction. Some of the stuff you find on server boards is basically obeying the rule: if it's not broke, don't fix it.
                  Believe it or not, I know that. I'm actually surprised that it's still in use, that neither AMD nor Intel has come up with a way for vendors to replace it, and that server operators aren't buying boards without any GPU at all and using the cheapest RX 550 they can find, so the VGA-monitor-in-2020 problem isn't an issue (although I'm sure most of those admins have VGA-to-HDMI adapters lying around).



                  • #19
                    Originally posted by KesZerda View Post

                    A few issues with this. The servers you'd find G200s in won't migrate to server APUs. These servers tend to have 16+ cores, with 2-4 sockets per motherboard, and server APUs will probably top out at 8 cores and probably not support multiple sockets at all. The server APUs would also still require an out-of-band management chip, which is where these G200 graphics components live anyway. So you would still need /something/ to manage remote power-on/power-off, console-over-IP redirection, etc., and thus wouldn't really save any money on fewer components.

                    Newer tech also wouldn't necessarily mean new display connectivity. The G200 itself can handle DVI output just fine, and it was one of the first chips that could use such a connector. In the server space, though, most devices continue using VGA connectors because of the huge inertia in the server management space. KVM switches and the like have VGA connectors almost exclusively, so putting in other connectors would also mean replacing all of the other perfectly functional management gear that lives in the datacenter. Thus, any replacement chip will almost certainly also have a VGA connector.

                    Economics is also an important consideration here. These servers live in datacenters, with racks and racks of other servers, where even under Windows the actual console hardware may only be used for 20-30 minutes: just long enough to do basic configuration and make sure the box is talking to the PXE server before everything else is handled through SSH/RDP/etc. Putting in a more fully featured framebuffer would just waste the server vendor's, and customers', money.

                    The G200 was chosen for a few very specific reasons. It's a reasonably priced, licensable core with reasonable performance, and it is well supported by modern operating systems. Any replacement would have to live in the same space, where it can be expected to Just Work whether the box is running Linux, Windows, Solaris, or any number of other OSes. Any replacement will also have the very same problems the G200 has now, where most of the people who "use" it only ever interact with it a handful of times per year.
                    I'm assuming the servers mentioned are without a BMC, right? Because if they have a BMC, they could have used an ASPEED 2xxx BMC and gotten host VGA as a bonus in the package, with no need to license ancient crap.



                    • #20
                      Originally posted by starshipeleven View Post
                      rene's distro T2 SDE https://t2sde.org/ seems to fit the bill so far. He is always showcasing how it runs on ancient, weird, crappy PC systems.
                      Thanks for the shoutout. I would not call a formerly $50,000 high-end workstation a crappy PC system, though: https://www.youtube.com/watch?v=AU_RV8uoTIo. And T2 also works perfectly fine on my overkill Ryzen 3950X build ;-) https://www.youtube.com/watch?v=NUq39Jz5ZJI

