
NVMe SSD Systems May Boot Slightly Quicker With Linux 5.7


  • #11
    Originally posted by mppix View Post


    I would argue that it is a low hanging fruit for anyone that runs efi, i.e. all x86_64 hardware and some ARM systems. If one really needs a boot-manager beyond selecting different OS (efi can do that), there are slimmer/faster ones than grub.

    Distros will probably try to avoid that. Grub is just too convenient to provide compatibility with pretty much any hardware that you can think of.

    This obviously does not really matter to servers that take minutes to initialize HW.

    If interested here is a more complete discussion
    https://unix.stackexchange.com/quest...-uefi-and-grub
    That isn't correct at all. Damn near the first decade of x86_64 was BIOS-only, and that's why GRUB is the default bootloader everywhere.

    Also, just commenting in general here, but a lot of us have workstations where the longest part of the boot-up chain is the system initializing itself. It takes around 20 seconds after I power on for my system to do its pre-boot checks, another 5 seconds with GRUB, 5 at the password prompt, and only then is the OS finally booting. Oh, and that's my "Fast Boot"

    While these little fixes and speedups help, they don't fix, and will never fix, the real weak link in the chain -- systems with molasses-based firmware.
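
    For anyone curious where their own boot time actually goes, systemd's timing tools break it down into firmware, loader, kernel and userspace phases. The commands below are stock on any systemd distro; the numbers are made-up placeholders:

        # overall breakdown: firmware vs. boot loader vs. kernel vs. userspace
        $ systemd-analyze
        Startup finished in 18.2s (firmware) + 4.9s (loader) + 2.1s (kernel) + 6.4s (userspace) = 31.6s

        # slowest units, and the chain that actually gated reaching the default target
        $ systemd-analyze blame
        $ systemd-analyze critical-chain

    If the firmware number dominates, as in this made-up example, no kernel-side speedup is going to move the needle much.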

    Comment


    • #12
      Originally posted by mppix View Post
      I would argue that it is a low hanging fruit for anyone that runs efi, i.e. all x86_64 hardware and some ARM systems. If one really needs a boot-manager beyond selecting different OS (efi can do that), there are slimmer/faster ones than grub.
      Many ARM systems don't really need EFI. The boot process is extremely slim. For local booting you don't even need U-Boot proper, just Falcon boot -> boot Linux directly from the SPL. If you need network boot or more complex setups, there are U-Boot and Syslinux-style menus.
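
      For reference, Falcon mode boils down to letting the SPL load the kernel and device tree directly instead of chaining into full U-Boot. Here is a rough sketch of the one-time preparation, run from a regular U-Boot shell; the addresses, device numbers, file names and raw sector offsets are placeholders and entirely board-specific:

          # load the kernel and device tree that the SPL should boot later
          => load mmc 0:1 ${loadaddr} uImage
          => load mmc 0:1 ${fdtaddr} board.dtb

          # let U-Boot prepare the boot arguments / FDT blob for the SPL to reuse
          # (the command prints the RAM address of the prepared blob)
          => spl export fdt ${loadaddr} - ${fdtaddr}

          # write the exported blob to the raw sectors the SPL is configured to read
          # (replace <args_addr> with the printed address; offset and size are board-specific)
          => mmc write <args_addr> 0x80 0x80

      After that, an SPL built with Falcon/OS-boot support jumps straight from the boot ROM into the kernel, skipping the U-Boot shell entirely.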

      Comment


      • #13
        Improvements are always welcome, but the quickest reboot, by far, is still the reboot that you do not have to do at all, because your system runs stably around the clock and does not crash should you opt to suspend and resume it. Improvements in that area not only lower the boot time to zero, they also get rid of all the application-level time consumed to get back to work.

        Comment


        • #14
          Cold boot to desktop (Manjaro KDE) on my 3200G on a Gigabyte AORUS mini-ITX board (B450)* was so fast with the UEFI set to fast boot that I thought the machine had broken in the half second before Kodi started and bypassed its own splash screen.

          *Also, on NVMe.

          Not a fan of UEFI's cruft, but this implementation certainly flew.

          Comment


          • #15
            Originally posted by dwagner View Post
            Improvements are always welcome, but the quickest reboot, by far, is still the reboot that you do not have to do at all, because your system runs stably around the clock and does not crash should you opt to suspend and resume it. Improvements in that area not only lower the boot time to zero, they also get rid of all the application-level time consumed to get back to work.
            Yep... though some people prefer to completely shut down their electrical devices at night.

            Comment


            • #16
              Originally posted by Danny3 View Post
              Good job.
              I always wondered why modern computers that have 8+ CPU cores running at 3.5+ GHz, RAM at 3200+ MHz, and SSDs with transfer speeds of 300-500+ MB/s boot so slowly.
              I mean, why can't such powerful computers boot in less than 5 seconds?
              There are many things we could do. One thing that comes to mind, if the boot experience is important, would be to postpone setting up the network until the login screen has been displayed. Checking filesystems could also be postponed until the login screen has been shown. On my system, that would've shaved off a couple of seconds. Automatic preloading of commonly used files into the cache would help hard disk users and network booters a lot. Grabbing a screenshot at login could maybe be used to improve perceived login time, by displaying it full screen until the desktop is actually ready to be displayed.

              There are lots of ways you could improve perceived performance for desktop users.
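
              On a systemd-based distro, two of those ideas are only a couple of commands away. A minimal sketch, assuming NetworkManager and a separate data partition; the unit name is the stock one, but the device, mount point and options are just examples:

                  # stop boot from blocking on "network is fully online"
                  # (only safe if nothing started at boot genuinely needs the network up first)
                  $ sudo systemctl disable NetworkManager-wait-online.service

                  # defer mounting (and fsck'ing) a non-root partition until first access,
                  # via /etc/fstab options, instead of doing it during boot:
                  # /dev/nvme0n1p3  /data  ext4  noauto,x-systemd.automount,x-systemd.idle-timeout=60  0  2

              Preloading and the screenshot trick need more plumbing, but these two alone can shave visible seconds off the path to the login screen.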

              Comment


              • #17
                Originally posted by skeevy420 View Post
                That isn't correct at all. Damn near the first decade of x86_64 was BIOS-only, and that's why GRUB is the default bootloader everywhere.
                Sure, it should probably read "all reasonably modern x86_64." I just haven't seen any non-EFI x86_64 hardware in a while now, and I tend to run systems until they die (or are surpassed by a Raspberry Pi in computation).
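
                (Side note for anyone unsure which camp their own box is in: the kernel exposes it. And on EFI machines, the earlier point about the firmware being able to boot an OS directly amounts to an extra boot entry pointing at the kernel's EFI stub. The disk, partition and kernel/initrd paths below are examples, and the kernel needs its EFI stub enabled.)

                    # BIOS or UEFI? this directory only exists when the system booted via UEFI
                    $ [ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "legacy BIOS boot"

                    # register the kernel itself as a firmware boot entry, no GRUB involved
                    # (assumes the kernel and initramfs live on the EFI system partition)
                    $ sudo efibootmgr --create --disk /dev/nvme0n1 --part 1 \
                        --label "Linux direct" --loader /vmlinuz-linux \
                        --unicode 'root=/dev/nvme0n1p2 rw initrd=\initramfs-linux.img'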

                Originally posted by skeevy420 View Post
                Also, just commenting in general here, but a lot of us have workstations where the longest part of the boot-up chain is the system initializing itself. It takes around 20 seconds after I power on for my system to do its pre-boot checks, another 5 seconds with GRUB, 5 at the password prompt, and only then is the OS finally booting. Oh, and that's my "Fast Boot"
                20s init on a workstation seems long. Is this for full sys checks or just init? What are you running?

                Originally posted by skeevy420 View Post
                While these little fixes and speedups help, they don't fix, and will never fix, the real weak link in the chain -- systems with molasses-based firmware.
                There are systems with different use cases where boot times matter, e.g. embedded and industrial platforms.
                You hardly ever have to reboot workstations or laptops, but other use cases may require it.

                Comment


                • #18
                  Originally posted by mppix View Post
                  Sure, it should probably read "all reasonably modern x86_64." I just haven't seen any non-EFI x86_64 hardware in a while now, and I tend to run systems until they die (or are surpassed by a Raspberry Pi in computation).

                  20s init on a workstation seems long. Is this for full sys checks or just init? What are you running?

                  There are systems with different use cases where boot times matter, e.g. embedded and industrial platforms.
                  You hardly ever have to reboot workstations or laptops, but other use cases may require it.
                  Dell T5500. Dual Xeons with 48GB ECC. One Gen before Sandy and one gen before EFI was mandated.

                  Some days I feel that I have the last worth-a-shit BIOS-based Intel system.

                  But I only reboot when either rpm-ostree updates or I'm switching over to Windows to play Call of Duty (at least I'm honest about the game).

                  I'm tired of rebooting as much, so I'm gonna be picking up an RX 550 or RX 560 for my desktop and use my RX 580 with VMs. Need one without an extra power connector, since my two six-pins are already tied up in an eight-pin adapter, 'cause I has no free GPU powers

                  Comment


                  • #19
                    Originally posted by skeevy420 View Post

                    Dell T5500. Dual Xeons with 48GB ECC. One Gen before Sandy and one gen before EFI was mandated.

                    Some days I feel that I have the last worth-a-shit BIOS-based Intel system.

                    But I only reboot when either rpm-ostree updates or I'm switching over to Windows to play Call of Duty (at least I'm honest about the game).

                    I'm tired of rebooting as much, so I'm gonna be picking up an RX 550 or RX 560 for my desktop and use my RX 580 with VMs. Need one without an extra power connector, since my two six-pins are already tied up in an eight-pin adapter, 'cause I has no free GPU powers
                    Gang two PSUs together with a bridge? That used to be part of cryptominers' equipment. Fault tolerance is going to suffer, but it would solve the issue of not having enough cables.

                    Comment


                    • #20
                      Originally posted by aht0 View Post

                      Gang two PSUs together with a bridge? That used to be part of cryptominers' equipment. Fault tolerance is going to suffer, but it would solve the issue of not having enough cables.
                      ...if only I had pictures of my system from back in the day.

                      I actually did that, plus sawzalled a hole in the case because I didn't read the ATX specs correctly.

                      You can't get much more white trash geek than that system was

                      Two damn power buttons...one for the motherboard and the other for extra hard drives and muh GPU

                      It wasn't actually

                      Comment
