Linux 5.12 Lands Fix For File-System Corruption Caused By Swapfile Issue


  • Linux 5.12 Lands Fix For File-System Corruption Caused By Swapfile Issue

    Phoronix: Linux 5.12 Lands Fix For File-System Corruption Caused By Swapfile Issue

    For those wanting to help in testing out the Linux 5.12 kernel, at least it should no longer eat your data now if you rely on a swapfile...

    http://www.phoronix.com/scan.php?pag...rruption-Fixed

  • #2
    If I understand correctly, it was merged after rc1, so it's safer to wait for rc2 or the latest daily builds before trying out 5.12?

    Comment


    • #3
      Originally posted by Mez' View Post
      If I understand correctly, it was merged after rc1, so it's safer to wait for rc2 or the latest daily builds before trying out 5.12?
      Right, it was merged last night, so either use the newest daily builds or wait for -rc2, i.e. anything from today onward.
      Michael Larabel
      http://www.michaellarabel.com/

      Comment


      • #4
        SWAP? In 2021

        SWAP is an emergency buffer and of no use when RAM is affordable. But I still hear of people who follow this ancient MS-DOS rule "SWAP shall be twice as big as RAM". The result? They add 8 GB to an already installed 8 GB of RAM, for a total of 16 GB. And then they add 32 GB of SWAP. That doesn't make any sense; they should actually remove the SWAP entirely.

        On the other hand, we still don't use the full capabilities of CGROUPS to control actual resource usage. When a web browser consumes more memory than allowed (let us say 8 GB for Firefox in 2021), the kernel shouldn't provide more - which would most likely kill the application nowadays. Maybe the new daemons and changes in the kernel will improve that situation, but swapping until the system becomes unresponsive isn't a solution.

        SWAP itself isn't bad. But I haven't added SWAP to my systems for many years. And I don't have huge amounts of RAM.
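A minimal sketch of the cgroup v2 memory capping the post describes, assuming a unified hierarchy mounted at /sys/fs/cgroup and run as root; the group name "browser" and the limits are illustrative, not anything the kernel ships by default:

```shell
# Create a cgroup and cap its memory (cgroup v2 interface files).
mkdir /sys/fs/cgroup/browser
echo "6G" > /sys/fs/cgroup/browser/memory.high   # soft limit: reclaim/throttle first
echo "8G" > /sys/fs/cgroup/browser/memory.max    # hard limit: OOM-kill beyond this
echo $$   > /sys/fs/cgroup/browser/cgroup.procs  # move this shell into the group
firefox &                                        # children inherit the cgroup
```

With memory.high set below memory.max, the kernel reclaims and throttles the group before ever resorting to the OOM killer, which is roughly the "don't just swap until unresponsive" behavior the post is asking for.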

        Comment


        • #5
          I don't think a file system bug caused the problem in the image

          Comment


          • #6
            Originally posted by hsci View Post
            SWAP? In 2021

            SWAP is an emergency buffer and of no use when RAM is affordable. But I still hear of people who follow this ancient MS-DOS rule "SWAP shall be twice as big as RAM"
            I never had swap back when I was running MS-DOS. Maybe you're thinking of some aftermarket software? Windows 3.1?

            Anyway, I'd always have at least a small amount of swap available in case I run out of RAM. Otherwise I'd be worried about the OOM killer.
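Setting up a small swapfile as that kind of safety net can be sketched as follows (run as root; the 2 GiB size and the path /swapfile are illustrative; on Btrfs the file additionally needs copy-on-write disabled with chattr +C while still empty):

```shell
# Create, secure, format and enable a 2 GiB swapfile.
fallocate -l 2G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile         # swap files must not be readable by other users
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```

`swapon --show` (or `free -h`) should then list the file as active swap.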

            Comment


            • #7
              Originally posted by hsci View Post
              SWAP? In 2021

              SWAP is an emergency buffer and of no use when RAM is affordable... the new daemons and changes in the kernel will improve that situation, but swapping until the system becomes unresponsive isn't a solution.

              SWAP itself isn't bad. But I haven't added SWAP to my systems for many years. And I don't have huge amounts of RAM.
              I don't disagree that "reserve 2x RAM for swap" is dumb advice in #currentyear. But on the other hand, I would argue against thinking of swap as "slow RAM for the poors" or "malloc go brrrr, let's bandaid".

              Instead, think of it like this: RAM is volatile, so it would be useful to have a persistent backing store. We already have storage for the files and programs we care about, so why not have storage for the current system state when the system needs it. Today that's suspend-to-disk & hibernate. But as RAM gets closer to the CPU (on die or on package) and as storage gets faster, having a mechanism to power everything down and power back up nearly instantly is useful.

              Also, as RAM gets closer to the CPU, it's less likely to be upgradeable, and it won't grow as fast as compute & SSD capabilities.

              No doubt as computing evolves, the way Linux deals with working sets and persistent storage will evolve with it.

              Comment


              • #8
                Originally posted by nranger View Post

                I don't disagree that "reserve 2x RAM for swap" is dumb advice in #currentyear. But on the other hand, I would argue against thinking of swap as "slow RAM for the poors" or "malloc go brrrr, let's bandaid".

                Instead, think of it like this: RAM is volatile, so it would be useful to have a persistent backing store. We already have storage for the files and programs we care about, so why not have storage for the current system state when the system needs it. Today that's suspend-to-disk & hibernate. But as RAM gets closer to the CPU (on die or on package) and as storage gets faster, having a mechanism to power everything down and power back up nearly instantly is useful.

                Also, as RAM gets closer to the CPU, it's less likely to be upgradeable, and it won't grow as fast as compute & SSD capabilities.

                No doubt as computing evolves, the way Linux deals with working sets and persistent storage will evolve with it.
                It takes less than a second on my current laptop to resume from suspend. I guess an M.2 PCIe NVMe drive helps with that. How much faster is needed, exactly?

                Also, I have 32 GB of RAM; is a swap partition or a swapfile still needed when I barely go over 9 GB used?

                Comment


                • #9
                  Originally posted by hsci View Post
                  SWAP? In 2021
                  How do you hibernate your system without swap?

                  Comment


                  • #10
                    Originally posted by hsci View Post
                    SWAP? In 2021

                    SWAP is an emergency buffer and of no use when RAM is affordable. But I still hear of people who follow this ancient MS-DOS rule "SWAP shall be twice as big as RAM". The result? They add 8 GB to an already installed 8 GB of RAM, for a total of 16 GB. And then they add 32 GB of SWAP. That doesn't make any sense; they should actually remove the SWAP entirely.

                    On the other hand, we still don't use the full capabilities of CGROUPS to control actual resource usage. When a web browser consumes more memory than allowed (let us say 8 GB for Firefox in 2021), the kernel shouldn't provide more - which would most likely kill the application nowadays. Maybe the new daemons and changes in the kernel will improve that situation, but swapping until the system becomes unresponsive isn't a solution.

                    SWAP itself isn't bad. But I haven't added SWAP to my systems for many years. And I don't have huge amounts of RAM.
                    1. "Swap" isn't an acronym like "RAM" is.

                    2. Having swap available is kind of necessary for the kernel and its RAM management, as far as I have been informed in the other 5.12 thread on Phoronix.

                    3. Having swap is necessary for suspend-to-disk, which is a valid use case.

                    4. Having compressed-memory zram "swap" is better than not having it. I know, ye olde memory compressors of the DOS and Windows 9x age have a kind of smell about them, but compression algorithms and speeds have improved over the last decades, and if you can have RAM or RAM*2 because of highly compressible memory structures - why not? It's practically free.

                    5. Having swap can help when unusual spikes of memory consumption appear, e.g. when compiling or when the memory used in the VM cluster suddenly goes up because $USER did something he is allowed to do.

                    6. Killing processes because of OOM situations is *the* *last* *resort* and should never ever occur in real-life situations. It's like doing a heart transplant without anesthesia because it's the last thing left to do. For example, before killing processes, instruct them to dump their buffers (/proc/sys/vm/drop_caches). Is this done? What's the kernel's/systemd's OOM strategy? Is there a flowchart?

                    7. Killing processes because they exceed an arbitrarily specified memory limit is like deleting database tables because they exceeded one million rows.
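The zram swap from point 4 can be sketched like this (run as root, assuming the zram module is available; the 4 GiB size, the lz4 algorithm, and the priority value are illustrative choices):

```shell
# Load the zram module (creates /dev/zram0 by default).
modprobe zram
# Pick a fast compression algorithm and size the device.
echo lz4 > /sys/block/zram0/comp_algorithm
echo 4G  > /sys/block/zram0/disksize
# Format and enable it as swap, preferred over any disk-backed swap.
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```

Because zram pages are compressed in RAM rather than written to disk, swapping to it is orders of magnitude faster than a disk swapfile, which is why giving it a higher priority makes sense.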
                    Last edited by reba; 03 March 2021, 10:01 AM.

                    Comment
