Bcachefs Looks Like It Won't Make It For Linux 6.6


  • Originally posted by reba View Post
    4) disable swap altogether, let the system handle memory allocations within the constraints
You need to be careful with that option. Some Linux kernels depend on being able to push data out to swap and pull it back in to de-fragment physical memory. Disabling swap can be a way to break your system. The way it normally shows up is that applications which perform much better with huge page allocations can no longer get them, because physical memory is fragmented and the kernel has bits of memory, scattered across the memory space, that it cannot move without sending them out to swap and pulling them back in, and those stuck bits prevent huge pages from being allocated.
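A quick, rough way to watch for that on a running box is to read /proc/buddyinfo, which lists how many free blocks of each order every memory zone still has. This is only a sketch and assumes 4 KiB base pages, so order 9 corresponds to a 2 MiB huge page:

```python
#!/usr/bin/env python3
"""Rough sketch: gauge physical memory fragmentation from /proc/buddyinfo.
Each column is the count of free blocks of order N; few or no high-order
blocks means the kernel has little contiguous memory left for huge pages."""

HUGE_ORDER = 9  # 2^9 * 4 KiB = 2 MiB, the usual x86-64 huge page size

with open("/proc/buddyinfo") as f:
    for line in f:
        parts = line.split()
        # Line looks like: Node 0, zone Normal <free block count per order...>
        node, zone = parts[1].rstrip(","), parts[3]
        counts = [int(c) for c in parts[4:]]
        high_order_free = sum(counts[HUGE_ORDER:])
        print(f"node {node} zone {zone}: "
              f"{high_order_free} free blocks of 2 MiB or larger")
```

On a freshly booted machine the high-order counts are large; on a long-running no-swap machine they tend to drain toward zero.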


Kernel samepage merging is a feature people forget to turn on as well.

There is also the sledgehammer option:

    Disable over-commit.
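As a rough sketch of where those two knobs live, assuming the standard sysfs/procfs paths (both need root, and KSM only merges pages that applications have marked with madvise(MADV_MERGEABLE)):

```python
#!/usr/bin/env python3
"""Minimal sketch, not a tuning guide: enable kernel samepage merging and
switch the VM to strict (no-overcommit) accounting via the standard
/sys and /proc interfaces."""

# Turn on kernel samepage merging (1 = run, 0 = stop, 2 = stop and unmerge).
with open("/sys/kernel/mm/ksm/run", "w") as f:
    f.write("1")

# The sledgehammer: mode 2 disables over-commit, so allocations beyond
# swap + overcommit_ratio% of RAM fail up front instead of being promised
# and possibly OOM-killed later.
with open("/proc/sys/vm/overcommit_memory", "w") as f:
    f.write("2")
```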





    • Originally posted by WereCatf View Post

      They're not using mailing lists as a communications system, though; they're using it both for communications and for patch management and submission. I far prefer keeping patch and source-code management to a system that makes it easy to submit, track, modify and discuss specific portions of or the entire patch, like e.g. Github (I use Gitea for my own personal needs). Github, for example, makes it very easy to see with a few clicks the entire history of the submission, any specific portions of the code that were reviewed and needed modifications and so on -- you don't get that kind of ease with mailing lists.

For communications, I don't know. I would like the conversation related to the project in general to happen on a platform built for that, but emails -- including mailing lists -- are way too clunky a system for my tastes. Alas, I don't know what specific platform that would be, as that's not an area I've paid any attention to.

      But this is all irrelevant as I've never been in charge of any large project and I am a wholly incompetent hobbyist anyways. I'm not a professional, so my opinion hardly matters.
I've worked on medium-sized corporate and university teams.

One was a university HPC group running 5 compute clusters. They used a plain mailing list to manage the machines and communicate with clients, which I found maddening because it was difficult to keep track of discussions. I tried to get a listserv set up, but the response was that it was too big a change.

The other was a corporation that used Slack and later MS Teams, with Git for source control. This was better because a given conversation would stay under a root node in the Slack channel's listing.

      That one little extra layer of organization made all the difference.

The LKML has that extra layer when it is viewed through the listserv.



      • Originally posted by avis View Post
If you read anything which contradicts this statement on the net, it's just plain wrong. SWAP is only necessary when you don't have enough physical RAM to accommodate all the running applications, and if that's the case, latency will be horrible.
No Linux kernel developer who works on the memory subsystem says otherwise. But there is a catch. In the early stages after a new structure is added to the Linux kernel, the way to de-fragment it in physical memory is to push it out to swap, de-allocate it from its physical location, reallocate it somewhere else in physical memory, and pull it back in. Zero swap means you break this workaround. The effect is that the longer the system runs, the fewer huge pages you have left to use. Desktop users may not notice the problem at all due to short reboot cycles.

Why it is done this way: it is how the developers work out whether they have all the locking right. If the item ends up being pulled back in from swap before it has been relocated, that normally means the function doing the move did not have all the users of the structure correctly locked.

So whether swap is required for full functionality depends on the Linux kernel version you are using. Not every release contains new structures where the developers are unsure whether the locking for moving them in physical memory is right. But if you disable swap, the Linux kernel will not move any of those structures in physical memory at all, and you end up with the problem of losing your huge page allocations.
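One way to see the effect on a live machine is to ask for a pool of huge pages and check how many the kernel could actually hand out. A minimal sketch, using the standard /proc/sys/vm/nr_hugepages and /proc/meminfo interfaces; the request of 64 pages is an arbitrary example value and it needs root:

```python
#!/usr/bin/env python3
"""Rough sketch: request persistent huge pages and report the shortfall.
On a fragmented machine the kernel may only satisfy part of the request."""

REQUESTED = 64  # hypothetical number of 2 MiB huge pages to ask for

def meminfo(field):
    # Pull a single "Field:   value" line out of /proc/meminfo.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    return 0

# Ask the kernel to reserve huge pages.
with open("/proc/sys/vm/nr_hugepages", "w") as f:
    f.write(str(REQUESTED))

granted = meminfo("HugePages_Total")
print(f"requested {REQUESTED} huge pages, kernel reserved {granted}")
if granted < REQUESTED:
    print("shortfall -- physical memory is likely too fragmented to compact")
```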

        Originally posted by avis View Post
        Yeah, Linux uses SWAP for hibernation but it's a brain damaged concept and a bad idea which should have never been implemented this way. Windows has a separate hibernate.sys file for that (or something like that - I don't remember its name).
pagefile.sys and hiberfil.sys are what you are referring to. To be precise, when Windows first implemented hibernation it also just used pagefile.sys. There is a saving on writes to the storage device doing it that way.

Modern-day Windows hibernation is in fact closer to https://criu.org/Main_Page, which has not been functional with X11 but, as recently demoed, can be functional with Wayland. Yes, modern-day Windows hibernation starts with Windows Vista and required a graphical stack change.

Why does Linux distribution hibernation suck? The answer is X11. For good hibernation you need to be able to split the hardware bring-up from the applications.



        • Originally posted by timofonic View Post
Linux kernel management has to substantially improve; it's extremely incomplete. The documentation is incomplete: not every procedure is completely documented. There are massive holes in the documentation that cause lots of confusion and conflicts. There should be full-time paid managers to cope with certain tasks, such as training and conflict resolution.
You are on the right track, but you are missing something.

When you get into upper management at most companies, you have the secretary/assistant role. Low-level management normally does not have a direct secretary/assistant, so they have to handle the human-emotions side themselves.

In most companies, if a programmer is the manager, he/she is assisted by an assistant who is normally not a programmer but a documentation writer. Part of the assistant's job is conflict management.

This is the problem: upper management has to make cold, hard choices.

          At a banking conference to which they had accidentally been invited, "Dow representative" Erastus Hamm unveiled "Acceptable Risk," a Dow industry standard for determining how many deaths are acceptable when achieving large profits.

The Yes Men's Golden Skeleton stunt is very important for understanding upper-management logic. Yes, killing a billion-plus people, if it is profitable, is acceptable to a room full of pure upper-management people. The Yes Men were in shock: they thought they had presented something so outlandish that the room would condemn them, but because what they presented was "logical", the upper-management people had zero moral problem with it, and the worst part is they agreed with the complete idea.

There is a reason why you don't want upper-management people operating without assistants/secretaries/PR/HR/Legal. Guess what we have with Linux kernel development. People like Linus Torvalds are paid for. The matching assistant to Linus Torvalds to keep everything running smoothly? That is not paid for, so we have problems.

Please note: assistants/secretaries don't have authority to give approvals or rejections. They are there to smooth over issues and keep documentation and so on in order, so that the manager can work effectively, since the manager's tasks include interaction with non-upper-management.

The Linux kernel has tons of developers; it does not have tons of documentation writers or assistants. Yes, some company assistants end up specialised as HR/PR and legal. This is a particular class of people that is missing from Linux kernel development, and I have no clue how you would recruit them and get them involved. I do know how to sell this to an upper-management person in a company: they pay the person a wage to do this stuff.



          • Originally posted by oiaohm View Post

You need to be careful with that option. Some Linux kernels depend on being able to push data out to swap and pull it back in to de-fragment physical memory. Disabling swap can be a way to break your system. The way it normally shows up is that applications which perform much better with huge page allocations can no longer get them, because physical memory is fragmented and the kernel has bits of memory, scattered across the memory space, that it cannot move without sending them out to swap and pulling them back in, and those stuck bits prevent huge pages from being allocated.
            If you need swap to allow defragmentation of physical memory, can you use swap on zram to achieve this? Or does the fact that zram uses physical memory mean that you are trying to pull yourself up by your own bootstraps, and it will fail?
            I can see ways of arguing this both ways, so I guess it would be dependent on how zram is implemented, and I'm not an expert in this area.



            • Originally posted by Old Grouch View Post
              If you need swap to allow defragmentation of physical memory, can you use swap on zram to achieve this? Or does the fact that zram uses physical memory mean that you are trying to pull yourself up by your own bootstraps, and it will fail?
              I can see ways of arguing this both ways, so I guess it would be dependent on how zram is implemented, and I'm not an expert in this area.
              zram works.

Yes, a ramdisk block device with swap on it also works.

Any form of functional swap will do. By functional I mean that memory can be transferred from where it is, put somewhere else, and put back automatically when that memory is accessed.

Why the no-swap argument never settles: the effects an end user sees depend on:
1) the configuration of the kernel
2) the version of the kernel
3) the hardware your kernel is running on (yes, which drivers are being used)
4) the software you are using (does it take advantage of huge pages? that is the key question here -- see the sketch after this list)
5) the uptime of the system (with short uptimes, fragmentation is less of an issue)
Any one of these five factors, or a combination of them, determines whether running with no swap shows any problem at all. With particular combinations you are simply not going to get huge pages at times when you should have them.
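For point 4, here is a minimal sketch of how to check whether a given application is actually being backed by huge pages, assuming /proc/<pid>/smaps_rollup is available (kernel 4.14 or newer); the PID argument is whatever process you are curious about:

```python
#!/usr/bin/env python3
"""Rough sketch: report how much of a process's anonymous memory is
currently backed by transparent huge pages."""
import sys

pid = sys.argv[1] if len(sys.argv) > 1 else "self"

thp_kb = 0
with open(f"/proc/{pid}/smaps_rollup") as f:
    for line in f:
        # AnonHugePages counts anonymous memory currently backed by THP.
        if line.startswith("AnonHugePages:"):
            thp_kb = int(line.split()[1])

print(f"PID {pid}: {thp_kb} kB of anonymous memory backed by huge pages")
if thp_kb == 0:
    print("no huge pages in use -- either the app never asked, or the "
          "kernel could not find contiguous 2 MiB blocks to back them")
```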

There are lots of "works for me" people out there, but there are also lots of "it breaks for me" people out there as well. The hard part is that both parties are technically right.


              Using Huge Pages
              If the user applications are going to request huge pages using mmap system call, then it is required that system administrator mount a file system of type hugetlbfs:
This is the catch: the huge pages that have the biggest problem with memory fragmentation are something you opt into. Most general desktop distribution configurations ship with huge pages basically off by default, but the user can turn the feature on because it is built into the distribution kernel.
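As a rough sketch of that opt-in, following the quoted admin-guide text: mount a hugetlbfs instance and mmap() a file from it. This assumes 2 MiB huge pages, that a pool has already been reserved via /proc/sys/vm/nr_hugepages (as in the earlier sketch), and that /mnt/huge is an acceptable mount point; run as root:

```python
#!/usr/bin/env python3
"""Rough sketch: mount hugetlbfs and touch a huge-page-backed mapping."""
import mmap
import os
import subprocess

HUGE_PAGE = 2 * 1024 * 1024
MOUNTPOINT = "/mnt/huge"

# Mount hugetlbfs; files created here are backed by huge pages.
os.makedirs(MOUNTPOINT, exist_ok=True)
subprocess.run(["mount", "-t", "hugetlbfs", "none", MOUNTPOINT], check=True)

# An application then mmap()s a file from the mount to get huge pages.
fd = os.open(os.path.join(MOUNTPOINT, "demo"), os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, HUGE_PAGE)
buf = mmap.mmap(fd, HUGE_PAGE)
buf[:5] = b"hello"   # touching the mapping faults in a 2 MiB huge page
print("wrote to a huge-page-backed mapping at", MOUNTPOINT + "/demo")
buf.close()
os.close(fd)
```

If the reserved pool has been eaten away by fragmentation, the mmap() step is where applications start failing.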

You do need the heads-up that, having turned off swap, if you have to turn on huge pages in the future for some application, you may need to bring some swap back. The good part is that for physical memory de-fragmentation you can get away with something like 1 MB of swap set to the lowest priority you have ever seen. Yes, 1 MB is most likely oversized, because the small kernel pages that cause the problem of sitting in the wrong place are only 4 kB each; 1 MB / 4 kB is 256 pages that can be shoved out to swap at any one time.

The other issue with fragmented physical memory is that you can at times notice reduced DMA performance (same root cause: no way to make contiguous physical memory allocations). Again, this takes days/weeks/months of running (usage pattern is a factor here), not hours, before the problem gets bad. So a person who reboots their computer every 24 hours is most likely not going to notice any problem with fragmentation of physical memory caused by no swap, which results in those people claiming there is no problem.

MS Windows also suffers from the DMA transfer slowdown as physical memory fragments, so a person can think it is normal for a long-running PC to slightly lose transfer performance when that is not in fact the case, because it is a symptom of a memory management defect. Here a person can claim everything is fine when it is not, because what they class as normal is not right.

If things don't seem quite right with no swap, putting a small zram swap back and seeing what happens can be a good move, because if all your problems go away, whatever you were doing was hitting the problems that come out of memory fragmentation.
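A minimal sketch of that experiment, assuming the standard zram sysfs interface and the usual mkswap/swapon tools; the 64 MiB size is an arbitrary example value and it needs root:

```python
#!/usr/bin/env python3
"""Rough sketch: bring up a small zram-backed swap device for testing."""
import subprocess

SIZE = str(64 * 1024 * 1024)   # 64 MiB -- tiny, compressed, lives in RAM

# Load the zram module (creates /dev/zram0 by default) and size the device.
subprocess.run(["modprobe", "zram"], check=True)
with open("/sys/block/zram0/disksize", "w") as f:
    f.write(SIZE)

# Format it as swap and enable it, giving the kernel somewhere to page
# things out to while compacting physical memory.
subprocess.run(["mkswap", "/dev/zram0"], check=True)
subprocess.run(["swapon", "/dev/zram0"], check=True)

print("small zram swap enabled; watch whether the symptoms go away")
```

If you also have disk swap configured, swapon's -p option lets you pin this device at a different priority so it is only a fallback.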

