systemd 256 Nears Release With run0, systemd-vpick, importctl & More


  • archkde
    replied
    Originally posted by uid313 View Post
I think systemd-oomd is too slow to react; when my system runs out of RAM it freezes for minutes before it kills the offending process.
    And why let any process consume all remaining RAM? I would like to auto-kill any process that consumes more than 8 GB of RAM.
    Killing processes using more than a fixed amount of memory is not oomd's job. Use cgroup resource limits instead.
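For reference, a per-service cap like that can be expressed with a systemd drop-in (a sketch; the unit name is a made-up example):

```ini
# /etc/systemd/system/myapp.service.d/memory.conf  (unit name is hypothetical)
[Service]
# Hard cap: the kernel OOM-kills the service's cgroup when it exceeds 8 GiB
MemoryMax=8G
# Soft cap: memory is reclaimed aggressively above this threshold
MemoryHigh=6G
```

The same properties also work ad hoc, e.g. `systemd-run --user -p MemoryMax=8G <command>`.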



  • ol3geezer
    replied
    systemd continues to slay



  • oiaohm
    replied
    Originally posted by uid313 View Post
    Maybe two or three minutes.
    I have a SATA SSD on a system with 16 GB RAM and I've disabled swap. I don't have any swap partition nor swap file.
On some setups, having no swap at all is the real problem.

Having no swap partition and no swapfile on disk is fine, because disk-backed swap is not exactly fast. But having no swap at all can open Pandora's box. Android uses zram as in-memory swap for a few reasons; one is to keep OOM events from turning into stalls. RAM-based swap is quite fast, so it buys the OOM killer (of whatever form) that little bit of extra time to go "oh hell, I am out of memory, I need to do something." It also leaves the OOM killer enough headroom to push pages around, so it is not itself stalled by the lack of memory.

Zero swap and slow swap produce what looks like the same problem when you run out of memory, stalling, but the stalls are in fact different. With slow swap the stall is slow transfers. With zero swap the stall is the OOM killer finding itself without enough free RAM to do its own processing, pushing things out of the way over and over again. Both are bad.

There are also memory-fragmentation cleanup mechanisms in the Linux kernel that need swap; that is normally only a problem if the system runs for months. And this need for swap does not mean disk-based swap: zram is good enough to meet it.



  • Serafean
    replied
    Originally posted by JMB9 View Post

    I have used 16 GB main memory for a long time - but always with Swap of same size. With current systems
I experienced 16 GB to be not enough
I recommend enabling zram. It breathes new life into RAM-limited setups.
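On distributions that ship systemd's zram-generator, enabling it can be as small as one config file (a sketch; assumes the zram-generator package is installed):

```ini
# /etc/systemd/zram-generator.conf
[zram0]
# Size the compressed swap device at half of physical RAM
zram-size = ram / 2
compression-algorithm = zstd
```

After a reboot (or starting systemd-zram-setup@zram0.service), `swapon --show` should list /dev/zram0.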



  • NotMine999
    replied
    Originally posted by mxan View Post
    So when's systemd-kerneld coming?
Will that be better and more stable than systemd-linuxd?

Or does it simply restart itself when a watchdog timer fires?



  • intelfx
    replied
    Originally posted by JMB9 View Post
    And to get on topic - my current system (quite fast) needs more than 30 min to go down after creating
    some backups (of volume more than system RAM) - and looks like having crashed - and partly have to
    give "shutdown -h now" on console - so whatever systemd wants to do is the wrong thing ...
    and yes, I am still not happy to see a binary blob replacing clear scripts ... just old school - so just my fault.
This looks like word salad; the entire sentence makes no sense to me. Could you please clarify what you actually intended to convey here, besides that "whatever systemd wants to do is the wrong thing"?
    Last edited by intelfx; 23 May 2024, 09:46 AM.



  • JMB9
    replied
    Originally posted by uid313 View Post

    Maybe two or three minutes.
    I have a SATA SSD on a system with 16 GB RAM and I've disabled swap. I don't have any swap partition nor swap file.
I have used 16 GB of main memory for a long time - but always with swap of the same size. With current systems
I experienced 16 GB to be not enough - so my next machine has 64 GB ECC RAM, but I have not powered it on (first light) yet ...
though I am longing to see it ...
And with 64 GB I won't use any swap ... as that should be sufficient - and I think a program could make
good use of more than 8 GB of RAM.
And to get on topic - my current system (quite fast) needs more than 30 min to go down after creating
some backups (larger in volume than system RAM) - it looks like it has crashed, and I partly have to
issue "shutdown -h now" on the console - so whatever systemd wants to do is the wrong thing ...
And yes, I am still not happy to see a binary blob replacing clear scripts ... just old school - so just my fault.



  • uid313
    replied
    Originally posted by Avamander View Post

    Only a minute? The usual behaviour is total lock-up, especially if you've made the mistake of enabling swap on NVMe.
    Maybe two or three minutes.
    I have a SATA SSD on a system with 16 GB RAM and I've disabled swap. I don't have any swap partition nor swap file.



  • Avamander
    replied
    Originally posted by uid313 View Post
I think systemd-oomd is too slow to react; when my system runs out of RAM it freezes for minutes before it kills the offending process.
    And why let any process consume all remaining RAM? I would like to auto-kill any process that consumes more than 8 GB of RAM.
    Only a minute? The usual behaviour is total lock-up, especially if you've made the mistake of enabling swap on NVMe.



  • Volta
    replied
    Originally posted by uid313 View Post

It would be great if this were used by VS Code. I ran my application within VS Code and my computer froze and crashed because of infinite recursion caused by a bug in my code that was parsing or building a recursive tree structure.
Maybe you'll be able to tweak your VS Code run command and add limits there? Or try setting the limit for the current user:

ulimit -Sv 1000000 (a ~1 GB virtual-memory limit per process; the value is in KiB)
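As a rough illustration of what such a limit does (a sketch; python3 is used here only as a convenient allocator, and note that `ulimit -v` counts KiB):

```shell
# Run an allocation attempt in a subshell with a ~1 GiB virtual-memory soft limit.
(
  ulimit -Sv 1000000                         # ~1 GiB; the value is in KiB
  python3 -c 'b = bytearray(2 * 1024**3)'    # try to allocate ~2 GiB
) 2>/dev/null && echo "allocation succeeded" || echo "allocation blocked"
```

The oversized allocation fails inside the subshell, while the parent shell keeps its original limits.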
    Last edited by Volta; 23 May 2024, 06:20 AM.

