Illumos Dropping SPARC, Allows For Newer Compiler + Eventual Use Of Rust In The Kernel


  • pracedru
    replied
    Originally posted by cb88 View Post

    How big is your disk... you never said. Also OI isn't tuned for your setup... it's tuned for systems with 64-256GB+ RAM as a norm, and as has already been said, it can be tuned for your system; it just requires doing so.

    Solaris has a history of being considered "heavy", but that is because it is tuned to run mostly on large systems, not because of bloat.
    OpenIndiana markets itself as suitable for desktops.

    OpenIndiana is a community supported operating system, based on the illumos kernel and userland.

    It is open source, free to use, and suitable for servers and desktops.
    If what you are saying is true, that it is an issue with ZFS configuration, I think they should tweak their ZFS config for a somewhat smaller amount of RAM. Better yet, the OS should detect the available hardware and configure itself accordingly.
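    As a purely hypothetical sketch of that auto-tuning idea (the arc_cap_mb name and the 25% ratio are illustrative assumptions, not anything OpenIndiana actually ships), an installer could derive an ARC cap from the machine's RAM:

    ```shell
    #!/bin/sh
    # Hypothetical sketch: derive a ZFS ARC cap from installed RAM.
    # The 25% ratio and the 64 MB floor are illustrative, not official tuning advice.
    arc_cap_mb() {
        ram_mb=$1
        cap=$(( ram_mb / 4 ))        # cap the ARC at a quarter of RAM...
        [ "$cap" -lt 64 ] && cap=64  # ...but never below 64 MB
        echo "$cap"
    }

    arc_cap_mb 4096   # a 4 GB machine -> prints 1024
    ```

    On illumos the installer could then write the result (converted to bytes) into /etc/system as a `set zfs:zfs_arc_max = <value>` line at install time.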

    Leave a comment:


  • k1e0x
    replied
    Originally posted by jacob View Post

    Honestly I don't see a lot of demand for ZFS storage, in fact barely any demand at all. As you say, those who use Linux in production for massive storage (i.e. not Ubuntu, that's a cloud instance OS, we're talking mainly RedHat and SUSE) don't use ZFS. Ubuntu's effort to push ZoL was met with a massive shrug, and the fact that you can do "apt-get install zfs" in Debian isn't changing the world either. From where I stand it seems to me that it's mainly the home NAS people who swear by ZFS, but that's a niche application and clearly not something that has any real impact on the development of Linux.
    In my experience sysadmins really like it. It's a competing platform to NetApp and I manage several ZFS pools now.

    The trouble we have deploying more of it is that you can only really use it on FreeBSD or Solaris (rip) in live production. That has recently changed and you *can* run it in production on Linux with a commercially backed distro but that hasn't propagated down to enterprise level yet and storage arrays tend to live forever. FreeBSD is a fine OS but you need Linux as well if you want it everywhere because of service contracts.

    So no, I think you're completely wrong. I think Ubuntu is making a very wise call here backing ZFS. There is nothing else you really can use that is open source. NetApp, EMC or DDN and... ZFS. That's pretty much it.
    Last edited by k1e0x; 11 May 2021, 06:23 PM.

    Leave a comment:


  • jacob
    replied
    Originally posted by skeevy420 View Post
    The problem with ZFS is there's a lot of demand for ZFS storage with minimal demand for ZFS as root.
    Honestly I don't see a lot of demand for ZFS storage, in fact barely any demand at all. As you say, those who use Linux in production for massive storage (i.e. not Ubuntu, that's a cloud instance OS, we're talking mainly RedHat and SUSE) don't use ZFS. Ubuntu's effort to push ZoL was met with a massive shrug, and the fact that you can do "apt-get install zfs" in Debian isn't changing the world either. From where I stand it seems to me that it's mainly the home NAS people who swear by ZFS, but that's a niche application and clearly not something that has any real impact on the development of Linux.

    Leave a comment:


  • k1e0x
    replied
    "OMG ZFS uses so much memory."

    False.

    Yes, ZFS will consume every bit of free memory it can, but it uses it as a cache and releases that memory once a program needs it, so in effect it makes efficient use of the hardware in your system. The filesystem can run with no memory cache at all, like any other filesystem.

    The reason Sun put "memory requirements" into the original documentation is because they wanted to guarantee a level of ZFS performance, but yes, you can run ZFS just fine on a system with 128 MB of RAM.
    Last edited by k1e0x; 11 May 2021, 01:45 PM.

    Leave a comment:


  • cb88
    replied
    Originally posted by pracedru View Post

    I agree that 4GB RAM might be on the low end for many Linux desktops, if you need it for something productive.
    But just running a desktop and installing a program like LibreOffice with a package manager should definitely be possible with 2 GB RAM. As you can see from the results of my test, it also went just fine on the F34 MATE box, as it peaked at < 1GB RAM consumed. The installation consumed only about 150 MB extra RAM on F34, while on OpenIndiana it consumed an extra 2400 MB.

    I am definitely not saying that it isn't ZFS, but it seems unlikely to me.
    How big is your disk... you never said. Also OI isn't tuned for your setup... it's tuned for systems with 64-256GB+ RAM as a norm, and as has already been said, it can be tuned for your system; it just requires doing so.

    Solaris has a history of being considered "heavy", but that is because it is tuned to run mostly on large systems, not because of bloat.

    Leave a comment:


  • pracedru
    replied
    Originally posted by cb88 View Post

    Yeah, even in Linux 4GB is what I'd call the bare minimum for "comfort", and if I do much... I'll run out. If you have 4GB + 4TB of drives, which isn't atypical these days, then you are pressing real hard up against the limits of ZFS; it's designed for machines with 64GB+.
    I agree that 4GB RAM might be on the low end for many Linux desktops, if you need it for something productive.
    But just running a desktop and installing a program like LibreOffice with a package manager should definitely be possible with 2 GB RAM. As you can see from the results of my test, it also went just fine on the F34 MATE box, as it peaked at < 1GB RAM consumed. The installation consumed only about 150 MB extra RAM on F34, while on OpenIndiana it consumed an extra 2400 MB.

    I am definitely not saying that it isn't ZFS, but it seems unlikely to me.

    Leave a comment:


  • cb88
    replied
    Originally posted by skeevy420 View Post

    Then limit ZFS's memory usage. That shouldn't be necessary with 4GB+ RAM available. The FreeBSD recommended minimum is 4GB for comfortable use with most workloads, and what you are experiencing is to be expected, since ZFS should yield its used RAM to the system.

    I've read in the past that 2GB is the extreme minimum a ZoL system should have; FreeBSD says 1GB is the extreme minimum (possibly with tuning); most places say 4GB + 1GB per TB of ZFS storage is optimal.
    Yeah, even in Linux 4GB is what I'd call the bare minimum for "comfort", and if I do much... I'll run out. If you have 4GB + 4TB of drives, which isn't atypical these days, then you are pressing real hard up against the limits of ZFS; it's designed for machines with 64GB+.
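    For concreteness, the 4GB-base + 1GB-per-TB rule of thumb mentioned in the thread works out like this (zfs_ram_gb is a made-up helper name, and the rule itself is only a community guideline):

    ```shell
    #!/bin/sh
    # Community rule of thumb: 4 GB base + 1 GB per TB of pool storage.
    # zfs_ram_gb is a hypothetical helper, not a real tool.
    zfs_ram_gb() {
        pool_tb=$1
        echo $(( 4 + pool_tb ))
    }

    zfs_ram_gb 4    # the 4 TB example above -> prints 8
    ```

    So a 4GB machine with 4TB of drives would, by that guideline, already want twice the RAM it has, before even counting the OS itself.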

    Leave a comment:


  • skeevy420
    replied
    Originally posted by pracedru View Post

    The reason why I don't think it is the case is because I had originally provisioned the VM with 2 GB RAM and tried to install GIMP.
    The installation process was killed because the process ran out of memory. I then added 2GB RAM to the machine and tried installing LibreOffice. It took forever to install, and I noticed that it began swapping memory to disk. This would not be the case had it been a cache issue.
    While it might not be, I still think ZFS is at fault. 2GB had crashing. 4GB had caching. I'm no rocket scientist but that seems like ZFS memory issues at play.

    With 4GB I'd limit the ARC to 1GB of RAM, or just add 1-2GB of RAM to the VM, since I don't think the 4GB + 1GB per TB suggestion accounts for OS memory. Every time I read that suggestion I wonder to myself "Is that in addition to the RAM my system already has?", so I'd err on the side of caution and play it safe with a 6GB minimum. That should be enough to cover ZFS and the OS. You might want more RAM on a Linux OS, since systemd likes to gank RAM for tmpfs.
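    For anyone who wants to try capping the ARC as suggested, these are the standard knobs on each platform; the 1 GiB value is just an example:

    ```shell
    # Cap the ZFS ARC at 1 GiB (1073741824 bytes); the value is illustrative.

    # Linux (ZoL), at runtime (as root):
    echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max
    # ...or persistently, in /etc/modprobe.d/zfs.conf:
    #   options zfs zfs_arc_max=1073741824

    # FreeBSD, in /boot/loader.conf:
    #   vfs.zfs.arc_max="1073741824"

    # illumos / OpenIndiana, in /etc/system (takes effect after a reboot):
    #   set zfs:zfs_arc_max = 1073741824
    ```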

    2-4GB is really pushing it with ZFS. I can't stress that enough. If this was a video game, those are the specs where you'd have to be opening config files to make out-of-game tweaks to get it to run better.

    To be frank, I've never run ZFS with less than 32GB of system RAM available, or in a VM, ever. I love ZFS and promote it all the time, but there's a time and a place for ext4, and a VM with only 2-4GB RAM is one of them, unless you're doing ZFS memory pressure testing. Every PC I've owned in the past 10 years has had 32GB or more RAM, so I've never run into ZFS out-of-memory swap issues, and I prefer to stick to the VM's image format or simpler file systems like ext3 and 4.

    I checked and Ubuntu says 2GB of free memory is enough to use ZFS but suggests using it in a system with no less than 8GB.

    Leave a comment:


  • cretin1997
    replied
    No it's not ZFS everyone. The real reason is here:



    I only registered to point you to the right answer. ZFS is not to blame here, guys.

    Leave a comment:


  • pracedru
    replied
    Originally posted by skeevy420 View Post

    Based on the numbers you posted, I think it'd be safe to assume that ZFS was the culprit.
    The reason why I don't think it is the case is because I had originally provisioned the VM with 2 GB RAM and tried to install GIMP.
    The installation process was killed because the process ran out of memory. I then added 2GB RAM to the machine and tried installing LibreOffice. It took forever to install, and I noticed that it began swapping memory to disk. This would not be the case had it been a cache issue.

    Leave a comment:
