Ubuntu Linux Considers Greater Usage Of zRAM


  • #11
    Originally posted by Aleve Sicofante View Post
    Say you have 1GB of RAM and devote 256MB to zRAM. Now you have 768MB available. When the system needs 900MB, it will use part of your zRAM as its swap area. If you didn't have zRAM, the system would have accessed RAM directly: no need to swap. If the system needs 2GB, it'll swap to disk anyway, since the 256MB of zRAM will be of no use in that case.
    No.

    zRAM consumes RAM only when used, not when merely initialized. So if you have 1 GB of RAM and devote 256 MB to zRAM, you'll effectively see 1 GB of RAM plus 256 MB of compressed swap (clearly visible when running 'free'). When the system needs 900 MB, it still uses only RAM and no swap. If the system needs 1.2 GB, it will start moving pages from RAM to zRAM. Available RAM will decrease as zRAM fills up, but the available zRAM space will decrease more slowly, since the pages it holds are compressed.

    This is why zRAM works well in almost every scenario.
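
    Here is a minimal Python sketch of that accounting, assuming a kernel new enough to expose /sys/block/zram0/mm_stat (older kernels split the same counters into separate orig_data_size, compr_data_size and mem_used_total files):

    Code:
    def zram_usage(dev="zram0"):
        # mm_stat fields start with: orig_data_size compr_data_size mem_used_total
        with open(f"/sys/block/{dev}/mm_stat") as f:
            fields = [int(x) for x in f.read().split()]
        return fields[0], fields[1], fields[2]

    orig, compressed, mem_used = zram_usage()
    print(f"uncompressed data held: {orig / 2**20:8.1f} MiB")
    print(f"compressed size:        {compressed / 2**20:8.1f} MiB")
    print(f"physical RAM consumed:  {mem_used / 2**20:8.1f} MiB")
    # A freshly initialized device prints ~0 on all three lines even with a
    # 256 MiB disksize: RAM is only consumed as pages actually get swapped
    # in and compressed, which is exactly the point made above.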



    • #12
      Originally posted by GreatEmerald View Post
      Personally I set KDE to use the SI system (1 kB = 1000 B) for all file sizes. It's apparently the default on Mac OS X as well. It makes little sense to count file sizes in powers of two to begin with; it just causes confusion, especially when dealing with large files. Though it does make sense for RAM, because its modules come in power-of-two sizes.

      And yeah, Windows, last time I checked, still incorrectly labels sizes "MB" and such even when it really means "MiB". I'm not sure whether that changed in Windows 8, though. Probably not.
      And yet HDDs internally have 512-byte (2^9) or 4096-byte (2^12) sectors and can't process data in smaller packets than that... File systems (usually) allocate space for files in 4096-, 8192- or 16384-byte chunks, and while other cluster sizes are available in different file systems, there are no file systems that operate on decimal 4 KB, 8 KB or 16 KB clusters.

      The only area of computing that genuinely uses SI prefixes is network speeds, and that is because we measure those in bits, not bytes, since bytes don't always have to be 8 bits long...

      Computers deal with binary numbers, so get used to it. Decimal prefixes for HDDs are just the result of marketdroids messing with stuff they (as always) don't understand. If they could sell 908-byte kilobytes, by God, they would.
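
      As a quick illustration of cluster-based allocation, here is a small Python sketch (the file name is hypothetical, and the 4096-byte result assumes a typical ext4 volume; POSIX counts st_blocks in 512-byte units):

      Code:
      import os

      with open("tiny.txt", "w") as f:   # hypothetical 5-byte file
          f.write("hello")

      st = os.stat("tiny.txt")
      logical = st.st_size               # bytes the file holds: 5
      allocated = st.st_blocks * 512     # bytes the file system reserved
      print(logical, allocated)          # e.g. "5 4096" with 4 KiB clusters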



      • #13
        Originally posted by tomato View Post
        And yet HDDs internally have 512-byte (2^9) or 4096-byte (2^12) sectors and can't process data in smaller packets than that... File systems (usually) allocate space for files in 4096-, 8192- or 16384-byte chunks, and while other cluster sizes are available in different file systems, there are no file systems that operate on decimal 4 KB, 8 KB or 16 KB clusters.
        Which is good and all, but it's completely pointless to use that system for humans. We don't think in powers of two, so the fact that 8 GiB = 8589934592 B is something normal people will never calculate off the top of their heads. But 8 GB = 8000000000 B makes perfect sense and causes no confusion. That doesn't mean things have to be counted in decimal internally, but when a size is presented to the user, it would be nice if it were calculated in a way we understand. Similarly, all the configuration options you see in programs have descriptive names instead of raw variable names, even though using the variable names directly would be easier and is how it works internally...
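
        To put numbers on the gap, here is a short Python sketch that renders the same byte count with SI (powers of 1000) and IEC (powers of 1024) prefixes; the helper name and tables are just illustrative:

        Code:
        SI  = [("kB", 1000), ("MB", 1000**2), ("GB", 1000**3), ("TB", 1000**4)]
        IEC = [("KiB", 1024), ("MiB", 1024**2), ("GiB", 1024**3), ("TiB", 1024**4)]

        def render(nbytes, table):
            name, factor = max((p for p in table if p[1] <= nbytes),
                               key=lambda p: p[1], default=("B", 1))
            return f"{nbytes / factor:.2f} {name}"

        n = 8 * 1024**3        # an "8 GB" RAM module's actual byte count
        print(n)               # 8589934592
        print(render(n, IEC))  # 8.00 GiB
        print(render(n, SI))   # 8.59 GB -- a ~7% gap at giga scale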



        • #14
          Originally posted by GreatEmerald View Post
          Which is good and all, but it's completely pointless to use that system for humans. We don't think in powers of two, so the fact that 8 GiB = 8589934592 B is something normal people will never calculate off the top of their heads. But 8 GB = 8000000000 B makes perfect sense and causes no confusion. That doesn't mean things have to be counted in decimal internally, but when a size is presented to the user, it would be nice if it were calculated in a way we understand. Similarly, all the configuration options you see in programs have descriptive names instead of raw variable names, even though using the variable names directly would be easier and is how it works internally...
          Working with base 10 is just a convention we currently use because it's easier for longhand calculations; before that, we used 12 and 60 as counting bases because they were easier for mental arithmetic. For data, a binary base is more convenient.

          And if you don't care how things work at the low level, then you end up with "patches" that claim to show 4 GiB of available memory on 32-bit OSes that don't use PAE. Or you wonder why you can't have more than 65536 rows in a spreadsheet, why 0.1 + 0.2 is not equal to 0.3, etc., etc. Computers work in binary, deal with it.

          We measure information in base-2 units because the way we store information is base 2. If the user remembers that there are 1024 k in 1 M, 1024 M in 1 G, and so on, then he just needs to compare the numbers. It's confusing only because storage media manufacturers use SI prefixes for marketing reasons. Just look at the marketing material for the new Advanced Format HDDs: they say everywhere that the drives have 4k sectors, not 4.096k or 4ki sectors...

          For Joe Average, 1 MiB is just as abstract as 1 MB; what he cares about is that if he has 200 X units of storage and an average mp3 takes 2 X units, he can put 100 mp3s on the device.
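
          The floating-point example above is easy to demonstrate in a couple of lines of Python; neither 0.1 nor 0.2 has an exact base-2 representation, so the sum picks up rounding error:

          Code:
          print(0.1 + 0.2 == 0.3)   # False: both terms are rounded in binary
          print(0.1 + 0.2)          # 0.30000000000000004
          print(0.3)                # 0.3 (a different, also inexact, double)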



          • #15
            Originally posted by GreatEmerald View Post
            Personally I set KDE to use the SI system (1 kB = 1000 B) for all file sizes. It's apparently the default on Mac OS X as well. It makes little sense to count file sizes in powers of two to begin with; it just causes confusion, especially when dealing with large files. Though it does make sense for RAM, because its modules come in power-of-two sizes.
            Yes, let's instead say a file needs "100 MB" of space when it really needs "100 MiB". Consumers are incapable of telling the difference, and HDDs have been using MB for years to oversell their capacity.
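
            The size of that overselling gap is simple arithmetic; a quick Python sketch:

            Code:
            advertised = 10**12         # a drive sold as "1 TB" (decimal)
            print(advertised / 2**30)   # ~931.32 GiB once reported in binary units
            print(advertised / 2**40)   # ~0.91 TiB, roughly 9% "missing"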



            • #16
              Originally posted by tomato View Post
              Working with base 10 is just a convention we currently use because it's easier for longhand calculations; before that, we used 12 and 60 as counting bases because they were easier for mental arithmetic. For data, a binary base is more convenient.

              And if you don't care how things work at the low level, then you end up with "patches" that claim to show 4 GiB of available memory on 32-bit OSes that don't use PAE. Or you wonder why you can't have more than 65536 rows in a spreadsheet, why 0.1 + 0.2 is not equal to 0.3, etc., etc. Computers work in binary, deal with it.

              We measure information in base-2 units because the way we store information is base 2. If the user remembers that there are 1024 k in 1 M, 1024 M in 1 G, and so on, then he just needs to compare the numbers. It's confusing only because storage media manufacturers use SI prefixes for marketing reasons. Just look at the marketing material for the new Advanced Format HDDs: they say everywhere that the drives have 4k sectors, not 4.096k or 4ki sectors...

              For Joe Average, 1 MiB is just as abstract as 1 MB; what he cares about is that if he has 200 X units of storage and an average mp3 takes 2 X units, he can put 100 mp3s on the device.
              ^^QFT. Computers are binary, and all sizes are binary. Going to base 10 is little more than flat-out lying to consumers.



              • #17
                This zRAM feels awesome. Right now I have Firefox using 1.2GB; with plugin-container that's about 1.3GB total, plus the OS and everything else, so I have about 300MB of swap in use. Given that I have a fast CPU and only 2GB of RAM, the trade-off would be worth it to me. I'll upgrade to 3GB soon but will run into the limit again, even more so if I get Steam games running (though I would need a new GeForce for that).

                The MB versus MiB situation is beyond stupid. If I have a file with a size of 2.112 GB, how can I tell at a glance whether it's bigger or smaller than 2 GiB? What about 2.327 GB? 2.225 GB?

                If I copy that file around, or give it to someone, and it ends up some place where it needs to be less than 2 GiB, then I'm fucked (or we all are).
                I remember failing a CD burn because Windows displays (or displayed; I don't remember how it is in Vista/7) file sizes as a number of KiB, labelled "KB" (i.e., something like "720 339 KB"). That makes it compatible with the silly 1.44M = 1440K floppy convention, which mixes decimal and binary.
                But one day I fucked up and tried to burn an actual 720MB (i.e. MiB) of data onto a 700MB CD, thinking I was burning 720 000 KiB... (and that costs money)

                Using "new" MB/GB instead of MiB/GiB means the user is stuck doing calculations or referring to a table that says 1 GiB = 1.xxxx GB, 2 GiB = 2.xxx GB, 4 GiB = 4.xxx GB, 2TiB = 2.xxx TB etc.

                A more convoluted but still relevant example: if I can store up to 2^31 blocks of data in a file and the block size is 128KB (i.e. 128KiB = 2^17 bytes), then I know the maximum file size is 2^(31+17) == 2^48, which is 256 TiB.
                I don't have to multiply 2147483648 by 131072 in my head (not that I could, unless I had a really good reason).
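
                The same arithmetic in Python, done the way described: with powers of two there is nothing to multiply, only exponents to add:

                Code:
                blocks     = 2**31            # max blocks per file
                block_size = 128 * 2**10      # 128 KiB = 2**17 bytes

                max_file = blocks * block_size
                assert max_file == 2**(31 + 17) == 2**48
                print(max_file // 2**40, "TiB")   # 256 TiB
                print(max_file / 10**12, "TB")    # ~281.47 TB -- why mixing prefixes hurts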
                Last edited by grok; 02 January 2013, 05:07 AM.



                • #18
                  Originally posted by grok View Post
                  The MB versus MiB situation is beyond stupid. If I have a file with a size of 2.112 GB, how can I tell at a glance whether it's bigger or smaller than 2 GiB? What about 2.327 GB? 2.225 GB?
                  I don't think there is any intention of mixing units more than is done today, just of making it clearer whether the units are decimal-based or binary-based. Today the two are mixed, and you cope with it by always using the same base yourself and hoping that the numbers provided to you are in the base you prefer. The only difference is a bit more clarity.

                  That said, I would rather establish conventions (e.g. storage space is always binary-based, everything else is decimal-based) and try to convert the outliers (e.g. disk capacity measurements) instead of continuing to use binary- and decimal-based units together in the same domain. I gather you feel the same way.



                  • #19
                    The way I see it, and what seems to happen in the industry, is that once a unit is "contaminated" by a decimal number, it becomes decimal. Network speed is the obvious example, but there are bus and memory bandwidths too. For an HDD, the sectors are 512 or 4096 bytes, but the actual number of them is "decimal" (not binary in nature). Even flash memory (especially SSDs) ought to come in strict powers of two, but you have overprovisioning, bad blocks, etc.

                    But file sizes and file systems definitely stay binary, as does the memory usage of a program.
                    Decimal HDDs are an evil we see because, one way or another, someone would do it if everyone else didn't.

                    BTW, if I have to nitpick about something, it's people who translate 100Mbit/s into 12.5MB/s, or 3Gbit/s into 375MB/s. On a serial bus, just divide by 10; it's a good rule of thumb, since on many links 8b/10b coding alone puts 10 bits on the wire for every data byte (i.e. 3 gigabits per second should net about 300MB/s).
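
                    A small Python sketch of that rule of thumb, assuming a link that uses 8b/10b line coding (as early SATA and PCIe revisions do):

                    Code:
                    def payload_mb_per_s(line_rate_gbps, wire_bits_per_byte=10):
                        # 8b/10b sends each data byte as a 10-bit symbol, so
                        # payload bytes/s = raw line bits/s divided by 10.
                        return line_rate_gbps * 1e9 / wire_bits_per_byte / 1e6

                    print(payload_mb_per_s(3.0))   # 300.0 MB/s for 3 Gbit/s SATA II, not 375
                    print(payload_mb_per_s(1.5))   # 150.0 MB/s for SATA I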



                    • #20
                      Originally posted by bridgman View Post
                      Are you talking about 100B.01? If so, doesn't that only cover microprocessors and memory, where binary interpretation is the norm?

                      Most of the other standards (which need to deal with memory, disk, networking, etc.) seem to be heading towards interpreting GB as decimal and using something like GiB for binary.

                      Maybe we should use JGB (JEDEC GigaByte) or MGB (Memory GigaByte) for binary and GB for decimal.
                      I believe so. Anyway, we are talking about memory, so it clearly applies.

