Ubuntu Linux Considers Greater Usage Of zRAM


  • tomato
    replied
    Originally posted by GreatEmerald View Post
    Which is good and all, but it's completely pointless to use that system for humans. We don't think in powers of two. Therefore the fact that 8 GiB = 8589934592 B is something normal people will never calculate off the top of their heads. But 8 GB = 8000000000 B makes perfect sense and does not cause any confusion. It doesn't mean that internally things have to be counted in decimals, but when it's presented to the user, it would be nice if it was calculated in a way we'd understand. Similarly, all the configuration options you see in programs have descriptive names instead of using direct variable names, even if that could be easier and that's how it works internally...
    Working in base 10 is just a convention we currently use, because it's easier for long-hand calculation; before that we used 12 and 60 as counting bases because they were easier for mental arithmetic. For data, a binary base is more convenient.

    And if you don't care how things work at a low level, then you end up with "patches" that show 4 GiB of available memory on 32-bit OSes that don't use PAE. Or you wonder why you can't have more than 65536 rows in a spreadsheet, why 0.1 + 0.2 is not equal to 0.3, etc, etc. Computers work in binary, deal with it.

    We measure information in base-2 units because the way we store information is base 2. If the user remembers that there are 1024 kB to 1 MB, 1024 MB to 1 GB, etc., then he just needs to compare the numbers. It's confusing only because storage media manufacturers use SI prefixes for marketing reasons. Just look at the marketing material for the new Advanced Format HDDs: they say everywhere that the drives have 4k sectors, not 4.096k sectors or 4Ki sectors...

    For Joe Average, 1 MiB is just as abstract as 1 MB. What he cares about is that if he has 200 X units of storage and an average mp3 takes 2 X units of storage, he can put 100 mp3s on the device.
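The arithmetic in the posts above is easy to check. A quick Python sketch of the two prefix conventions, plus the kind of binary-fraction surprise mentioned in the thread:

```python
# Binary (IEC) vs. decimal (SI) interpretations of "8 gigabytes".
GiB = 1024 ** 3   # 2**30 bytes, the binary unit
GB = 1000 ** 3    # 10**9 bytes, the SI unit

print(8 * GiB)    # 8589934592 -- the figure quoted in the thread
print(8 * GB)     # 8000000000

# The old spreadsheet row limit is also a power of two:
print(2 ** 16)    # 65536

# Classic binary-fraction surprise: 0.1 and 0.2 have no exact base-2
# representation, so their double-precision sum is not exactly 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```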



  • GreatEmerald
    replied
    Originally posted by tomato View Post
    And yet HDDs internally have 512-byte (2^9) or 4096-byte (2^12) sectors and can't process data in smaller units than that... File systems (usually) allocate space for files in 4096-, 8192- or 16384-byte chunks, and while other cluster sizes are available for different file systems, there are no file systems that operate with decimal 4 kB, 8 kB or 16 kB clusters.
    Which is good and all, but it's completely pointless to use that system for humans. We don't think in powers of two. Therefore the fact that 8 GiB = 8589934592 B is something normal people will never calculate off the top of their heads. But 8 GB = 8000000000 B makes perfect sense and does not cause any confusion. It doesn't mean that internally things have to be counted in decimals, but when it's presented to the user, it would be nice if it was calculated in a way we'd understand. Similarly, all the configuration options you see in programs have descriptive names instead of using direct variable names, even if that could be easier and that's how it works internally...



  • tomato
    replied
    Originally posted by GreatEmerald View Post
    Personally I set KDE to use the SI system (1 kB = 1000 B) for all file sizes. It's apparently the default on Mac OS X as well. It makes little sense to count file sizes in powers of two to begin with; it just causes confusion, especially when dealing with large files. Though it does make sense for RAM, because its modules come in power-of-two sizes.

    And yeah, Windows, last time I checked, still incorrectly labels sizes "MB" and such even when it really means "MiB". Though I'm not sure whether that changed in Windows 8. Probably not.
    And yet HDDs internally have 512-byte (2^9) or 4096-byte (2^12) sectors and can't process data in smaller units than that... File systems (usually) allocate space for files in 4096-, 8192- or 16384-byte chunks, and while other cluster sizes are available for different file systems, there are no file systems that operate with decimal 4 kB, 8 kB or 16 kB clusters.

    The only place in computing where SI prefixes genuinely apply is network speeds, but that's because we measure those in bits, not bytes, since a byte hasn't always been 8 bits long...

    Computers deal in binary numbers, so get used to it. Decimal prefixes for HDDs are just the result of marketdroids messing with stuff they (as always) don't understand. If they could sell 908-byte kilobytes, by God, they would.
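The bit/byte distinction above is a frequent source of confusion when reading link speeds. A small sketch of the conversion, assuming today's 8-bit bytes:

```python
# A "100 Mbit/s" link uses the SI prefix: 100 * 10**6 bits per second.
link_bits_per_s = 100 * 10**6

# With 8-bit bytes that is 12.5 decimal megabytes per second...
mb_per_s = link_bits_per_s / 8 / 10**6
print(mb_per_s)             # 12.5

# ...or about 11.9 MiB/s if you insist on binary prefixes.
mib_per_s = link_bits_per_s / 8 / 2**20
print(round(mib_per_s, 1))  # 11.9
```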



  • oibaf
    replied
    Originally posted by Aleve Sicofante View Post
    Say you have 1GB of RAM and devote 256MB to zRAM. Now you have 768MB available. When the system needs 900MB, it will use part of your zRAM as its swap area. If you didn't have zRAM, the system would have accessed RAM directly: no need to swap. If the system needs 2GB, it'll swap to disk anyway, since the 256MB of zRAM will be of no use in that case.
    No

    zRAM consumes RAM only when it is actually used, not when it is merely initialized. So if you have 1 GB of RAM and devote 256 MB to zRAM, you'll virtually see 1 GB of RAM + 256 MB of compressed swap (clearly visible when running 'free'). When the system needs 900 MB it still uses only RAM and no swap. If the system needs 1.2 GB, it will start moving pages from RAM to zRAM. Available RAM will decrease, since some of it is now occupied by zRAM; available zRAM will also decrease, but more slowly, since it is compressed.

    This is why zRAM works well in almost every scenario.
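oibaf's point can be sketched numerically. A toy Python model; the 2:1 compression ratio is an assumption for illustration, since real ratios depend entirely on the workload:

```python
RATIO = 2.0  # assumed average compression ratio (workload-dependent)

def zram_ram_cost_mb(swapped_mb, ratio=RATIO):
    """Physical RAM consumed by zRAM's compressed store.

    zRAM allocates memory lazily, so a freshly initialized 256 MB
    device that holds nothing costs (almost) no RAM at all.
    """
    return swapped_mb / ratio

# 1 GB machine, 900 MB working set: nothing is swapped, zRAM is free.
print(zram_ram_cost_mb(0))    # 0.0

# Demand grows to ~1.2 GB: ~200 MB of cold pages move into zRAM,
# but once compressed they occupy only ~100 MB of physical RAM.
print(zram_ram_cost_mb(200))  # 100.0
```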



  • LLStarks
    replied
    I just say "gigs" to avoid the problem.



  • bridgman
    replied
    Originally posted by ryao View Post
    The JEDEC standard defines 1GB as 2**30 bytes.
    Are you talking about 100B.01? If so, doesn't that only cover microprocessors and memory, where binary interpretation is the norm?

    Most of the other standards (which need to deal with memory, disk, networking, etc..) seem to be heading towards interpreting GB as decimal and using something like GiB for binary.

    Maybe we should use JGB (Jedec GigaByte) or MGB (Memory GigaByte) for binary and GB for decimal.
    Last edited by bridgman; 09 December 2012, 12:00 AM.



  • aceman
    replied
    Originally posted by Aleve Sicofante View Post
    I need an explanation about zRAM.
    Say you have 1GB of RAM and devote 256MB to zRAM. Now you have 768MB available. When the system needs 900MB, it will use part of your zRAM as its swap area. If you didn't have zRAM, the system would have accessed RAM directly: no need to swap. If the system needs 2GB, it'll swap to disk anyway, since the 256MB of zRAM will be of no use in that case.

    I'm confused. I'm definitely missing something. I'd appreciate an explanation.
    But zRAM is COMPRESSED. So imagine the system can put 512MB of data into that 256MB pool. Suddenly you have something like 1.25GB of RAM. Yes, there is a tradeoff: those 512MB can't be accessed directly by the kernel, and the compression/decompression takes some CPU time. However, the theory says that even this is still faster than swapping to a physical disk.
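aceman's back-of-the-envelope figure checks out in Python. The 2:1 compression ratio is the assumption implicit in "512MB of data into that 256MB pool":

```python
ram_mb = 1024   # total physical RAM
pool_mb = 256   # RAM set aside for the compressed zRAM pool
ratio = 2.0     # assumed compression ratio: 512 MB fits in 256 MB

data_in_pool_mb = pool_mb * ratio                 # 512 MB of swapped pages
effective_mb = (ram_mb - pool_mb) + data_in_pool_mb

print(effective_mb)          # 1280.0 MB, the "like 1.25GB" above
print(effective_mb / 1024)   # 1.25
```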



  • ryao
    replied
    Originally posted by bridgman View Post
    Yeah, strictly speaking a gigabyte is 10^^9, not 2^^30, although it gets used both ways. The term "gibibyte" (BInary GIgaBYTE presumably) is being promoted for the 2^^30 definition.

    That feels wrong somehow, although I guess "you know what I mean" isn't a good foundation for a technical standard
    The JEDEC standard defines 1GB as 2**30 bytes.



  • Aleve Sicofante
    replied
    I need an explanation about zRAM.

    If I understand it well, it will use RAM instead of disk for swapping/paging, right?

    Swapping happens when the system runs out of RAM and needs more. Then it will save some of the RAM contents on disk, free that RAM portion and use it. OK so far?

    So a system with a lot of RAM will never need to swap, or need it very rarely. A system with very little RAM will swap pretty soon. Now, if you use zRAM, you make swapping just happen earlier. How is this good at all? Yes, you're swapping to RAM, but you might as well use that RAM directly and have no need for swapping.

    Say you have 1GB of RAM and devote 256MB to zRAM. Now you have 768MB available. When the system needs 900MB, it will use part of your zRAM as its swap area. If you didn't have zRAM, the system would have accessed RAM directly: no need to swap. If the system needs 2GB, it'll swap to disk anyway, since the 256MB of zRAM will be of no use in that case.

    I'm confused. I'm definitely missing something. I'd appreciate an explanation.
    Last edited by Aleve Sicofante; 08 December 2012, 10:22 PM.



  • GreatEmerald
    replied
    Originally posted by bridgman View Post
    That feels wrong somehow, although I guess "you know what I mean" isn't a good foundation for a technical standard
    Personally I set KDE to use the SI system (1 kB = 1000 B) for all file sizes. It's apparently the default on Mac OS X as well. It makes little sense to count file sizes in powers of two to begin with; it just causes confusion, especially when dealing with large files. Though it does make sense for RAM, because its modules come in power-of-two sizes.

    And yeah, Windows, last time I checked, still incorrectly labels sizes "MB" and such even when it really means "MiB". Though I'm not sure whether that changed in Windows 8. Probably not.

