
Ubuntu Linux Considers Greater Usage Of zRAM


  • milhouse
    replied
    zRam has just been enabled as an option in the Raspberry Pi kernel, so it will be interesting to see what effect it has on these 256MB/512MB devices.
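
    For reference, here is a rough sketch of what setting up a zram swap device looks like; the 256 MiB size, the single-device setup and the use of Python rather than a shell script are illustrative assumptions, not what the Raspberry Pi kernel actually ships.

    # Minimal sketch: create one zram swap device and enable it (run as root).
    # Assumes the zram module is available; the size below is an arbitrary example.
    import subprocess

    DISKSIZE = 256 * 1024 * 1024  # 256 MiB of uncompressed capacity (assumption)

    # Load the module with a single device, /dev/zram0.
    subprocess.run(["modprobe", "zram", "num_devices=1"], check=True)

    # Tell the kernel how much uncompressed data zram0 may hold.
    with open("/sys/block/zram0/disksize", "w") as f:
        f.write(str(DISKSIZE))

    # Format it as swap and enable it at a higher priority than any disk swap.
    subprocess.run(["mkswap", "/dev/zram0"], check=True)
    subprocess.run(["swapon", "-p", "5", "/dev/zram0"], check=True)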



  • curaga
    replied
    It's not so much about performance as about avoiding swapping (or being able to run on systems you previously couldn't). I recall an early Ubuntu live CD test where the CD would crash before reaching the desktop on a 256MB RAM machine, but with zram it could install successfully (no swap in that case).

    Because once you start swapping, performance drops to near zero.



  • oibaf
    replied
    Originally posted by gururise View Post
    The problem is, there are many anecdotal reports of zRAM boosting system performance, but there is very little in the way of real-world benchmarks... (Michael? Might be a great idea for a future article.) Furthermore, there is little consensus on the cutoff point in CPU speed at which zRAM starts to pay off. Obviously, since zRAM uses the CPU to compress memory, it will be faster on faster CPUs; but does that mean it is a hindrance on slow netbook CPUs or mobile phones, where available RAM is generally limited? It would be very interesting to see benchmarks on a whole range of systems using zRAM and compare them to similar systems without zRAM.
    There are some benchmarks on the project page: http://code.google.com/p/compcache/

    Regarding the benchmarks, note that Linux 3.8 will also ship a roughly twice-as-fast LZO compressor/decompressor module: http://git.kernel.org/?p=linux/kerne...1915e5ae057826 so the numbers may improve further.
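
    As a rough illustration of what such a benchmark measures, here is a tiny sketch that times compression of a buffer and reports the ratio; it uses zlib from Python's standard library purely as a stand-in (zram itself uses LZO in-kernel), and the test data is an arbitrary assumption.

    # Rough sketch: compression ratio and throughput on a sample buffer.
    # zlib stands in for the kernel's LZO; the data below is arbitrary test input.
    import time
    import zlib

    data = b"fairly repetitive page contents " * 32768   # ~1 MiB of test data

    start = time.perf_counter()
    compressed = zlib.compress(data, 1)                  # fastest zlib level
    elapsed = time.perf_counter() - start

    ratio = len(data) / len(compressed)
    throughput = len(data) / (1024 * 1024) / elapsed
    print(f"ratio {ratio:.1f}:1, about {throughput:.0f} MiB/s while compressing")
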
    Last edited by oibaf; 03 January 2013, 04:25 PM.



  • gururise
    replied
    The problem is, there are many anecdotal reports of zRAM boosting system performance, but there is very little in the way of real-world benchmarks... (Michael? Might be a great idea for a future article.) Furthermore, there is little consensus on the cutoff point in CPU speed at which zRAM starts to pay off. Obviously, since zRAM uses the CPU to compress memory, it will be faster on faster CPUs; but does that mean it is a hindrance on slow netbook CPUs or mobile phones, where available RAM is generally limited? It would be very interesting to see benchmarks on a whole range of systems using zRAM and compare them to similar systems without zRAM.
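
    One way to think about that cutoff is a back-of-envelope comparison: how long does compressing and later decompressing a 4 KiB page take versus writing it to disk swap and reading it back? The sketch below does that; every figure in it (compressor throughput, seek time, disk bandwidth) is an assumed ballpark number, not a measurement.

    # Back-of-envelope model: zram round trip vs. swapping a page to disk.
    # All throughput/latency figures are assumed ballpark values for illustration.
    PAGE = 4096                       # bytes per page

    compress_bps   = 150e6            # assumed LZO compression speed on a slow CPU
    decompress_bps = 300e6            # assumed LZO decompression speed
    disk_seek_s    = 8e-3             # assumed average seek + rotational latency
    disk_bps       = 80e6             # assumed sequential disk bandwidth

    zram_cost = PAGE / compress_bps + PAGE / decompress_bps
    disk_cost = 2 * (disk_seek_s + PAGE / disk_bps)    # write it out, read it back

    print(f"zram round trip: {zram_cost * 1e6:.0f} us")
    print(f"disk round trip: {disk_cost * 1e6:.0f} us")
    print("zram wins" if zram_cost < disk_cost else "disk wins")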



  • ryao
    replied
    Originally posted by bridgman View Post
    Are you talking about 100B.01? If so, doesn't that only cover microprocessors and memory, where binary interpretation is the norm?

    Most of the other standards (which need to deal with memory, disk, networking, etc.) seem to be heading towards interpreting GB as decimal and using something like GiB for binary.

    Maybe we should use JGB (JEDEC gigabyte) or MGB (memory gigabyte) for binary and GB for decimal.
    I believe so. Anyway, we are talking about memory, so it clearly applies.



  • grok
    replied
    The way I see it, and what seems to happen in the industry, is that when a unit is "contaminated" by a decimal number it becomes decimal. Network speed is the obvious example, but there are bus and memory bandwidths too. For an HDD, the sectors are 512 or 4096 bytes, but the actual number of them is "decimal" (not binary in nature). Even flash memory (especially SSDs) ought to come in strict powers of two, but then there is overprovisioning, bad blocks, etc.

    But file sizes and file systems definitely stay binary, as does the memory usage of a program.
    Decimal HDD sizes are an evil we live with because someone would do it even if everyone else didn't.

    BTW, if I have to nitpick about something, it's people who translate 100 Mbit/s into 12.5 MB/s or 3 Gbit/s into 375 MB/s. On a serial bus, just divide by 10; it's a good rule of thumb (i.e. 3 gigabits per second should net about 300 MB/s).
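
    To put the divide-by-ten rule in concrete terms, here's a quick sketch; the 8b/10b line coding (10 line bits per payload byte) is what makes it work for links like SATA, and the listed link speeds are just examples.

    # Divide-by-ten rule for 8b/10b-encoded serial links:
    # each payload byte is sent as 10 line bits, so bytes/s = line bits/s / 10.
    def payload_mb_per_s(line_gbit_per_s: float) -> float:
        line_bits_per_s = line_gbit_per_s * 1e9
        return line_bits_per_s / 10 / 1e6

    for gbit in (1.5, 3.0, 6.0):      # example SATA generation line rates
        print(f"{gbit} Gbit/s line rate -> ~{payload_mb_per_s(gbit):.0f} MB/s payload")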



  • bridgman
    replied
    Originally posted by grok View Post
    The MB versus MiB thing is beyond stupid. If I have a file with a 2.112 GB size, how can I tell at a glance whether it's bigger or smaller than 2 GiB? What about 2.327 GB? 2.225 GB?
    I don't think there is any intention of mixing units more than is done today, just of making it clearer whether the units are decimal-based or binary-based. Today the two are mixed, and you cope with it by always using the same base yourself and hoping that the numbers provided to you are in the base you prefer. The only difference is a bit more clarity.

    That said, I would rather establish conventions (e.g. storage space is always binary-based, everything else is decimal-based) and try to convert the outliers (e.g. disk capacity measurements) than continue to use binary- and decimal-based units together in the same domain. I gather you feel the same way.



  • grok
    replied
    This zRAM feels awesome. Right now I have Firefox using 1.2GB; with plugin-container that's about 1.3GB total, plus the OS and everything else, so I'm using about 300MB of swap. Given that I have a fast CPU and only 2GB of RAM, the trade-off would be worth it to me. I'll upgrade to 3GB soon but will run into the limit again, even more so if I get Steam games running (though I would need a new GeForce for that).

    The MB versus MiB thing is beyond stupid. If I have a file with a 2.112 GB size, how can I tell at a glance whether it's bigger or smaller than 2 GiB? What about 2.327 GB? 2.225 GB?

    If I copy that file around or give it to someone and it ends up somewhere it needs to be less than 2 GiB, then I'm (or we're) fucked.
    I remember failing a CD burn because Windows displays (or displayed; I don't remember how it is in Vista/7) file sizes as a number of KiB labelled "KB" (i.e. something like "720 339 KB"). That makes it consistent with the silly 1.44M = 1440K floppy convention, which mixes decimal and binary.
    But one day I fucked up and tried to burn an actual 720MB (i.e. MiB) of data onto a 700MB CD, thinking I was burning 720 000 KiB... (this costs money).

    Using "new" MB/GB instead of MiB/GiB means the user is stuck doing calculations or referring to a table that says 1 GiB = 1.xxx GB, 2 GiB = 2.xxx GB, 4 GiB = 4.xxx GB, 2 TiB = 2.xxx TB, etc.

    A more convoluted but still relevant example: if I can store up to 2^31 blocks of data in a file and the block size is 128KB (i.e. 128 KiB, or 2^17 bytes), then I know the maximum file size is 2^(31+17) = 2^48, which is 256 TiB.
    I don't have to multiply 2 147 483 648 by 131 072 in my head (not that I could, unless I had a really good reason).
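
    For what it's worth, both the conversion table and the block-count example are easy to reproduce; a small sketch of the arithmetic:

    # Reproduce the GiB -> GB table and the 2^31 x 128 KiB example above.
    def gib_to_gb(gib: float) -> float:
        return gib * 2**30 / 1e9          # binary gibibytes expressed as decimal gigabytes

    for gib in (1, 2, 4):
        print(f"{gib} GiB = {gib_to_gb(gib):.3f} GB")
    print(f"2 TiB = {2 * 2**40 / 1e12:.3f} TB")

    # Max file size: 2^31 blocks of 128 KiB (2^17 bytes) each.
    max_bytes = 2**31 * 2**17             # == 2**48
    print(max_bytes == 2**48, max_bytes // 2**40, "TiB")   # True 256 TiB
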
    Last edited by grok; 02 January 2013, 05:07 AM.



  • gamerk2
    replied
    Originally posted by tomato View Post
    Working with base 10 is just a convention we currently use because it's easier for long-hand calculation; before that, we used 12 and 60 as bases because they were easier for mental arithmetic. For data, a binary base is more convenient.

    And if you don't care how things work at a low level, then you end up with "patches" that show 4GiB of available memory on 32-bit OSes that don't use PAE, or you wonder why you can't have more than 65536 rows in a spreadsheet, why 0.1 + 0.2 is not equal to 0.3, etc. Computers work in binary; deal with it.

    We measure information in base-2 units because the way we store information is base 2. If the user remembers that there are 1024 k in 1 M, 1024 M in 1 G, and so on, then he just needs to compare the numbers. It's only confusing because storage media manufacturers use SI prefixes for marketing reasons. Just look at the marketing material for the new Advanced Format HDDs: they say everywhere that the drives have 4k sectors, not 4.096k or 4Ki sectors...

    For Joe Average, 1MiB is just as abstract as 1MB; what he cares about is that if he has 200 X units of storage and the average MP3 takes 2 X units, he can fit 100 MP3s on the device.
    ^^QFT. Computers are binary, and all sizes are binary. Going to base 10 is little more than flat-out lying to consumers.
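
    The floating-point surprise mentioned in the quote is easy to demonstrate (which exact expressions misbehave depends on rounding; these are the classic cases):

    # Most decimal fractions have no exact binary floating-point representation,
    # so "obvious" identities can fail; these are the classic demonstrations.
    print(0.1 + 0.2 == 0.3)    # False
    print(0.3 - 0.1 == 0.2)    # False
    print(0.1 + 0.2)           # 0.30000000000000004
    print(f"{0.2:.20f}")       # 0.20000000000000001110 (nearest double to 0.2)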



  • gamerk2
    replied
    Originally posted by GreatEmerald View Post
    Personally, I set KDE to use the SI system (1 kB = 1000 B) for all file sizes. It's apparently the default on Mac OS X as well. It makes little sense to count file sizes in powers of two to begin with; it just causes confusion, especially when dealing with large files. It does make sense for RAM, though, because its modules come in power-of-two sizes.
    Yes, let's instead say a file needs "100 MB" of space when it really needs "100 MiB". Consumers are incapable of telling the difference, but HDDs have been using decimal MB for years to oversell their capacity.

