Ubuntu 12.04 LTS - Benchmarking All The Linux File-Systems


  • #16
    RAID Request Clarification

    I guess my above post did not specify that my interest is in relative RAID performance using traditional hard disks. I suspect that software RAID 6 with a hot spare is a reasonable choice for a home archival data server where your hot-button issue is avoiding data loss caused by disk failure. We are getting closer to Xeon Atom boards with 8 PCIe lanes that can be used for many SATA ports. It would be nice to know that Btrfs RAID rebuild performance is comparable to mdadm if you are using a low-powered CPU from whoever.



    • #17
      Originally posted by malkavian
      Yes, there are "cheap" SSD disks nowadays, but they use MLC chips, which are slower and have a short life (a small number of writes before failure). In 2 or 3 years you could have problems with an MLC SSD. SLC has a much longer life (a large number of writes before failure) and is a lot quicker, but it is much more costly per GB. http://en.wikipedia.org/wiki/Multi-level_cell
      You know that even a standard MLC SSD has more than enough durability for any normal scenario these days?
      A standard number I've seen for older MLC technologies is 10000 write cycles, and if anything the current number is higher. That means any single cell can be rewritten 10000 times, and there's (say) 128GB of cells available. At 10000 cycles each, that's about 1.28PB of total writes - roughly 700GB of writes every day for five years, if the wear levelling is perfect. If you use 90% of the drive for static content, so the rewrites only touch the remainder, it should still handle about 70GB/day for five years, and even with a very pessimistic 10x safety margin on top of that you're at 7GB of writes every day for five years. Realistically, it'll be fine.
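
      To make that arithmetic easy to check, here's a quick Python sketch of the same back-of-the-envelope estimate. The capacity and cycle count are just the example numbers from this post, not the specs of any particular drive.

      Code:
      # Back-of-the-envelope endurance estimate with the example numbers above.
      capacity_gb = 128      # example capacity, not a specific drive
      cycles = 10_000        # assumed per-cell write cycles (older MLC)
      days = 5 * 365         # five-year horizon

      total_gb = capacity_gb * cycles    # 1,280,000 GB, i.e. about 1.28 PB
      per_day = total_gb / days          # assumes perfect wear levelling
      print(round(per_day))              # -> 701 GB/day

      # If 90% of the drive holds static data, rewrites hit only the rest:
      print(round(per_day * 0.10))       # -> 70 GB/day
      # And with a pessimistic 10x safety margin on top of that:
      print(round(per_day * 0.01))       # -> 7 GB/day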

      Alternatively, Intel claims a 1.2 million hour MTBF on their 120GB SSD, or over 130 years. I have no idea how they came up with that number.
      Last edited by dnebdal; 03-17-2012, 12:36 PM.



      • #18
        If wear levelling is well implemented nowadays, I suppose you are right.



        • #19
          Originally posted by malkavian
          If wear levelling is well implemented nowadays, I suppose you are right.
          It's supposed to be, though of course it works better the more free space it has to play with. Drives typically keep a few GB of spare space for that purpose, so even if the drive is 100% full it can still spread writes over 4 or 8 GB of physical cells. I think that's why you often see drive sizes that are a bit less than a power of 2 - like the 120GB Intel drive that I bet has 128GB of physical chips.
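
          As a quick illustration of that guess (the 128GB physical figure is just my bet above, not a published spec):

          Code:
          # Rough spare-area estimate for a 120GB-advertised drive with
          # 128GB of physical flash (ignoring GB/GiB unit subtleties).
          advertised_gb = 120
          physical_gb = 128                # assumed, not a published spec

          spare_gb = physical_gb - advertised_gb
          print(spare_gb)                               # -> 8 GB of hidden spare
          print(round(100 * spare_gb / advertised_gb))  # -> 7 (% over-provisioning)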



          • #20
            Originally posted by blackiwid
            I don't mean EFI ^^ I mean the replacement of MBR, don't know the name right now ^^ I think that was somehow the problem. I will only try that again if my system is unable to boot from good old solid 1000-year-old MBR
            You mean GPT, btw. It's nice enough as long as your BIOS can boot it; the only thing that annoys me is that Win7 and Win8 don't want to install onto GPT partitions. On the other hand, it supports larger partitions, more partitions, and more file system type codes, and it's a bit more resilient thanks to the backup copy at the end of the disk.

            Not that it really matters as long as you can get your file system onto the disk at the desired position and length, and boot from it.
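
            If you're curious what GPT looks like on disk, here's a minimal Python sketch that reads the primary header; it assumes 512-byte logical sectors, needs root, and /dev/sda is only an example path.

            Code:
            # Read the primary GPT header (LBA 1) and locate the backup copy.
            import struct

            SECTOR = 512                           # assumed logical sector size

            with open("/dev/sda", "rb") as disk:   # example device, needs root
                disk.seek(SECTOR)                  # primary header lives at LBA 1
                header = disk.read(92)

            signature = header[:8]                 # b"EFI PART" on a GPT disk
            current_lba, backup_lba = struct.unpack_from("<QQ", header, 24)
            print(signature, current_lba, backup_lba)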
            Last edited by dnebdal; 03-17-2012, 01:06 PM.



            • #21
              And what about partitions? I suppose it just works within each partition's assigned sectors. So it would be better to make bigger-than-needed partitions, and not to put swap ones on SSDs.



              • #22
                Originally posted by malkavian
                And what about partitions? I suppose it just works within each partition's assigned sectors. So it would be better to make bigger-than-needed partitions, and not to put swap ones on SSDs.
                The wear levelling is done on the hardware side, so it doesn't really care about your partition/filesystem layout. What happens is that it keeps a table of "what the system thinks is block 1234 is currently on chip 3, offset 0010", and so on for every physical block, and then it's free to cycle through which physical positions it uses. Imagine a rewrite: instead of reusing the same block, it can take an unused block, write the new content there, update the "this is there" table, and put the old block at the end of the queue of free ones. (I imagine it works better with TRIM support, since that lets the disk consider much more space as "free".)

                It's the same translation map idea that is used for virtual memory.
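
                Here's a toy Python sketch of that remapping idea, just to make it concrete - real controllers are far more involved, and the names and sizes here are invented for illustration:

                Code:
                # Toy model of the remapping described above: every rewrite lands
                # on a fresh physical block; the old one rejoins the free queue.
                from collections import deque

                class ToyFTL:
                    def __init__(self, physical_blocks):
                        self.mapping = {}                          # logical -> physical
                        self.free = deque(range(physical_blocks))  # free physical blocks
                        self.store = {}                            # physical -> data

                    def write(self, logical, data):
                        new = self.free.popleft()        # take an unused block
                        old = self.mapping.get(logical)
                        if old is not None:
                            self.store.pop(old)
                            self.free.append(old)        # old block goes to the back
                        self.mapping[logical] = new      # update the "this is there" table
                        self.store[new] = data

                    def read(self, logical):
                        return self.store[self.mapping[logical]]

                ftl = ToyFTL(8)
                for _ in range(5):           # five rewrites of the same logical block
                    ftl.write(0, b"data")    # each one lands on a different cell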



                • #23
                  Quoth Wikipedia:
                  MLC NAND flash used to be rated at about 5–10k cycles (Samsung K9G8G08U0M) but is now typically 1k - 3k cycles



                  • #24
                    Originally posted by curaga
                    Quoth Wikipedia:
                    Right, that's a bit annoying. The chips in the Intel 320 series are apparently rated at 5k - so halve the above numbers. On the flip side, larger capacities help - newer models will make up for it just by having more space to spread the writes over.



                    • #25
                      Originally posted by dnebdal
                      Right, that's a bit annoying. The chips in the Intel 320 series are apparently rated at 5k - so halve the above numbers. On the flip side, larger capacities help - newer models will make up for it just by having more space to spread the writes over.
                      Larger disks only help if you're not actually using the space. And if that's the case, why pay for a big SSD in the first place?



                      • #26
                        Originally posted by curaga
                        Larger disks only help if you're not actually using the space. And if that's the case, why pay for a big SSD in the first place?
                        As long as the absolute amount of free space is larger, it helps. Larger drives tend both to have more reserved space and to be left with more absolute free space by the user.



                        • #27
                          I'll have to counter that with Murphy's law: the steady state of any disk is full.



                          • #28
                            Originally posted by curaga
                            I'll have to counter that with Murphy's law: the steady state of any disk is full.
                            Better stated as a gas law, I think - stored data expands to fill all available space.
                            (Still, if you do a programs / data split over an SSD and a normal disk, it often works out with some free space on the SSD.)
                            Last edited by dnebdal; 03-17-2012, 05:05 PM.



                            • #29
                              Well, I just read on Wikipedia that with static wear levelling, free or used space is not a problem: http://en.wikipedia.org/wiki/Wear_leveling#Types
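
                              Roughly what that adds on top of the dynamic scheme sketched earlier - purely illustrative Python, with invented names and thresholds:

                              Code:
                              # Static wear levelling: even blocks holding long-lived "cold"
                              # data get relocated, so no cell escapes its share of writes.
                              def maybe_relocate(erase_counts, mapping, free, threshold=100):
                                  # erase_counts: physical -> erases; mapping: logical -> physical;
                                  # free: set of free physical block ids
                                  if not free or not mapping:
                                      return
                                  worn_free = max(free, key=lambda b: erase_counts[b])
                                  in_use = {p: l for l, p in mapping.items()}
                                  coldest = min(in_use, key=lambda b: erase_counts[b])
                                  if erase_counts[worn_free] - erase_counts[coldest] > threshold:
                                      mapping[in_use[coldest]] = worn_free  # park cold data on a worn block
                                      erase_counts[worn_free] += 1          # the copy costs one more erase
                                      free.discard(worn_free)
                                      free.add(coldest)                     # lightly-worn block now takes new writes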



                              • #30
                                Originally posted by malkavian
                                Well, I just read on Wikipedia that with static wear levelling, free or used space is not a problem: http://en.wikipedia.org/wiki/Wear_leveling#Types
                                Ah, that's fairly elegant.

