Trying Out A $37 DREVO SSD On Linux


  • #11
    Originally posted by Mike Frett View Post
    I've been able to get up to 5-10 years out of these mechanical drives.
    They should last at least 20-30 years as a bare minimum. Check your PSU, maybe you need a better one that doesn't kill HDDs.



    • #12
      Originally posted by eydee View Post

      They should last at least 20-30 years as a bare minimum. Check your PSU, maybe you need a better one that doesn't kill HDDs.
      That's not true. I've been repairing computers since the mid-'90s and I've never seen any HDD last 20 years. There are 20-year-old hard drives, but I promise you they weren't in use for 20 years. 5-10 years is a good guess for most drives that get used daily. I have an 8-year-old drive now on its last legs, and it's been the best drive I ever owned.



      • #13
        Originally posted by caligula View Post

        Choose a drive with high endurance flash cell tech. There are cheap, mid-range, and quality drives. These are known as TLC, MLC, and SLC, respectively. TLC may last 1000 writes per cell, SLC up to 100 000 writes. I'd go for MLC these days for data, TLC for OS and games. Another option is to buy a 500GB drive and only partition 250 GB. You get a lot more endurance thanks to wear leveling. Also use RAID. You should ALWAYS pick RAID if you care about data integrity. My oldest (MLC) SSD is from 2008 and it still has > 50% lifetime left.
        This is misleading. The most important factor in NAND write endurance is not the number of bits stored but the physical size of the cell which is dictated by the manufacturing process.

        The number of bits stored is related mostly to the speed of reading and writing since distinguishing between 2 voltage levels (for SLC) and 4 (MLC) is harder. It requires more sensitive components and more processing power.

        When SSDs were becoming mainstream they were indeed SLC, while having huge cell dimensions by today's standards. The push to drive cost down, combined with advancements in controller technology, allowed us to use smaller cell sizes along with MLC and then TLC. However, we've recently hit a barrier in making planar (2D) NAND smaller and smaller - the endurance rating of a single cell is very low as a result.

        That's why most planar SSDs have large buffers of unused NAND as overprovisioning. And you're right - using, for example, only 80% of the SSD's capacity will extend the endurance. But be careful here: the SSD can't ever have anything written to the remaining 20%, or you'll have to perform an ATA Secure Erase to destroy the NAND mapping tables and start again.
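
        As a rough sketch of that under-partitioning idea (assumptions: /dev/sdX is a placeholder, the 80% split is arbitrary, and this wipes the drive), you'd create one partition covering 80% of a fresh SSD and simply never touch the rest:

          # WARNING: example only - wipes /dev/sdX completely
          parted -s /dev/sdX mklabel gpt
          parted -s /dev/sdX mkpart primary ext4 1MiB 80%
          mkfs.ext4 /dev/sdX1
          # leave the remaining ~20% unpartitioned so wear leveling can use it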

        This has caused us to explore 3D NAND technologies. With vertical stacking of NAND cells we're able to retain bigger cells, allowing for better endurance, while pushing the capacity per chip higher. This is one of the reasons that Samsung's 850-series is still at the top of benchmarks while being 3 years old.

        Samsung's V-NAND is so good that an 850 Pro rated for 150 TBW lasted through 9100 TBW (www.techpowerup.com/234699/samsung-850-pro-ssd-reaches-end-of-life-with-9100-tb-written).
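
        If you want to see where your own drive stands, smartctl shows the relevant counters (sketch; /dev/sda is a placeholder and the attribute names/IDs vary by vendor):

          # print the drive's SMART attributes (names differ between vendors)
          smartctl -A /dev/sda
          # on many Samsung drives, attribute 241 Total_LBAs_Written * 512 bytes
          # is the total data written; 177 Wear_Leveling_Count tracks cell wear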

        I also have to comment about RAID. RAID is about redundancy, not data integrity. Unless you're using exotic RAID levels (5/6, and they still have their own problems), a checksumming filesystem like Btrfs, or an all-in-one integrated solution like ZFS, what will you do when one of your RAIDed drives starts lying about the data? Which one is the "correct" one?
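
        The checksumming filesystems can at least tell you which copy is bad: a scrub re-reads everything and verifies it against the checksums (sketch below; the mount point and pool name are just examples):

          # Btrfs: verify all data and metadata checksums, stay in the foreground
          btrfs scrub start -B /mnt/data
          # ZFS equivalent for a pool named "tank"
          zpool scrub tank
          zpool status tank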



        • #14
          Originally posted by duby229 View Post
          That's not true. I've been repairing computers since the mid-'90s and I've never seen any HDD last 20 years. There are 20-year-old hard drives, but I promise you they weren't in use for 20 years. 5-10 years is a good guess for most drives that get used daily. I have an 8-year-old drive now on its last legs, and it's been the best drive I ever owned.
          I've seen some pretty long-lasting bastards in SCADA PCs (industrial automation supervision PCs; usually always on, since the automation they provide the user interface for never sleeps), and I agree that 20 years of 24/7 use is not common. Possible, but not common.

          They are usually in a RAID1 for this reason.



          • #15
            Originally posted by Mike Frett View Post
            Let me ask you guys a question as I'm still using a mechanical drive. I do TONS of video recording, some 720p but mostly SD, and lots of writing to disk. This drive I have is in old-age/pre-fail mode. Is it safe for me to buy an SSD now or should I stick with mechanical? Reliability is very important to me; I've been able to get up to 5-10 years out of these mechanical drives.

            250GB is what I need, something not too expensive, as I can get another 250GB/500GB mechanical drive for around 40 bucks.
            Eh, that's a phenomenon happening now: hard drives haven't changed that much in a full decade. E.g. in 2006 you could buy a nice 250GB 7200 rpm drive from IBM/Hitachi; it still had an ATA 100 interface but was pretty fast (and had the "AAM" feature togglable in firmware, acoustic management). That's still a nice size (except for the physical 3.5" size for such capacity) and, interestingly, it cost the same or less than a low-end 250GB SSD does now.
            The phenomenon itself is: very old hard drives, the SATA ones at least, are rather good at their job; I personally find their size and performance good enough. But they're really starting to fail left and right. I've witnessed some drives going to hell when writing but still working otherwise, or a drive that works but whose controller (on-board computer) appears to crash after some time.

            One example is needing to replace a 1TB drive from 2009. I'd have to replace it with.. a 1TB hard drive from 2017. Oops! If I buy two for a RAID 1, that'll cost more than the single drive from 2009 (before the incident that tripled hard drive prices). But it's a bit faster, or you can get a 2TB 3.5" drive or a 1TB 2.5" drive instead for a bit more money. At worst, that's the price of maintaining 1TB of storage: re-buy a hard drive every now and then.

            Another example: replace a 120GB HDD that does, I don't know, 80MB/s on reads and 70MB/s on writes with a low-end 120GB SSD that does (up to) 500MB/s on reads and 100MB/s on writes (note that advanced consumer stuff "cheats" on the 500MB/s write speed number by using caching). That's something of a sidegrade too.


            I think you can choose either anyway.. but with SSDs there's the thing about avoiding unknown brands and dirt-cheap stuff (hell, you could technically go with an SD card; that might work if your video is encoded in real time to a small size when you record it).
            If you go with a half-decent SSD, see if you can disable the "magical write caching" (they're able to use TLC cells as SLC cells to achieve 500MB/s in bar graphs and during genuine desktop use, when you install something and it does lots of small writes. But that's useless if you're writing gigabytes to a single video file).
            It's possible that an SSD, e.g. 3D-NAND TLC, would last you 20 years (or until the controller fails hard), but that's not really knowable.
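
            One way to see whether the headline write speed is just that SLC-style cache is to write more data than the cache can hold and watch the speed drop (rough sketch; the path and the 16GB size are arbitrary, and delete the file afterwards):

              # write ~16GB sequentially, bypassing the page cache, and watch the rate
              dd if=/dev/zero of=/mnt/ssd/testfile bs=1M count=16384 oflag=direct status=progress
              rm /mnt/ssd/testfile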



            • #16
              I got an Intel DC 3500 160GB, 2.5in SATA 6Gb/s, 20nm MLC, or at least that's what the Intel site says. How good or how bad is it? My dad gifted it to me since it got replaced from a Field PG M4 at work. I've had a dirt-cheap 60 GB SSD that died on me without notice: one day I turned off my PC and the next day the BIOS wouldn't even recognize it properly.



              • #17
                You can probably set up a triple hard drive RAID 1 for, say, an unsophisticated family member and leave it alone for a long time, but if HDD 1 fails at 8 years, HDD 2 fails at 9 years and HDD 3 fails at 10 years.. you'll have to revisit it anyway?

                Or you come over at Christmas, see a drive has failed, but still effectively have a dual drive RAID 1.
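
                For anyone trying that, mdadm handles a three-way mirror fine (sketch only; the device names are placeholders and --create destroys whatever is on those partitions):

                  # build a 3-disk RAID 1 array (wipes the listed partitions)
                  mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
                  # check which members are still healthy, e.g. during the Christmas visit
                  cat /proc/mdstat
                  mdadm --detail /dev/md0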


                Originally posted by Stankami View Post
                I got an Intel DC 3500 160GB, 2.5in SATA 6Gb/s, 20nm MLC, or at least that's what the Intel site says. How good or how bad is it? My dad gifted it to me since it got replaced from a Field PG M4 at work. I've had a dirt-cheap 60 GB SSD that died on me without notice: one day I turned off my PC and the next day the BIOS wouldn't even recognize it properly.
                Looks good? Perhaps update the firmware to the latest version, then forget about it (with no important data on it before the firmware upgrade, perhaps, just to be sure).
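
                Before and after a firmware update you can at least confirm what version the drive reports (sketch; /dev/sda is a placeholder, and the update itself is done with the vendor's own tool or bootable image):

                  # show model, serial number and the currently running firmware version
                  smartctl -i /dev/sda
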
                Last edited by grok; 10 July 2017, 10:15 PM.



                • #18
                  Originally posted by numacross View Post

                  This is misleading. The most important factor in NAND write endurance is not the number of bits stored but the physical size of the cell which is dictated by the manufacturing process.
                  Thanks for the clarification.

                  That's why most planar SSDs have large buffers of unused NAND as overprovisioning. And you're right - using, for example, only 80% of the SSD's capacity will extend the endurance. But be careful here: the SSD can't ever have anything written to the remaining 20%, or you'll have to perform an ATA Secure Erase to destroy the NAND mapping tables and start again.
                  I would have assumed that formatting the empty space with FAT32, running fstrim, then deleting the partition would fix that.
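
                  Roughly that procedure, sketched out (assumptions: /dev/sdX2 is the spare partition, /mnt a temporary mount point; mkfs destroys anything on that partition):

                    # format the spare partition and mount it (wipes its contents)
                    mkfs.vfat /dev/sdX2
                    mount /dev/sdX2 /mnt
                    # tell the SSD that every free block in this filesystem is unused
                    fstrim -v /mnt
                    umount /mnt
                    # then delete the partition with fdisk/parted and leave the space unallocated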



                  • #19
                    Originally posted by caligula View Post
                    I would have assumed that formatting the empty space with FAT32, running fstrim, then deleting the partition would fix that.
                    I'd say it can fix it, but it's not guaranteed. TRIM implementations differ and behave differently from drive to drive. Just take a look at the queued TRIM problems in the kernel (https://github.com/torvalds/linux/bl...a-core.c#L4519). ATA Secure Erase has a higher chance of working, but it's all-or-nothing.
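
                    For reference, you can check whether the kernel exposes discard for a drive at all, and the usual hdparm Secure Erase sequence looks roughly like this (sketch only; /dev/sdX and the password are placeholders, the drive must not be "frozen", and this wipes everything):

                      # non-zero DISC-GRAN/DISC-MAX columns mean the device accepts TRIM/discard
                      lsblk --discard
                      # ATA Secure Erase: set a temporary password, then erase (destroys all data)
                      hdparm -I /dev/sdX | grep -i frozen
                      hdparm --user-master u --security-set-pass p /dev/sdX
                      hdparm --user-master u --security-erase p /dev/sdX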

