Seagate 1TB Solid State Hybrid Drive

  • #11
    This is really a stretch, and an attempt by Seagate to maintain some degree of relevance.

    First off, the test is totally irrelevant, because it compares the hybrid against two SSDs and two VERY OBSOLETE HDDs.

    Second, the theory behind hybrid drives makes this kind of benchmarking very, very irrelevant. It takes significant use before the contents of the flash portion of the disk reflect the most frequently used data, and its performance benefits are intended for "real world" use, not one-off benchmarks. So, for example, reading some random crap off the disk platter won't be any faster than on a comparable HDD, nor would a read of data from the SSD portion be an accurate reflection of anything.
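
    (As a rough illustration of that warm-up effect, here's a toy cache simulation in Python; the block size, working-set sizes and LRU policy are all made-up assumptions for the sake of the example, not a model of the drive's actual firmware:)

    from collections import OrderedDict
    import random

    # Toy model: an 8 GB flash cache in front of the platter, managed as LRU
    # over 4 MB blocks. All sizes here are illustrative assumptions.
    CACHE_BLOCKS = (8 * 1024) // 4          # 8 GB cache / 4 MB blocks
    hot_set = list(range(1500))             # ~6 GB of "boot + apps" blocks, reused every run
    cold_set = list(range(10000, 60000))    # large pool of one-off platter reads

    cache = OrderedDict()                   # block -> None, ordered by recency

    def read(block):
        """Return True on a cache hit; update LRU state either way."""
        hit = block in cache
        if hit:
            cache.move_to_end(block)
        else:
            cache[block] = None
            if len(cache) > CACHE_BLOCKS:
                cache.popitem(last=False)   # evict the least recently used block
        return hit

    for run in range(1, 6):                 # five simulated "boot + work" sessions
        workload = hot_set + random.sample(cold_set, 500)
        hits = sum(read(b) for b in workload)
        print(f"run {run}: hit rate {hits / len(workload):.0%}")
    # Run 1 is almost all misses (cold cache); later runs hit on the hot set,
    # which is exactly why a one-shot benchmark never sees the benefit.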

    Now let's look at these "hybrid" drives from a more practical point of view:
    It has 8 GB of flash memory, which is way slower than 8 GB of *cheap* RAM... you can easily see where I'm going with this. On the outer rim of the HDD, sequential reads are wickedly fast; you could pull 8 GB off it in a few seconds. The main benefit of the SSD portion is the RANDOM access, and that would be handled better by RAM, which could be synchronized sequentially with a dedicated portion of the platter.



    • #12
      Originally posted by droidhacker View Post
      FIFY.
      10chars
      I'm glad you had fun, but I'm asking seriously. I've seen many SSDs break down after around 3 years, whereas HDDs (in workstations, not laptops) break down after around 10 years. But even if those numbers were different, the question stands: what happens when the SSD part breaks but the HDD part is still good?



      • #13
        Originally posted by droidhacker View Post
        FIFY.
        10chars
        I manage a fairly large number of servers, workstations and laptops.
        When an HDD dies, I can usually save 80-100% of the data on the drive.
        In the last ~3 years I've yet to save a single (!) byte from a bricked SSD.

        Care to enlighten me?

        ... And before you begin, I usually opt for enterprise grade HDDs (Seagate) and SSDs (Intel and Samsung).

        - Gilboa
        P.S. I'm posting this on a workstation that has 5 x 6-year-old [!] 320GB enterprise grade HDDs and has already bricked two SSDs.
        Last edited by gilboa; 29 November 2013, 08:58 AM.
        oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
        oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
        oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
        Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



        • #14
          Originally posted by droidhacker View Post
          This is really a stretch, and an attempt by Seagate to maintain some degree of relevance.

          First off, the test is totally irrelevant, because it compares the hybrid against two SSDs and two VERY OBSOLETE HDDs.

          Second, the theory behind hybrid drives makes this kind of benchmarking very, very irrelevant.
          ....
          Now let's look at these "hybrid" drives from a more practical point of view:
          It has 8 GB of flash memory, which is way slower than 8 GB of *cheap* RAM... you can easily see where I'm going with this. On the outer rim of the HDD, sequential reads are wickedly fast; you could pull 8 GB off it in a few seconds. The main benefit of the SSD portion is the RANDOM access, and that would be handled better by RAM, which could be synchronized sequentially with a dedicated portion of the platter.
          First, you're absolutely right about the irrelevance of the benchmarks. They're completely useless and don't reflect an understanding of how this drive works. Instead of the Phoronix benchmarks, the author needed to boot the system several times and launch the same programs several times, so the drive could cache those files, and then report the results. That a boot-time comparison wasn't even done was a shock to me.

          As for the rest...

          "you can easily see where I'm going with this."

          Nope. Not a clue. Nothing you wrote reflects the real benefit of this drive: improving boot time. Extra RAM isn't going to help with that. Second, if the average hard drive has a sustained read throughput of, say, 100 MB/s, it would take 1 min 20 seconds to read 8GB of data, not a "few seconds". And that's assuming it's all big files and not tiny ones. There have been a few pre-caching Linux solutions, but it seems none have ever really caught on.
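
          (The arithmetic, spelled out; the 100 MB/s sequential figure and the 1 MB/s small-file figure are assumptions for illustration, not measurements of any particular drive:)

          # Time to pull 8 GB off a platter at an assumed 100 MB/s sustained read.
          cache_mb = 8 * 1000                 # 8 GB, decimal, as above
          seq_mb_s = 100                      # assumed sequential throughput
          t = cache_mb / seq_mb_s
          print(f"sequential: {t:.0f} s = {t // 60:.0f} min {t % 60:.0f} s")   # 80 s

          # Small, scattered files push the HDD toward random I/O; 1 MB/s is an
          # assumed effective rate, and it makes the gap far worse.
          rand_mb_s = 1
          print(f"small files: {cache_mb / rand_mb_s / 3600:.1f} hours")       # ~2.2 hours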

          The drive is, in theory, of more benefit to Linux users than to others. First, our SSD caching software is less mature. Second, a typical desktop install (say openSUSE) takes about 3.6GB including a full suite of software, which is far less than Windows with no other applications loaded. The 8GB cache on this drive therefore has the potential to cache all of a Linux user's system files and applications, with room left over for recently used data, the bookmarks file, etc., especially since a lot of that 3.6GB may not even be touched frequently. Lastly, it's also good for laptops that only have room for one drive and have no mSATA option.
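
          (A quick cache-budget sketch; the 3.6GB install size is from above, and the "hot fraction" is purely an assumption:)

          # How much of the 8 GB cache a Linux install might realistically occupy.
          cache_gb = 8.0
          install_gb = 3.6        # full desktop install, per the paragraph above
          hot_fraction = 0.5      # assume only about half of it is read regularly
          hot_system_gb = install_gb * hot_fraction
          leftover_gb = cache_gb - hot_system_gb
          print(f"hot system data: {hot_system_gb:.1f} GB, "
                f"left for user files, browser profile, etc.: {leftover_gb:.1f} GB")
          # -> hot system data: 1.8 GB, left for user files, browser profile, etc.: 6.2 GB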



          • #15
            Originally posted by scottishduck View Post
            The only people to get hybrid drives right until recently have been Apple. You can't just put a tiny buffer in there; you need a 128GB SSD. Luckily WD just launched the Black2 line of hybrid drives, so that should hopefully push other OEMs along the right path.
            Black2 is not a hybrid drive, not even close. It's a flash drive and a mechanical drive stuck in the same package, but with ZERO management of the data caching. It presents to the OS as two drives.

            There is one OS that could do something useful with it, OS X, which could format it as a Fusion Drive. Except, oh dear, presenting as two disks over a single SATA connection is hardly standard, so we need custom drivers, which are, no surprise, Windows-only. And if you feel comfortable entrusting your data to weird custom drivers that WD will probably abandon as soon as the next shiny thing comes along, good for you.

            It's fine to complain about how this Seagate drive sucks; I agree. But at least it sucks in a vaguely sane way.
            The WD offering is idiotic, pure and simple.



            • #16
              Originally posted by gilboa View Post
              I manage a fairly large number of servers, workstations and laptops.
              When an HDD dies, I can usually save 80-100% of the data on the drive.
              In the last ~3 years I've yet to save a single (!) byte from a bricked SSD.

              Care to enlighten me?

              ... And before you begin, I usually opt for enterprise grade HDDs (Seagate) and SSDs (Intel and Samsung).

              - Gilboa
              P.S. I'm posting this on a workstation that has 5 x 6-year-old [!] 320GB enterprise grade HDDs and has already bricked two SSDs.
              So you agree that mechanical drives die, just like SSDs, but your argument is that they are better because you have a chance of extracting the data off them? Damn, I'm glad you're not in charge of my data.

              Look, storage is fragile. That's a fact of life. The ONLY useful thing to say, given this fact, is to make NUMERIC claims about the distribution of failures across SSDs and HDDs. If you can't provide that info, your anecdotal evidence is worthless. I've had an SSD fail, sure. I also had a (less than 3-year-old) HDD fail last month, with no way to get the data off.
              A responsible person deals with this by having a rigorous backup policy and then using the best tool for the job, not by trusting some voodoo about "well, if I stick with an HDD, at least I'll probably be able to get my data off it when it fails".
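
              (For what a numeric claim would actually look like, here is the usual annualized-failure-rate arithmetic as a quick Python sketch; the fleet counts are invented for illustration, not real failure statistics:)

              # AFR (annualized failure rate) = failures / drive-years.
              def afr(failures, drives, years):
                  drive_years = drives * years
                  return failures / drive_years

              # Invented fleets, purely to show the shape of the comparison.
              fleets = {
                  "HDD fleet (hypothetical)": afr(failures=30, drives=400, years=3),
                  "SSD fleet (hypothetical)": afr(failures=8, drives=120, years=3),
              }
              for name, rate in fleets.items():
                  print(f"{name}: {rate:.1%} per drive-year")
              # Only numbers like these, over a large enough fleet, support a
              # reliability comparison; a handful of anecdotes can't.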



              • #17
                Originally posted by name99 View Post
                So you agree that mechanical drives die, just like SSDs, but your argument is that they are better because you have a chance of extracting the data off them? Damn, I'm glad you're not in charge of my data.

                Look, storage is fragile. That's a fact of life. The ONLY useful thing to say, given this fact, is to make NUMERIC claims about the distribution of failures across SSDs and HDDs. If you can't provide that info, your anecdotal evidence is worthless. I've had an SSD fail, sure. I also had a (less than 3-year-old) HDD fail last month, with no way to get the data off.
                A responsible person deals with this by having a rigorous backup policy and then using the best tool for the job, not by trusting some voodoo about "well, if I stick with an HDD, at least I'll probably be able to get my data off it when it fails".
                0. First of all, I *never* said I don't have a fairly good backup plan.
                1. Any sane person will always try to recover data from a failed drive, no matter how "rigorous" the backup plan is. Why? Because any backup plan is just as "fragile" as your storage.
                2. We've got over 40 years of experience dealing with mechanical drives. When I design a project with 50 servers and 400 drives, I can more or less anticipate how many will die each year (see the sketch after this list). We have yet to accumulate any meaningful long-term data on SSDs. Moreover, the complexity of the SSD firmware makes previously accumulated data far less relevant, as the technology has yet to settle down. (E.g. it's useless to use statistics about pre-TRIM SSDs to predict the life cycle of TRIM-capable SSDs.)
                3. The complexity of the SSD firmware also more or less negates any advantage you might get by placing multiple SSDs in a RAID as a *fault tolerance* measure. Beyond the obvious (TRIM doesn't work in many RAID configurations), any issue with the firmware will most likely affect the whole RAID as a group.
                4. Last but not least, you somehow assume that I don't use SSDs. Wrong again. For now, due to the lack of any meaningful long-term experience with them, I simply do not trust them. (And given the rate of SSD deaths I'm seeing, I have every right to be cautious.)
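
                (The planning sketch referenced in point 2, as a small Python example; the 2% annualized failure rate is an assumed figure for illustration, not a vendor spec or a measured number:)

                import math

                # Expected drive failures per year for a 400-drive project,
                # given an *assumed* 2% annualized failure rate (AFR), with a
                # Poisson model for the year-to-year spread.
                drives = 400
                afr = 0.02                      # assumed AFR
                lam = drives * afr              # expected failures per year

                def poisson_pmf(k, lam):
                    return math.exp(-lam) * lam**k / math.factorial(k)

                likely = sum(poisson_pmf(k, lam) for k in range(4, 13))
                print(f"expect ~{lam:.0f} failed drives/year; "
                      f"P(4..12 failures in a year) = {likely:.0%}")
                # This is the kind of forecast decades of HDD field data allow;
                # there is no equivalent long-term AFR to plug in for SSDs yet.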

                - Gilboa
                oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
                oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
                oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
                Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.

