
HP To Launch Linux++ Operating System Next Year


  • #51
    Originally posted by http://www.technologyreview.com/news/533066/hp-will-release-a-revolutionary-new-operating-system-in-2015/#comments
    HP plans to use a single kind of memory, in the form of memristors, for both long- and short-term data storage in The Machine. Not having to move data back and forth should deliver major power and time savings. Memristor memory also can retain data when powered off, should be faster than RAM, and promises to store more data than comparably sized hard drives today.

    The Machine's design includes other novel features, such as optical fiber instead of copper wiring for moving data around. HP's simulations suggest that a server built to The Machine's blueprint could be six times more powerful than an equivalent conventional design, while using just 1.25 percent of the energy and being around 10 percent the size.
    The article makes it pretty clear that this is more or less targeted at huge data centers. If I were Google or Facebook I would be very interested, especially in the 1.25% energy and 10% size part. And if Google and/or Facebook adopt this new architecture, it's pretty much here to stay, even without every developer out there jumping on the new technology.
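Taken at face value, those ratios multiply into striking efficiency figures. A minimal sketch (using only the ratios quoted in the article; the baseline values are normalized, not real server specs):

```python
# Back-of-the-envelope check of HP's simulation claims for The Machine.
# All figures are the ratios quoted in the article, nothing more.
perf_ratio = 6.0       # claimed: 6x the performance of a conventional design
energy_ratio = 0.0125  # claimed: 1.25 percent of the energy
size_ratio = 0.10      # claimed: around 10 percent of the size

perf_per_watt = perf_ratio / energy_ratio      # 480x
perf_per_volume = perf_ratio / size_ratio      # 60x

print(f"performance per watt:   {perf_per_watt:.0f}x")
print(f"performance per volume: {perf_per_volume:.0f}x")
```

A 480x performance-per-watt improvement is exactly the kind of number a data-center operator paying the power bill would notice.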



    • #52
      So for us plebs, it means extremely fast 10 TB hard drives with the shock endurance of flash. HP, you'd better price it right; that product alone is an SSD killer, pending write endurance figures.



      • #53
        This, and one more very important advantage: no wear leveling needed!



        • #54
          Originally posted by curaga View Post
          So for us plebs, it means extremely fast 64GB hard drives with the shock endurance of flash and replacement every 2-3 years due to wear.
          Fixed that for you. Nowhere has HP said TBs of memristors. They've repeatedly mentioned it in comparison to system memory (which currently means 4-32GB for the great majority), and several times in comparison to SSDs (which currently means 4GB-1TB). I would be shocked to see any memristor drive at 1TB+ for under $1000 in less than three years.



          • #55
            That's not what HP is saying:

            Originally posted by http://www.technologyreview.com/news/533066/hp-will-release-a-revolutionary-new-operating-system-in-2015/
            The main difference between The Machine and conventional computers is that HP's design will use a single kind of memory for both temporary and long-term data storage. Existing computers store their operating systems, programs, and files on either a hard disk drive or a flash drive. To run a program or load a document, data must be retrieved from the hard drive and loaded into a form of memory, called RAM, that is much faster but can't store data very densely or keep hold of it when the power is turned off.

            HP plans to use a single kind of memory, in the form of memristors, for both long- and short-term data storage in The Machine. Not having to move data back and forth should deliver major power and time savings. Memristor memory also can retain data when powered off, should be faster than RAM, and promises to store more data than comparably sized hard drives today.
            They want to replace both RAM and hard disk with a single large memristor store, so definitely TB, not GB.
            And IIRC, as of today the write endurance is ~100 million cycles for a memristor (vs. ~5,000 for flash!), so I would say there's no need to replace devices every 2-3 years, and no wear leveling is needed either.

            The real problem is that some researchers say a memristor cannot work (and will never work) because of physics. Let's see what the future holds.
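The endurance gap quoted above can be turned into a rough lifetime estimate. A sketch under stated assumptions (the 10-full-drive-writes-per-day workload and perfect wear spreading are mine, for illustration; the cycle counts are the ones quoted in the post):

```python
# Rough drive lifetime from write-cycle endurance.
# Assumption (illustrative): the device is rewritten end to end 10 times
# per day, with writes spread perfectly evenly across all cells.
FULL_DRIVE_WRITES_PER_DAY = 10

def lifetime_years(write_cycles: int) -> float:
    """Years until every cell reaches its write-cycle limit."""
    return write_cycles / FULL_DRIVE_WRITES_PER_DAY / 365

print(f"flash, ~5,000 cycles:    {lifetime_years(5_000):.1f} years")
print(f"memristor, ~100M cycles: {lifetime_years(100_000_000):,.0f} years")
```

Under these (generous) assumptions flash wears out in under two years, while the quoted memristor endurance outlives the hardware by orders of magnitude, which is the point being made.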



            • #56
              Originally posted by Forge View Post
              Fixed that for you. Nowhere has HP said TBs of memristors. They've repeatedly mentioned it in comparison to system memory (which currently means 4-32GB for the great majority), and several times in comparison to SSDs (which currently means 4GB-1TB). I would be shocked to see any memristor drive at 1TB+ for under $1000 in less than three years.
              According to the paper "NVRAM Applications in the Architectural Revolutions of Main Memory Implementation" in December's ICJET, we can expect memristors to be about 4 times denser than disk, and 23 times denser than RAM.

              @droste: 100 million writes is nothing for RAM. It's not enough to replace RAM in any conventional architecture. We'll see what they come up with in the next few months.
              Last edited by liam; 28 December 2014, 06:15 PM.
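The objection about RAM is easy to quantify: RAM locations can be rewritten continuously, so a fixed cycle budget evaporates fast. A sketch (the write rate is an assumed, deliberately modest figure, not a measurement):

```python
# Why 100 million write cycles is "nothing" for RAM: a hot location such
# as a lock word or a counter is rewritten constantly.
# Assumption (illustrative): a modest 10,000 writes per second to one cell.
WRITE_CYCLE_LIMIT = 100_000_000
writes_per_second = 10_000

seconds_to_wear_out = WRITE_CYCLE_LIMIT / writes_per_second
print(f"hot cell worn out after {seconds_to_wear_out / 3600:.1f} hours")  # ~2.8 hours
```

Even at this modest rate the cell dies in an afternoon; real hot cache lines are rewritten far faster.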



              • #57
                Originally posted by liam View Post
                @droste: 100 million writes is nothing for RAM. It's not enough to replace RAM in any conventional architecture. We'll see what they come up with in the next few months.
                Yes, sure, but it's enough for a hard drive replacement for now. I don't think they will provide RAM replacements for the current architecture.
                And with HP's new architecture there's no need for traditional heavy read/write RAM to cache hard disk data. You just read the data you need from where it already is, so writes are only needed when information changes.

                More recent articles also don't mention write endurance. Maybe it's higher, or even unlimited, with new materials. I can't find recent information (especially regarding HP's stuff) anywhere.
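The "read it from where it already is" model described above is essentially a single-level store. On today's hardware the closest approximation is memory-mapping persistent storage; the sketch below uses Python's mmap over an ordinary file as a stand-in for byte-addressable memristor storage (the file name and sizes are arbitrary):

```python
# Single-level-store sketch: instead of read()-ing data into a separate
# RAM buffer, map the persistent store into the address space and touch
# only the bytes you need. An mmap'd file stands in for memristor memory.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "store.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # pretend this is persistent memory

with open(path, "r+b") as f:
    store = mmap.mmap(f.fileno(), 0)   # no separate "load into RAM" step
    store[100:105] = b"hello"          # write in place: only changed bytes
    print(store[100:105])              # read from where the data already is
    store.flush()                      # persist the modified page
    store.close()
```

With true memristor memory even the flush step would conceptually disappear, since the "RAM" being written is itself the durable store.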



                • #58
                  Originally posted by gilboa View Post
                  However, Intel never managed to get the platform performing up to spec. The first 733MHz Itanium Merced core had severe trouble out-performing a 1.7GHz Pentium 4 Willamette core. In theory, Intel could have slowly forced the market into adopting IA64 by keeping the P4 32-bit-only, but the Athlon64/Opteron left them no choice. (And the lackluster performance of the McKinley/Madison cores didn't really improve things.)
                  Sure, by 2006 the performance was acceptable, but by then both the Xeon and the Opteron had shoved it into a very small market segment (super-high-end servers) and most Itanium server manufacturers had left the architecture.

                  - Gilboa
                  Yeah... Intel managed to get performance out of the HP-invented Itanium very late...
                  They contracted a Russian specialist team to do the job when they figured out how bad the situation was.
                  The team was from Elbrus, who are specialists in SPARC and high-performance computing.

                  But it turned out to be a very late decision, and the world wanted to get rid of it, because of the prices, I think...



                  • #59
                    Originally posted by tuxd3v View Post
                    Yeah... Intel managed to get performance out of the HP-invented Itanium very late...
                    They contracted a Russian specialist team to do the job when they figured out how bad the situation was.
                    The team was from Elbrus, who are specialists in SPARC and high-performance computing.

                    But it turned out to be a very late decision, and the world wanted to get rid of it, because of the prices, I think...
                    Interesting. I wasn't aware of this. Thanks.



                    • #60
                      Originally posted by jacob View Post
                      It always makes me laugh that they obviously don't understand that a quantum leap is the tiniest leap possible.
                      You don't understand that a quantum leap is the most powerful leap possible: it passes through an impassable barrier.
                      And it is not the tiniest; the tiniest length is a Planck length.
                      Last edited by pal666; 05 February 2015, 06:54 AM.

