Raptor Launching Talos II Lite POWER9 Computer System At A Lower Cost

  • chithanh
    Senior Member
    • Jul 2008
    • 2493

    #41
    Originally posted by madscientist159 View Post
    While the full Talos II has a slot that can be bifurcated on the second CPU, the Lite version doesn't support further bifurcation beyond what is already routed to the slots on the board.
    Is this a hardware or a software limitation? I know that on some x86 mobos, PCIe bifurcation support was a matter of modding the firmware by adding the necessary UEFI module.

    Originally posted by starshipeleven View Post
    While I'm aware that this is probably very secondary, I think that designing all PCIe slot areas to physically accommodate a longer card, and installing PCIe connectors that are open on one end, is a great thing.
    I think it is secondary on the full Talos II mobo, as enough x16 slots exist there. However, on the Talos II Lite, the PCIe situation is decidedly a weakness. If bifurcation in the x16 slot and physically fitting x16 cards in the x8 slot can be achieved, this will go a long way in mitigating the weakness.

    Originally posted by madscientist159 View Post
    So interestingly it's not the design that's the issue here, it's that actually obtaining the open slot PCIe connectors as an OEM is quite difficult. They seem to have fallen out of favor; we had investigated installing an open slot PCIe connector in the x8 slot before, but no one is making (quality) compatible edge connectors.
    Interesting.
    So that means, mobos like the Supermicro X11DPX-T use either low-quality (or insufficient for 4.0) PCIe connectors or are sourcing their parts from companies that refuse to sell to Raptor?

    Originally posted by madscientist159 View Post
    The only option may be carefully cutting off the end of the edge connector. If this is done in a way that doesn't damage the board, this isn't something we would void warranty for.
    Unfortunately that is not an option for me.
    Last edited by chithanh; 18 May 2018, 12:51 AM.


    • madscientist159
      Raptor OpenPOWER
      • May 2015
      • 306

      #42
      Originally posted by chithanh View Post
      Is this a hardware or a software limitation? I know that on some x86 mobos, PCIe bifurcation support was a matter of modding the firmware by adding the necessary UEFI module.
      It's hardware here. The POWER9 has three PCIe controllers on each chip, and the one that can trifurcate is already being used to drive the on-board peripherals. The other one, which can bifurcate, drives the x8 slot and another on-board peripheral, leaving only the CAPI-capable controller, which can only do x16 by hardware design.
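      For anyone wanting to see how lanes are actually routed on a running Linux system (POWER or x86 alike), the kernel exposes per-device link widths through sysfs. A minimal sketch, using only generic sysfs paths rather than anything Talos-specific:

```shell
#!/bin/sh
# Print maximum vs. currently negotiated PCIe link width for every
# PCI device the kernel knows about. Requires a mounted sysfs; on a
# system with no PCI devices it simply prints nothing.
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev/max_link_width" ] || continue
    printf '%s: max x%s, current x%s\n' \
        "${dev##*/}" \
        "$(cat "$dev/max_link_width")" \
        "$(cat "$dev/current_link_width")"
done
```

      A device sitting in an x16 slot but negotiating x8 will show up immediately in this listing, which makes it a quick way to verify how a given slot is wired.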

      Originally posted by chithanh View Post
      Interesting.
      So that means, mobos like the Supermicro X11DPX-T use either low-quality (or insufficient for 4.0) PCIe connectors or are sourcing their parts from companies that refuse to sell to Raptor?
      Many times these are provided by custom contract; the component manufacturer makes, say, 1 million or more of a part to the OEM's specification, and does not sell those parts to other companies. Something similar happened with the PS/2 port complex on ASUS boards; just because a top tier OEM can get a part doesn't mean that other OEMs can get that same part. Of course, as we continue to grow, we may start gaining the ability to use these kinds of custom parts, so there's some hope remaining!


      • darkbasic
        Senior Member
        • Nov 2009
        • 3088

        #43
        Originally posted by starshipeleven View Post
        Yeah, because mechanical hard drives don't have storage controllers. Nor do USB flash drives nor any other storage device presenting itself as a "block device".
        You are free to consider mechanical hard drives or USB flash drives as a viable alternative in a $3000+ workstation; I don't.
        ## VGA ##
        AMD: X1950XTX, HD3870, HD5870
        Intel: GMA45, HD3000 (Core i5 2500K)


        • chithanh
          Senior Member
          • Jul 2008
          • 2493

          #44
          Originally posted by madscientist159 View Post
          The other one that can bifurcate drives the x8 slot and another on-board peripheral,
          What is that peripheral? Can it be disabled to allow bifurcation in the x8 slot?

          Originally posted by madscientist159 View Post
          Many times these are provided by custom contract; the component manufacturer makes, say, 1 million or more of a part to the OEM's specification, and does not sell those parts to other companies. Something similar happened with the PS/2 port complex on ASUS boards; just because a top tier OEM can get a part doesn't mean that other OEMs can get that same part. Of course, as we continue to grow, we may start gaining the ability to use these kinds of custom parts, so there's some hope remaining!
          I think those Supermicro mobos are not exactly high-volume, and only a few models have open-ended PCIe slots. But yeah, probably still an order of magnitude away from Raptor.

          Originally posted by darkbasic View Post
          You are free to consider mechanical hard drives or USB flash drives as a viable alternative in a $3000+ workstation; I don't.
          I think you missed the point. It was not that mechanical hard drives are suitable for high-end workstations; it is that the problem is not SSD-specific but present in almost any kind of storage.


          • madscientist159
            Raptor OpenPOWER
            • May 2015
            • 306

            #45
            Originally posted by chithanh View Post
            What is that peripheral? Can it be disabled to allow bifurcation in the x8 slot?
            It's the SAS controller. Sadly the associated PCIe controller is locked into a fixed configuration of x8 and x8 in silicon.


            • tajjada
              Senior Member
              • Jan 2014
              • 207

              #46
              Originally posted by madscientist159 View Post
              just because a top tier OEM can get a part doesn't mean that other OEMs can get that same part. Of course, as we continue to grow, we may start gaining the ability to use these kinds of custom parts, so there's some hope remaining!
               I wish you the best of luck and success with Raptor Systems and Talos! This is an amazing project and some amazing hardware.

               I'd love to have such a POWER-based system. Unfortunately, I cannot currently afford to buy this. I recently spent a lot of money on a personal server based on AMD EPYC, so I will not have the money to buy additional expensive computers for some time.

              I hope that by the time I am ready to buy another expensive computer system (probably in about a year or so), you will have even more amazing things than what you offer now! You have already made great progress with your current product line-up, compared to where you started with the initial Talos!

              I wish you success and growth!


              • darkbasic
                Senior Member
                • Nov 2009
                • 3088

                #47
                Originally posted by chithanh View Post
                 I think you missed the point. It was not that mechanical hard drives are suitable for high-end workstations; it is that the problem is not SSD-specific but present in almost any kind of storage.
                You're right, I misinterpreted his answer. Because of the quote I thought he was against my "Unfortunately there are no ways to get a fully open system" statement, but he wasn't.
                 I simply didn't consider hard disks or flash drives as an option, regardless of whether they are open or not. That's the only reason why I blamed SSDs specifically.
                Last edited by darkbasic; 18 May 2018, 11:10 AM.


                • darkbasic
                  Senior Member
                  • Nov 2009
                  • 3088

                  #48
                  Originally posted by madscientist159 View Post

                  Those benchmarks inadvertently compared a Spectre and Meltdown-proof CPU (the POWER9) to Intel and AMD systems that were vulnerable to Spectre v2. Intel in particular has since released mitigations that have dropped its benchmark performance significantly, while we have also released information on how to turn off the Spectre protections on POWER9 if desired (see link) to raise performance on machines running trusted code.

                  At the moment the CPU industry is still reeling from the effects of Meltdown and Spectre, and it has become somewhat routine to compare vulnerable processors against hardened or mitigated ones, since the hardened / mitigated processors nearly always run slower. When looking at benchmarks, always look for the status of Spectre v2 user-mode separation -- that is the one that kills interpreted language performance across architectures.
                  Any chance to give Michael access to a Talos II Lite with a single-socket 18-core CPU? That would be a perfect match to benchmark against the Intel Core i9-7980XE. The 16-core would be a perfect match against Threadripper as well.


                  • ynari
                    Junior Member
                    • May 2018
                    • 20

                    #49
                    I'm a little confused by the Oculink suggestion, and feel that both it and the 'EATX' designation of the Talos II Lite may be incorrect.

                    Oculink is useless here, as the Lite slots do not support bifurcation and there are no Oculink ports listed on the motherboard. Unless the slot really has to sit a considerable distance from the motherboard with a PCI-e 4.0 to Oculink adapter, it seems pointless.

                    What I'd recommend is the ThermalTake Premium PCI-E extender - it will be cheaper and more reliable to mod that than to try to butcher an x8 slot. Use its flexibility to host your PCI-e 3.0 x16 graphics card, leaving the PCI-e 4.0 slot for shedloads of storage. There are much cheaper options than the ThermalTake, but they're not as reliable or well made.

                    As to the 'EATX' support, Supermicro motherboards and cases are not EATX; they are SSI-EEB. The two are not the same. Whilst the physical dimensions of EATX and SSI-EEB are identical, the standoff positions differ, and SSI-EEB boards typically feature an SSI-EEB connector to attach to the front of an SSI-EEB case. You can put most EEB motherboards in an EATX case, provided you can cope with at least one of the corners and one of the standoffs floating without support.

                    I've built a system using the Nanoxia Deep Silence 5, and that does have all the SSI-EEB standoffs (the DS6 apparently does not, go figure). I used the ThermalTake extender to move a graphics card and, if I remember correctly, a PCI extender to move a PCI card, so as to use practically all of the server motherboard slots.


                    • starshipeleven
                      Premium Supporter
                      • Dec 2015
                      • 14568

                      #50
                      Originally posted by darkbasic View Post
                      You are free to consider mechanical hard drives or USB flash drives as a viable alternative in a $3000+ workstation; I don't.
                      You missed the point by a few miles: every block device, from mechanical drives to crappy micro-SD cards, has a controller running firmware anyway.

                      The only way to avoid firmware in the storage system at the moment is to access raw flash over some crappy embedded bus like SPI.

                      In the future it might be possible by using "Open Channel SSDs", which are NVMe SSDs where the controller is much dumber than on normal SSDs (it mostly acts as a bridge to expose the raw flash over PCIe, so you can have good performance) and most of the smart jobs are handled by the host CPU; Linux has supported this since kernel 4.4.

                      I've seen articles about prototypes for that hardware, but the technology is squarely aimed at datacenter usage for now.

                      Afaik the only "product" with that technology is a jack-of-all-trades enterprise SSD with a controller that can go into "open channel mode" https://rockylim92.github.io/researc...annelSSD_tips/ (and it has 10Gb Ethernet capabilities, so they can create their own storage network with others of the same kind if you want to go that route)
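                      For the curious: the open-channel support landed in mainline as the "LightNVM" subsystem (CONFIG_NVM). A rough sketch of checking whether a given kernel was built with it; the config file location varies by distro, so both common locations are tried:

```shell
#!/bin/sh
# Check whether the running kernel was built with LightNVM
# (open-channel SSD) support, merged in Linux 4.4 as CONFIG_NVM.
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
    grep '^CONFIG_NVM' "$cfg" || echo "CONFIG_NVM not set"
elif [ -r /proc/config.gz ]; then
    zcat /proc/config.gz | grep '^CONFIG_NVM' || echo "CONFIG_NVM not set"
else
    echo "kernel config not available on this system"
fi
```

                      If the kernel has it and an actual open-channel drive is attached, nvme-cli's `nvme lnvm` subcommands (e.g. `nvme lnvm list`) are, as far as I know, the userspace side for managing such devices.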

