Benchmarking A 10-Core Tyan/IBM POWER Server For ~$300 USD

  • #41
    Re gcc -mcpu=: man gcc says "The -mcpu options automatically enable or disable the following options: -maltivec -mfprnd -mhard-float -mmfcrf -mmultiple -mpopcntb -mpopcntd -mpowerpc64 -mpowerpc-gpopt -mpowerpc-gfxopt -msingle-float -mdouble-float -msimple-fpu -mstring -mmulhw -mdlmzb -mmfpgpr -mvsx -mcrypto -mdirect-move -mhtm -mpower8-fusion -mpower8-vector -mquad-memory -mquad-memory-atomic -mfloat128 -mfloat128-hardware"
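    A quick way to see what a given -mcpu resolves to is gcc -mcpu=power8 -Q --help=target, which prints the final state of every target option. The enabled ISA subsets also show up as predefined macros, so a build can probe them itself; a minimal sketch (the macro spellings below are the standard GCC ones for POWER, and which of them fire depends on the -mcpu you pass):

        /* mcpu_probe.c -- minimal sketch: report which POWER ISA feature
         * macros GCC predefines for the selected -mcpu.
         * Build e.g.: gcc -mcpu=power8 -o mcpu_probe mcpu_probe.c
         */
        #include <stdio.h>

        int main(void)
        {
        #ifdef __ALTIVEC__
            puts("-maltivec is in effect (__ALTIVEC__)");
        #endif
        #ifdef __VSX__
            puts("-mvsx is in effect (__VSX__)");
        #endif
        #ifdef __CRYPTO__
            puts("-mcrypto is in effect (__CRYPTO__)");
        #endif
        #ifdef __HTM__
            puts("-mhtm is in effect (__HTM__)");
        #endif
        #ifdef _ARCH_PWR8
            puts("POWER8 instruction set available (_ARCH_PWR8)");
        #endif
            return 0;
        }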

  • #42
    Yes, that was it. Populating the DIMMs according to the OpenPOWER porting guide v1.01, it boots fine. I immediately saw higher memory performance, even though with my DIMM count it's merely dual channel: https://openbenchmarking.org/result/...SP-7ZPNOR10153 . I've also sent a pull request for the SMT reporting in PTS.
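    For anyone wanting to sanity-check a DIMM shuffle outside of PTS, a tiny STREAM-style triad is enough to see a channel-count change. A rough single-threaded sketch (array size and repetition count are arbitrary assumptions, and one thread usually won't saturate all channels, so treat the output as a relative before/after number, not peak bandwidth):

        /* triad.c -- rough STREAM-style triad for before/after DIMM moves.
         * Not the PTS test; the sizes below are arbitrary assumptions.
         * Build: gcc -O2 -o triad triad.c   (add -lrt on old glibc)
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N    (64 * 1024 * 1024)  /* 64 Mi doubles per array, 512 MB each */
        #define REPS 10

        int main(void)
        {
            double *a = malloc(N * sizeof *a);
            double *b = malloc(N * sizeof *b);
            double *c = malloc(N * sizeof *c);
            if (!a || !b || !c)
                return 1;

            /* touch all three arrays up front so page faults stay out of the
             * timed region */
            for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int r = 0; r < REPS; r++)
                for (long i = 0; i < N; i++)
                    a[i] = b[i] + 3.0 * c[i];   /* reads b,c; writes a */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            double gbytes = (double)REPS * 3.0 * N * sizeof(double) / 1e9;
            printf("%.2f GB/s\n", gbytes / s);
            return 0;
        }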

  • #43
    Originally posted by curaga:
    [....]Please list them.[...]
    Well, OK, here we go, just from the first page.

    - "32 RAM slots capable of fitting up to 512 GB of 1333 MHz DDR3 registered ECC memory" -- even the oldest manual I've found mentions 1024 GB as the maximum.
    - "Identify fires up some blue leds, showing the system is ok" -- IMHO the sole purpose of the identify button is to switch on the blue LEDs at the front and back of the server, so that once you walk around the racks to find the machine's rear you can identify it. It is not connected to anything dealing with the health of the system.
    - "The kit comes with redundant 750 W power supplies, plenty for normal use." -- but certainly not plenty for normal server use. Even the manual clearly warns about that with a red "NOTE": e.g., with the 750 W PSUs you are not supposed to use the front HDDs and the PCI-E slots!
    - "The USB 1.1 port is not functional." -- the manual clearly states that the USB 1.1 port is for BMC F/W updates! So it is perfectly normal for it to appear non-functional from the booted Linux side, and you can't simply claim it is broken without further investigation.

    And here I stopped reading, since the number of issues in a few sentences of description was enough to exceed my error tolerance threshold. That, plus your very cavalier attitude to cooling the beast. Next I just went through the graphs, was not satisfied, and wrote my original post, plus another one mentioning different results.

    Well, anyway, congratulations on your purchase again. I still think that for the money you get somewhat lower performance, but otherwise it is architecturally very similar to an IBM scale-up server, which is not that common anyway.

  • #44
    Curaga, btw, it looks like it would be good if we synced up on the documentation links. In my posts I'm referring to the "TN71-BP012 / Service Engineer's Manual". I have two versions here describing different hardware configurations, and the old version of the manual is the one that describes the hardware currently being sold. ftp://ftp.tyan.com/doc/TN71-BP012_UG...or_Channel.pdf -- this is the latest version, which does not list our hardware. Unfortunately I can't currently find the old one. Its name says "for IBM" instead of "for Channel", and it is plain v1.0. There the PSUs are 750 W, together with the warnings.
    Anyway, both manuals describe the same DIMM population process, so I'd really appreciate it if you posted the link to the "OpenPOWER porting manual" you mentioned, which lists a different DIMM population process. Thanks!

  • #46
    You mean this one? ftp://ftp.tyan.com/doc/TN71-BP012_UG_v1.0_for_IBM.pdf

    Found it by just browsing the FTP directory.

  • #47
    Just one note about the Mellanox card: while it's true that it doesn't need to load a binary blob to run, that's because Mellanox cards carry their own on-board firmware (which can be flashed and is strictly closed source).
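    You can see what firmware the card is actually running from Linux with ethtool -i <port>, which is just the ETHTOOL_GDRVINFO ioctl underneath. A minimal sketch of the same query in C (the default interface name is an assumption; pass your Mellanox port's name as argv[1]):

        /* fwver.c -- minimal sketch: query a NIC's on-card firmware version
         * via the ETHTOOL_GDRVINFO ioctl. "eth0" is an assumption; pass the
         * real interface name as argv[1]. Build: gcc -o fwver fwver.c
         */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>
        #include <net/if.h>
        #include <linux/ethtool.h>
        #include <linux/sockios.h>

        int main(int argc, char **argv)
        {
            const char *ifname = argc > 1 ? argv[1] : "eth0";
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            if (fd < 0) { perror("socket"); return 1; }

            struct ethtool_drvinfo info = { .cmd = ETHTOOL_GDRVINFO };
            struct ifreq ifr;
            memset(&ifr, 0, sizeof ifr);
            strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
            ifr.ifr_data = (void *)&info;

            if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
                perror("ioctl");
                close(fd);
                return 1;
            }

            /* fw_version is whatever the card reports it is running */
            printf("driver: %s\nfirmware: %s\n", info.driver, info.fw_version);
            close(fd);
            return 0;
        }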

  • #48
    Originally posted by illuhad:
    You mean this one? ftp://ftp.tyan.com/doc/TN71-BP012_UG_v1.0_for_IBM.pdf

    Found it by just browsing the FTP directory
    Exactly! The 750 W PSUs are there, and all the warnings about their usage limits too. That's it! Thanks for looking into it...

  • #49
    The note about PCIe slots not being supported with the 750 W PSUs is obviously nonsense, though; I have all of the slots filled, one of them with a 75 W GPU, and it works perfectly fine.

  • #50
    Originally posted by q66_:
    The note about PCIe slots not being supported with 750W PSUs is obviously nonsense though; I have all of the slots filled, one of them with a 75W GPU, and it works perfectly fine
    We're not using the default fans, though. With the default fans, I've seen the system draw more than 500 W under load. If you add a fast GPU and some other PCI-E devices (or many hard drives), I can see that 750 W could be a tight fit, at least if PSU redundancy is required.
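    To put rough numbers on it (illustrative figures, not measurements from this thread): with the PSUs in a redundant 1+1 configuration, the whole system has to fit within a single 750 W supply in case one fails. Take the >500 W above for CPUs, board, and stock fans, add a 225 W GPU, two 25 W PCI-E cards, and eight front disks at ~10 W each, and you are at roughly 500 + 225 + 50 + 80 = 855 W, which no longer fits on one PSU. Run non-redundantly, 2 x 750 W would still cover it.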
