The ClearFog ARM ITX Workstation Performance Is Looking Very Good


  • #41
    Originally posted by edwaleni View Post

    Cavium and Gigabyte did cooperate to create a ATX ARM workstation motherboard using the ThunderX CPU. Cavium showed off the ThunderStation at various trade shows.

    I tried to order this board from one of Gigabyte's distributors and I got the third degree on what I was planning to do with it. Seems Cavium is very protective of how the ThunderX (and subsequent ThunderX2) will be used to avoid comparisons to Xeon.

    I did not order it after all. After several email questions that I thought were none of their business, I dropped the idea.
    Well, we will gladly sell you one at the reduced introductory price, and as many as we have at the production price. As you can see, we are already posting benchmarks against all the ARM workstations, and not our cherry-picked benchmarks but the generic benchmarks that third-party reviewers (Phoronix, Level1Techs) have run. We are developers and understand that there is a trade-off between power consumption and performance. After NXP approached us with the SoC, we reviewed the design of the chip, started benchmarking, and realized it had applications beyond embedded network appliances. NXP has been very supportive in helping bring the idea of an actual usable ARM workstation to market.

    I have spent this morning benchmarking the LX2160 overclocked up to 2.5 GHz (completely stable, by the way), and the benchmarks are very enlightening. In the ARM vs. Intel question of which core is faster per clock, things are much closer than you would imagine, and they will only be getting better: https://community.arm.com/developer/...nu-performance. As Cloudflare found when they started looking at the Qualcomm server, some things are just horribly under-optimized for ARM64 because the platform hasn't been available to make anyone care. We want to be a platform that is affordable enough and capable enough to make developers want to take the time to make the effort.

    As for the capabilities...here is a little sneak preview taste of what is possible.

    pts/rodinia-1.2.2 [Test: OpenMP LavaMD]
    Core i9 7980XE ........... 46.64 |===============
    Threadripper 1950X ....... 45.95 |===============
    HoneyComb LX2K 2.5GHz .... 40.93 |==============
    EPYC 7601 ................ 31.58 |==========

    pts/john-the-ripper-1.5.1 [Test: Blowfish]
    Core i5 8400 ............. 6998 |===================
    HoneyComb LX2K 2.5GHz .... 7917 |=====================
    Xeon Silver 4108 ......... 8275 |======================

    pts/x264-2.3.0 [H.264 Video Encoding]
    Xeon Silver 4108 ...... 32.75 |===================================
    Core i7 4770K ......... 37.64 |========================================
    HoneyComb LX2K 2.5GHz . 38.28 |=========================================
    Core i7 7740X ......... 49.95 |=====================================================

    pts/c-ray-1.1.0 [Total Time]
    Core i7 8700K ............ 12.78 |===============================
    Ryzen 7 1700 ............. 11.23 |===========================
    HoneyComb LX2K 2.5GHz .... 10.39 |=========================
    Ryzen 7 1800X ............ 9.70 |=======================

    And on some of the benchmarks, even at 2.5 GHz, ARM is way behind, in the 40%-slower category. These may be the first benchmarks a vendor has actually posted of ARM against Intel head to head, and yes, these are cherry-picked to prove a point. We will post more, because we want customers to know what they are buying, but also where they may be able to contribute. ARM as a workstation has to start somewhere; I hope we are that somewhere. We will give you NVMe, lots of SATA, 10 Gbps SFP+, and the ability to have a solid GPU, all in a power footprint that is less than some laptops. What the community and industry do with it, well, that will be exciting to see. We are excited about it. Oh, and yes, we rebranded the workstation board from ClearFog-ITX to HoneyComb LX2K; more about that later.
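    For anyone curious how the ASCII comparisons above are laid out, here is a throwaway helper (my own sketch, not SolidRun tooling) that scales each bar relative to the largest value; the numbers fed in are the c-ray results quoted above:

```python
# Sketch: render benchmark results as ASCII bars scaled to the largest value.
def render(results, width=50):
    top = max(value for _, value in results)          # longest bar = largest value
    pad = max(len(name) for name, _ in results) + 1   # dot-fill column width
    lines = []
    for name, value in results:
        bar = "=" * round(width * value / top)
        lines.append(f"{name.ljust(pad, '.')} {value:>6.2f} |{bar}")
    return "\n".join(lines)

print(render([
    ("Core i7 8700K", 12.78),
    ("Ryzen 7 1700", 11.23),
    ("HoneyComb LX2K 2.5GHz", 10.39),
    ("Ryzen 7 1800X", 9.70),
]))
```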



    • #42
      HoneyComb looks nice, but I have a few questions before considering getting the board:

      - does it support virtualisation? (it should, but I prefer to ask)
      - SBBR?
      - how many PCIe lanes go to the M.2, and are they taken from the PCIe x8 slot?
      - will it ship with an I/O panel?

      Asking as I do OpenStack development, so virtualisation, 16+ GB RAM, and fast storage are a must. It would be best to run a standard distro kernel (Debian 'buster' or Fedora 30 at least).

      Now I have an APM Mustang (it sits unused) and do development on remote ThunderX/X2 machines from Linaro's lab.

      --
      https://marcin.juszkiewicz.com.pl/



      • #43
        One by one

        Originally posted by haerwu View Post
        HoneyComb looks nice, but I have a few questions before considering getting the board:

        - does it support virtualisation? (it should, but I prefer to ask)
        Yes, of course.

        - SBBR?
        We are focused on SBSA, but SBBR is on the radar. It is not yet certified but we are working with ARM and NXP on this.

        - how many PCIe lanes go to the M.2, and are they taken from the PCIe x8 slot?
        The M.2 slot is PCIe 3.0 x4 in this version of the SoC.

        - will it ship with I/O panel?
        This is a personal requirement of mine, and we expect to have one ready for production. If it isn't ready to ship for the developer edition, we will make them available after the production board is available.

        Asking as I do OpenStack development, so virtualisation, 16+ GB RAM, and fast storage are a must. It would be best to run a standard distro kernel (Debian 'buster' or Fedora 30 at least).
        We have virtualization, NVMe storage, and SATA support, but a distro kernel is not going to be an option until at least another release cycle. SBSA and SBBR only specify the minimal requirements to boot a board: watchdog, RTC, UART, PCIe. Other devices need to be exposed via ACPI but still need custom drivers that need to be mainlined: SATA, SFP+, USB, etc. We cannot dictate the mainlining timeframe. Both SolidRun and NXP have resources allocated to achieving this, but it takes time. As for upstream support, we are also openly collaborating with various distributions to make sure they have the hardware and software resources to include full support for this platform as soon as possible.
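        Once a kernel is up, a quick way to confirm virtualization is actually usable is to check the standard KVM device node (a generic Linux check, nothing HoneyComb-specific):

```python
import os

def kvm_usable():
    """Return True if /dev/kvm exists and is read/write accessible.
    Standard Linux interface; works the same on arm64 and x86."""
    return os.access("/dev/kvm", os.R_OK | os.W_OK)

print("KVM usable:", kvm_usable())
```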

        Now I have an APM Mustang (it sits unused) and do development on remote ThunderX/X2 machines from Linaro's lab.
        Excellent! Linaro approached us at EWC after we initially announced this board. We are working very closely with them and they will have early access to the platform. We hope to hear how you feel the platforms compare to one another.

        --
        https://marcin.juszkiewicz.com.pl/
        Last edited by linux4kix; 06-04-2019, 04:19 PM.



        • #44
          Originally posted by linux4kix View Post
          We are focused on SBSA, but SBBR is on the radar. It is not yet certified but we are working with ARM and NXP on this.
          Well, if it's a TianoCore EDK2 based firmware and ACPI tables are present and it works in practice, that's already great, certification is secondary.

          Originally posted by linux4kix View Post
          Other devices need to be exposed via ACPI, but still need custom drivers that need to be mainlined, SATA, SFP+, USB etc.
          Network cards of course need drivers, you'll have to add ACPI attachment to the Linux NIC driver, as has been done for the MACCHIATObin's MVPP2.

          But I hope you're just misspeaking here about the other two things, because needing custom drivers for SATA and USB on an ACPI platform is a sign that things have gone horribly wrong. If your XHCI device in the DSDT is not the usual PNP0D10, or your AHCI device is not the usual thing with the PCI config region... that would be very, very weird.
          eMMC and custom onboard NIC are the only major devices that I wouldn't expect to work out of the box on any ACPI compatible OS. But SATA and USB, no, they shouldn't need custom drivers. That's a big advantage of ACPI, that the attachments for the very common devices — USB, SATA and PCIe — are standardized.
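          For reference, the standardized attachment looks roughly like this in ASL; the device name, MMIO address, and interrupt below are made-up example values, not the HoneyComb's actual DSDT:

```asl
// Hypothetical DSDT fragment for a generic XHCI controller.
// _HID "PNP0D10" is the standard ID for an XHCI-compliant USB host;
// any ACPI-aware OS binds its stock xHCI driver to it with no custom code.
Device (XHC0)
{
    Name (_HID, "PNP0D10")
    Name (_UID, Zero)
    Name (_CRS, ResourceTemplate ()
    {
        Memory32Fixed (ReadWrite, 0x30000000, 0x00010000)  // example MMIO window
        Interrupt (ResourceConsumer, Level, ActiveHigh, Exclusive) { 80 }  // example interrupt
    })
}
```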

          I'm definitely very very tempted to preorder right now, but I'd like to hear confirmation about PCIe: does it already work in unmodified OSes via ACPI?
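          (For whoever ends up with early hardware, a quick way to check which firmware interface the kernel actually booted with, using only standard kernel-exported paths:)

```python
import os

def boot_firmware_mode():
    """Best-effort guess at whether Linux booted via ACPI or a devicetree.
    Uses standard sysfs/procfs paths; nothing board-specific."""
    if os.path.isdir("/sys/firmware/acpi/tables"):
        return "acpi"
    if os.path.isdir("/proc/device-tree"):
        return "devicetree"
    return "unknown"

print(boot_firmware_mode())
```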



          • #45
            Originally posted by linux4kix View Post

            I agree here. The NXP documentation is quite thorough, and we are already starting to document the COM module: https://developer.solid-run.com/know...7-user-manual/ Where the products differ is that we are making a general workstation/server software development platform, whereas the Xilinx platform is for hardware prototyping. That of course needs far greater documentation regarding the specifics of the hardware.
            Agree with what?
            My point was not price, but about getting enough open-source material to develop with.
            Does that make sense?



            • #46
              Originally posted by myfreeweb View Post

              Well, if it's a TianoCore EDK2 based firmware and ACPI tables are present and it works in practice, that's already great, certification is secondary.



              Network cards of course need drivers, you'll have to add ACPI attachment to the Linux NIC driver, as has been done for the MACCHIATObin's MVPP2.

              But I hope you're just misspeaking here about the other two things, because needing custom drivers for SATA and USB on an ACPI platform is a sign that things have gone horribly wrong. If your XHCI device in the DSDT is not the usual PNP0D10, or your AHCI device is not the usual thing with the PCI config region... that would be very, very weird.
              eMMC and custom onboard NIC are the only major devices that I wouldn't expect to work out of the box on any ACPI compatible OS. But SATA and USB, no, they shouldn't need custom drivers. That's a big advantage of ACPI, that the attachments for the very common devices — USB, SATA and PCIe — are standardized.

              I'm definitely very very tempted to preorder right now, but I'd like to hear confirmation about PCIe: does it already work in unmodified OSes via ACPI?
              Sorry, the phrasing was correct but unclear in my haste. The drivers do need to be written, but in EDK2, not Linux. Just like on the MACCHIATObin, these are not PCIe devices, so they need to have all the configuration done in EDK2 and exposed via the PCIe emulation layer.

              For the SFP+ cages we are working with ARM and Russell King in order to bring ACPI support to his phylink layer.



              • #47
                Originally posted by cyring View Post

                Agree with what?
                My point was not price, but about getting enough open-source material to develop with.
                Does that make sense?
                Absolutely.



                • #48
                  Originally posted by linux4kix View Post

                  They are on the list. The 2.5/5G Ethernet was left off due to chip constraints, but supporting multi-gig is on our radar, perhaps in the new board spin in 2020 when the next revision of the SoC is released.
                  Next idea: if you can deliver a high-speed rackmount and network performance is good, make a 1U rackmount chassis for this. It just might be my next router.



                  • #49
                    Originally posted by GI_Jack View Post

                    Next idea: if you can deliver a high-speed rackmount and network performance is good, make a 1U rackmount chassis for this. It just might be my next router.
                    It is a standard Mini-ITX form factor, and we are working on an I/O shield, so it will work in most standard 1U rackmount cases. We will post updates after we test it ourselves and share info on the final thermal solution.

                    FYI, there are SFP+ modules that you should be able to use to support multi-gig. They aren't true multi-gig, in that the transceiver runs at 10 Gbps and then uses flow control to match the lower link rate. Not perfect, but I have seen them tested in other hardware and they are functional.

                    Comment
