Intel Rolls Out The Stratix 10 FPGA With HBM2 Memory


  • #11
    I wonder how many backdoors Intel has planted into this thing...



    • #12
      Originally posted by starshipeleven View Post
      1. it's still going to run like shit compared to an ASIC (i.e. a non-reconfigurable processor) on the same load
      Isn't that a question of how the ARM cores are implemented? Soft cores would certainly be a problem.
      2. it's HORRIBLY expensive
      Maybe, but that depends upon your tolerance for high-end systems. If it is in the same range as a Xeon workstation with a few compute cards, it may be acceptable to some.

      This thing is designed for workloads where designing and manufacturing dedicated ASIC hardware would be impractical (i.e. the algorithm it must accelerate changes over time due to development, or it does niche functions where fewer than a few thousand units are required).
      Exactly! Let's build a laptop capable of workstation-like research into AI or other hardware-acceleratable technologies.
      It's basically a "poor man's dedicated accelerator", much better than running your code on a CPU (or on GPU) while far weaker than a dedicated ASIC accelerator designed for the same job.
      Being re-configurable you can mass-produce these things enough to sell them as a product to many different customers that could not justify the production runs (and associated costs) of dedicated ASICs.
      You can already find single-board computers with gate arrays on board at reasonable prices. I'm just interested in a board that has a chance in hell of running Linux and giving me access to configurable hardware.

      Obviously the ARM implementations would have to be pretty complete to do this, otherwise you would need a separate ARM SoC.

      I look at it this way: we are at a point in time where there is a lot of benefit to experimenting with AI acceleration. An all-in-one chip could make the cost of entry far lower than it is today.



      • #13
        Originally posted by Happy Heyoka View Post
        See comment by L_A_G above... Try a Xilinx Zynq UltraScale+ on for size, I've not used the UltraScale jobbies but the plain Zynq with Linux is quite flexible and eats RasPi for breakfast
        Haven't used the Virtex UltraScale+ parts with HBM myself, but at work I'm currently working on a system based on a regular Zynq UltraScale+ EG (FPGA, quad ARM Cortex-A53s, dual Cortex-R5s and a Mali GPU), and it is somewhat similar to the current Raspberry Pi Model B in terms of the main CPU. The "big" CPU core cluster is the same one used in the current Raspberry Pi Model B; the biggest difference is that the UltraScale+ is manufactured on TSMC's 16nm node while the Broadcom part used in the RPi is on some 40nm process (an old TSMC process, AFAIK).

        Still, the Zynq series is so flexible I really can't think of much that would actually require all of the built-in blocks found in the highest-end parts (FPGA, main and realtime CPUs, GPU and video decoder block) at the same time.

        Also, for those whining about how inefficient FPGAs are: they exist primarily because they are much more efficient at ASIC-solvable jobs than general-purpose hardware, and because making your own ASIC and getting it working properly is horrendously expensive these days. Not to mention how time-consuming it can be when you have to make chip revisions to fix mistakes. There's a reason why bitcoin mining started on CPUs, moved first to GPUs and then to FPGAs when the electricity costs of GPU mining went over what you'd actually make. Only when hardware manufacturers saw that there was a market for dedicated bitcoin mining ASICs did they actually make them, and those now dominate bitcoin mining.
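        The mining progression above comes down to one tiny, fixed-function inner loop. Here's a minimal Python sketch of the double-SHA-256 proof-of-work check (the header contents and difficulty are made up for illustration, not real Bitcoin parameters):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin runs block headers through SHA-256 twice
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_zero_bytes: int, max_nonce: int = 2**20):
    """Brute-force a nonce until the hash starts with enough zero bytes.

    Each nonce is independent, fixed-function work: exactly the shape of
    load that maps onto thousands of parallel hash pipelines in an FPGA
    or ASIC, while a CPU grinds through it one nonce at a time.
    """
    target_prefix = b"\x00" * difficulty_zero_bytes
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if digest.startswith(target_prefix):
            return nonce, digest
    return None

# Toy difficulty: one leading zero byte (roughly 1 hash in 256 qualifies)
result = mine(b"example block header", 1)
```

There's no branching, no memory traffic and no instruction decode needed per attempt, which is why each step down the CPU→GPU→FPGA→ASIC ladder bought such a large jump in hashes per joule.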
        Last edited by L_A_G; 19 December 2017, 05:52 PM.



        • #14
          Originally posted by wizard69 View Post
          Isn't that a question of how the ARM cores are implemented? Soft cores would certainly be a problem.
          The whole point of an FPGA is making soft cores or soft processors.

          The only thing they can do better than a normal general-purpose ASIC (CPU or GPU) is being optimized for a specific load (specific software), and they can't come anywhere near a dedicated ASIC (a "hardware accelerator" specialized for that same load).

          Maybe, but that depends upon your tolerance for high-end systems. If it is in the same range as a Xeon workstation with a few compute cards, it may be acceptable to some.
          These babies run types of loads that a Xeon or compute card would seriously suck at, while loads tuned for a Xeon or compute card would run like garbage on these FPGAs.

          They are 2 different tools for 2 different jobs.

          Exactly! Let's build a laptop capable of workstation-like research into AI or other hardware-acceleratable technologies.
          That's not how it works. It's not how ANY of this works. AI research isn't a hobby and requires a very specialized skillset, and the same goes for other niche stuff where it might make sense to find the funding for dedicated computing hardware. And that's before we get to the fact that these things are going to cost tens of thousands of dollars, so "a laptop" is the last place most companies would want to put them.

          You can already find single-board computers with gate arrays on board at reasonable prices. I'm just interested in a board that has a chance in hell of running Linux and giving me access to configurable hardware.
          See my answer above. These things might be able to run as well as a commercial processor, but with a VERY bad cost-performance ratio.

          FPGAs are NOT the solution to your problem. It's cheaper to buy new hardware every year for ten years than to buy one of these, and the performance is not going to hold up anyway.
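          The ten-year claim is easy to sanity-check with back-of-envelope numbers (every price below is hypothetical, picked only to illustrate the shape of the argument):

```python
# All figures are hypothetical illustrations, not real prices.
fpga_system = 25_000   # "tens of thousands of dollars", one-time purchase
commodity_box = 1_500  # decent commodity machine, bought fresh each year

years = 10
commodity_total = commodity_box * years  # total spend over ten refreshes

# With these numbers, ten annual refreshes still cost less than one
# FPGA system -- and each refresh also gets you faster hardware.
cheaper_to_refresh = commodity_total < fpga_system
```

The comparison only flips if the FPGA system is cheap enough, or the refresh cadence expensive enough, to close the gap; plug in your own numbers.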



          • #15
            Originally posted by starshipeleven View Post
            The whole point of an FPGA is making soft cores or soft processors.

            The only thing they can do better than a normal general-purpose ASIC (CPU or GPU) is being optimized for a specific load (specific software), and they can't come anywhere near a dedicated ASIC (a "hardware accelerator" specialized for that same load).
            Well, I guess you, me and L_A_G can all go out for a beer sometime and discuss that.

            Getting an ASIC designed and fabbed is expensive even if the design and validation tools have come a long way... last time I looked it was well north of $100k for a run of possibly non-functional chips.

            If you're _not_ a big-name electronics company and you need fewer than 500,000 of some gadget, an FPGA is your only choice - and I could name some names here.
            Think instrumentation or analysis. Fancy radios, etc.
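            Rough arithmetic on the NRE (non-recurring engineering) cost makes that volume cutoff concrete. The per-unit figures below are hypothetical; only the shape of the amortization curve matters:

```python
# Hypothetical figures -- only the NRE-amortization shape matters.
asic_nre = 150_000  # masks, tooling, validation: one-time, ">$100k" range
asic_unit = 5       # marginal cost per chip once the line is running
fpga_unit = 80      # off-the-shelf FPGA: zero NRE, higher unit cost

def asic_cost_per_unit(volume: int) -> float:
    # One-time NRE spread over the whole production run, plus marginal cost
    return asic_nre / volume + asic_unit

# At instrument-maker volumes the NRE dominates and the FPGA wins...
assert asic_cost_per_unit(500) > fpga_unit
# ...while at mass-market volumes the ASIC pulls far ahead.
assert asic_cost_per_unit(500_000) < fpga_unit
```

With these numbers the crossover sits around two thousand units; niche instrumentation and fancy radios rarely get anywhere near that.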

            With a few hundred bucks worth of hardware, I have done stuff on the bench in my shed that would have sent a decent embedded development team blind with frustration only a decade or two ago...

            Moore's law is great and everything, and I am constantly amazed at what you can do on the GPU hardware of your average phone-style ARM derivative, but doing a highly parallel task at a high data rate with specialised I/O on cheap/small/low-power hardware is still very tough.

