NVIDIA Announces Hopper H100, Grace CPU Superchips, Jetson Orin Developer Kit

  • #11
    Originally posted by hoohoo View Post
    1 exaflop TF32. I like! Too bad it will not come in a PCIe4/5 AIB form factor.

    Also, two thoughts: funny that it is nVidia, not AMD, promising to finally deliver an HPC "APU"-type device; and when a company announces a super-anything shipping sometime in a wide window next year, keep your big grain of salt handy. I think that CPU might be great at logic and integer but quite weak on the FP metric.
    Well to be fair, AMD is already shipping to exascale HPC installations with the Frontier and El Capitan supercomputers.
    No, the CPU and GPU are not on a single package, but they are combined on a single coherent blade (1x EPYC Milan Zen 3 + 4x Instinct MI200) with Infinity Fabric links to maintain coherency between CPU DDR and GPU HBM.

    At the time AMD+Cray won those contracts, pricing & flexibility were more important than raw unified memory bandwidth.

    AMD Trento HPC Blade:
    https://i.imgur.com/KzCHLQZ.png

    Last edited by nranger; 22 March 2022, 10:15 PM.



    • #12
      NVIDIA Eos meanwhile is NVIDIA's new supercomputer
      LOL. In streaming APIs, EOS typically indicates End Of Stream. Kind of like EOF.



      • #13
        Originally posted by ezst036 View Post
        Are we still waiting for that big huge open source announcement from Nvidia from a few years back?

        https://www.phoronix.com/scan.php?pa...-Source-GTC-20
        https://www.phoronix.com/scan.php?pa...2020-5-October
        I almost forgave them once they introduced a driver with GBM support. Then I tried to install an RT kernel, and now I'm back to F*N*.
        If only CUDA had an alternative.



        • #14
          Originally posted by mppix View Post

          I almost forgave them once they introduced a driver with GBM support. Then I tried to install an RT kernel, and now I'm back to F*N*.
          If only CUDA had an alternative.
          Just curious here, but what exactly do you need that RT kernel for?



          • #15
            Originally posted by Linuxxx View Post

            Most likely nVidia really did plan to switch their driver over to a model similar to AMDGPU-Pro (integrated kernel driver + blobby user-space part).

            However, Daniel Vetter's tweet publicly ridiculing them probably caused an egoistic change of heart, so they simply bolted GBM support onto their current driver model and will carry on doing business as usual, which, from a financial perspective at least, seems to work out just fine for nVidia, quite frankly...
            Sure, a big multinational company stops a big project that required a lot of effort because of a meme on Twitter, how likely...
            Some people really enjoy living in and crafting their own reality as if it were Minecraft (not the RTX version of course, too realistic)



            • #16
              Originally posted by Linuxxx View Post
                Just curious here, but what exactly do you need that RT kernel for?
              You'd use RT kernels for all applications where missing an interrupt may cost many dollars or lives.
              Typical examples are professional audio/video, real-time control, robotics, and autonomous driving/drones.

              It is kind of ridiculous that Nvidia tries to push their GPUs into the autonomous driving market, yet their driver cannot run on an RT kernel.
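
              For illustration, here's a minimal sketch (my own, not from the article, assuming a Linux box with the POSIX realtime APIs, a PREEMPT_RT or at least CONFIG_PREEMPT kernel, and sufficient privileges) of what such an RT application typically does first: pin itself to a fixed SCHED_FIFO priority and lock its memory so page faults cannot add unbounded latency:

```c
/* Minimal sketch, not from this thread: a Linux task requesting a fixed
 * real-time priority and locking its memory. Assumes a PREEMPT_RT (or at
 * least CONFIG_PREEMPT) kernel and CAP_SYS_NICE/root privileges.
 * Build: gcc -O2 -o rt_task rt_task.c */
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 80;                 /* 1..99; higher preempts lower */

    /* SCHED_FIFO: run until we block or a higher-priority task becomes runnable */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    /* Lock current and future pages into RAM so page faults can't stall us */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }

    /* ... the periodic control / audio / sensor loop would go here ... */
    puts("running with SCHED_FIFO priority 80");
    return 0;
}
```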



              • #17
                Originally posted by mppix View Post
                You'd use RT kernels for all applications where missing an interrupt may cost many dollars or lives.
                Typical examples are professional audio/video, real-time control, robotics, and autonomous driving/drones.
                I think the issue isn't missing interrupts, but rather failing to meet strict latency, execution-time, or priority guarantees.

                Originally posted by mppix View Post
                It is kind of ridiculous that Nvidia tries to push their GPUs into the autonomous driving market, yet their driver cannot run on an RT kernel.
                Linux isn't and never will be certified for self-driving applications, no matter what kernel patches it's using.

                However, your point about Nvidia pitching their Jetson boards for things like drones and robotics is well made, as Linux (+ some set of patches) is probably adequate for small drones and robots operating in non-safety-critical settings.



                • #18
                  Originally posted by coder View Post
                  I think the issue isn't missing interrupts, but rather failing to meet strict latency, execution-time, or priority guarantees.
                  This is mostly the same thing, at least for the RT applications I work with. You'll miss interrupts if you don't keep execution time in check, and then you'll violate latency requirements. A lack of (correct) priorities is an excellent way to miss interrupts.
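
                  As an aside, here is a hypothetical sketch (mine, not anything from this thread, in the spirit of cyclictest) of how you'd check whether a periodic task keeps its latency budget; a wakeup that arrives later than the budget allows is exactly the kind of missed deadline described above:

```c
/* Hypothetical sketch, in the spirit of cyclictest: measure how late a
 * periodic task wakes up relative to its absolute deadline. A wakeup
 * later than the cycle's budget means the deadline was missed.
 * Build: gcc -O2 -o latency latency.c */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define PERIOD_NS    1000000LL        /* 1 ms cycle */
#define NSEC_PER_SEC 1000000000LL

static int64_t diff_ns(struct timespec a, struct timespec b)
{
    return (a.tv_sec - b.tv_sec) * NSEC_PER_SEC + (a.tv_nsec - b.tv_nsec);
}

int main(void)
{
    struct timespec next, now;
    int64_t worst = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 1000; i++) {
        /* advance the absolute deadline by one period */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= NSEC_PER_SEC) {
            next.tv_nsec -= NSEC_PER_SEC;
            next.tv_sec++;
        }

        /* sleep until the deadline, then see how late we actually woke up */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);

        int64_t late = diff_ns(now, next);
        if (late > worst)
            worst = late;
    }
    printf("worst-case wakeup latency: %lld ns\n", (long long)worst);
    return 0;
}
```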

                  Originally posted by coder View Post
                  Linux isn't and never will be certified for self-driving applications, no matter what kernel patches it's using.
                  (1) why will Linux never be certified?
                  (2) who certifies an OS for autonomous driving?
                  (3) out of curiosity, what is Tesla using on their SoC? (edit: I have a guess: https://www.phoronix.com/scan.php?pa...-In-Linux-5.18)

                  Originally posted by coder View Post
                  However, your point about Nvidia pitching their Jetson boards for things like drones and robotics is well made, as Linux (+ some set of patches) is probably adequate for small drones and robots operating in non-safety-critical settings.
                  I am pretty sure this viewpoint is quite close to how Sun looked at Linux ~25y ago and how Nokia looked at Android ~15y ago.
                  Last edited by mppix; 24 March 2022, 08:54 PM.



                  • #19
                    Originally posted by Stefem View Post

                    Sure, a big multinational company stops a big project that required a lot of effort because of a meme on Twitter, how likely...
                    Some people really enjoy living in and crafting their own reality as if it were Minecraft (not the RTX version of course, too realistic)
                    I agree that I of course oversimplified there, but it still points to Daniel Vetter (upstream DRM subsystem co-maintainer) giving nVidia a hard time behind the scenes, to the point that they obviously abandoned their already publicly announced plan to do a big open-source drop.

                    However, since you are clearly a very smart person, I'd like to hear your version of why nVidia dropped their open-source play in the end.

                    I'm sure you have the right answer, so please don't let us wait for too long...



                    • #20
                      Originally posted by coder View Post
                      Linux isn't and never will be certified for self-driving applications, no matter what kernel patches it's using.
                      You sure about that?

                      AFAIR, there was a case study conducted where a hard RT Linux kernel never failed to meet its deadlines over a reasonably long period of time.

                      Of course that doesn't mean it will be instantly certified, but it at least suggests that the potential for safety-critical use definitely exists...

