An Introduction To Intel's Tremont Microarchitecture


  • #41
    Originally posted by duby229 View Post

    Then I guess it might be surprising to you that AMDs GPUs have many times more execution units than nVidias...
    What? No, they don't. The RTX 2080 Ti has 4352 CUDA cores; the Radeon VII has 3840 stream processors. GFLOPS-wise they are about the same, since each stream processor or CUDA core can do 2 FLOPs per cycle.
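    For reference, the back-of-the-envelope math behind "GFLOPS-wise they are the same" can be sketched like this (the clock speeds are approximate published boost/peak clocks, an assumption on my part, not measured values):

```python
# Peak FP32 throughput = shader count x 2 FLOPs/cycle (one FMA) x clock (GHz).
def peak_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

# Approximate public boost/peak clocks (assumptions, not measurements):
print(round(peak_gflops(4352, 1.545)))  # RTX 2080 Ti -> ~13448 GFLOPS
print(round(peak_gflops(3840, 1.800)))  # Radeon VII  -> ~13824 GFLOPS
```

    Within a few percent, the two cards land at the same peak FP32 number despite the different shader counts, because the clocks differ in the opposite direction.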

    Comment


    • #42
      Originally posted by c117152 View Post
      What's up with that decoder's width?
      In typical usage, it's half that width.

      At a branch instruction, it runs both paths through the decoders and then parks the one that turns out wrong: it assumes the branch will eventually be taken, at which point it can just switch decoders and keep on running, like a relay race.

      Comment


      • #43
        Originally posted by sandy8925 View Post
        True, but multiple cores are very useful. For example, on my Nexus 6, for some idiotic reason, cores are shut down as battery charge level decreases. So when the battery reaches 75% and less, 2 cores are turned off and only 2 are available. There's a huge drop in performance and responsiveness.

        It turns out, that multiple cores able to run multiple processes/threads in parallel can significantly boost responsiveness - who knew?
        FYI: Android is not a real-time OS by any stretch of the imagination, and it usually doesn't even use the soft-realtime features of the Linux kernel (so it won't just interrupt its processing when high-priority input arrives). It's running 90% bloat, the CPU schedulers in the default firmware were written by hitting the keyboard with a fist multiple times without looking at the screen, and so on and so forth.

        Really, you can't use that as a reason to "add moar cores".
        Last edited by starshipeleven; 25 October 2019, 09:15 AM.

        Comment


        • #44
          Originally posted by Alex/AT View Post
          Design target: single-thread performance.
          Someone tell them it's 2019 already.
          FYI: single-threaded performance is still very much a thing in 2019, especially for a weak core.

          Comment


          • #45
            Originally posted by Alex/AT View Post
            Fortunately, the typical number of different tasks running on modern general purpose CPU is more than one.
            It's 2019. DOS and likes are way in the past.
            You are confusing multithreading with multiprogramming.

            DOS is monoprogramming: it runs a SINGLE process until it finishes and releases control back to the "OS".

            This hardware is most likely going to run a multiprogramming OS of some kind, where multiple processes are allocated time slices so they can run "together" without any one of them taking exclusive control of the CPU.
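            A toy Python sketch of the distinction (names and structure are mine, purely illustrative): threads share one process's memory, while separate processes are independently scheduled by the kernel and must communicate explicitly:

```python
import multiprocessing as mp
import threading

def run_threads(n):
    # Multithreading: n threads inside ONE process, sharing its memory,
    # so they can all append to the same list directly.
    results = []
    threads = [threading.Thread(target=results.append, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)

def _worker(q, i):
    q.put(i)  # separate address space: results must travel through a queue

def run_processes(n):
    # Multiprogramming: n independent OS processes, each time-sliced by the
    # kernel's scheduler, none of them holding the CPU exclusively.
    q = mp.Queue()
    procs = [mp.Process(target=_worker, args=(q, i)) for i in range(n)]
    for p in procs:
        p.start()
    out = [q.get() for _ in range(n)]
    for p in procs:
        p.join()
    return sorted(out)
```

            Either call returns the same values, but only the second one involves the kernel scheduling several distinct processes, which is the multiprogramming case being discussed.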
            Last edited by starshipeleven; 25 October 2019, 09:14 AM.

            Comment


            • #46
              Originally posted by sandy8925 View Post
              Actually, it is. When you have multiple cores/processors, you're actually running things in parallel. Not just providing the appearance of running things in parallel. It does make a big difference as far as responsiveness.
              Responsiveness is a matter of effective process scheduling. Also note that on most multicore systems you are still running far more processes than you have cores, so there is still a BIG component of process scheduling and "appearance of running in parallel".
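              A rough, Linux-only sketch of that mismatch (it assumes a /proc filesystem; numeric entries there are PIDs):

```python
import os

# Hardware parallelism vs. runnable work: there are normally far more
# processes alive than CPU cores, so the scheduler still multiplexes.
cores = os.cpu_count()
processes = sum(1 for name in os.listdir("/proc") if name.isdigit())
print(f"{cores} cores, {processes} processes: scheduling still matters")
```

              On a typical desktop the process count is in the hundreds against a handful of cores, which is the point being made.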

              Comment


              • #47
                Originally posted by uid313 View Post
                I don't know about compiling, but aren't ARM processors really good for video decoding considering all phones and tablets that are used for video decoding with very little power usage?
                No, they are not. Phones and tablets have dedicated decode-acceleration hardware and offload media decoding to it.

                Without hardware decode, most ARM devices can't play back more than 720p video.
                Last edited by starshipeleven; 25 October 2019, 12:07 PM.

                Comment


                • #48
                  Originally posted by starshipeleven View Post
                  FYI: Android is not a real-time OS by any stretch of the imagination, and it usually doesn't even use the soft-realtime features of the Linux kernel (so it won't just interrupt its processing when high-priority input arrives). It's running 90% bloat, the CPU schedulers in the default firmware were written by hitting the keyboard with a fist multiple times without looking at the screen, and so on and so forth.

                  Really, you can't use that as a reason to "add moar cores".
                  Realtime isn't actually useful there, except potentially for phone calls and other kinds of audio and video calls. It's not some kind of nuclear-reactor safety and monitoring use case.

                  Comment


                  • #49
                    Originally posted by archsway View Post

                    It turns out, that Android is extremely bloated with far too many background processes doing nothing useful and needs many cores to be responsive - who knew?

                    That's called the RT patchset.

                    Have you heard of zram?

                    "That chip over there is the OTG USB controller (with buggy drivers that cause kernel panics), and right next to it is what we call the 'Ok' chip. We plan to put an 'Alexa' chip on next year's version."

                    Because even the app launcher uses 300MB of RAM.

                    But obviously only the ones not sending tracking data to Google.

                    Idle processes != bloat. The laptop on which I'm typing this has 239 processes - most of the CPU usage is probably from Chromium here.

                    Also, it's not that Android itself is bloated - it's actually the apps, which love running code in the background all the time for no reason, due to really bad code. (I know because I'm an Android app developer - a lot of apps are pretty badly written.) Unfortunately, Android only started clamping down on this with Android 8.0, so before that, as long as there was enough memory available, apps ran rogue in the background, using up CPU time that should have gone to the foreground apps. Of course, Android does share some blame for not reducing the process and I/O priority of background work.

                    The RT patchset isn't really that important here - realtime is only relevant in the context of processing touch input or phone calls.

                    Yes, zram is nice, but if apps keep asking for more and more memory, zram just won't be enough. And no matter how fast and fancy storage gets, it just isn't fast enough to use as swap. Android and iOS don't use swap and will kill background apps instead, which is the sensible approach. Also, there's no separate "Ok Google" chip and "Alexa" chip - they'd use the same chip for whatever hotword needs to be detected.

                    "Because even the app launcher uses 300MB of RAM." Where did you get that figure? And what do you mean by "the app launcher"? Google stopped caring about the AOSP launcher long ago; they just ship their Google/Pixel launcher on their devices (and with select partners). Others, like Samsung and OnePlus, make their own launchers. And yeah, Samsung's launchers suck (just like the rest of their custom changes to Android).

                    Hahaha - as if there are no buggy drivers or software on desktop Linux, on servers, and on many other critical devices we depend on. Yeah, the drivers suck, they're mostly closed source, and we can't do shit about them - which is exactly why open source drivers and freely available hardware documentation are important.

                    Comment


                    • #50
                      Originally posted by sandy8925 View Post
                      Realtime isn't actually useful there, except potentially for phone calls and other kinds of audio and video calls. It's not some kind of nuclear-reactor safety and monitoring use case.
                      Realtime is not safety.

                      Safety is reliability, and for that you need realtime AND something else. While you can't have a non-realtime system certified as "safe", being realtime does not automatically make a system "safe".

                      Realtime per se is just a more extreme form of process scheduling: the CPU receives hardware interrupts telling it that some input signals must be processed RIGHT NOW, and the OS blocks execution of everything else to handle them.
                      Windows claims to be realtime, too.

                      Soft-realtime in Linux is, again, process scheduling (https://people.mpi-sws.org/~bbb/pape...s/ospert13.pdf), and it is commonly used to increase user responsiveness or to run the JACK audio server with minimum jitter.
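                      For instance, Linux exposes its soft-realtime classes through the ordinary scheduling API. A minimal sketch (assuming Linux; switching to SCHED_FIFO needs CAP_SYS_NICE or root, so the call is guarded):

```python
import os

# SCHED_FIFO/SCHED_RR tasks preempt all normal SCHED_OTHER tasks - "a more
# extreme form of process scheduling", not a hard-realtime guarantee.
fifo_max = os.sched_get_priority_max(os.SCHED_FIFO)    # typically 99 on Linux
other_max = os.sched_get_priority_max(os.SCHED_OTHER)  # always 0: no RT priority

try:
    # Roughly what e.g. the JACK audio server does for its processing thread.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(fifo_max))
except PermissionError:
    print("need CAP_SYS_NICE (or root) to switch to SCHED_FIFO")
```

                      The normal class has no realtime priorities at all (its max is 0), which is why an unprivileged Android process can't simply demand to run "RIGHT NOW".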
                      Last edited by starshipeleven; 25 October 2019, 11:05 AM.

                      Comment
