Intel Formally Announces Iris Xe MAX Graphics, Deep Link


  • #11
    Originally posted by nxij View Post
    Would be nice to have one of these as a little standalone PCI-E card with passive cooling
    Yes, that would be a wish of mine, too, albeit for different use cases than yours.

    I have to say, though, that their August announcement of HDMI 2.0 only (no HDMI 2.1) does rule out some interesting use cases.



    • #12
      So Xi (Jinping) will get Xe first? They should hire Xuxa as the official spokeswoman.
      Last edited by torsionbar28; 31 October 2020, 08:14 PM.



      • #13
        Originally posted by torsionbar28 View Post
        So Xi (Jinping) will get Xe first? They should hire Xuxa as the official spokeswoman.
        I don't know, man, her efforts in this area were not that great...



        • #14
          Originally posted by tildearrow View Post

          They may not have 7nm yet, but at least they have 4:4:4 encoding and a stable open-source driver.
          <dang it> Made me snort my beer... and bubbles and Guinness kinda burn.



          • #15
            Originally posted by Caffarius View Post

            It allows you to use them both for some compute or media conversion workloads. For graphics, they use the software stack to determine whether a game should run on the iGPU or the dGPU (because the iGPU in Tiger Lake is faster at some games). Sounds like it may function similarly to Optimus under the covers. Anandtech has a deeper dive on the distinctions.
            Ahhh... kinda like context switching. Workload A is this... you go here. Workload B... you go over there. Hmmm, it does open some interesting possibilities though. You could have one GPU do some ad hoc DSP/AI stuff while the other, more performant GPU does heavy rendering.

            Or... maybe this. Maybe you have a game where you have a 3D-rendered headset on. Or perhaps it's an Oculus-type head mount. You see HD-to-4K rendered 3D in your virtual headset or Oculus, and let's say the game calls for a real-time PiP of actual video. One GPU handles all the heavy 3D while the other handles the video.

            Just spitballing here.....
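
            Just to make that split concrete, here's a rough, purely illustrative PyOpenCL sketch of the "workload A goes here, workload B goes there" idea: hand-pick two GPU devices and give each its own queue. Deep Link's real scheduling lives inside Intel's driver/software stack, and the device ordering and light/heavy split below are assumptions.

            ```python
            # Purely illustrative: pick two GPUs and give each workload class its own queue.
            # Deep Link's actual dispatch is handled by Intel's software stack, not user code.
            import pyopencl as cl

            # Enumerate every GPU the OpenCL runtime can see (iGPU and dGPU included).
            gpus = [d for p in cl.get_platforms()
                    for d in p.get_devices(device_type=cl.device_type.GPU)]
            assert len(gpus) >= 2, "need both an iGPU and a dGPU visible to OpenCL"

            # Assumed split: first device gets light DSP/AI work, second gets heavy rendering.
            igpu, dgpu = gpus[0], gpus[1]
            light_queue = cl.CommandQueue(cl.Context(devices=[igpu]))
            heavy_queue = cl.CommandQueue(cl.Context(devices=[dgpu]))

            print(f"light workloads -> {igpu.name}")
            print(f"heavy workloads -> {dgpu.name}")
            ```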



            • #16
              Originally posted by Jumbotron View Post
              So Deep Link is more or less AMD CrossFire, or at least the fabled version of CrossFire that was supposed to tie together an AMD APU's iGPU and an AMD dGPU.

              Interesting stuff...but this is a classic Intel paper launch, which is why there was no heads up. It's Intel's Marketing Dept going "HEY..HEY...LOOK...WE'RE STILL INNOVATING. WE'RE STILL RELEVANT!! <what...no...we're not commenting on 10nm, much less our roadmap to 7nm. no...we're not commenting either on having to sell off our entire NAND division...nor our modem division...nor our mobile CPU division...nor our Edison project. and no...launching GPUs designed by half of AMD's graphics engineering department doesn't mean Larrabee/Knights Landing was an abysmal failure. yes...we did shut down that department as well...but...>"
              It also seems more focused on video encoding and editing. Most likely game streaming, video editing, maybe adding fancy effects in real time (detect a person's face and add a bowler hat on top).
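
              Just to illustrate that kind of real-time effect (nothing Intel- or Deep Link-specific here; the webcam index is a placeholder), a minimal face-detect-and-draw sketch with OpenCV would look roughly like:

              ```python
              # Minimal sketch of "detect a face, draw something on top of it".
              # Plain OpenCV, purely illustrative of the kind of effect described above.
              import cv2

              # Stock Haar cascade that ships with OpenCV.
              face_cascade = cv2.CascadeClassifier(
                  cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

              cap = cv2.VideoCapture(0)  # default webcam (placeholder index)
              while True:
                  ok, frame = cap.read()
                  if not ok:
                      break
                  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                  for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                      # Stand-in for the "bowler hat": a filled box above each face.
                      cv2.rectangle(frame, (x, max(y - h // 2, 0)), (x + w, y), (0, 0, 0), -1)
                  cv2.imshow("effect", frame)
                  if cv2.waitKey(1) & 0xFF == ord("q"):
                      break
              cap.release()
              cv2.destroyAllWindows()
              ```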



              • #17
                Originally posted by sandy8925 View Post

                It also seems more focused on video encoding and editing. Most likely game streaming, video editing, maybe adding fancy effects in real time (detect a person's face and add a bowler hat on top).
                You know...in Intel's defense...their focus on video encoding/decoding in the CPU, as opposed to forcing someone to purchase an add-in card with an FPGA, is commendable. Particularly since you can't really plug an FPGA into a laptop just yet.

                I am curious to know if that focus came about from Intel's partnership with Apple over all those years?



                • #18
                  Originally posted by Jumbotron View Post
                  You know...in Intel's defense...their focus on video encoding/decoding in the CPU, as opposed to forcing someone to purchase an add-in card with an FPGA, is commendable. Particularly since you can't really plug an FPGA into a laptop just yet.

                  I am curious to know if that focus came about from Intel's partnership with Apple over all those years?
                  Why do laptops need encode? Using a laptop for content creation is doing it wrong. I made that mistake once: the scorching heat, the migraine-inducing fan noise, the horrible ergonomics of a keyboard hinged to the display, no thanks!



                  • #19
                    Originally posted by torsionbar28 View Post
                    Why do laptops need encode? Using a laptop for content creation is doing it wrong. I made that mistake once: the scorching heat, the migraine-inducing fan noise, the horrible ergonomics of a keyboard hinged to the display, no thanks!
                    Uhhhh... you've never used a MacBook with Final Cut and Logic Pro out in the field, I presume. I have, during my TV and production days. There are no other options in Windowsland or LinuxWorld that even come close, even now.

                    And I never experienced scorching heat or migraines from fan noise. I did experience those things, however, due to production locations and overzealous producers and technical directors, but that's another story for another time.



                    • #20
                      Originally posted by MadeUpName View Post
                      Neither NVidia nor AMD has really talked up their hardware encoding on the new-generation boards yet, so we don't know if this is just carried over from the previous generation or if they have brought any improvements in IQ or encode speed.
                      I don't know about "talking up", but the info is all here:

                      According to that, Ampere's NVENC engine is the same generation as Turing's and is still lacking AV1 support. However, its NVDEC is new and supports 10-bit AV1 decode.

                      Their performance graphs are here, but lack any data on Ampere cards. However, if you expand the last section, they claim AV1 decode up to 8k x 8k (at what framerate is unspecified).
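
                      If you have an Ampere card handy, a quick way to poke at that hardware AV1 decode path is to push an AV1 clip through ffmpeg's NVDEC/CUDA hwaccel and discard the output. This assumes an ffmpeg build with NVDEC support; the file name is a placeholder.

                      ```python
                      # Rough check of hardware AV1 decode via ffmpeg's NVDEC/CUDA path.
                      # Assumes an ffmpeg build with NVDEC support and an AV1 sample clip.
                      import subprocess

                      subprocess.run([
                          "ffmpeg",
                          "-hwaccel", "cuda",      # decode on the GPU (NVDEC) where possible
                          "-i", "sample_av1.mkv",  # placeholder input clip
                          "-f", "null", "-",       # decode only; discard the frames
                      ], check=True)
                      ```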



                      Originally posted by MadeUpName View Post
                      More competition is good, but I think they are going to get steamrolled by AMD.
                      Not in video encoding. I've used Intel's QuickSync video encoding capabilities in CPUs since Haswell, and it's very mature. Intel already has a lead here. It's only Nvidia they have to worry about.
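
                      For reference, a bare-bones QuickSync encode through ffmpeg looks something like this (assumes an ffmpeg build with QSV support and an Intel iGPU/dGPU exposed to it; file names are placeholders):

                      ```python
                      # Bare-bones QuickSync (QSV) H.264 encode through ffmpeg.
                      # Assumes an ffmpeg build with QSV support and an Intel GPU present.
                      import subprocess

                      subprocess.run([
                          "ffmpeg",
                          "-i", "input.mp4",         # placeholder source clip
                          "-c:v", "h264_qsv",        # Intel QuickSync H.264 encoder
                          "-global_quality", "23",   # ICQ-style quality target
                          "output_qsv.mp4",
                      ], check=True)
                      ```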

