Intel Begins Teasing Their Discrete Graphics Card


  • Originally posted by creative View Post
    Maybe later than 2020 then, no idea of how long a first development cycle for new tech like this will take.
    Do you mean their GPU in general, or are you still hoping for an x86-64 based GPU? It seems clear to me that they've put an end to the Larrabee lineage. Here's more:

    To a certain extent, the “Knights” family of parallel processors, sold under the brand name Xeon Phi, by Intel were exactly what they were supposed to be:



    Originally posted by creative View Post
    iGPUs, hmmm... Maybe their plan is to strap existing iGPU tech to a board; without the thermal contribution of a regular CPU, such a thing might travel quite far.
    I can definitely say the SIMD units in their HD Graphics GPUs are too narrow (2x 4-wide SIMDs per EU, with 24 EUs per desktop iGPU). Otherwise, I can imagine a lot of reuse.
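    For a sense of scale, here's a rough back-of-the-envelope peak-throughput calculation for such a part; the ~1.15 GHz boost clock is my assumption for a typical desktop GT2 iGPU, so treat the result as approximate:

    # Rough peak FP32 throughput of a 24-EU Intel iGPU (clock speed is an assumed ~1.15 GHz boost)
    eus = 24               # execution units in a desktop GT2 part
    simds_per_eu = 2       # two 4-wide SIMD FPUs per EU
    lanes = 4              # FP32 lanes per SIMD
    flops_per_fma = 2      # a fused multiply-add counts as 2 FLOPs
    clock_ghz = 1.15       # assumed boost clock
    print(f"~{eus * simds_per_eu * lanes * flops_per_fma * clock_ghz:.0f} GFLOPS FP32 peak")  # ~442

    Call it roughly 0.44 TFLOPS FP32; a midrange discrete card is an order of magnitude beyond that.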

    That said, I think its ISA dates back more than a decade, to the venerable i915 chipset (ever notice that's still the driver it's using?), so they might feel there are significant wins from starting with a clean slate.

    Originally posted by creative View Post
    it could very well never see the light of day.
    Again, what? Larrabee did see the light of day, in the form of 2 generations of full Xeon Phi. It just lacked the graphics capability that was originally planned for Larrabee, instead going after just the HPC market.

    Their new initiative seems quite serious. They created an entire division of the company to work on dGPUs (and related products). That's a big deal. This being Intel, I think it will definitely launch, and probably more or less on schedule (Intel's schedule slippage mostly seems caused by delays in deployment of new manufacturing nodes).



    • coder Backtrack to the end of comment section 10. Sometimes when stuff gets mentioned, I do research and it can lead to other places. As for your first question: yes, it was pertaining to their new generation of GPUs.

      And pertaining to your statement about the SIMD units: yes, I agree with your thoughts.

      Pertaining to your last section on Larrabee, second paragraph: yes, your last statements do seem to be the case.

      Sorry for the confusion.
      Last edited by creative; 18 August 2018, 03:58 PM.



      • Originally posted by juanrga View Post
        Fujitsu's line of supercomputers built around general-purpose CPUs says otherwise. The K computer was very good and continues to be good despite being outdated.
        The top entry in the Green 500 gets 22x as many GFLOPS/W.


        https://en.wikipedia.org/wiki/K_comp...er_consumption

        Annoyingly, Green 500 doesn't seem to specify what GPUs or other accelerators are used. I guess you can sort of infer from the compiler and math library.
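        For anyone curious where that 22x comes from, here's the back-of-the-envelope version; the figures are the commonly cited ones (roughly 10.5 PFLOPS of Linpack at about 12.7 MW for the K computer, and roughly 18 GFLOPS/W for the mid-2018 Green 500 leader), so treat them as approximate:

        # Rough efficiency comparison, using approximate publicly reported figures
        k_linpack_gflops = 10.51e6   # K computer: ~10.51 PFLOPS Linpack
        k_power_watts = 12.66e6      # ~12.66 MW
        k_eff = k_linpack_gflops / k_power_watts   # ~0.83 GFLOPS/W
        green500_top_eff = 18.4                    # ~18.4 GFLOPS/W for the 2018 leader
        print(f"K computer: {k_eff:.2f} GFLOPS/W, ratio ~{green500_top_eff / k_eff:.0f}x")  # ~22x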

        Originally posted by juanrga View Post
        The post-K promises to be the first exascale supercomputer, and it is built around ARM CPUs with 512bit SIMD units.
        Good luck to them. I will be interested in seeing how that turns out.



        • coder No, I am no longer hoping for an x86-based GPU, now that I am semi up to speed with the info. What I am now hoping for is that they release something competitive with AMD's and Nvidia's lines of midrange GPUs. Even something with the performance of an RX 580 or GTX 1060, generation for generation, would be very welcome. I would opt for that as my GPU further down the road, once GPU tech has long eclipsed my GTX 1070. I generally use a GPU for about seven years before upgrading.

          Honestly, this is all too far in the future for me. I'm not going to just ditch something unless it decides to croak. My GTS 450 still works, lol, and that's a crappy Galaxy card.
          Last edited by creative; 18 August 2018, 04:29 PM.



          • Originally posted by creative View Post
            I generally use a GPU for about seven years before upgrading.
            The best strategy is usually to wait until you need to replace tech (either due to a change in needs or HW failure) before upgrading. However, if you can get 7 years out of a GPU, then your needs are probably rather modest. 7 years is worth about an order of magnitude, in terms of GPU performance.
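            Back-of-the-envelope, assuming GPU throughput roughly doubles every two years (an assumption, not a law):

            # 7 years of doubling every ~2 years
            years, doubling_period = 7, 2
            print(f"~{2 ** (years / doubling_period):.0f}x")   # ~11x, i.e. about an order of magnitude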

            Originally posted by creative View Post
            Honestly this is all too far in the future for me.
            Well, 2020 is only about 4 years after your 1070 launched, so maybe not far enough? It's not as if you need to make any decision, since we don't have specs, pricing, or any clue what AMD and Nvidia's counter-offerings will look like. Nvidia will almost certainly be launching the post-Turing generation, around then.

            IMO, it's not too far to talk about, and that's all we're doing. Now, were we to talk about what GPUs will look like in 2030 - that would be too far.

            Originally posted by creative View Post
            My GTS 450 still works, lol, and that's a crappy Galaxy card.
            It's > 100 W and probably less than 2x as fast as Intel or AMD iGPUs. Using such a card can't be justified, unless it's in a rarely-used system. For such purposes, I have a couple low power GPUs (HD 5450 and now a RX 550).



            • coder

              What sparked my upgrade process was a failing power supply (noisy exhaust fan in it) on my old FX 8320 build, not too long after I bought a GTX 1050 Ti.

              So then I went to get a nicer 650 W unit. For some reason I became annoyed and then realized my rig was old. Bought a 65 W i7 7700, a Z270 board and everything. Then I was thinking, "I have an i7, maybe I should really utilize it and take advantage of it, since I really like to game." I had never bought a higher-end card; it was always budget to midrange. To me a GTX 1070 is high end.

              I run games on really high/ultra settings as much as possible, but if it starts to choke badly on high/ultra settings I will start turning stuff down. I am not that picky, but damn have I enjoyed having a higher-end card. Sometimes I turn stuff down because some of the graphics settings annoy the hell out of me, like bloom in Rise of the Tomb Raider. I can't stand a blurry-looking game.

              Still waiting to reuse my Phenom II 945 or my 8320. The 970A is not really well suited to an FX 8320 chip, and I had it undervolted. I just need to buy a power supply and an extra SSD, and I will be able to build a secondary DAW for dedicated audio DSP, for some sound design I want to experiment with.
              Last edited by creative; 18 August 2018, 09:03 PM.



              • Originally posted by oiaohm View Post
                Each core is pure in-order single instruction CISC.
                I'm not sure that was ever true. Knight's Corner was based on the Pentium P54C, which was a 2-way superscalar, in-order architecture. It was modified to include 4-way hyper-threading and 512-bit SIMD instructions (not AVX-512).

                Originally posted by oiaohm View Post
                https://software.intel.com/en-us/for...e/topic/603106

                Each thread being processed in a Xeon Phi is about 8 times slower than if it was in a standard Xeon core at the same clock speed.
                Of course, they're talking about Knight's Corner - 1st gen Xeon Phi, launched 6 years ago.

                Originally posted by oiaohm View Post
                In fact an Atom CPU core can run rings around the core in a Xeon Phi.
                So funny you should say that, since the 2nd generation was actually built on the foundation of Silvermont "Atom" cores. These were dual-issue, out-of-order cores, again modified to support 4-way HT and (this time) AVX-512.
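                To put rough numbers on what those modified cores add up to (SKU details are from memory, e.g. the 68-core 7250, so treat this as approximate):

                # Approximate peak FP64 throughput of a 2nd-gen Xeon Phi (Knights Landing)
                cores = 68            # e.g. the 7250 SKU
                vpus_per_core = 2     # two AVX-512 vector units per core
                fp64_lanes = 8        # 512-bit vectors / 64-bit doubles
                flops_per_fma = 2     # a fused multiply-add counts as 2 FLOPs
                clock_ghz = 1.4       # approximate AVX clock
                print(f"~{cores * vpus_per_core * fp64_lanes * flops_per_fma * clock_ghz / 1000:.1f} TFLOPS FP64")  # ~3.0

                Hardly the sort of throughput you'd expect from cores an Atom supposedly "runs rings around".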

                That's the problem with posting about stuff you don't know, and just searching as you go.

                Originally posted by oiaohm View Post
                Xeon Phi shows that an x86 processor can be built with a very short pipeline, but Xeon Phi really does desperately need some smarts, like reading and processing more than one instruction at a time.
                This post shows that oiaohm desperately needs to read more and post less.

                Originally posted by oiaohm View Post
                ...show that a general CPU should be able to keep up with a graphics card; the only catch is it cannot be what Intel did with the Xeon Phi. RISC-V prototypes show it can be done.
                Which? When? On what sort of workload? I'll believe it when I see the data.



                • Originally posted by coder View Post
                  I'm not sure that was ever true. Knight's Corner was based on the Pentium P54C, which was a 2-way superscalar, in-order architecture. It was modified to include 4-way hyper-threading and 512-bit SIMD instructions (not AVX-512).
                  The Pentium P54C had a longer pipeline than the cores in the first-generation Xeon Phi. You don't cut steps out of a pipeline without redesigning it and losing features along the way. The problem is that "based on" is highly deceptive. Instead, you need to look at which features of the P54C were left after the Intel developers stopped cutting for the Xeon Phi cores. Big things like processing multiple instructions per pipeline cycle were gone.
                  Originally posted by coder View Post
                  So funny you should say that, since the 2nd generation was actually built on the foundation of Silvermont "Atom" cores. These were dual-issue, out-of-order cores, again modified to support 4-way HT and (this time) AVX-512.
                  Again, the pipeline in the 2nd-generation Xeon Phi is shorter than in the Silvermont "Atom" cores. Features that boost performance are gone again.

                  Originally posted by coder View Post
                  Which? When? On what sort of workload? I'll believe it when I see the data.
                  Presentation by Tony Brewer at Micron Technology on May 9, 2018 at the RISC-V Workshop in Barcelona, hosted by Barcelona Supercomputing Center and Universita...

                  There are many RISC-V prototypes like the one above, with a general RISC-V instruction set, outrunning your GPU cards. Once you stop cutting the performance-boosting features out of the cores and focus on being memory-efficient, it's quite simple for a general-instruction-set CPU to kick the living heck out of a graphics card. Mostly that's because GPUs are designed for a massive volume of processing with very little on-GPU optimisation, so when competing against a system that can optimise on the fly based on on-chip conditions, a GPU is not that great.

                  It is in fact possible to be faster and use less power than a GPU using a RISC-V instruction set and modern designs. The RISC-V prototypes can cleanly beat Xeon Phi. They also make you question x86 for server work, as they cleanly beat Xeon chips as well. To catch back up, Intel will need to do some major redesigns.



                  • Originally posted by oiaohm View Post
                    Instead, you need to look at which features of the P54C were left after the Intel developers stopped cutting for the Xeon Phi cores. Big things like processing multiple instructions per pipeline cycle were gone.
                    Where did you read that?

                    Originally posted by oiaohm View Post
                    Again, the pipeline in the 2nd-generation Xeon Phi is shorter than in the Silvermont "Atom" cores. Features that boost performance are gone again.
                    And that?

                    He's benchmarking a very specific problem, and the GPU code might not have been optimal. It would be a mistake to overgeneralize from this one example.

                    Anyway, their design reminds me a lot of AMD's GCN.



                    • Originally posted by coder View Post
                      Where did you read that?

                      And that?
                      Both of those come from having access to the real Xeon Phi cards, the real Silvermont Atom, and the real P54C, then seeing that single-thread samples on that hardware were slower on the Xeon Phi. Then you start going through the specs very carefully to see what they cut out.

                      Originally posted by coder View Post
                      He's benchmarking a very specific problem, and the GPU code might not have been optimal. It would be a mistake to overgeneralize from this one example.

                      Anyway, their design reminds me a lot of AMD's GCN.
                      The problem is that there is more than one benchmark example of these RISC-V systems keeping up. Yes, you are right, it is a lot like the AMD GCN method of doing cache.

                      Remember, the AMD and Nvidia GPU instruction sets are intentionally missing the instructions that would let code directly allocate memory, for security reasons. Having a general instruction set instead of a GPU instruction set does make a lot of different kinds of processing a lot more optimal to do. There is really no particular reason why a general CPU based around a compact instruction set like RISC-V could not have a GPU-style architecture layout, and for a lot of compute processing this makes more sense than using a GPU. A GPU instruction set for compute processing is very much a round peg in a square hole.

                      Interestingly enough, RISC-V has a higher instruction density than the AMD GCN instruction sets. The same is true of Nvidia's instruction sets. There are a lot of issues with GPUs and compute workloads.

                      GPUs are not that well optimised for compute workloads, and some of that comes down to security choices.

                      Current x86 chips are not that well optimised for a massive number of threads; there is way too much focus on single-thread speed.

