Intel Doing Discrete Graphics Cards!

  • Intel Doing Discrete Graphics Cards!

    We are focused on developing discrete graphics products based on a many-core architecture targeting high-end client platforms.
    http://www.intel.com/jobs/careers/visualcomputing/

    This should be interesting with discrete Intel graphics cards and open-source Linux drivers!
    Michael Larabel
    http://www.michaellarabel.com/

  • #2
    This is great news. It'll put pressure on the other discrete graphics makers to open source something. Right now I want to replace my X800... I want some AIGLX magic!

    - rmjb



    • #3
      Originally posted by rmjb View Post
      This is great news. It'll put pressure on the other discrete graphics makers to open source something. Right now I want to replace my X800... I want some AIGLX magic!

      - rmjb
      Why not switch to the open-source Radeon drivers then? You can have AIGLX with that.
      Michael Larabel
      http://www.michaellarabel.com/



      • #4
        Awesome! Hopefully they'll do for graphics what they've been doing with processors: superior performance at a superior heat/energy ratio.

        With open drivers, I'd buy it.

        Can't wait for the Phoronix reviews!



        • #5
          Excellent! Currently the Intel drivers are on the cutting edge of features, thanks to the employment of people like Keith Packard to work on them.

          If you could get an Intel card + superior features in any computer, and not just on lame HTPC motherboards and laptops, that would rule!



          • #6
            Well, one's definition of "superior features" is bound to vary. I wouldn't expect an R600-burner out of Intel anytime Real Soon. OTOH, Intel generally doesn't aim for a pile if it doesn't think it can hit the top, so Nvidia and AMD have a right to be nervous.

            I'm currently running a PowerColor x700 card with the Open Source driver under FC-6. Works fine.



            • #7
              Wow, this sure is a surprise! The dedicated GPU market definitely needs another heavy-weight competitor. Go Intel Go!



              • #8
                Originally posted by pipe13 View Post
                Well, one's definition of "superior features" is bound to vary. I wouldn't expect an R600-burner out of Intel anytime Real Soon. OTOH, Intel generally doesn't aim for a pile if it doesn't think it can hit the top, so Nvidia and AMD have a right to be nervous.

                I'm currently running a PowerColor x700 card with the Open Source driver under FC-6. Works fine.
                I don't expect an R600 burner (or a GeForce 8 burner, for that matter...), but I'd expect an R400 (NVidia GeForce 5/6) burner, possibly an R500 (GeForce 7) harrier, out of a dedicated X3000- the most critical thing is that it performs respectably well and that it's got open info and open-source drivers. I expect this to happen because UMA does nothing but drag a GPU down to its worst-case performance levels.



                • #9
                  I too am looking forward to this. This should be very welcome.

                  As for an R600 burner? I seriously doubt it too. But who cares?

                  Intel are doing some things right, however. I know clock speed doesn't matter so much, but even if Intel's designs are not quite as specialized or optimized as Nvidia's or ATI's, hopefully they can make up for it by simply cranking up the MHz.


                  Also they mention 'many-core' quite a bit, don't they?



                  What would be the effect of dropping something like 3 GMA X3000-style cores, clocked at 800 MHz-1 GHz, onto a discrete card with 256 MB of DDR4 RAM and the ability to grab additional RAM over the PCIe port? (All on the same die, probably. How much silicon does a GMA core take up vs. the previous-generation Pentiums that were made in those now-idle Intel fab plants?)

                  You'd end up with something like 24 programmable pipelines... It would be a very flexible card for a wide variety of situations, wouldn't it? And I know that companies like multicore designs sometimes because power management is effective: you just shut off the cores you don't need, but you can fire them up on demand.

                  Does something like this even make any sense?
                  Last edited by drag; 01-30-2007, 07:13 PM.



                  • #10
                    Intel Discrete GPU Roadmap Overview

                    Originally posted by VR Zone
                    Intel's Visual Computing Group (VCG) gave an interesting overview of the discrete graphics plans this week. There seems to be a few interesting developments down the pipeline that could prove quite a challenge to NVIDIA and AMD in 2 years time. As already stated on their website, the group is focused on developing advanced products based on a many-core architecture targeting high-end client platforms initially. Their first flagship product for games and graphics intensive applications is likely to happen in late 2008-09 timeframe and the GPU is based on multi-core architecture. We heard there could be as many as 16 graphics cores packed into a single die.

                    The process technology we speculate for such product is probably at 32nm judging from the timeframe. Intel clearly has the advantage of their advanced process technology since they are always at least one node ahead of their competitors and they are good in tweaking for better yield. Intel is likely use back their CPU naming convention on GPU so you could probably guess that the highest end could be called Extreme Edition and there should be mainstream and value editions. The performance? How about 16x performance of any fastest graphics card
                    http://www.vr-zone.com/?i=4605
                    Michael Larabel
                    http://www.michaellarabel.com/



                    • #11
                      We heard there could be as many as 16 graphics cores packed into a single die.


                      That's a lot of cores.

                      How complex is a current GMA X3000 core? If you shrink the process down to CPU size, how many could you pack into a current P4-sized, or maybe Core 2 Duo-sized, piece of silicon?

                      Using the X3000 core as a basis would get you 128 programmable pipelines in a 16-way core. So that's probably wrong... (me assuming that they are going to use the X3000 design fairly directly.)


                      32nm
                      I don't think so. 45nm is more likely, I figure.

                      The only thing I know about this sort of thing is that when you shrink the process of making a CPU down a step, you basically have to rebuild the entire assembly line. The whole plant. Also, at the same time you usually make the silicon wafer bigger to get higher yields per wafer.

                      So since Intel would have all this spare assembly line lying around, it would make sense to turn it over to massive multicore GPU designs. You could be cheaper about it and cut more corners than you can with CPUs, too. If you have a flaw in the chip or in the silicon wafer, you just deactivate the cores that have the flaw... so a 'pure' die would be the high end with all 16 GPU cores, with 1/3 of the die goobered up you'd have a mid-range video card with 12 cores, and with half or more of the die gone you'd have a 'low end' card with 6-8 cores.

                      So that way the video card fabrication process will always follow one generation behind the latest process used for CPUs. It will probably be around the size, power requirements, and expense of the current Core 2 Duo CPUs, if I am right.

                      These things range from 150 to about 700 dollars right now, just for the CPU. Of course the top-of-the-line CPU is incredibly overpriced. So I figure $350-500 for the entire card to start off with?

                      It's quite a competitive advantage that Intel is going to have over Nvidia. Nvidia will have to build all-new plants to move up to the next generation of fabrication... while Intel can use the old stuff, already bought and paid for by CPU sales, and still be just as advanced or more so.




                      BTW, on the Linux-Intel front...


                      Keith Packard gave a nice presentation at the Debian miniconf. I believe the following is the right one; I'm not sure, as it's been a while since I looked at it and I can't really check it right now.
                      http://mirror.linux.org.au/pub/linux...450_Debian.ogg


                      But if that's the right video, he talks a lot about 7.2 and the future direction of X.Org 7.3.

                      He also gives a nice overview of his work with Intel hardware (mostly on how it relates to X.Org 7.3 and such) and mentions Intel's intentions for Linux driver support.

                      They now do Linux driver development in-house with Keith's (and other hackers') assistance... Traditionally, Linux driver development has lagged behind Windows. However, it is now Intel's goal to ship working (and completely open-source) Linux drivers the same day the corresponding hardware ships.

                      This means, hopefully, that as soon as these things start showing up in stores, you can just buy them and they will run on Linux with the CD-ROM-supplied drivers.
                      Last edited by drag; 02-12-2007, 07:35 PM.



                      • #12
                        Originally posted by drag View Post


                        That's a lot of cores.
                        Yep, that it is. Think of SLI/Crossfire eight times over.

                        How complex is a current GMA X3000 core? If you shrink the process down to CPU size, how many could you pack into a current P4-sized, or maybe Core 2 Duo-sized, piece of silicon?
                        Probably about 16-ish. It's their most complicated chip attempted to date on the GPU front. It's been panned on the "review" sites because Intel shipped the design without full Windows-side drivers. It's got a lot of promise, and if the open-source drivers are decent (which I'm hoping they are) it'd be a decent choice as a discrete part by itself.


                        Using the X3000 core as a basis would get you 128 programmable pipelines in a 16-way core. So that's probably wrong... (me assuming that they are going to use the X3000 design fairly directly.)
                        That would be about correct. Top end cards are running 32-ish right now (G80...). I'm not QUITE sure how someone's arriving at 16x the fastest cards out right now, but I could buy a 3x-4x advantage if they could pull off the management of resources, etc. on the multicore design with an X3000 based multicore- probably with less power consumption. Right now, this is all guesswork on our part- we've no idea what the pipelines can fully do yet in the X3000 or if they're even USING that core in the multicore design. They could have a 16/32 pipe core already in the pipeline as a rollout for the baseline discrete part for all we know.



                        • #13
                          New Information: http://www.theinquirer.net/default.aspx?article=38011
                          Michael Larabel
                          http://www.michaellarabel.com/



                          • #14
                            Yep I just found that out myself.

                            Looks like the Intel graphics cards will have more RAM than my current desktop. Bizarre.

                            Also did you check out the article that one linked to?
                            http://www.theinq.com/default.aspx?article=37548

                            That's even more bizarre.

                            They are making it so that you can write x86 code for it.

                            If I am correct, it sounds like it will make 'software rendering' faster than 'hardware rendering'. And that means massive stability gains and getting new features faster than anything else possible.

                            Just for people who don't know...

                            OpenGL is a general-purpose programming API for 3D graphics applications. It's not just for games, and although it's designed to work with hardware acceleration, it's not necessary to have hardware acceleration to successfully use OpenGL programs.

                            In fact, consumer cards only accelerate a portion of the OpenGL API, just the stuff that is CPU-intensive and tends to get used in games. One of the differences between 'workstation'-class cards and 'consumer' or 'gamer'-class cards is that the workstation cards accelerate more of OpenGL, if my understanding is correct.

                            So on Linux, the DRI drivers are actually based on the Mesa software OpenGL stack. The driver programmers take this software stack and accelerate as much of the Mesa OpenGL stack as possible, given their understanding of the hardware. For anything they don't accelerate, it falls back to software.
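
                            A quick way to see which of those two paths you actually ended up on is just to ask OpenGL itself. Rough sketch below (assuming GLUT is installed to get a context up); it's the same information glxinfo prints, just stripped down:

                                /* Print which OpenGL stack is in use: a hardware DRI driver reports
                                 * the chip, while a pure software fallback typically reports Mesa's
                                 * own software rasterizer.  Assumes GLUT for the context; build with
                                 * something like "gcc glcheck.c -lglut -lGL". */
                                #include <GL/glut.h>
                                #include <stdio.h>

                                int main(int argc, char **argv)
                                {
                                    glutInit(&argc, argv);
                                    glutCreateWindow("glcheck");   /* creates and binds a GL context */

                                    printf("GL_VENDOR:   %s\n", (const char *) glGetString(GL_VENDOR));
                                    printf("GL_RENDERER: %s\n", (const char *) glGetString(GL_RENDERER));
                                    printf("GL_VERSION:  %s\n", (const char *) glGetString(GL_VERSION));
                                    return 0;
                                }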

                            But if the GPU is x86-based, it's going to take relatively little effort to port Mesa over to run on this video card.

                            And in addition to effectively making 'semi-software rendering' faster than anything coming out of ATI or Nvidia, you can port all sorts of extra stuff over to it very easily.

                            Media encoding, audio acceleration, physics acceleration, AI, scientific computing, etc., etc. Most anything that runs on x86 and can benefit from massive parallel processing.

                            For instance, OpenRT:
                            http://www.openrt.de/gallery.php


                            It's an open realtime raytracing API designed to be similar to OpenGL. What you can do with that is incredible.

                            Take this:
                            http://www.openrt.de/Applications/boeing777.php

                            "The Boeing 777 model contains roughly 350,000,000 (350 million) triangles, which arrived in a compressed (!) form on 12 CDs. The entire model to be rendered (including all triangles, BSP trees etc.), consumes roughly 30-60 GByte on disk. We render the full model, without any simplifications or approximations including pixel-accurate shadows and highlights."

                            "Currently, we use a single AMD Opteron 1.8GHz CPU. The machine is a dual-CPU. We currently get around 1-3 frames per second at 640x480 pixels on that setup, depending on the actual view. Some simple views run even faster, the 1-3 fps correspond to the images as shown above."

                            That's realtime performance. Sure, it's only 1-3 FPS, but that's 350 million triangles being rendered.


                            Could you imagine playing a game like Grand Theft Auto, but instead of only rendering the stuff that is close up, with simpler models the farther out you get until the view just cuts off, it rendered the entire GTA world, in realtime, with all the people in full detail, by tracing the rays of your view rather than drawing the models themselves and then clipping them?
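
                            To make the "one ray per pixel" idea concrete, here's a toy sketch (my own illustration, nothing to do with OpenRT's actual API). With an acceleration structure like the BSP trees mentioned in the Boeing quote, the per-frame cost grows with the number of pixels plus roughly the log of the triangle count, not linearly with the size of the world:

                                /* Toy ray caster: one primary ray per pixel against a single sphere,
                                 * written out as an ASCII PGM image ("./a.out > sphere.pgm").  A real
                                 * ray tracer walks a BSP/kd-tree over millions of triangles instead
                                 * of testing one hard-coded sphere, but the outer loop is the same. */
                                #include <stdio.h>
                                #include <math.h>

                                int main(void)
                                {
                                    const int W = 320, H = 240;
                                    const double cz = -5.0, r = 1.5;  /* sphere at (0,0,-5), radius 1.5 */

                                    printf("P2\n%d %d\n255\n", W, H); /* grayscale PGM header */
                                    for (int y = 0; y < H; y++) {
                                        for (int x = 0; x < W; x++) {
                                            /* Primary ray from the eye at the origin through pixel (x,y) */
                                            double dx = (x - W / 2.0) / H;
                                            double dy = (y - H / 2.0) / H;
                                            double dz = -1.0;
                                            double len = sqrt(dx * dx + dy * dy + dz * dz);
                                            dx /= len; dy /= len; dz /= len;

                                            /* Ray/sphere intersection: solve t^2 + 2*b*t + c = 0 */
                                            double b = -cz * dz;            /* dot(eye - center, dir) */
                                            double c = cz * cz - r * r;
                                            double disc = b * b - c;

                                            int shade = 30;                 /* background gray */
                                            if (disc >= 0.0) {
                                                double t = -b - sqrt(disc); /* nearest hit distance */
                                                if (t > 0.0) {
                                                    /* "Headlight" shading from the normal's z component */
                                                    double nz = (t * dz - cz) / r;
                                                    shade = 40 + (int)(nz * 215.0);
                                                }
                                            }
                                            printf("%d\n", shade);
                                        }
                                    }
                                    return 0;
                                }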

                            This sort of stuff seems to me like it will revolutionise graphics for the PC. By moving to a software model rather than a hardware model, it's going to make things a lot more flexible, with a lot more performance and increased image quality.

                            See this example:

                            This rendering is done with the OpenGL API, on state-of-the-art hardware at the time it was made:
                            http://www.winosi.onlinehome.de/Gallery_t14_03.htm

                            Compared to the same image, which took 2 minutes to render on an AMD Athlon 1700+...
                            http://www.winosi.onlinehome.de/Gallery_t14_08.htm



                            • #15
                              Originally posted by drag View Post
                              This sort of stuff seems to me like it will revolutionise graphics for the PC. By moving to a software model rather than a hardware model, it's going to make things a lot more flexible, with a lot more performance and increased image quality.
                              What most people don't know is that any modern DirectX 9.0/OpenGL 2.X capable card is already doing software rendering- right now.

                              3D graphics is stupidly SIMD. So are physics computations.

                              It's why you can do physics on a GPU along with rendering. It's why you're seeing AMD and NVidia fielding research project supercomputers in a single PC box that trash 32-64 box clusters on speed.
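
                              Just to illustrate what "stupidly SIMD" means in practice, a toy sketch (made-up example, not driver code): the same 4x4 transform applied to every vertex, with no iteration depending on any other, so the work can be chopped across as many pipes, cores, or SIMD lanes as you can throw at it.

                                  /* Embarrassingly data-parallel: one 4x4 matrix (column-major, as in
                                   * OpenGL) applied to N independent vertices.  Each iteration touches
                                   * only its own vertex, so there is nothing to synchronize between
                                   * them -- exactly the shape of work a many-core part soaks up. */
                                  #include <stdio.h>

                                  static void transform_vertices(const float m[16], float *xyzw, int count)
                                  {
                                      for (int i = 0; i < count; i++) {
                                          float *v = &xyzw[4 * i];
                                          float out[4];
                                          for (int row = 0; row < 4; row++)
                                              out[row] = m[row]     * v[0] + m[row + 4]  * v[1]
                                                       + m[row + 8] * v[2] + m[row + 12] * v[3];
                                          for (int k = 0; k < 4; k++)
                                              v[k] = out[k];
                                      }
                                  }

                                  int main(void)
                                  {
                                      /* Identity matrix: the vertices should come out unchanged. */
                                      float m[16] = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 };
                                      float verts[8] = { 1,2,3,1,  4,5,6,1 };
                                      transform_vertices(m, verts, 2);
                                      printf("%g %g %g %g\n", verts[4], verts[5], verts[6], verts[7]);
                                      return 0;
                                  }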

                              Drivers these days take requests for old functionality, translate the requests to GLSL or HLSL (which are simplistic versions of C), compile them, and then intermix the resulting ALU code for the accelerator with compilations from modern API code- and then run the programs in turn as the applications ask for the code to be run. It's why we're still seeing things a bit slower than we probably ought to in the X3000 benchmarks done recently on Phoronix- the open source crowd's still learning how to walk before really running. (One thing I'd like to know, though, on the X3000 benchmarks is whether the full featureset was turned on by default on the X3000. There are features of the DRI drivers that are off by default right at the moment on at least some of the drivers- things like hardware TCL, etc...) On paper, the X3000 should be a slightly better performer than it's showing to be right at the moment (though it IS doing well all the same...).
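
                              For the curious, that "translate the old requests to GLSL" step is less magic than it sounds. Here's a guess at what the synthesized shader ends up looking like for the plain (unlit, single-texture) fixed-function vertex path, written with the GLSL 1.10 built-ins and shown as the kind of C string a driver or application would hand to glShaderSource():

                                  /* Hypothetical reconstruction of the fixed-function vertex path as a
                                   * GLSL 1.10 vertex shader: transform by the modelview-projection
                                   * matrix and pass the per-vertex color and texcoord through.  Real
                                   * drivers generate something along these lines internally. */
                                  static const char *fixed_function_vs =
                                      "void main(void)\n"
                                      "{\n"
                                      "    gl_FrontColor  = gl_Color;\n"
                                      "    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;\n"
                                      "    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
                                      "}\n";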

                              So, in reality, this isn't too far-fetched and has been something brewing for a long while now. What remains to be seen is whether this is PR spin from Intel or the real deal- and if it's the real deal, whether they can deliver on the potential promise AND keep the critical programming details open. There's a good chance that they're going to use open source as an edge here as they try to lever themselves into this space- but it's NOT a foregone conclusion. We all know where those rumors of AMD releasing enough information to allow open-source drivers for the R300-R500 chipsets have gone- NOWHERE.

