Intel Doing Discrete Graphics Cards!

  • #16
    I know one of the features of the Cell processor, which Sony left out of their PS3 design, is that it basically has a way to do Cell-to-Cell bus networking.

    And it's even external. Theoretically you could have an optical bus where you connect the Cells of two or more machines together. I don't know whether that's what they do for the Cell clusters used in scientific computing; not sure.

    The lack of this for something like x86-based Beowulf clusters is of course why clusters, even though they make up the majority of today's supercomputers, are very limited in certain types of high-performance computing tasks.

    If you had a very low-latency connection, it could make distributed shared-memory schemes work for clusters: multiple Linux kernels running on separate nodes with a single system root and a single addressable memory space, fully multithreaded like an SMP machine. (See Kerrighed.org for a current attempt to do this with Linux using Beowulf-style clustering and ordinary networking hardware.)

    It would be interesting if Intel came out with something like that for x86 systems. Imagine a rack of dual-socket machines, each with two of these Intel video cards, tied together with very low-latency, high-bandwidth interconnects. They'd give Cell-based supercomputers a run for their money.

    Of course, it would also be nice to be able to increase the 3D performance of your laptop simply by using a special connector to hook it up to your desktop.

    Although Intel could be betting that a traditional cluster of machines with 3-4 PCIe x16 (or x32, or whatever) CPU/GPU 'daughter cards' wouldn't require anything as special as a Cell-to-Cell bus to compete with IBM in high-end computing (and would thus take advantage of the price differential for more 'purely' commodity hardware).

    Who knows. I guess it's all fantasy at this point. But it's promising for Linux that it's currently the platform of choice for this sort of thing. Openness is certainly a plus for anybody wanting to gain more traction, and after the failure of Itanium against POWER, I'd expect Intel is looking in that direction.


    One thing I'd like to know about these X3000 benchmarks, though, is whether the full feature set was turned on by default (at least some of the DRI drivers still ship with features such as hardware TCL switched off by default). On paper, the X3000 should be performing slightly better than it's showing right now (though it IS doing well all the same...).
    I believe the chipset they're actually testing in these benchmarks is the GMA 3000, not the X3000. The difference is that with the Q965 chipset (GMA 3000) you do not get the hardware T&L and similar features you'd have with the G965.

    From my personal experience with my Intel Pentium D with GMA 950, I can tell you that the Mesa folks are making very good progress.

    I am using Debian Unstable right now, which ships the X.org 7.1 release, and that supports the GMA 950 out of the box.

    However, one of my favorite games is True Combat: Elite, a full modification for Enemy Territory. It requires more resources than regular ET since it has higher-quality graphics (with added eye candy such as 'HD' lighting), better sound, and more detailed textures.

    With the stock drivers it was unplayable (well, at least to the point where it wasn't fun). Even after compiling updated DRI drivers and trying all sorts of tweaks, it didn't really work out that wonderfully, although that did make Nexuiz mostly playable.

    So with Mesa, the 915 driver is a sort of guinea pig. It has a modesetting branch, for instance, which I don't think even the newer cards have.

    Well, they have a newer DRI driver called the 915tex driver. You compile it alongside the regular 915 DRI driver, and the X.org drv driver chooses which one to use.

    What's special about this driver is that it has the new texture memory management support being worked on in Mesa. With it, the driver is able to allocate RAM dynamically as you need it, which is terrific.

    So I have to compile git versions of xf86-video-intel, drm, mesa, and a special linux-agp-compat module so that I get the same agpgart support you'd get with the latest -mm kernels.
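    Roughly, the build goes like this. The freedesktop.org repository paths are from memory, and I've left out the linux-agp-compat step, so treat this as a sketch rather than exact instructions:

```shell
# Sketch of building the bleeding-edge Intel 3D stack from git.
# Repository names are my best recollection; adjust as needed.
git clone git://anongit.freedesktop.org/git/mesa/drm
git clone git://anongit.freedesktop.org/git/mesa/mesa
git clone git://anongit.freedesktop.org/git/xorg/driver/xf86-video-intel

# Build libdrm first; the other components depend on it.
cd drm && ./autogen.sh --prefix=/usr && make && sudo make install && cd ..

# Mesa: the i915 DRI build is where the experimental 915tex driver lives.
cd mesa && make linux-dri-x86 && sudo make install && cd ..

# Finally the 2D X.org driver, which picks 915 or 915tex at runtime.
cd xf86-video-intel && ./autogen.sh --prefix=/usr && make && sudo make install
```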

    That represents about the bleeding edge of open-source Linux 3D driver development. Using it, I am now able to play TC:E quite well. It's not perfect, but with a 16-bit graphics setting in xorg.conf and the 'fastest' graphical settings at 800x600, I can be quite competitive. The only graphical lag I get is in scenes with lots of smoke, or where there's a lot of detail and distance going on, and even then it's not so bad that I can't still shoot people in the head. :P
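    For reference, the 16-bit setting is just the screen depth in xorg.conf; a minimal version looks something like this (the Identifier, Device, and Monitor names are placeholders, so use whatever your existing config has):

```
Section "Screen"
    Identifier    "Default Screen"
    Device        "Intel 915G"          # placeholder name
    Monitor       "Generic Monitor"     # placeholder name
    DefaultDepth  16                    # 16-bit color cuts texture memory use
    SubSection "Display"
        Depth     16
        Modes     "800x600"
    EndSubSection
EndSection
```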

    And the nice thing is that all the tweaking and playing around I had to do before now actually reduces performance. These seem to be fairly well-optimized drivers.

    Unfortunately, even though it's very stable during gameplay (unless you're playing Warsow, which causes lockups), I can't successfully pull off a complete ET railgun benchmark yet to see how it compares with this article's numbers.

    I think that once the memory management stuff gets sorted out in Mesa, you'll have decent enough performance out of the X3000 for most Linux games, maybe even including Doom 3 (with all the details turned down), although I really doubt you'd get away with playing Quake 4 online.

    I am hoping they'll have the memory management stuff stable by the 7.3 release, especially since this would increase the usability (reduce memory usage, for example) of things like Beryl or Compiz to the point where they can be used by normal people with no sweat.
    Last edited by drag; 03-08-2007, 02:21 AM.



    • #17
      Originally posted by drag View Post
      I know one of the features of the Cell processor, which Sony left out of their PS3 design, is that it basically has a way to do Cell-to-Cell bus networking.
      Not needed in the PS3, but in Mercury's Cell BE cluster box it'd be a different story, I'd suspect...


      I believe the chipset they're actually testing in these benchmarks is the GMA 3000, not the X3000. The difference is that with the Q965 chipset (GMA 3000) you do not get the hardware T&L and similar features you'd have with the G965.
      This makes the results even more interesting and impressive. I've got an R200-based setup with a mobile P4 that handles quite a few games that bring my Xpress 200M-based main laptop to its knees, to the point where I just can't play them. But it can only do that if I use the DRI tuning panel to dial up hardware T&L and so forth... We probably ought to get numbers for a G965 motherboard, then...

      From my personal experience with my Intel Pentium D with GMA 950, I can tell you that the Mesa folks are making very good progress.

      < * background details of the situation deleted for brevity... * >

      I think that once the memory management stuff gets sorted out in Mesa, you'll have decent enough performance out of the X3000 for most Linux games, maybe even including Doom 3 (with all the details turned down), although I really doubt you'd get away with playing Quake 4 online.

      I am hoping they'll have the memory management stuff stable by the 7.3 release, especially since this would increase the usability (reduce memory usage, for example) of things like Beryl or Compiz to the point where they can be used by normal people with no sweat.
      Interesting. On paper, the X3000 is actually a decent GPU. On the specs front, it weighs in at around an AMD X600-X800-class part, if the specs match what Intel's fielding. If so, and if the DRI drivers are using it effectively, you should be able to play even Quake 4 with the settings dialed back a bit. Of course, it's all rampant supposition on my part at this point: I don't have one in hand, I'm taking Intel at their word (that'd be a sure way to get burned!), and I'm comparing Intel's theoretical numbers against the same theoreticals from AMD. So our mileage on this one may vary quite a bit.

      I only need to point to chips offered by other players in the market to show the gap between on-paper and delivered performance, and it's mainly due to poor drivers, or a mismatch at the silicon level that precludes a well-performing driver under a given OS.

      Which brings us back to the Larrabee discussion. If it's like they're describing, IF they pull it off, and IF they open the info as they have done with the 3000/X3000 cores, then we may have a winner here. Well, we might have a winner with the X3000 too; it's just not certain at this point. I just wish Intel hadn't rushed this one to market before the drivers arrived proper: it left the fanboy reviewers room to pan the part before it was really ready for primetime.
      Last edited by Svartalf; 03-08-2007, 02:31 PM.



      • #18
        There's some more graphics weirdness.

        I've seen it mentioned in a few places that Intel's chipset for their mobile stuff is going to be a bit different from their PC stuff.

        The GM965 will use an updated core designed to support DirectX 10, supposedly called the X4000.

        Looks like the next machine I am going to buy is a Santa Rosa "Centrino Pro" laptop.

        One interesting side feature is going to be its onboard flash drive for Vista's ReadyBoost/ReadyDrive stuff, as well as an EFI replacement for the BIOS and probably 802.11a/b/g/n support. Should be interesting.

        I don't know what you'd use the flash drive for in Linux, though. Maybe as swap, to make hibernation faster?

        Also, I've heard that on servers it's possible to get better write performance by taking a nice flash drive, sticking the ext3 journal on it, and enabling full journaling.

        I wonder how the scheduling works in that case, and whether it would allow the disk to sleep longer before committing writes. I dunno.
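        In case anyone wants to try the external-journal trick, it would look something like this. The device names are just examples, and the data filesystem has to be unmounted first, so treat this as a sketch rather than something to paste on a live machine:

```shell
# /dev/sdb1: partition on the flash drive that will hold the journal.
# /dev/sda2: the existing ext3 data filesystem (example device names).

# Create a dedicated journal device on the flash partition.
# (Block size should match the data filesystem's block size.)
mke2fs -O journal_dev -b 4096 /dev/sdb1

# Drop the internal journal, then attach the external one.
tune2fs -O ^has_journal /dev/sda2
tune2fs -j -J device=/dev/sdb1 /dev/sda2

# Mount with full data journaling so file contents also go
# through the (fast) flash journal before hitting the disk.
mount -o data=journal /dev/sda2 /srv
```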



        • #19
          That is an UGLY solution, brute force if there ever was such a thing. On the other hand, it's just crazy enough that it might work. Getting it to run with a good power/heat-to-speed ratio is something else, not to mention doing it affordably. Intel obviously thinks they can, though, which may mean that Moore's law has taken over again. (Using scalar processors in a GPU. Huh!)



          • #20
            If Intel can come up with a real winner, then I'm all for it. Choice is better, and it will be kind of fun to watch the other big boys (nVidia and ATI/AMD) squirm! Intel's got the cash and the talent to make some really great GPUs, so this would be a very welcome development indeed. And open drivers? That will be the kicker as well.

