Intel Sandy/Ivy Bridge Gallium3D Driver Merged

  • #21
    Originally posted by Kivada View Post
    Intel's GPUs don't compete anyway; the silicon just doesn't have the performance. The idea of not doing it is moronic and comes from management that doesn't understand the basics of the technology: Intel is a HARDWARE company, and not making sure every aspect of your hardware does exactly what it is capable of doing, because you are bullshitting on drivers, is only detrimental to your sales.
    I would disagree with this statement, Kivada. For high-end gaming, you're right. But Sandy Bridge and newer... at least for me, new Intel CPUs with a built-in GPU will replace everything from Nvidia and ATI mid-range and down. High-mid and up I will still go discrete, because I'm assuming those will be workstations or gaming machines, but mid-range and down? Just go Intel with the integrated graphics. It really is more than enough.

    *I type this from a Sandy Bridge low-voltage (aka underclocked) ultrabook, and for the most part I couldn't be happier. I will be upgrading to Broadwell, or whatever the next *well architecture ends up being called, and I'm very excited to see the performance difference.
    All opinions are my own, not those of my employer, if you know who they are.

    • #22
      Originally posted by Ericg View Post
      I would disagree with this statement, Kivada. For high-end gaming, you're right. But Sandy Bridge and newer... at least for me, new Intel CPUs with a built-in GPU will replace everything from Nvidia and ATI mid-range and down. High-mid and up I will still go discrete, because I'm assuming those will be workstations or gaming machines, but mid-range and down? Just go Intel with the integrated graphics. It really is more than enough.

      *I type this from a Sandy Bridge low-voltage (aka underclocked) ultrabook, and for the most part I couldn't be happier. I will be upgrading to Broadwell, or whatever the next *well architecture ends up being called, and I'm very excited to see the performance difference.
      I take it you haven't used any of the AMD APU systems in the same price bracket then; they are considerably better in the graphics department than Intel's GPUs. In any case, Intel, like AMD and Nvidia, is a HARDWARE company, and the drivers should be considered a necessary part of that hardware, because what use is hardware for which there is no driver to make use of it?

      As for Nvidia, you are right, but that is because Nvidia pigeonholed themselves by having no x86 CPU and were dependent on AMD and Intel allowing them to make motherboard chipsets. Nvidia's one way out, their chance at still existing 10 years from now, was going to the much more competitive ARM market, but they currently have little to show for it and are barely holding on to their GPGPU market via their early success at getting as many devs as possible on the CUDA bandwagon, knowing that if they went to OpenCL there would be very little incentive to buy only Nvidia hardware over whatever gets the highest performance per watt at the time of purchase.

      Back in 2008 Nvidia should have either bought out VIA/S3 or fought to force Intel to license them to make x86 hardware. By now they'd likely have an interesting product on the market.

      • #23
        Originally posted by Kivada View Post
        I take it you haven't used any of the AMD APU systems in the same price bracket then; they are considerably better in the graphics department than Intel's GPUs. In any case, Intel, like AMD and Nvidia, is a HARDWARE company, and the drivers should be considered a necessary part of that hardware, because what use is hardware for which there is no driver to make use of it?

        As for Nvidia, you are right, but that is because Nvidia pigeonholed themselves by having no x86 CPU and were dependent on AMD and Intel allowing them to make motherboard chipsets. Nvidia's one way out, their chance at still existing 10 years from now, was going to the much more competitive ARM market, but they currently have little to show for it and are barely holding on to their GPGPU market via their early success at getting as many devs as possible on the CUDA bandwagon, knowing that if they went to OpenCL there would be very little incentive to buy only Nvidia hardware over whatever gets the highest performance per watt at the time of purchase.

        Back in 2008 Nvidia should have either bought out VIA/S3 or fought to force Intel to license them to make x86 hardware. By now they'd likely have an interesting product on the market.
        I haven't USED them, no, but I was under the impression (from AMD's own marketing material) that power consumption for the APUs wasn't going to hit Intel's levels until the next architecture revamp next year. Plus there's the issue of the very... lackluster open source drivers, and the closed source driver is slow to support new X releases (it's still only at 1.13 right now), which is an issue for me as an Arch user.

        Meanwhile, Intel has good open source drivers on Linux and decent closed source drivers on Windows, with all of the important features for laptops covered in both drivers and immediate support for new X and kernel releases.
        All opinions are my own, not those of my employer, if you know who they are.

        • #24
          Originally posted by Kivada View Post
          I take it you haven't used any of the AMD APU systems in the same price bracket then; they are considerably better in the graphics department than Intel's GPUs. In any case, Intel, like AMD and Nvidia, is a HARDWARE company, and the drivers should be considered a necessary part of that hardware, because what use is hardware for which there is no driver to make use of it?

          As for Nvidia, you are right, but that is because Nvidia pigeonholed themselves by having no x86 CPU and were dependent on AMD and Intel allowing them to make motherboard chipsets. Nvidia's one way out, their chance at still existing 10 years from now, was going to the much more competitive ARM market, but they currently have little to show for it and are barely holding on to their GPGPU market via their early success at getting as many devs as possible on the CUDA bandwagon, knowing that if they went to OpenCL there would be very little incentive to buy only Nvidia hardware over whatever gets the highest performance per watt at the time of purchase.

          Back in 2008 Nvidia should have either bought out VIA/S3 or fought to force Intel to license them to make x86 hardware. By now they'd likely have an interesting product on the market.
          I always thought it would be wise for Nvidia to buy out Transmeta. Of course that didn't happen, but it would have given Nvidia a chance to get into the x86 market.

          • #25
            Originally posted by Ericg View Post
            I would disagree with this statement, Kivada. For high-end gaming, you're right. But Sandy Bridge and newer... at least for me, new Intel CPUs with a built-in GPU will replace everything from Nvidia and ATI mid-range and down. High-mid and up I will still go discrete, because I'm assuming those will be workstations or gaming machines, but mid-range and down? Just go Intel with the integrated graphics. It really is more than enough.

            *I type this from a Sandy Bridge low-voltage (aka underclocked) ultrabook, and for the most part I couldn't be happier. I will be upgrading to Broadwell, or whatever the next *well architecture ends up being called, and I'm very excited to see the performance difference.
            Performance has improved greatly, to the point where the GT2 graphics on Haswell can comfortably compete with mid-range dedicated hardware, but you are still constrained by system memory. And last I checked, DDR3 is still much slower than GDDR5. Plus, you do leech off system RAM when using onboard graphics, and for some people that's a no-no if you need every last bit of memory available in the system.
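            To put rough numbers on that claim, here is a quick back-of-the-envelope sketch (the clocks and bus widths below are illustrative assumptions, not measurements): peak bandwidth is just the transfer rate times the bus width, and a typical dual-channel DDR3 setup lands several times below a mid-range GDDR5 card.

            ```python
            # Peak memory bandwidth, back of the envelope:
            # GB/s = (mega-transfers per second) * (bus width in bytes) / 1000
            def peak_bandwidth_gbs(mts, bus_width_bits):
                return mts * (bus_width_bits / 8) / 1000

            # Dual-channel DDR3-1600: effective 128-bit bus, shared with the CPU.
            print(peak_bandwidth_gbs(1600, 128))  # ~25.6 GB/s

            # Assumed mid-range GDDR5 card: 5000 MT/s effective on a 192-bit
            # bus, dedicated to the GPU.
            print(peak_bandwidth_gbs(5000, 192))  # ~120 GB/s
            ```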

            Also, there has been one very annoying issue with Intel hardware: when compared side by side, a machine using Intel's onboard graphics always seems to have a very blurred display versus AMD's and Nvidia's, where the display appears sharp and clear. I have no idea why this is the case, though.

            • #26
              Originally posted by Ericg View Post
              I haven't USED them, no, but I was under the impression (from AMD's own marketing material) that power consumption for the APUs wasn't going to hit Intel's levels until the next architecture revamp next year. Plus there's the issue of the very... lackluster open source drivers, and the closed source driver is slow to support new X releases (it's still only at 1.13 right now), which is an issue for me as an Arch user.

              Meanwhile, Intel has good open source drivers on Linux and decent closed source drivers on Windows, with all of the important features for laptops covered in both drivers and immediate support for new X and kernel releases.
              I take it you haven't read any of the reviews on the hardware or on the current state of the AMD Gallium3D drivers. They are now at a point where they are better than Intel's drivers; go read them, currently on the front page of this very site.

              Combine that with the fact that the Intel HD Graphics 4000 couldn't even match the performance of the first-generation APUs' Radeon HD6550D, let alone keep up with the Radeon HD7660D in the current models.

              So yeah, keep saying Haswell will somehow be made out of magic ZOMG PWNIE farts when it hasn't even come to market, and the "reports" are all internal benchmarks where they are likely making all kinds of mods to make it appear to run faster, either disabling high-quality settings on the Haswell or running a version of the game that is very poorly optimized on non-Intel GPUs.

              Why assume this? Because Intel did this for years with their compiler, disabling any SSE extensions for any non-Intel CPU, which is why they were investigated for anticompetitive practices. You will see this in many Windows benchmarks: you can take a CPU from each company that would perform almost identically on Linux, but run Windows and something like SuperPI and, with SSE disabled on the AMD parts, the Intel chip appears to be many times faster.
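              As a rough illustration of the dispatch pattern being alleged here (a simplified Python sketch, not the compiler's actual runtime code), the contested behavior was gating the fast path on the CPUID vendor string instead of on the advertised feature flags:

              ```python
              # Simplified sketch of vendor-gated CPU dispatch (illustrative only).
              def pick_code_path(vendor, has_sse2):
                  # Non-Intel CPUs fall through to the slow path even when
                  # their CPUID feature flags advertise SSE2 support.
                  if vendor == "GenuineIntel" and has_sse2:
                      return "fast_sse2_path"
                  return "generic_x87_path"

              # What a vendor-neutral runtime check would do instead.
              def pick_code_path_fair(has_sse2):
                  return "fast_sse2_path" if has_sse2 else "generic_x87_path"

              print(pick_code_path("AuthenticAMD", has_sse2=True))  # generic_x87_path
              print(pick_code_path_fair(has_sse2=True))             # fast_sse2_path
              ```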

              That reason alone is good enough to never recommend Intel hardware.

              • #27
                Originally posted by Sonadow View Post
                Performance has improved greatly, to the point where the GT2 graphics on Haswell can comfortably compete with mid-range dedicated hardware, but you are still constrained by system memory. And last I checked, DDR3 is still much slower than GDDR5. Plus, you do leech off system RAM when using onboard graphics, and for some people that's a no-no if you need every last bit of memory available in the system.
                For AMD's APUs, I remember seeing that GPU performance scaled linearly with system RAM speed; if you can overclock the RAM to the limits of either the RAM or the memory controller, you will get much better results. Currently the fastest factory-certified DDR3 system RAM I could find clocks in at 2.8GHz, though RAM like that does indeed break the bank. I'd rather get something much cheaper in the 2.133GHz or 2.4GHz range with a voltage no higher than 1.5V, at less than a third of the price, and overclock it myself.
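                Since peak bandwidth is linear in the transfer rate, the headroom from faster DDR3 is easy to sketch (illustrative math only; real gains also depend on timings and the memory controller):

                ```python
                # Dual-channel DDR3 peak bandwidth at various speed grades:
                # 128-bit bus = 16 bytes per transfer.
                for mts in (1600, 2133, 2400, 2800):
                    gbs = mts * 16 / 1000
                    print(f"DDR3-{mts}: ~{gbs:.1f} GB/s peak ({mts / 1600:.2f}x DDR3-1600)")
                ```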

                It allows you to build quite a powerful ITX-based HTPC system if you use an ASRock FM2A85X-ITX and an A10-5800K with some 2.4GHz RAM and a WinTV-HVR-2250, since it will actually be able to handle most of the games on Desura, Steam and Gameolith at max settings.
                Last edited by Kivada; 30 April 2013, 05:49 AM.

                • #28
                  Originally posted by Kivada View Post
                  For AMD's APUs, I remember seeing that GPU performance scaled linearly with system RAM speed; if you can overclock the RAM to the limits of either the RAM or the memory controller, you will get much better results. Currently the fastest factory-certified DDR3 system RAM I could find clocks in at 2.8GHz, though RAM like that does indeed break the bank. I'd rather get something much cheaper in the 2.133GHz or 2.4GHz range with a voltage no higher than 1.5V, at less than a third of the price, and overclock it myself.

                  It allows you to build quite a powerful ITX-based HTPC system if you use an ASRock FM2A85X-ITX and an A10-5800K with some 2.4GHz RAM and a WinTV-HVR-2250, since it will actually be able to handle most of the games on Desura, Steam and Gameolith at max settings.
                  Ohh YEAH! Now what we need is Steam Big Picture mode and XBMC integration.
