Intel Haswell Graphics Driver To Be Opened Up Soon

  • Intel Haswell Graphics Driver To Be Opened Up Soon

    Phoronix: Intel Haswell Graphics Driver To Be Opened Up Soon

    While the Ivy Bridge launch is still a number of weeks out, Intel will soon be publishing their initial hardware enablement code for next year's Haswell micro-architecture...

  • #2
    xf86-video-ati DDX
    I think you mean xf86-video-intel

  • #3
    If it's DX11.1 compliant then it should be OpenGL 4.x compliant, not 3.2.

  • #4
    Originally posted by cl333r View Post
    If it's DX11.1 compliant then it should be OpenGL 4.x compliant, not 3.2.
    Yes and no. Intel has a bad track record of shipping hardware capable of DirectX 10.0 whose drivers never actually supported it.

  • #5
    Let others talk the talk while we tick 'n' tock.
    Open-source bits for uarch.next().next()
    Now that's an open-source commitment, congrats.

  • #6
    Originally posted by cl333r View Post
    If it's DX11.1 compliant then it should be OpenGL 4.x compliant, not 3.2.
    You'd like to think that, wouldn't you.

    While likely very true from a hardware perspective, the Windows drivers simply aren't there for it. Intel GPUs that offer D3D 10 features only offer GL 3.0 on the Windows drivers, despite the hardware theoretically being capable of full GL 3.3. I think the Ivy Bridge drivers will do GL 3.1, despite offering D3D 10.1.

    This is one of the many, many reasons why OpenGL is just best avoided if you're only developing for Windows. You can argue all you want about whether or not it's OpenGL's/Khronos' fault, but the reality is that 60% of the GPUs in the world have more features and better performance using D3D on the operating system used by 90% of people. :/

    (The other reason is that GL is just a poorly designed API -- e.g. binding Uniform Buffer Objects by slot directly in the shader wasn't added to Core until 4.2, and the ARB extension that adds the support to older cards is only supported by NVIDIA's drivers; likewise, binding vertex attributes by slot wasn't added to Core until 3.3. Major D3D-class features came to GL over a year later, e.g. Uniform Buffer Objects, Texture Buffer Objects, primitive restart, and instancing weren't added to GL until 3.1, and it took until GL 3.2 to add geometry shader support to Core. Those features existed as extensions, but they were neither universally available nor universally high-quality, so you couldn't actually use them in a shipping product. Granted, even once in Core, the implementations tended to be buggy, likely due to a lack of any kind of test suite for implementations to be verified against.)
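
    As a rough sketch of the "binding by slot in the shader" point above (hypothetical names such as PerFrame and position; assumes a current GL context, a loader such as GLEW, and an already-linked program):

      /* GL 3.3 / 4.2-style GLSL declares the slots in the shader itself:
       *
       *   layout(location = 0) in vec3 position;           // core since GL 3.3
       *   layout(std140, binding = 1) uniform PerFrame {    // core since GL 4.2
       *       mat4 viewProj;
       *   };
       *
       * On older core versions the application wires up the same slots by
       * name through API calls instead, roughly like this. */
      #include <GL/glew.h>

      static void bind_slots_old_style(GLuint prog)
      {
          /* Attribute slot: takes effect the next time the program is linked. */
          glBindAttribLocation(prog, 0, "position");

          /* Uniform block slot: look the block up by name, then assign it
           * (core since GL 3.1 / ARB_uniform_buffer_object). */
          GLuint block = glGetUniformBlockIndex(prog, "PerFrame");
          if (block != GL_INVALID_INDEX)
              glUniformBlockBinding(prog, block, 1);
      }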

  • #7
    The graphics unit on Haswell is expected to be Direct3D 11.1 and OpenGL 3.2 compliant.
    This refers to the state of the Windows drivers, not the hardware limitations. Haswell could be fully GL4 compliant if the driver support is there (a quick way to check what a driver actually reports is sketched after this post).

    Sandy Bridge has been quite impressive performance-wise for being Intel integrated graphics, but with Ivy Bridge this performance is going to be upped substantially (as much as twice as fast as Sandy Bridge).
    I've been hearing more like 50% faster, but either way it should be a nice boost.

    This will happen again with Haswell where I'm told its integrated graphics should be comparable to a mid-to-high-end discrete GPU.
    Even if we're talking double IB, which is in turn double SB, that's definitely mid-range discrete territory, not high end. And that's current-generation mid-range; by the time Haswell is released, it will likely be low-end again.
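
    A minimal sketch of that check (assumes a current GL context created through whatever windowing library you use; nothing here is specific to Intel's driver):

      #include <stdio.h>
      #include <GL/glew.h>

      /* Print the GL support the driver actually advertises, which is what
       * applications get to use regardless of what the hardware could do. */
      static void report_gl_support(void)
      {
          printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
          printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
          printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));

          /* On a GL 3.0+ context the version is also exposed as integers. */
          GLint major = 0, minor = 0;
          glGetIntegerv(GL_MAJOR_VERSION, &major);
          glGetIntegerv(GL_MINOR_VERSION, &minor);
          printf("Context version: %d.%d\n", major, minor);
      }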

  • #8
    Come the heck on, AMD!

    Your main competitor has already fully embraced the open-source driver effort and releases hardware code a full YEAR before releasing the actual hardware. You don't see them losing sales or profits; in fact, they're doing pretty well. Why can't AMD just kill off their pathetic excuse for a driver bundle for GNU/Linux (proprietary, at that) and focus all efforts on Mesa/Gallium3D?

  • #9
    Originally posted by blinxwang View Post
    Your main competitor has already fully embraced the open-source driver effort and releases hardware code a full YEAR before releasing the actual hardware. You don't see them losing sales or profits; in fact, they're doing pretty well.
    Yep. They don't make 3D workstation hardware, so an open-source-only model works quite well for them. If our hardware focus were the same we would probably take the same open-source-only Linux driver approach; in fact, that's what we used to do in the pre-R200 days.

    Originally posted by blinxwang View Post
    Why can't AMD just kill off their pathetic excuse for a driver bundle for GNU/Linux (proprietary, at that) and focus all efforts on Mesa/Gallium3D?
    Because we would lose a big customer base which needs the 3D performance and features that can only be delivered cost-effectively by a proprietary driver. I have answered this question a lot of times already.

    If you ignore the 3D workstation market and then say you can't see any reason for fglrx to exist, it's hard for me to give good answers.

  • #10
    Originally posted by elanthis View Post
    (The other reason is that GL is just a poorly designed API -- e.g. binding Uniform Buffer Objects by slot directly in the shader wasn't added to Core until 4.2, and the ARB extension that adds the support to older cards is only supported by NVIDIA's drivers; likewise, binding vertex attributes by slot wasn't added to Core until 3.3. Major D3D-class features came to GL over a year later, e.g. Uniform Buffer Objects, Texture Buffer Objects, primitive restart, and instancing weren't added to GL until 3.1, and it took until GL 3.2 to add geometry shader support to Core. Those features existed as extensions, but they were neither universally available nor universally high-quality, so you couldn't actually use them in a shipping product. Granted, even once in Core, the implementations tended to be buggy, likely due to a lack of any kind of test suite for implementations to be verified against.)
    Off topic, but I often have these visions of elanthis walking into the Khronos offices with explosives and detonating them after yelling "D3D Akbar".

    P.S.
    If they can't do it well enough, DIY.
