Intel Haswell Graphics Driver To Be Opened Up Soon

  • Intel Haswell Graphics Driver To Be Opened Up Soon

    Phoronix: Intel Haswell Graphics Driver To Be Opened Up Soon

    While the Ivy Bridge launch is still a number of weeks out, Intel will soon be publishing their initial hardware enablement code for next year's Haswell micro-architecture...

    http://www.phoronix.com/vr.php?view=MTA1MzU

  • #2
    "xf86-video-ati DDX"
    I think you mean xf86-video-intel.

    • #3
      If it's DX11.1 compliant then it should be OpenGL 4.x compliant, not 3.2.

      • #4
        Originally posted by cl333r View Post
        If it's DX11.1 compliant then it should be OpenGL 4.x compliant, not 3.2.
        Yes and no. Intel has a bad track record of shipping hardware capable of DirectX 10.0 whose drivers never actually supported it.

        • #5
          Let others talk the talk while we tick 'n' tock.
          Open-source bits for a uarch.next().next():
          now that's an open-source commitment. Congrats.

          • #6
            Originally posted by cl333r View Post
            If it's DX11.1 compliant then it should be OpenGL 4.x compliant, not 3.2.
            You'd like to think that, wouldn't you.

            While likely very true from a hardware perspective, the Windows drivers simply aren't there for it. Intel GPUs that offer D3D 10 features only offer GL 3.0 on the Windows drivers, despite the hardware theoretically being capable of full GL 3.3. I think the Ivy Bridge drivers will do GL 3.1, despite offering D3D 10.1.
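
            Since the driver-reported version, not the hardware's D3D feature level, is all you can actually target, shipping GL code ends up probing the context at runtime and picking a path. A minimal sketch in C of that probe (assuming a GL context is already current; the function name and version thresholds are just illustrative):

                #include <stdio.h>
                #include <GL/gl.h>   /* on Windows, include <windows.h> first */

                /* GL_MAJOR_VERSION/GL_MINOR_VERSION only exist on GL >= 3.0
                   contexts; define the enum values for older headers. */
                #ifndef GL_MAJOR_VERSION
                #define GL_MAJOR_VERSION 0x821B
                #define GL_MINOR_VERSION 0x821C
                #endif

                /* Return the context's version as major*10 + minor,
                   e.g. 33 for GL 3.3. */
                static int gl_version(void)
                {
                    GLint major = 0, minor = 0;
                    glGetIntegerv(GL_MAJOR_VERSION, &major);
                    if (glGetError() == GL_NO_ERROR) {
                        glGetIntegerv(GL_MINOR_VERSION, &minor);
                        return (int)(major * 10 + minor);
                    }
                    /* Pre-3.0 driver: the enum is unknown, so parse the
                       leading "major.minor" of the GL_VERSION string. */
                    const char *s = (const char *)glGetString(GL_VERSION);
                    return s ? (s[0] - '0') * 10 + (s[2] - '0') : 0;
                }

                void pick_render_path(void)
                {
                    int v = gl_version();
                    if (v >= 33)
                        printf("GL 3.3 path: instancing, explicit attrib locations\n");
                    else if (v >= 30)
                        printf("GL 3.0 path: what those Windows drivers expose\n");
                    else
                        printf("GL 2.x fallback\n");
                }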

            This is one of the many, many reasons why OpenGL is just best avoided if you're only developing for Windows. You can argue all you want about whether or not it's OpenGL's/Khronos' fault, but the reality is that 60% of the GPUs in the world have more features and better performance using D3D on the operating system used by 90% of people. :/

            (The other reason is that GL is just a poorly designed API -- e.g. binding Uniform Buffer Objects by slot directly in the shader wasn't added to Core until 4.2, and the ARB extension that adds the support to older cards is only supported by NVIDIA's drivers; likewise, binding vertex attributes by slot wasn't added to Core until 3.3. Major D3D-class features came to GL over a year later, e.g. Uniform Buffer Objects, Texture Buffer Objects, primitive restart, and instancing weren't added to GL until 3.1 and it took until GL 3.2 to add geometry shader support to Core. Those features existed as extensions, but they were neither universally available nor universally high-quality, so you couldn't actually use them in a shipping product. Granted, even once in Core, the implementations tended to be buggy, likely due to a lack of any kind of test suite for implementations to be verified against.)
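
            To make the slot-binding complaint concrete, here is a rough sketch of the two styles (standard GL entry points, but the "position"/"PerFrame" names and slot numbers are made up for illustration). Before GL 3.3/4.2 every slot had to be wired up from the C side; the layout qualifiers finally let the shader declare its own slots, the way HLSL's register(b0) always could:

                #include <GL/gl.h> /* core 3.x+ entry points come from a loader such as GLEW in practice */

                /* Old style (GL 3.1-era UBOs): slots are assigned via API calls.
                   'prog' is a compiled-but-unlinked program, 'ubo' a uniform buffer. */
                void bind_slots_old_style(GLuint prog, GLuint ubo)
                {
                    glBindAttribLocation(prog, 0, "position");    /* must precede linking */
                    glLinkProgram(prog);

                    GLuint idx = glGetUniformBlockIndex(prog, "PerFrame");
                    glUniformBlockBinding(prog, idx, 0);          /* block  -> binding point 0 */
                    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);  /* buffer -> binding point 0 */
                }

                /* New style (GL 3.3 ARB_explicit_attrib_location plus GL 4.2
                   ARB_shading_language_420pack): the slots live in the shader text. */
                static const char *vs_new_style =
                    "#version 420 core\n"
                    "layout(location = 0) in vec3 position;\n"
                    "layout(std140, binding = 0) uniform PerFrame { mat4 mvp; };\n"
                    "void main() { gl_Position = mvp * vec4(position, 1.0); }\n";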

            • #7
              "The graphics unit on Haswell is expected to be Direct3D 11.1 and OpenGL 3.2 compliant."
              This refers to the state of the Windows drivers, not to hardware limitations. Haswell could be fully GL4 compliant if the driver support is there.

              "Sandy Bridge has been quite impressive performance-wise for being Intel integrated graphics, but with Ivy Bridge this performance is going to be upped substantially (as much as twice as fast as Sandy Bridge)."
              I've been hearing more like 50% faster, but either way it should be a nice boost.

              "This will happen again with Haswell where I'm told its integrated graphics should be comparable to a mid-to-high-end discrete GPU."
              Even if we're talking double IB, which is in turn double SB, that's definitely mid-range discrete territory, not high end. And that's current-generation mid-range: by the time Haswell is released, it will likely be low-end again.

              • #8
                Come the heck on, AMD!

                Your main competitor has already fully embraced the open-source driver effort and releases hardware code a full YEAR before releasing the actual hardware. You don't see them losing sales or profits; in fact, they're doing pretty well. Why can't AMD just kill off their pathetic excuse for a driver bundle for GNU/Linux (proprietary, at that) and focus all efforts on Mesa/Gallium3D?

                • #9
                  Originally posted by blinxwang View Post
                  Your main competitor has already fully embraced the open-source driver effort and releases hardware code a full YEAR before releasing the actual hardware. You don't see them losing sales or profits; in fact, they're doing pretty well.
                  Yep. They don't make 3D workstation hardware, so an open-source-only model works quite well for them. If our hardware focus were the same we would probably take the same open-source-only Linux driver approach; in fact, that's what we did in the pre-R200 days.

                  Originally posted by blinxwang View Post
                  Why can't AMD just kill off their pathetic excuse for a driver bundle for GNU/Linux (proprietary, at that) and focus all efforts on Mesa/Gallium3D?
                  Because we would lose a big customer base which needs the 3D performance and features that can only be delivered cost-effectively by a proprietary driver. I have answered this question a lot of times already.

                  If you ignore the 3D workstation market and then say you can't see any reason for fglrx to exist, it's hard for me to give good answers.

                  • #10
                    Originally posted by elanthis View Post
                    (The other reason is that GL is just a poorly designed API -- e.g. binding Uniform Buffer Objects by slot directly in the shader wasn't added to Core until 4.2, and the ARB extension that adds the support to older cards is only supported by NVIDIA's drivers; likewise, binding vertex attributes by slot wasn't added to Core until 3.3. Major D3D-class features came to GL over a year later, e.g. Uniform Buffer Objects, Texture Buffer Objects, primitive restart, and instancing weren't added to GL until 3.1 and it took until GL 3.2 to add geometry shader support to Core. Those features existed as extensions, but they were neither universally available nor universally high-quality, so you couldn't actually use them in a shipping product. Granted, even once in Core, the implementations tended to be buggy, likely due to a lack of any kind of test suite for implementations to be verified against.)
                    Off topic, but I often have these visions of elanthis walking into the Khronos offices with explosives and detonating them after yelling "D3D Akbar".

                    P.S.
                    If they can't do it well enough, DIY.

                    • #11
                      Originally posted by blinxwang View Post
                      Your main competitor has already fully embraced the open-source driver effort and releases hardware code a full YEAR before releasing the actual hardware. You don't see them losing sales or profits; in fact, they're doing pretty well. Why can't AMD just kill off their pathetic excuse for a driver bundle for GNU/Linux (proprietary, at that) and focus all efforts on Mesa/Gallium3D?
                      Because people who use ATI products in their professional workstations pay ATI to produce drivers that are compatible with the software they run.

                      For many, many years the only reason Linux has had any attention _at_all_ from Nvidia and ATI is the professional workstation market. These people think nothing of dropping $2-3K on graphics hardware. They have no problem having their developers work with Nvidia and ATI, and they pay a lot of money for NDAs and special access to ensure they get the features they need. Without this market there would be no proprietary drivers from Nvidia or ATI.

                      I remember very plainly the bad old days of ATI, when people were forced to hack binary FireGL drivers to make them work on consumer ATI devices (FireGL being the professional workstation line of cards). It sucked, but the hardware is mostly the same, so it worked. (Not much has really changed.)

                      ATI has to deal with competitive forces, and until the 'Linux professional workstation' folks are satisfied with the API support in the Gallium3D drivers, ATI will need to continue pouring resources into Catalyst. On top of that, they are required by law to keep portions of their software secret, thanks to the DMCA and friends. Combine these things and ATI really has no choice.

                      • #12
                        Since I got a donated Radeon 9700 Pro card I have been writing scripts for the binary drivers; the first driver I used with it was 3.2.8 from 10/2003. You never needed to patch the driver to use it with consumer Radeon cards. FireGL cards do, however, get a slightly different, optimized OpenGL path. I am not sure if it is slower in some areas, but it is faster with workstation apps (or with similar benchmarks). There have always been hacks to enable those features on consumer cards, but for games that's definitely not needed. There are other things that are much more annoying, like the control file/aticonfig whitelist of supported PCI IDs, when the driver is always generic no matter how "special" it is supposed to be.

                        My experience with Intel's open-source support is not always positive. Linux distributions based on stable Debian do not receive the support they should: supporting only the latest Mesa/DDX/X Server is not the best solution. There, even the ATI open-source drivers are much better.

                        • #13
                          I wonder what the status of the 79xx kernel code is.

                          That's honestly the most frustrating aspect of AMD's open-source strategy: the lack of an open development process or any kind of public roadmap. There's no way of knowing if the code is on target to be released for the next kernel, or if it's been delayed or completely cancelled and we just aren't being told.

                          Once the code is released, even if it doesn't work, we can see commits going into it and others can try to patch problems (even if that rarely happens, it's possible). Until then, everyone is kind of just stuck waiting on AMD without any clue of what's happening.

                          • #14
                            Originally posted by smitty3268 View Post
                            That's honestly the most frustrating aspect of AMD's open-source strategy: the lack of an open development process or any kind of public roadmap. There's no way of knowing if the code is on target to be released for the next kernel, or if it's been delayed or completely cancelled and we just aren't being told.

                            Once the code is released, even if it doesn't work, we can see commits going into it and others can try to patch problems (even if that rarely happens, it's possible). Until then, everyone is kind of just stuck waiting on AMD without any clue of what's happening.
                            Sorry, but that's just plain wrong, unless you are only talking about the brief period before initial code release when none of us know the exact schedule (although I provide updates every few weeks). After initial release of code/headers all of the development happens in the open.

                            The real thing you're complaining about is that while we were catching up with 6 generations of hardware this "dark period" has been happening after launch so it's visible and annoying. Our competitor didn't have to go through "catch-up" so their corresponding pre-release activities have not been visible to you, but it's a safe bet that we all have to jump through roughly the same hoops.

                            With a bit of luck SI should be the last generation where IP review has to happen after launch. If you look at the time delay between HW intro and initial release of code/headers and factor in the magnitude of the change between generations the trend should be pretty clear.
                            Last edited by bridgman; 02-07-2012, 10:38 AM.

                            • #15
                              Originally posted by bridgman View Post
                              Sorry, but that's just plain wrong, unless you are only talking about the brief period before initial code release when none of us know the exact schedule (although I provide updates every few weeks).
                              Ok, then, when is the code going to be out?

                              After initial release of code/headers all of the development happens in the open.
                              Yes, I know that, and that's good.

                              The real thing you're complaining about is that while we were catching up with 6 generations of hardware this "dark period" has been happening after launch so it's visible and annoying. Our competitor didn't have to go through "catch-up" so their corresponding pre-release activities have not been visible to you, but it's a safe bet that we all have to jump through roughly the same hoops.

                              With a bit of luck SI should be the last generation where IP review has to happen after launch. If you look at the time delay between HW intro and initial release of code/headers and factor in the magnitude of the change between generations the trend should be pretty clear.
                              Ok, that's very good news. I did not realize this was supposed to be the last generation it happens on. It's happened on every card since the AMD open source strategy was announced, so I don't think I'm too out of line for thinking it would continue.

                              It's not just the dark period AFTER hardware release, though - the fact that in the weeks and months leading up to that release we have no news is just as worrying. Although if we can come to rely on support by the hardware release date, then I guess that problem takes care of itself, because we would have a roadmap.
