
Intel Wants YOUR Linux Questions, Feedback


  • #51
    I found some bugs when I did this test, but the reporting was too complicated.
    For our bugs, the bug reporting guide is at http://intellinuxgraphics.org/how_to_report_bug.html. It is not complete, I admit, but it should be enough to get started.

    If you have ideas or suggestions on how to improve it, I'd be interested in hearing that.

    Also, I have some very basic scripts which help automatically identify i915_error_state traces and map them to known issues. When I finish them, I'll be sure to post a guide here on how to use them. This should help at least with the initial issue identification, I suppose.
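
    Until then, a minimal sketch of what such a capture looks like (the debugfs path below is the standard i915 location; adjust it if your distribution mounts debugfs elsewhere):

        # Mount debugfs if it is not already mounted (most distributions do this)
        sudo mount -t debugfs none /sys/kernel/debug

        # Grab the GPU error state right after a hang and attach it to the bug
        sudo cat /sys/kernel/debug/dri/0/i915_error_state > error_state.txt

        # Kernel messages from around the hang are also very useful
        dmesg > dmesg.txt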


    Since the new GPUs come from a third party, there is poor open-source support. Is there any plan to fix this by introducing new Intel GPUs that could be supported?
    All our new GPUs (the ones we develop the drivers for - e.g., Ivy Bridge, and some future ones I am not allowed to talk about) are intended to be provided with open-source drivers, as usual.

    The GPUs which come with closed-source drivers are licensed from PowerVR, and the license does not allow us to open-source their support, as far as I know...



    Are you saying that Ivy Bridge and subsequent GPUs are expected to be fully usable at introduction going forward?
    If by "usable" you mean that they will come with working 2D and 3D modes, modesetting support, external outputs, suspend-resume and all the other usually expected features, then yes, the idea is to provide all of this.

    The definition of "usable" is very subjective though. I had been using Intel GPUs long before I joined Intel, and I never had many usability problems, except with the i810 line of GPUs. But of course, it varies for everyone - some people want to play more games, some do more media decoding...

    One possible objective criterion would be the number of issues in the open-source bugzilla (namely, the freedesktop.org one). For Ivy Bridge, at this moment, we don't have any critical issues without a solution. At the same time, there are bugs about misrendering in some applications and about failing piglit tests. Whether this affects the usability of your workloads is hard to say; but it is all open for everyone to check via bugzilla, and to verify whether the intended use cases are expected to work without problems or whether some sort of issue can be expected. It is all public and open for everyone - this is how open-source works.

    1) What exactly is RC6?
    2) Why does it repeatedly turn out to be non-functional? Is it some unforeseen incompatibility on Intel's side? Are there some BIOS bugs to work around?
    3) Can we expect universal support for RC6 (and/or semaphores) in the future?
    I don't know the exact expansion, but I assume the RC6 term is analogous to the CPU's C-states, with R standing for Render. So RC6 would be the equivalent of Render C-State 6 (Deep Power Down).

    (Most of my knowledge about this matter comes from http://www.hardwaresecrets.com/article/611, which is the best non-low-level resource I have found about such details.)

    As for RC6 support and problems, here is a short overview.

    RC6 is a technology which allows the GPU to go into a very low power consumption state when it is idle (down to 0V), so it results in considerable power savings when this state is activated. Compared with an idle machine where rc6 is disabled, we can see power usage improved by up to 40-60%.

    As a bonus, when those states are reached, there is additional thermal and power headroom for both CPU and GPU to go into a more aggressive turbo mode, so it can also improve the performance of intensive workloads by around 10%, and a bit more in some cases. Michael was the first to publicly observe this in his RC6 overviews on Phoronix, by the way (thanks a lot, again!).

    To illustrate: on battery power, when your machine is idle, it uses about 50% less power. And when you run a CPU/GPU-intensive application (for example, openarena), it gets an additional +10 fps under load.

    However, in some hardware and software configurations this power state results in unexpected issues (such as hangs or graphics corruption). One correlated situation where you can almost certainly expect issues is having VT-d enabled (i.e., the intel_iommu kernel parameter not set to off). Most of our rc6-related reports were traced down to this, but we don't have an explanation of why it causes issues.

    With 3.2-rc6, we tried to enable rc6 by default in the kernel, but the change quickly had to be reverted, as it turned out that in some cases we still get random crashes and hangs. The downside of this is that we don't know at the moment what is causing those issues: only very few users have reported the problem, and, as always with rc6-related issues, none of the computers of Intel employees can reproduce them for now. So hopefully, if we discover what causes the problem there, we'll be able to work around it just as we did with VT-d, and enable it by default for Sandy Bridge in the 3.3 kernel.

    In general, if you do not experience random hangs and graphics corruption when rc6 is enabled - I mean, issues which you haven't seen previously - you shouldn't have any issues with it at all. But if you do, we'll be very interested in hearing about it, because we are on a quest to locate machines which can reliably reproduce rc6-related problems. So far, I'd say that around 99% of all machines should "just work" with it enabled.

    As for rc6 support on the different generations of Intel GPUs, here is a short summary:

    For Ironlake (e.g., Intel Core i - pre-Sandy machines), rc6 is disabled by default in all cases, as it can cause similar issues as well, but it can be activated by the same kernel parameter. On my Acer TimelineX, enabling rc6 makes the battery last 9 hours instead of 6 (powertop reports 7.5W when idle, down from 11W without rc6 enabled manually), and it causes no adverse issues, but mileage may vary between Ironlake GPUs.

    On Sandy Bridge, it is also always disabled by default right now. On my Lenovo T420 (Sandy Bridge), powertop reports around 12.5W with rc6 enabled, and around 19.5W with rc6 disabled.

    For older graphics cards, this parameter makes no difference, as the kernel ignores it.
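
    For reference, here is a minimal sketch of how to experiment with this on a 3.x-era kernel. The i915_enable_rc6 module parameter name matches this kernel generation, but please check 'modinfo i915' on your own kernel to confirm it exists and which values it accepts:

        # One-off, via the kernel command line at boot:
        #   i915.i915_enable_rc6=1
        # Or persistently, via a modprobe configuration file:
        echo "options i915 i915_enable_rc6=1" | sudo tee /etc/modprobe.d/i915.conf

        # After a reboot, check the current render C-state through debugfs:
        sudo cat /sys/kernel/debug/dri/0/i915_drpc_info

        # And if you suspect a VT-d interaction, the kernel parameter
        # intel_iommu=off disables the IOMMU for testing purposes.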

    I hope this clarifies things a bit.

    As for expectations of universal support for rc6 and semaphores, "there is always hope for the future as tainted by past experiences" ((c) Chris Wilson).

    4) Whereas Intel's integrated graphics are not really competitive in the performance segment, they are vastly ahead regarding power efficiency. Can we expect further driver improvement in that regard?
    I cannot comment on the details of Intel products which have not been announced yet, but I won't reveal any big secrets by saying that the universal trend is to improve both power efficiency AND performance in future products.

    5) Will we get S3TC?
    From the technical side, nothing is blocking it. The problem is, as usual, with patents and the way they are used in the modern world. Sadly, as long as software patents for such features exist, I do not expect much innovation in this area anymore - especially when talking about open-source projects. But this is just my opinion.


    When can we expect OpenGL 3.3 support on Sandy Bridge?

    When do you expect to have OpenGL 4.2 support on Ivy Bridge?
    Full OpenGL 3.x support requires some cooperation from the hardware. It is available on Ivy Bridge, yes, so GL 3.x should be supported on that architecture with hardware acceleration. For previous generations (Sandy Bridge and Ironlake), some parts of it must go through software-only implementations.

    As for GL 4.2, it is too early to say anything. Of course, the Mesa team is interested in supporting all the latest and greatest GL versions in full, but I cannot answer when it will happen at the moment.
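
    If you want to check which GL version and renderer your current stack exposes, glxinfo reports it - a quick sketch (glxinfo ships in the mesa-utils package on most distributions):

        glxinfo | grep -E "OpenGL (version|renderer) string"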



    • #52
      Please implement switching between Intel and nouveau/R600g on MUX-less hardware in the upstream FOSS drivers, without Bumblebee/Ironhide.

      Originally posted by eugeni_dodonov
      Those drivers are developed by a different team
      Could a representative of that team come here?



      • #53
        Intel 845G driver is broken

        The Intel Extreme Graphics driver has been broken for a long time. If Intel has dropped support for 8xx, will it at least, and at last, release the documentation for the chip to help developers solve the GPU/CPU coherency issue?



        • #54
          Hi Eugeni,
          I'm one of the users affected by graphical glitches while RC6 is enabled. The RC6 power saving is very good and it really gives juice to my battery, but sometimes the glitches make the machine totally unusable. If you want, I can provide you with information about my hardware configuration. I would be really glad to help you solve that issue.

          Thanks for your hard work, guys, I'm sure you'll work it out.
          Luigi



          • #55
            Can you give any predictions about how Wayland will develop alongside Intel products in the future?
            The lucky ones among you who are going to attend <strike>a secret meeting at</strike> FOSDEM will find out some answers about that. Until then, I won't spoil the answers.

            Seriously, how are you supposed to know what you're buying when Intel has a nasty habit of pairing Atom CPUs with PowerVR SGX crap and doesn't provide anything to differentiate itself from real Intel graphics chipsets?
            I believe all the notebooks advertise which GPU they are packaged with. http://en.wikipedia.org/wiki/Intel_GMA provides a nice mapping of which features to expect behind each marketing GPU name, and of which chip comes in which product. +1 to collective intelligence!
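
            And if you already have the machine in hand, the PCI ID tells you definitively which GPU is inside - a quick sketch (lspci ships in the pciutils package):

                # The [8086:xxxx] device ID can be matched against the wiki page above
                lspci -nn | grep -i vga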

            What is the state of libva support for the Ironlake chipsets? What kernel started supporting it? What does support depend on? What's the future of libva products at Intel? (Are all new chips going to get it?)
            Libva is supported across different generations of GPUs, and it is expected to continue receiving new features and improvements. So yes, future chips are expected to be supported through it.

            On Ironlake, most of the decoding features are supported as well, including H.264, since 2010.

            I know that the libva descriptions on both the http://freedesktop.org/wiki/Software/vaapi and http://cgit.freedesktop.org/vaapi/ pages are somewhat too basic regarding some features and their support. I promise to write a post about that at some point on my blog to clarify most points. It would be too big to cover here in the format of forum replies.
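
            Meanwhile, to see what your installed libva stack actually advertises, the vainfo tool from the libva utilities lists the supported profiles and entrypoints - a quick sketch:

                # On Ironlake and newer you should see e.g. VAProfileH264High / VAEntrypointVLD
                vainfo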

            What about Ironlake's 3D graphics? I bought a Dell laptop that came with Ubuntu 10.04, but so far it seems to barely have any OpenGL support (I can play some simple games, but I can only use compositing with Xfce4, which makes me think something isn't quite right).
            Something is definitely not right; Ironlake shouldn't have problems with 3D graphics on Ubuntu. However, 10.04 is quite old - you should try updating to 11.10, for example; most of the performance and overall enhancement work happened during those two years.

            You can try an Ubuntu 11.10 live CD, for example, and check whether the performance gets better.


            But I remember the days when I had this laptop when EXA was new... before KMS, before DRI2, before UXA, way before SNA...
            Yeah, I remember those old times too, back when the XAA vs. EXA battle was going on. I had my first Centrino-based notebook back then (an Acer TravelMate t4000, if I am not mistaken), and it wasn't a good experience indeed. It turned me away from Intel graphics for a couple of years, so I went over to the nvidia camp. On the other hand, with nvidia at that time, I never had much luck with battery life.

            Since that time - or better, since the GTT and GEM introduction - things have been remarkably stable on Intel's side. At least, I never had that kind of problem ever again. Mileage may vary, but I'd ask you to give Intel GPUs another try - at the very least, to report your issues. Since you already have a lot of experience with them from the old days, it would be very interesting to hear what your impressions are now.

            With the new Ivy Bridge, I'd say you have a pretty good chance of having everything work out of the box. I can't give exact numbers at the moment, but, for example, nexuiz in full-HD mode is not unthinkable.

            But as always, I think the best thing is to draw your own conclusions. Now, with the availability of live CDs and live USBs, it is easy to put the disk into the drive and take the machine for a spin. Most major Linux releases will be ready with Ivy Bridge support prior to its launch, so you shouldn't need any specific third-party drivers or custom images.

            - What is the status of OpenGL3 support for Ironlake in Mesa 8.0?
            It should work, but without hardware acceleration in some cases, as the needed hardware is simply not there.

            - Is any attention being paid to making SNA acceleration more stable on Ironlake?
            - What other significant improvements or changes can Ironlake users expect?
            - Is SNA acceleration mature enough to use with this generation of chips?
            For the first question - ABSOLUTELY!!! If you take a look at the commits going into xf86-video-intel over the past months, you'll see that Chris is working on support for all generations of Intel GPUs, all the way back to Gen2 (e.g., i865). SNA is not specific to Sandy Bridge and newer GPUs - in fact, in some cases, the best performance increases are achieved on Gen3 and Gen4 GPUs.

            Over the past months, several hundred commits went into SNA, and most bug reports were closed within a day or so. However, as it is a new technology, which uses GPU acceleration in a completely different way, there are always situations where things go wrong. So if you spot an issue with SNA - no matter whether it is on Sandy Bridge, Ironlake, or any other GPU - PLEASE let us know so we can fix it!

            As of the past months, I think that SNA is mostly on par with UXA with regard to stability. I still don't have an answer as to whether it will be enabled by default in the next xf86-video-intel release, but it is generally stable enough in most cases at the moment.
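
            If you want to try SNA yourself, it is selected through the driver's AccelMethod option - a minimal sketch, assuming your xf86-video-intel build was compiled with SNA support (switching back to UXA is just a matter of changing the value):

                Section "Device"
                    Identifier "Intel Graphics"
                    Driver     "intel"
                    Option     "AccelMethod" "sna"
                EndSection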

            If Batman used Linux, which distro would he use, and what CPU series is used in the Batcomputer?
            I think it would be BatLinux, a Linux distribution with a Bat logo on boot instead of a penguin. And it would have an /arkham+asylum directory instead of /lost+found.

            (The default root password is 'wayne', but please keep it secret.)

            Intel gfx are a really mixed bag. The development seems to focus on the latest upstream X server + support tools, and maybe on Ubuntu/Fedora releases.
            I am glad that you brought this up - because I just started doing the intel-drm-backports series of kernel branches this week, for this very purpose! Please check the http://dodonov.net/blog/2012/01/11/b...table-kernels/ page or the intel-gfx announcement for details (http://permalink.gmane.org/gmane.com...ers.intel/7961).

            For some reason I always have to log out after a few days because somehow the graphics become sluggish; a restart of X fixes it.
            Please, could you try with the latest versions of libdrm and xf86-video-intel? Lots of fixes related to that went into their latest versions over the past months. I believe most of the issues which require you to restart X once in a while have already been solved.
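
            A quick sketch of how to check which versions you are actually running (the log path assumes a standard Xorg setup; the package query is distribution-specific, shown here for Debian/Ubuntu):

                # The loaded intel driver version is logged by the X server
                grep -i "module version" /var/log/Xorg.0.log

                # Installed libdrm and driver packages on Debian/Ubuntu
                dpkg -l | grep -E "libdrm|xserver-xorg-video-intel"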

            Gallium3D drivers.
            Why? As I already commented here, and as other developers have commented on the mailing lists, I simply cannot see the point of moving to Gallium at the moment. Our drivers work, they are being developed, and I cannot think of what we could possibly gain by moving to Gallium besides 'oh, let's use it because it is cool'.

            Really, if you can provide any compelling reasons to move everything to Gallium, I'd be happy to hear them. It is a nice technology, and it is great to have it around, but I simply cannot see what would justify the effort. And moving everything from one technology to another just for the sake of moving is pointless, IMHO...

            Better support for old yet ubiquitous hardware. I'm reminded of all of the i810-class hardware that no longer works on Linux due to the driver going completely unmaintained, in a state that crashed every 5 minutes, before being dropped completely.
            The i8xx generation of GPUs is no longer produced or developed... So I wouldn't expect many improvements for i810-era GPUs in the future. They are simply too old, and we don't actually have them around for any testing or usage (well, I think Daniel Vetter still has some i810 relic, but that's pretty much it).

            However, nothing prevents someone else from working on that. Last year, a new driver for the i740 was released - done completely independently of Intel. But I don't think anything will come out of Intel with regard to it.

            GPUs competitive with at least entry-level AMD APUs - the Atoms are crap.
            I cannot discuss future products which have not been announced yet.

            It's sad to see barely any support for hardware that is not even 3 years old (i915).
            This is not true - the kernel, Mesa and the 3D driver do support i915 GPUs. If you are expecting those GPUs to run at the same speed as, for example, Ivy Bridge, well, that's simply not possible. Hardware evolves over time, and 3 years in GPU development is a very long time.

            However, i915 GPUs are still very much supported and developed. As I mentioned in the SNA comment, most of the development on the SNA backend applies to all generations of GPUs. Heck, with SNA you can even expect better performance on i915 than on i965 in some cases!



            • #56
              On my Acer TimelineX, enabling rc6 makes the battery last 9 hours instead of 6 (powertop reports 7.5W when idle, down from 11W without rc6 enabled manually), and it causes no adverse issues, but mileage may vary between Ironlake GPUs.
              I can confirm that rc6 has been working well enough for me on my Ironlake-based laptop too (Portege R700).
              I haven't measured watt usage, but I'm quite sure that I get _at least_ 2 more hours out of it.
              Not that I care that much, but SNA acceleration is not usable here.

              In general I'm quite happy with the graphics driver support from Intel, so thanks for that.
              The only thing that annoys me is Intel's support for wireless devices, which is ... well, let's be nice, not good :-)

              thanks



              • #57
                Originally posted by eugeni_dodonov
                Why? As I already commented here, and as other developers have commented on the mailing lists, I simply cannot see the point of moving to Gallium at the moment. Our drivers work, they are being developed, and I cannot think of what we could possibly gain by moving to Gallium besides 'oh, let's use it because it is cool'.

                Really, if you can provide any compelling reasons to move everything to Gallium, I'd be happy to hear them. It is a nice technology, and it is great to have it around, but I simply cannot see what would justify the effort. And moving everything from one technology to another just for the sake of moving is pointless, IMHO...

                Wouldn't moving to G3D mean that you would be able to share more code with the other driver devs and put resources into new features?

                If the above is true, it seems like a good reason to change to G3D.



                • #58
                  Why has my bug been ignored for over a year? https://bugs.freedesktop.org/show_bug.cgi?id=31960



                  • #59
                    Many browsers only work with Nvidia drivers because (according to the browser developers) there are too many bugs in drivers from other vendors. Please fix those bugs, so that browser developers enable hardware rendering for Intel GPUs.
                    Yes, but unfortunately, the flow of information is sometimes not that good. Upstream disabled WebGL acceleration because 'it is broken', but we have no reported bugs or issues describing what exactly is broken.

                    WebGL works well enough on my Ironlake and Sandy Bridge, though I admit I am not exercising it that hard. When comparing fps with the Windows drivers, I get mostly similar results, and I don't see stability issues. But that's pretty much it. Also, things have been remarkably faster in nightly builds of Firefox (I have been using nightly versions of Firefox since 2006 or so), so perhaps I am seeing different results because of this.

                    So if you have a specific list of situations or test cases where WebGL support in the Intel drivers on Linux could be improved, please, please, PLEASE let me know! I'd be very interested in hearing what exactly we need to improve, so we can direct our strength towards that goal.


                    How many people are on your development team?
                    The team is listed at http://intellinuxgraphics.org/team.html. Of course, the drivers are also being developed with huge help from community contributors, and with patches flowing from developers at Red Hat, Canonical, SUSE, and Google - among many others. Thank you, guys!

                    How many people are in the teams working on Windows drivers?
                    I don't know the exact number, but I'd guess around 500. It is just my guess though, so do not interpret it as an official Intel answer.

                    Is anything being done to improve driver support on other operating systems like FreeBSD, OpenBSD and Illumos?
                    Not from within our team; we are working primarily on the Linux drivers. I know that Konstantin is working on FreeBSD support, and frequently comments on the patches and features on our mailing lists. As for other operating systems, I simply don't know.

                    Please establish an open (free as in speech) standard for switching between discrete GPUs and IGPs.
                    (I also summed up all the other GPU-switching questions here.)
                    We do not develop those solutions at Intel; they are being carried out by the vendors (Nvidia and AMD). So there is not much I can comment on about that.

                    However, machines boot with the integrated card by default, for power-saving reasons.

                    And finally, I believe that Dave Airlie is working on some support for simpler switching between different GPUs. Phoronix covers this development from time to time (http://www.phoronix.com/scan.php?pag...tem&px=MTAyMDM).

                    I know you don't care that much about old hardware (9xx), but I just want to say that the 915GM occasionally locks up, with grey lines all over the screen.
                    Could you file a bug about this, following the http://intellinuxgraphics.org/how_to_report_bug.html guidelines? There are some similar bugs which could fit your description, but without additional details I cannot say which bug you are hitting, and whether it was fixed in the latest kernel version or has a patch somewhere...

                    At CES, Ivy Bridge was shown with a cool VLC video demo that would suggest DX11-class acceleration. This would mean everything from advanced pixel shaders to tessellation. The obvious question is: will the Linux side be exposed to the same level of acceleration, let's say, within 6 months of the Ivy Bridge launch, out of the box? Put another way: can I play Oil Rush on Ivy Bridge?
                    I cannot comment on a product which has not been officially launched yet, so please wait a couple of weeks until the IVB launch. But yes, there are significant hardware changes with regard to the GPU in Ivy Bridge. Both the Linux and Windows drivers use the same registers and work similarly up to a point, so it is possible to expect nice things from our drivers on Ivy Bridge. But I cannot give more details now.

                    As for Oil Rush, I haven't tried it myself yet, but I will give it a try and let you know how it goes on Ivy Bridge.

                    Then I upgraded to a Sandy Bridge box and have tearing video again, currently using Debian Sid, Gnome 3.2 and recent kernels. I tried XV and VAAPI playback in different players (mplayer, vlc, MythTV) and all were affected. This really makes me sad.

                    Any hints on what I'm doing wrong, or, if it's not my fault, on when tearing-free playback will be possible on Sandy Bridge?
                    You are hitting the infamous bug https://bugs.freedesktop.org/show_bug.cgi?id=37686. There is no fix for this at the moment, but I expect it to get fixed at some point.

                    Meanwhile, you can play videos without tearing on Sandy Bridge if you use the GL or VAAPI video outputs and disable compositing (or use a compositor which plays nicely with that). The bug report itself lists some tricks you can try to get it working. But as for a real fix, no, it is not there yet, sorry.
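
                    For reference, a sketch of the kind of workarounds meant here, using mplayer (the gl output is standard; the vaapi output requires a VAAPI-enabled mplayer build, such as the mplayer-vaapi branch):

                        # OpenGL video output:
                        mplayer -vo gl movie.mkv

                        # VAAPI decoding and output, on a VAAPI-enabled build:
                        mplayer -vo vaapi -va vaapi movie.mkv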

                    1) My experience matches Kano's, in that if you happen to use even one version different from those mentioned in the quarterly package, it's unstable. This doesn't happen with other FOSS drivers (ati, nouveau).
                    The Intel Linux Graphics stack releases (which were quarterly releases previously, but will follow a more flexible scheme starting this year) go through much more detailed and complete testing by Intel's QA team prior to their release. This is why they are recommended for general usage - because the QA team has run all the possible tests on them, filed the bugs, and knows what to expect from them.

                    The usual releases of the kernel, Mesa, and so on receive lighter testing. Not that they are less stable - but they can have new issues which the QA team had no time to catch.

                    So the advantage of a periodic stack release is that you know exactly what to expect from it - e.g., you know what works, what the highlights are, and what known issues to expect.

                    Even though as a dev I understand why you did it, as a customer I'm still pissed about how you dropped the support for i8xx hw.
                    Yes, I understand; this is sad. But unfortunately, the resources required to maintain and develop such old components are pretty high, so we cannot dedicate as much effort to those drivers as they would need.

                    This is the same situation older GPUs are going through right now with the DRI1 removal from Mesa. If you need to work with such old hardware, you can use one of the older releases, where things were tested and verified. With the latest versions, not much verification and stabilization of such components takes place, so yes, sometimes they simply are not expected to work at all.

                    That's sad, but that's one of the disadvantages of the open-source model. Stuff gets supported and developed as long as someone is interested in it. Think about the kernel, for example - when the maintainer for a given piece of hardware loses interest, and nobody steps up, its driver gets dropped. And with hardware which has not been developed for a couple of years, it is really hard to put additional effort into its maintenance.

                    On the 2D side, Chris Wilson and Daniel Vetter do still put some effort into supporting those cards. But this is mostly done on their own time.

                    The problem I have: powering on the HTPC before the TV/amplifier results in a black screen; the only solution is to turn everything off, then power on the amplifier first, then the TV, then the HTPC.
                    The answer I found on IRC or forums to this question was: it's the way the Intel driver works. Is this correct?
                    I believe there were some patches from Paulo Zanoni and Jesse Barnes which should have solved this issue, or some very similar issues. Could you try with the latest 3.2 kernel, please, when you have the time?

                    Otherwise, if you could file a bug about this problem, we'll try to look into it.

                    In any case, thanks for bringing this up - with so many different hardware combinations and possible connectivity solutions, it is impossible to test all possible cases. So in case we missed this problem, we'll be interested in learning what is wrong and how to fix it. Thanks!



                    • #60
                      Hey, Eugeni! Thanks for your presence on the forum!! Really!

                      My only question about Linux and Intel is that you please really take another look at Gallium, and then yet another look if the result is negative.
                      It would be superb if there were not two different gfx stacks on Linux, but one joint, collaborative one. Nothing would prevent you from forking and hacking the Gallium code in order to improve it and reintroduce features, so there would not be as much redundancy as there is currently.
                      Thanks!

                      Originally posted by eugeni_dodonov
                      Why? As I already commented here, and as other developers have commented on the mailing lists, I simply cannot see the point of moving to Gallium at the moment. Our drivers work, they are being developed, and I cannot think of what we could possibly gain by moving to Gallium besides 'oh, let's use it because it is cool'.

                      Really, if you can provide any compelling reasons to move everything to Gallium, I'd be happy to hear them. It is a nice technology, and it is great to have it around, but I simply cannot see what would justify the effort. And moving everything from one technology to another just for the sake of moving is pointless, IMHO...
                      Of course, nothing is done without reason, but these wishes do not arise without reason either.
                      -) reduced redundancy
                      --) in the Linux stack
                      --) in the packages on Linux machines
                      --) you would do less "ape work" and more sane work

                      -) establishing a standard through collective effort, rather than pushing two different boxes at double the slowness
                      --) better interoperability between hardware running the open-source stack
                      --) better help for the Xorg switching mechanism
                      --) less problematic use of the Intel IGP (CPU) and switching to an AMD discrete card
                      --) fewer bugs

                      You agreed on OpenGL and agreed on DirectX - why then do you and AMD reinvent wheels on Linux?

