Gallium for i965 stuff like Sandy Bridge.
You guys have no excuse for purging the i965g tree when the i915g tree was doing just fine.
I have been writing a list of things that are priorities to me, but while reading the whole thread I was pleased to see that they are mostly addressed, or being addressed, for my Ironlake platform. Like most typical users, I just want my PC to work: when a shiny new feature or piece of software pops up, I want it to run without spending too much time getting it working, and for that my platform needs to be supported. So while we spend time discussing whether OpenGL, OpenCL, VA-API, WebGL or anything else should be a priority, the question we should ask first is how long (in years) we can expect the hardware to be supported, because while it is supported, the features listed above will keep being worked on for the hardware users actually have in their possession.
Besides, I am a big fan of getting the most out of the stuff already around for the longest period possible. More on that here: http://www.youtube.com/watch?v=I5DCwN28y8o
PS. Priority no. 2:
Running the highest resolution possible for desktop/video/games with the lowest energy consumption.
First of all, I want to thank you for trouble-free usage of Intel graphics over the years.
However, I have a particular problem with some Intel hardware.
My girlfriend is running a ThinkPad X41 Tablet (with Intel 915GM, running at 1024x768 with 32 bpp) on Debian testing, and she sometimes hits a strange problem. When using GNOME Shell (or any compositing manager), many font glyphs are corrupted (I assume after resume). It shows up most easily in gnome-terminal/terminator, but sometimes also in application menus. I can work around unreadable glyphs by zooming in the terminal with Ctrl-+ a few times, but on zooming back out the corrupted glyphs reappear. The corrupted glyphs are even preserved when taking a screenshot:
As for general questions, I will ask:
1. Do you plan to do something positive and create a single reasonable open-source API for video encoding/decoding? VDPAU looks sensible and simple to use; couldn't you adopt it as well? It would make users' and software developers' lives easier. Please.
2. How committed are you to keeping older cards and chipsets working with newer kernels, newer Mesa versions, etc.? For example, I currently have some crashes, as well as mipmap generation problems, on one machine with an Intel 865G on board, and nobody is doing anything about it. How do you test newer releases on older hardware? Do you have access to lots of physical computers with newer and older hardware for testing?
3. Any plans to migrate to the Gallium infrastructure? How will it affect the answer to question 2, i.e. older hardware?
4. How big is the Linux Intel GFX team? Will it grow (to accommodate testing, coverage of a large number of chipsets, and new features)?
Thanks in advance for all answers.
Quote: "I get strange graphics corruption with RC6 enabled (and can't disable VT-d because I don't have a BIOS). Should I file bugs about this, or email the mailing list?"
I'd suggest doing both. If you can reproduce those issues consistently, it would be very interesting to investigate them. MacBooks come with much more standardized configurations, so it would be much easier to track the issue if it happens on all of them.
Thanks a lot, it is a very interesting lead for tracking the RC6 issues!
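For anyone who wants to experiment with RC6 while debugging corruption like this, here is a hedged sketch for kernels of this era (around 3.2), where the relevant i915 module parameter was called i915_enable_rc6; the parameter name has changed across kernel versions, so verify against your own kernel's documentation before relying on it:

```shell
# Hedged sketch (kernel ~3.2 era; parameter names differ on other kernels).

# Check the current RC6 setting (-1 usually means "driver default"):
cat /sys/module/i915/parameters/i915_enable_rc6

# To disable RC6 while debugging, boot with the kernel parameter:
#     i915.i915_enable_rc6=0
# e.g. appended to GRUB_CMDLINE_LINUX in /etc/default/grub, followed by
# running update-grub and rebooting.
```

If the corruption disappears with RC6 off, that is exactly the kind of data point worth putting in the bug report.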
Quote: "Will there be OpenCL support in the open-source drivers from Intel?" (a summary of all the OpenCL-related questions)
This is probably the most popular question.
Unfortunately, as far as I know, there are no plans to provide OpenCL support in the open-source drivers at the moment. There is an SDK (http://software.intel.com/en-us/blog...ort-available/) available since last May, but that's all for now.
Of course, I cannot comment on Intel plans which have not been made public at this time. So this is all I can say about OpenCL support at the moment.
When (if) something changes with regards to it, I'll certainly let you know.
Quote: "To Intel: I noticed that you do some good internal testing (http://intellinuxgraphics.org/testing.html) on every release. Any chance of a collaboration with the Phoronix Test Suite?"
Well, Phoronix Test Suite workloads have been used for much of our testing for a long time already. We just don't post all of the results on that page, because:
1. it would make the page very big, and
2. Michael already does an amazing job with open-source testing on phoronix.com.
I am actually very thankful for this question, as it gives us an opportunity to thank Michael once again for all his hard work. Phoronix.com is one of the few sites (if not the only one) which does complete and in-depth testing of the major features of open-source graphics drivers out there. And the Phoronix Test Suite is a nice tool to automate such tests and let the community share the results.
And receiving comments from a third-party web site is certainly good. When things are bad, it is better to hear the hard truth than sweet lies. And when things are good, it is very satisfying to receive feedback from the real users we have!
Quote: "I'd just love to see the work that Intel is doing continue to benefit the entire stack (yes, it's selfish), as I don't have Intel graphics in all of my machines."
(This is just my opinion on the matter.)
The beauty of open source is the freedom of choice. Yes, if we had one kernel configuration, one window manager, one browser, one media player and one way of doing drivers, it could be easier up to a point. But this is not how open source works: the evolution of different solutions results in competition, which moves the whole ecosystem forward.
For example, it certainly would be much easier to support just 1 distribution instead of Ubuntu/Mandriva/Redhat/Suse/Arch/.....
At the same time, it certainly would "benefit" end-users more if there were just one desktop environment (for example, GNOME), with no possibility of choice. And the same for browsers (everyone using Firefox), office suites (everyone working only in AbiWord) and media stacks (MPlayer rocks, so why bother with GStreamer, Xine, FFmpeg and VLC?)...
However, this also leads to the question: why should anyone support Linux/BSD/Haiku at all, if most users out there use Windows? Shouldn't we just give up on everything but one OS with one possible application for each task?
So my opinion is: let Gallium3D and classic Mesa co-exist. Their evolution only benefits the progress of open source as a whole. When Mesa advances, Gallium has the possibility of taking a leap ahead. And vice versa: when Gallium makes progress, Mesa needs to catch up again.
So in the end, everyone wins.
Quote: "Are we ever going to have GUI editing tools for driver options on Linux, like the Windows variant?"
Yeah, another very interesting question.
The truth is, the way we work, we work directly on upstream. So if upstream (X.org, the kernel, ...) settles on how to support options in a cross-device, mainstream way, without giving preference to one unique tool, that is what we do. This is why we work on upstream xrandr support instead of an intel-settings tool, for instance. This gives you the freedom to choose how you want to control those settings: in KDE you have systemsettings, in GNOME you have its control panel, in the console you have man xrandr, and so on.
If you need a user-friendly control panel for the driver options, the most correct solution is to ask your distribution to add one. They know for certain who their users are. We provide them all with support, giving no preference nor discrimination to any distribution or desktop environment; how those settings are visualized and used is up to them.
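As a concrete illustration of the console route mentioned above, here is a hedged sketch of controlling common display settings through xrandr. The output names (LVDS1, VGA1) are assumptions and vary per machine, so list yours first:

```shell
# Sketch: display/driver settings via the upstream xrandr interface.
# Output names (LVDS1, VGA1, HDMI1, ...) vary by machine; list them first.
xrandr                                  # show outputs and supported modes
xrandr --output LVDS1 --mode 1024x768   # set a resolution on the panel
xrandr --output VGA1 --right-of LVDS1   # position a second screen
xrandr --output LVDS1 --brightness 0.8  # software brightness tweak
```

The GUI tools mentioned (KDE systemsettings, the GNOME control panel) end up driving the same RandR interface underneath.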
Quote: "You could say 'I don't know' or 'I can't talk about that', but instead you keep ignoring them. Why ask for questions if you don't want to answer them?"
Sorry, I am trying to answer all of them, but it is Saturday, and since yesterday I've spent most of my time driving (600 km in total since yesterday; at least the roads in Brazil are very well maintained), and we also have 10 pages of comments. The ones I skipped either require a more dedicated post to answer, or I don't have an answer to them, or I felt they were answered in one of my previous replies.
In the end, I hope to answer all the questions, but if I keep quiet for too long about some of them, feel free to ping me. Maybe I missed some comments along the way, or the notification email didn't arrive at the right time.
Quote: "I read that Ivy Bridge supports 4K output over DisplayPort in Windows. Will this also be possible on Linux? (I might also buy an external monitor when the first cheap 4K monitors are available.)"
It should. Note that not all 3-output configurations will work at this resolution at the same time (I'll provide more details on how the IVB 3-display modes work closer to the IVB release date).
Quote: "So it simply means Intel isn't interested in supporting customers (both the home and corporate markets) of Intel hardware, right?"
To be fair, the i810-like series of chipsets went out of production several years ago, have not been manufactured for years, and even the longest possible warranty for them expired years ago as well. But they still work; they just do not support the newest features.
Just like with Windows drivers, you cannot install the Nvidia drivers for the latest series of their graphics cards and expect them to power up a Riva TNT2 card and let you run Crysis. But if you do install the supported drivers (which you can find in the form of older kernel releases + older Mesa releases + older xf86-video-intel), they will work.
Quote: "I have the following from dmesg:"
Code:
$ dmesg | grep -i drm
[drm] Initialized drm 1.1.0 20060810
[drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
[drm] Driver supports precise vblank timestamp query.
[drm:intel_framebuffer_init] *ERROR* unsupported pixel format
Could you file a bug according to http://intellinuxgraphics.org/how_to_report_bug.html, please? I think this will require additional investigation, and it is easier to do via Bugzilla...
Thanks for bringing this up, I wasn't aware of this issue actually.
Quote: "What kind of cooperation do you have with Ubuntu (and other distros as well), if any at all?"
On the open-source side of the drivers, for technical cooperation we have monthly calls with Canonical and most other OSVs, as Michael mentioned initially in the article. Also, many developers use Ubuntu on their machines. And finally, Canonical developers constantly contact us directly for feedback on issues brought to their attention.
In general, all distributions and their maintainers and developers are free to contact us via any possible means (mailing list, IRC, commercial contacts and so on).
Quote: "Are there any plans to improve the situation regarding the availability of precompiled packages of stable releases (from an ordinary user's point of view)?"
Yes, we considered pre-building binary releases of the drivers, but the idea didn't receive any positive feedback from the community at the time, so we put it on hold. And in general, the SUSE OBS already does a nice job of building drivers for all the possible distributions.
What kind of binary driver releases would you be interested in? Wouldn't OBS pre-packaged drivers be adequate for all the needs?
Quote: "Are you able to reuse much work of the Windows driver team?"
No, the Linux drivers are developed without any direct connection to the Windows drivers. The architectures are simply too different.
Quote: "Will we see proper kernel support for Core i7 920 and above?"
Sorry, I don't know what the issue is, and I don't have any i7 around... I'll see what I can find out.
Quote: "A recent Phoronix article showed some performance regressions on Core i-something over the past year. Why did these regressions happen?"
Mesa git is focused on adding GL 3.0 support at the moment, so it has received lots of refactoring and changes all around. I'd say some of those changes caused unexpected regressions in some workloads.
As for Urban Terror, I don't have a clue. There are too many possibilities (kernel changes, Mesa changes and 2D changes), so without bisecting it is impossible to tell. My guess is that with 2011Q1 it was fast but misrendered in some cases, and those cases were then fixed. With that, everything started to be rendered properly, but this required more work to be done and lowered the FPS.
For VDrift, I think there was an issue where graphics weren't rendered correctly due to stencil problems, so it rendered fast but with very few details. When that got fixed, it started to render things correctly, but thus had much more to render. Hence the performance drop.
But it is just a guess; without bisecting I cannot tell for sure. I believe there is an auto-bisecting tool within the Phoronix benchmark, but I have never used it. If someone is willing to investigate those items, that would be awesome!
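Since bisecting keeps coming up, here is a minimal, self-contained sketch of how `git bisect run` can pin down the commit that introduced a regression. A throwaway repository stands in for the real Mesa tree; with the real tree you would substitute known-good/known-bad tags and a script that rebuilds Mesa and reruns the benchmark:

```shell
# Sketch: automated regression hunting with `git bisect run`.
# A throwaway repo stands in for mesa; fps.txt stands in for a benchmark run.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name tester
for i in 1 2 3 4 5; do
    echo "$i" > fps.txt              # pretend this is the measured result
    git add fps.txt
    git commit -qm "commit $i"
done
# Suppose the "regression" landed in commit 3 (value crossed our threshold).
git bisect start HEAD HEAD~4 >/dev/null
# The run script exits 0 for "good", non-zero for "bad".
git bisect run sh -c 'test "$(cat fps.txt)" -lt 3' >/dev/null
echo "first bad commit: $(git log -1 --format=%s refs/bisect/bad)"
```

With a real workload, the run script would be "build mesa at this commit, run the benchmark, compare FPS against a threshold", which is essentially what an auto-bisecting harness automates.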
Quote: "How long (in years) can we expect the hardware to be supported? Because while it is supported, those features listed above will be worked on for the hardware users have in their possession."
I am not a commercial nor a marketing person, so I cannot give any dates.
What I can say is that hardware is supported as long as it is being manufactured + as long as it is under warranty + as long as we have customers using it at a large enough scale to justify allocating resources for its support. After that time, it can still be supported if there are people interested in such support.
Think of Windows XP: it was supposed to be dead a couple of years ago, then its support got extended, and then extended again. It is still among us, and will be for a long time if I had to guess.
With our drivers, especially considering the Sandy Bridge architecture, we have a huge bonus though: our specifications are open! So if necessary, the community is able to implement new features or support new hardware even without our direct intervention. Lots of new features were developed for the drivers without our team; the community did it. That is the true power of open source, IMHO.
Quote: "Running the highest resolution possible for desktop/video/games with the lowest energy consumption."
This is our priority as well!
If you consider the history, each new generation of GPUs improves FPS in most workloads and also lowers power usage in both idle and loaded modes. Heck, you can even run OpenArena and Nexuiz for several hours on battery now; could you have imagined that a couple of years ago?
Quote: "You guys have no excuse for purging the i965g tree when the i915g tree was doing just fine."
Ergh, it wasn't us who purged it. It was the Gallium developers themselves who said they were giving up and saw no point in maintaining it, so they deleted it themselves. Or am I wrong?
For the Phoronix Test Suite, in order to automate the parsing of hardware information, there are separate code paths for Intel, Radeon, and Nouveau (along with the binary drivers, and then on the debugfs side for handling differences between kernel versions, since there have been some stupid naming-convention breaks there for some drivers)... And then for some of these new ARM DRM drivers: the DRM drivers obviously know this information, but some aren't exposing it, and others are doing it with their own node names as well.
Besides a standard making it easier for tools like PTS, it would also make other debugging/dumping scripts more universal. Plus, if any other utility comes about at some point in the future for better configuring the DRM drivers, it would just be a clusterfuck supporting each driver's own way of exposing information that is pretty standard across GPUs/drivers, etc.
What I really miss is a tear-free driver (Xv output, Flash). I'm quite surprised that so many months after SB was launched, it still suffers from tearing issues. In my case, using the GL output with MPlayer is fine, but some software supports only an Xv output. So no luck there... To be honest, I was surprised (and really disappointed) when I found out about this situation. It seemed at the time that SB was the perfect candidate for an HTPC system...
Regarding the Phoronix article (back at the beginning of 2011) which stated that the SB drivers were a mess on Linux: why was there never any mention of a tearing issue? I don't recall any other article that mentions it either...
I won't be able to find the e-mail again, but "why is Gallium better?" was asked by one of the Intel guys on mesa-dev.
Marek was quick to answer that something (technical stuff; obviously I don't know the details and can't remember now) was _much_ faster.
So I think there _is_ a technical argument besides the potential state trackers.
IMHO, using Gallium for the generation after Ivy Bridge would be reasonable. Of course, it's not for me to decide.
Oh, and when AMD, i.e. Tom, gets Clover working, it will work on Intel too. (Not to mention Nvidia.)
I have systems with several generations of Intel graphics: a GM965, an HD 4500 and a 5700MHD. I like your approach, I bought in, and I kept supporting your graphics. But none of my cards can provide satisfactory performance in games developed 8 or 9 years ago (I am not asking for much; that's almost a decade). Primarily I am referring to Wolfenstein and Wolfenstein: Enemy Territory, good games with native Linux ports.
What I mean by satisfactory is a stable 30 FPS (I am not asking for much; gamers play at three times that). What is worse, when grenades start to fall (the special effects, so to speak), the FPS can drop to 0 and the games crash. Six years ago my Nvidia GeForce2 could provide a stable 25-30 FPS. Three years ago my Intel GM965 could provide a stable 25-30 FPS. That brings me to my question.
When I ran the old EXA drivers on the oldest GM965, my hardware worked. You promised a lot with UXA and GEM and then failed to deliver. Even today a 5700MHD cannot match what the GM965 was doing 3-4 years ago. When are we going to see some improvements?
Hello Intel Linux GPU driver team,
please make one driver with a complete feature set, or at least add hardware acceleration to the console framebuffer.
Right now I have to use at least two incompatible Intel drivers to get an almost complete feature set. To make things worse, switching between the Intel drivers requires a reboot, because neither of them can be unloaded.
I use this driver, /usr/src/linux-3.2.1/drivers/gpu/drm/i915/intel_fb.c, which works great but is NOT accelerated (though it works with X):
Code:
.fb_fillrect = cfb_fillrect,
.fb_copyarea = cfb_copyarea,
.fb_imageblit = cfb_imageblit,
To have an accelerated Linux console I use this one, which is hardware accelerated but only for the console, with no kernel mode setting and no X; this one is not DRM compatible:
Code:
.fb_open = intelfb_open,
.fb_release = intelfb_release,
.fb_check_var = intelfb_check_var,
.fb_set_par = intelfb_set_par,
.fb_setcolreg = intelfb_setcolreg,
.fb_blank = intelfb_blank,
.fb_pan_display = intelfb_pan_display,
.fb_fillrect = intelfb_fillrect,
.fb_copyarea = intelfb_copyarea,
.fb_imageblit = intelfb_imageblit,
.fb_cursor = intelfb_cursor,
.fb_sync = intelfb_sync,
.fb_ioctl = intelfb_ioctl,
I never could understand what is wrong with all these Linux GPU driver writers that pushes them to create several mutually incompatible drivers for one piece of hardware, each with a different feature set, so the user is forced to reboot depending on what he would like to do. This is sick, and this plague affects Intel, Nvidia and AMD alike, so there is no escape. On the Linux platform this madness exists only for GPU drivers. Why can no one make one driver with all the features?