50% slower than AMD! Intel FAIL again with the HD4000 graphics hardware.


  • bridgman
    replied
    Some of our customers want open source drivers; others want the performance and features that can only come from proprietary drivers, so we are doing both.

    The only surprise was that the open source drivers turned out to be more attractive to embedded customers than we first expected, partly for ease of porting to other OSes/CPUs and partly because of the ability to support *extremely* long product lifetimes.



  • bug77
    replied
    Originally posted by Kano
    Don't you think that OSS drivers are a bit pointless when they do not provide h264 / vc1 acceleration? It does not matter much whether it is via VDPAU or VA-API, but it should be there.
    I can understand nvidia's stance: OSS drivers are just a bootstrap until you can install proper drivers. But ATI's approach is not so clear-cut. They support people willing to bang their heads against the wall writing drivers that will never catch up with Catalyst anyway. To what end, I don't know...



  • bridgman
    replied
    Originally posted by Kano
    Don't you think that OSS drivers are a bit pointless when they do not provide h264 / vc1 acceleration? It does not matter much whether it is via VDPAU or VA-API, but it should be there.
    No, although it would be a nice addition.



  • Kano
    replied
    Don't you think that OSS drivers are a bit pointless when they do not provide h264 / vc1 acceleration? It does not matter much whether it is via VDPAU or VA-API, but it should be there.
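
    VDPAU and VA-API are the two interfaces a Linux driver can expose for that decode offload. As a minimal sketch - assuming a Linux/X11 box with libva installed, and not tied to any particular driver discussed in this thread - an application would probe for H.264 decode support through VA-API roughly like this:

        /* vaapi_probe.c - ask VA-API whether the installed driver
         * advertises an H.264 decode profile.
         * Build (assumption): gcc -std=c99 vaapi_probe.c -lva -lva-x11 -lX11 */
        #include <stdio.h>
        #include <X11/Xlib.h>
        #include <va/va.h>
        #include <va/va_x11.h>

        int main(void)
        {
            Display *x11 = XOpenDisplay(NULL);
            if (!x11) { fprintf(stderr, "no X display\n"); return 1; }

            int major, minor;
            VADisplay va = vaGetDisplay(x11);
            if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS) {
                fprintf(stderr, "vaInitialize failed\n");
                return 1;
            }

            /* Ask the driver which codec profiles it supports. */
            int n = vaMaxNumProfiles(va);
            VAProfile profiles[n];
            if (vaQueryConfigProfiles(va, profiles, &n) == VA_STATUS_SUCCESS) {
                for (int i = 0; i < n; i++)
                    if (profiles[i] == VAProfileH264Baseline ||
                        profiles[i] == VAProfileH264Main ||
                        profiles[i] == VAProfileH264High)
                        printf("H.264 decode profile found: %d\n", profiles[i]);
            }

            vaTerminate(va);
            XCloseDisplay(x11);
            return 0;
        }

    If nothing is printed, playback falls back to CPU decoding - which is the complaint above.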



  • bridgman
    replied
    Originally posted by crazycheese
    John, guys, I understand your approach, but it is not my approach. If you take a certain approach, then you apply a whole different method, and the same method can be seen as the least efficient or the most efficient depending only on the point of view.
    I actually think what's going on is a mix-up between "what we did to catch up" and "what we are doing now". I agree completely that this is not necessarily obvious, since Alex did a lot of work helping the transition to KMS and that *did* involve working on older hardware, but if you think of KMS as the exception rather than the rule, this will all make a lot more sense.

    Originally posted by crazycheese
    That said, you develop legacy "tail" hardware and slowly approach the "head of the fish". But your company sells the "head of the fish", and that head is supported only via the proprietary driver.
    That only applies while we are catching up. We came fairly close to catching up in time for SI, and would have if the delta between NI and SI had been comparable to the delta between previous generations; we expect to be substantially caught up by the next generation. Most of our focus now is on new hardware, although as we learn things we often go back and apply fixes to older hardware at the same time.

    Originally posted by crazycheese
    So your approach prioritizes the following strategy:
    1) you depend on the proprietary driver and cannot remove it - your sales of "head"s would stop, and hence support for the drivers as a whole would stop
    2) your proprietary driver will stay more advanced than the open source one forever; see 1.
    3) you will throw huge human resources into a segment that does not touch open source at all, and is largely uninteresting to non-enterprise, high-volume consumers/retailers
    4) you will never manage to utilize the complex architectures seen in top models efficiently in open source, because you simply lack the human resources - which are gained only from sales of those cards
    5) this means you are not interested in selling your cards to be used with open source solutions
    I don't understand #1 - are you just saying that work on the proprietary driver uses resources which could help the open source driver? If so then I agree, but again please note that those resources would not be enough to let the open source driver meet the market needs which the proprietary driver satisfies.

    Re: #2, yes, but the only way to prevent that would be to get rid of the proprietary driver so that there would be nothing to compare with.

    Re: #3, yes, but then we would need to walk away from a very important market which *nobody* supports with open drivers.

    Re: #4, maybe (although one could argue that it's optimization work rather than underlying architecture that makes the difference), but funding the implementation of complex architectures / optimizations requires sales from the entire PC market, not just the Linux portion -- being able to share code and investment across the entire market is the main reason that proprietary drivers even exist.

    Re: #5, I don't agree with your conclusion. If you said something more like "we are not willing to walk away from the workstation market even if doing so would allow us to offer an attractive open source solution more quickly" then there might be some truth to that. However, you need to remember that it was *never* our plan to write the drivers exclusively ourselves. If you get a chance, please re-read the comments we made back in 2007.

    Originally posted by crazycheese
    This goes completely differently with Intel. Intel prioritizes development on the "head of the fish", and only then, if ever, the tail. This gives exactly the situation where their legacy hardware is unusable under Linux, but they have fast-paced open development for top-notch current hardware. I want to buy the newest hardware and use it with open source - this is why Intel's approach is better for me.
    Not different at all -- that's what we are doing as well, and have been for a while, although note that the developers funded by embedded are working on the areas considered most important for the embedded market, which typically includes "recent" chips as well as the "newest".
    Last edited by bridgman; 01-23-2012, 05:41 PM.



  • crazycheese
    replied
    Originally posted by Tgui
    I had an Intel 4500MHD in my old HTPC, and now an HD2000 in my current core i5 HTPC. Both work fine, can play basic 3D games and are fast enough to render HD video at 1080P. These machines were/are up for months at a time as well.

    I've had a number of ATi cards and had nothing but pain with their open and closed source drivers. It got to the point I just went out and bought an NVIDIA card for my workstation.

    I like Intel's GPUs. For me they pretty much just work in a machine with limited graphics needs.

    Though the title is obviously trolling: even if "INTEL FAIL" integrated graphics are 50% slower than "AMD!", they're still 50% faster than what my needs dictate.
    Wow, wow, I have a rather similar yet different experience.
    My experience was that fglrx crashed just from switching TTYs, and the open source driver did 1-5 fps.
    Within a year, the open source driver went up to 40-50 fps (out of a possible 300); I'm talking about raw performance here, not perceived performance or "enough for you anyway".

    But, you know, investing 3-4 days in research, hunting for a specific GPU, overpaying for a rare card from an outdated generation - because the open source driver was developed from the tail and could not support newer, cheaper, more efficient hardware - that was already a warning sign...

    I wouldn't say Intel's graphics solution is miserable crap. Newest stack, complete usage of the underlying hardware, performance on the level of an 8800-9800 GT with the proprietary drivers. The open source support I've been waiting for from AMD came from a different company ... yeah...



  • crazycheese
    replied
    Originally posted by bridgman
    That's not a policy or anything, just a technical constraint. Each GPU generation builds on the one before, so we had to start with the oldest unsupported parts and work forward until we caught up with the release of new hardware.

    It took ~4 years, but then again there were 6 generations of hardware to support, more if you include the KMS rewrite for 4 earlier generations.
    John, guys, I understand your approach, but it is not my approach. If you take a certain approach, then you apply a whole different method, and the same method can be seen as the least efficient or the most efficient depending only on the point of view.
    That said, you develop legacy "tail" hardware and slowly approach the "head of the fish". But your company sells the "head of the fish", and that head is supported only via the proprietary driver.
    So your approach prioritizes the following strategy:
    1) you depend on the proprietary driver and cannot remove it - your sales of "head"s would stop, and hence support for the drivers as a whole would stop
    2) your proprietary driver will stay more advanced than the open source one forever; see 1.
    3) you will throw huge human resources into a segment that does not touch open source at all, and is largely uninteresting to non-enterprise, high-volume consumers/retailers
    4) you will never manage to utilize the complex architectures seen in top models efficiently in open source, because you simply lack the human resources - which are gained only from sales of those cards
    5) this means you are not interested in selling your cards to be used with open source solutions

    This goes completely differently with Intel. Intel prioritizes development on the "head of the fish", and only then, if ever, the tail. This gives exactly the situation where their legacy hardware is unusable under Linux, but they have fast-paced open development for top-notch current hardware. I want to buy the newest hardware and use it with open source - this is why Intel's approach is better for me.

    Even if I bought a "high performance by a factor of 200%" AMD APU, it would reach its "200%" performance only via the proprietary driver, and even then it is widely known that you prioritize DirectX way more than OpenGL.
    So under Linux, your "200% APU" becomes a "50%" APU. Add to that the currently less efficient power management and performance-per-core ratio, and only Intel stays in the running.

    Of course, if I went the high-GPU-performance way, it would be different - only because Intel does not make high-performance GPU hardware. Then we would have a clash between the two evils, red and green.
    Your DirectX love, and your shorter support window for core packages (glibc, kernel, xorg) and hardware generations, pretty much seal the decision - yet again, not in your favor.

    John, you are a wonderful, patient, consistent man; but it is all about the company's approach, not charisma.


    Let's talk about my recent purchases. This past year I have purchased a used GTX 260 instead of an HD 57xx, a brand new 560 Ti instead of a 6970 for a friend of mine, and an i3 instead of a 41xx/A6. Total cost, I think, was around 1000€. For Linux hardware.
    You see, Linux people do go shopping, and they shop for hardware which supports their OS.
    It is common practice for companies to detect market developments and adapt early - not to follow after it is already too late, look at outdated statistics, or try to influence buyers' decisions by forcing them to do what the mass market does (because each buyer is influenced by the exact same approach, recursively). You buy because everybody buys, because people just like you buy - so you have no choice but to follow. That is a very, very dirty society; it stinks of monopolies. I don't invest money there; I have my own head.
    Last edited by crazycheese; 01-23-2012, 03:52 PM.



  • crazycheese
    replied
    Originally posted by kobblestown
    Dude, it just doesn't work like that! You've most probably been confused by Hyper-Threading. With HT, each core appears to handle multiple threads simultaneously. But in fact, it cannot. It can only handle a single thread in any given cycle (I guess that goes for each pipeline unit individually). It only switches from thread to thread when, for instance, one of the threads is waiting for data to arrive, or in a round-robin fashion when several threads are ready for execution. But it cannot move a thread to a different core. It doesn't matter whether you think that would be a good idea or not. There's no processor that I know of that is equipped with such hardware. That's what the OS is for.
    No man, I have done coursework in operating systems development, and I know fairly well what I'm talking about. I have not been following the development of x86 hardware internals since the Pentium/Am5x86 era, though; that is why I'm speaking in generic terms.
    I don't mean HT. HT, as seen in Core 2-era hardware, is, as you correctly described, an attempt to present twice the number of cores to the external interfaces in order to saturate I/O and then let the internals do the work. This approach is a win most of the time, if the hardware is capable of managing itself precisely, and a loss when the cores are forced to work on completely different tasks with high I/O and a low instruction load, OR when they do lots of JMPs - all of which results in cache thrashing; under HT you get cache thrashing plus each core having only 1/2 of the usable cache, which may actually make execution slower than running without HT (on 1/2 the cores). I think Phoronix has covered this already (how Intel/AMD scale).

    The part I mean is this:

    Branch prediction and memory execution/ordering - the CPU's internal scheduling logic. I didn't mean "scheduler" as it is classically understood, belonging within OS kernel logic. This is far lower level, but it does group instructions in memory space and throw them at individual cores/modules, layer-wise, until the cores are at 90%.

    Optimally, it should monitor the load at the lowest level and quickly issue reordering. This would allow specific cores to be loaded until they are at 90% of capacity, and only then would tasks be reordered onto the other cores, which sleep in the meantime.
    This would mean the unneeded cores can sleep - less energy consumption. The same logic is used within GPUs: zero load, or whatever AMD marketing calls it. I have already said they should really bring that graphics experience into the CPU segment.
    On Bulldozer, that would mean modules get loaded precisely, decreasing load spread. Call it "intelligent instruction grouping" if you want.
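
    The counterpoint in the quote above is worth making concrete: on today's x86 parts, moving a thread between cores is the OS scheduler's job, and software can steer that placement explicitly. A minimal sketch - assuming Linux and glibc, and not describing any AMD or Intel hardware feature - of pinning a thread via CPU affinity:

        /* affinity_sketch.c - pin the calling thread to core 0.
         * The CPU reorders instructions *within* a core; thread-to-core
         * placement is decided by the kernel, and an affinity mask is
         * how software constrains that decision.
         * Build (assumption): gcc -std=c99 affinity_sketch.c -o affinity_sketch */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(0, &set);                /* allow execution on core 0 only */

            /* pid 0 means "the calling thread" */
            if (sched_setaffinity(0, sizeof(set), &set) != 0) {
                perror("sched_setaffinity");
                return 1;
            }

            /* The kernel now keeps this thread on core 0; the other,
             * idle cores are free to drop into deeper sleep states. */
            printf("running on core %d\n", sched_getcpu());
            return 0;
        }

    Whether that decision could profitably migrate into hardware, as proposed above, is exactly what the two posts disagree on.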



  • Qaridarium
    replied
    Originally posted by BlackStar
    Q does have a point in this thread. A CPU with a good integrated GPU is *much* better than dealing with Optimus crap.

    AMD's Fusion GPUs are good enough that you don't need a dedicated GPU unless you are a hardcore gamer. Intel's Sandy Bridge may be 10-30% faster than AMD's Llano clock for clock, but Llano's GPU is ~100-200% faster than SB's, which makes for a much better all-around performer (one that can run modern games at 'medium' settings).

    That's one part of the equation. The other is drivers, where Intel faces significant problems. One year later and Sandy Bridge still fails to vsync properly and still suffers from graphics corruption (esp. in Gnome Shell). The new Atoms are equipped with a DX10/GL3 PowerVR GPU - but guess what. Intel delayed them due to 'driver issues' and finally announced they will only support DX9 (no OpenGL!) and 32-bit Windows. No Linux, no 64-bit, just 32-bit Windows.

    And that's why you don't buy Intel GPUs: their drivers suck. At least with AMD you know you'll get decent support on Linux (with open-source drivers - fglrx sucks) and great support on Windows. No such guarantees with Intel.
    Hey, you are right! *Happy* Thank you for helping me.



  • Qaridarium
    replied
    Originally posted by Drago
    Q, I am waiting for the AMD announcement today
    Hey, I'm waiting too; I cannot force AMD to do it faster.

