50% slower than AMD! Intel FAIL again with the HD4000 graphic hardware.


  • #31
    Originally posted by Qaridarium View Post
    Most of your points are only true on WINDOWS.

    Whatever, it doesn't matter; in 2-3 months AMD will bring a new quad-channel desktop socket and an octa-channel server socket.

    The Bulldozer successor will perform well on the new sockets with new chipsets.
    Nope, all my points are OS-agnostic.
    The scheduler I meant in the 4th point is the CPU scheduler (out-of-order execution etc.), not the OS scheduler, which should actually be mostly ignored (I think). The OS scheduler is not good at assigning tasks to cores, because it lacks the internal details. It is good for task prioritization, responsibility etc.



    • #32
      Originally posted by BlackStar View Post
      AMD's Fusion GPUs are good enough that you don't need a dedicated GPU unless you are a hardcore gamer. Intel's Sandy Bridge may be 10-30% faster than AMD's Llano clock for clock, but Llano's GPU is ~100-200% faster than SB's which makes for a much better all-around performer (that can run modern games in 'medium' settings).

      That's one part of the equation. The other is drivers, where Intel faces significant problems. One year later and Sandy Bridge still fails to vsync properly and still suffers from graphics corruption (esp. in Gnome Shell). The new Atoms are equipped with a DX10/GL3 PowerVR GPU - but guess what. Intel delayed them due to 'driver issues' and finally announced they will only support DX9 (no OpenGL!) and 32-bit Windows. No Linux, no 64-bit, just 32-bit Windows.

      And that's why you don't buy Intel GPUs: their drivers suck. At least with AMD you know you'll get decent support on Linux (with open-source drivers - fglrx sucks) and great support on Windows. No such guarantees with Intel.
      i3-2105, things that I observed:
      - 2 cores performed faster than a 4-core Phenom, eating less than half the energy
      - it synced properly. If you give me specific details, I can test on the weekend where it does not sync. Had Linux Mint 12 installed (I think it was GNOME Shell 3, unsure) and tested UrT/OpenArena
      - held 60 fps at 1600x900 with maxed details in UrT, at all times, on all maps. A 9800 GT was equally fast. If you ask me, I'd say the HD 3000 is a very powerful IGP
      - I wouldn't count the PowerVR/GMA 500 chips as Intel chips; they were very probably outsourced, and PowerVR refuses to release specs
      - video acceleration
      - Intel supports its newest CPUs first, AMD its oldest first
      The driver is pretty stable, no crashes or bugs after a 16-hour run.

      I refuse to comment on fglrx "guarantee"...


      About the missing 64-bit driver for the Atom - the Atom is a 32-bit CPU...
      Last edited by crazycheese; 01-23-2012, 08:14 AM.



      • #33
        Originally posted by crazycheese View Post
        Nope, all my points are OS-agnostic.
        The scheduler I meant in the 4th point is the CPU scheduler (out-of-order execution etc.), not the OS scheduler, which should actually be mostly ignored (I think). The OS scheduler is not good at assigning tasks to cores, because it lacks the internal details. It is good for task prioritization, responsibility etc.
        Errr, what?! It's precisely the OS scheduler's task to say which thread runs on which core (I don't think that BD's modules are exposed as such). The CPU scheduler (if there's such a thing) cannot move a thread to a different module. It cannot even move a thread to a different core within the same module. Or at least it shouldn't. Again, that's the OS scheduler's task.

        The problem with BD performance on Windows is precisely that the OS scheduler doesn't distinguish between cores in the same module and cores in different modules, and doesn't account for how this difference affects performance. Hence, it cannot make the proper scheduling decision. And there's nothing the CPU can do about this, as shown by the dismal performance of BD under Windows. Presumably, the Linux scheduler is a bit better in this respect, but I guess it will improve with time too.
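
        As a side note on "that's the OS scheduler's task": on Linux, the placement of a thread on a core is decided by the kernel, and user space can only constrain it, e.g. with an affinity mask. A minimal sketch, assuming Linux and glibc; the logical CPU number chosen here is arbitrary:

        ```c
        /* Sketch: thread-to-core placement is an OS-level decision. User space
         * can pin a thread with an affinity mask; the CPU never migrates it. */
        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>
        #include <stdio.h>

        static void *worker(void *arg)
        {
            (void)arg;
            /* Report which logical CPU the kernel scheduler placed us on. */
            printf("running on CPU %d\n", sched_getcpu());
            return NULL;
        }

        int main(void)
        {
            cpu_set_t set;
            pthread_attr_t attr;
            pthread_t t;

            CPU_ZERO(&set);
            CPU_SET(1, &set);   /* allow logical CPU 1 only - arbitrary choice */

            pthread_attr_init(&attr);
            pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

            pthread_create(&t, &attr, worker, NULL);   /* kernel honours the mask */
            pthread_join(t, NULL);
            return 0;
        }
        ```

        Built with gcc -pthread, the worker always reports CPU 1 (if present), because the kernel, not the processor, owns placement.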



        • #34
          Originally posted by kobblestown View Post
          Errr, what?! It's precisely the OS scheduler's task to say which thread runs on which core (I don't think that BD's modules are exposed as such). The CPU scheduler (if there's such a thing) cannot move a thread to a different module. It cannot even move a thread to a different core within the same module. Or at least it shouldn't. Again, that's the OS scheduler's task.
          If the CPU scheduler, scheduling agent, or resource watcher/manager sees that cores are loaded inefficiently, it should override the placement as long as out-of-order execution does not influence the result. The OS scheduler does not know the technical details, and its latency is way too high to manage that.
          It is the same situation as VLIW inefficiency. I suspect SB does this.



          • #35
            Originally posted by crazycheese View Post
            amd (supports) oldest first
            That's not a policy or anything, just a technical constraint. Each GPU generation builds on the one before, so we had to start with the oldest unsupported parts and work forward until we caught up with the release of new hardware.

            It took ~4 years, but then again there were 6 generations of hardware to support, more if you include the KMS rewrite for 4 earlier generations.
            Last edited by bridgman; 01-23-2012, 08:35 AM.



            • #36
              Originally posted by crazycheese View Post
              If the CPU scheduler, scheduling agent, or resource watcher/manager sees that cores are loaded inefficiently, it should override the placement as long as out-of-order execution does not influence the result. The OS scheduler does not know the technical details, and its latency is way too high to manage that.
              It is the same situation as VLIW inefficiency. I suspect SB does this.
              Dude, it just doesn't work like that! You've probably been confused by Hyper-Threading. With HT each core can handle multiple threads simultaneously. But in fact, it cannot. It can only handle a single thread in any given cycle (I guess that goes for each pipeline unit individually). It only switches from thread to thread when, for instance, one of the threads is waiting for data to arrive, or in a round-robin fashion when more threads are ready for execution. But it cannot move a thread to a different core. It doesn't matter if you think it's a good idea or not. There's no processor that I know of that is equipped with such hardware. That's what the OS is for.
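
              The hardware does, however, tell the OS which logical CPUs share a physical core, and that topology is what an SMT- or module-aware scheduler works from. A minimal sketch, assuming a Linux sysfs layout; it just walks the first few logical CPUs:

              ```c
              /* Sketch: print which logical CPUs share a physical core, as the
               * kernel exposes it in sysfs. This is the data the OS scheduler
               * (not the CPU) uses when deciding where to place threads. */
              #include <stdio.h>

              int main(void)
              {
                  char path[128], buf[64];

                  for (int cpu = 0; cpu < 8; cpu++) {
                      snprintf(path, sizeof(path),
                               "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                               cpu);
                      FILE *f = fopen(path, "r");
                      if (!f)
                          break;              /* no more logical CPUs */
                      if (fgets(buf, sizeof(buf), f))
                          printf("cpu%d shares a core with: %s", cpu, buf);
                      fclose(f);
                  }
                  return 0;
              }
              ```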



              • #37
                Originally posted by crazycheese View Post
                i3-2105, things that I observed:
                - 2 cores performed faster than a 4-core Phenom, eating less than half the energy
                - it synced properly. If you give me specific details, I can test on the weekend where it does not sync. Had Linux Mint 12 installed (I think it was GNOME Shell 3, unsure) and tested UrT/OpenArena
                - held 60 fps at 1600x900 with maxed details in UrT, at all times, on all maps. A 9800 GT was equally fast. If you ask me, I'd say the HD 3000 is a very powerful IGP
                - I wouldn't count the PowerVR/GMA 500 chips as Intel chips; they were very probably outsourced, and PowerVR refuses to release specs
                - video acceleration
                - Intel supports its newest CPUs first, AMD its oldest first
                The driver is pretty stable, no crashes or bugs after a 16-hour run.

                I refuse to comment on fglrx "guarantee"...


                About the missing 64-bit driver for the Atom - the Atom is a 32-bit CPU...

                I had an Intel 4500MHD in my old HTPC, and now an HD2000 in my current core i5 HTPC. Both work fine, can play basic 3D games and are fast enough to render HD video at 1080P. These machines were/are up for months at a time as well.

                I've had a number of ATi cards and had nothing but pain with their open and closed source drivers. It got to the point I just went out and bought an NVIDIA card for my workstation.

                I like Intel's GPUs. For me they pretty much just work in a machine with limited graphics needs.

                Though it's obviously trolling: even if the "INTEL FAIL inside" graphics are 50% slower than "AMD!", they're still 50% faster than what my needs dictate.



                • #38
                  Originally posted by Drago View Post
                  Q, I am waiting for the AMD announcement today
                  Hey, I'm waiting too; I cannot force AMD to do it faster.



                  • #39
                    Originally posted by BlackStar View Post
                    Q does have a point in this thread. A CPU with a good integrated GPU is *much* better than dealing with Optimus crap.

                    AMD's Fusion GPUs are good enough that you don't need a dedicated GPU unless you are a hardcore gamer. Intel's Sandy Bridge may be 10-30% faster than AMD's Llano clock for clock, but Llano's GPU is ~100-200% faster than SB's which makes for a much better all-around performer (that can run modern games in 'medium' settings).

                    That's one part of the equation. The other is drivers, where Intel faces significant problems. One year later and Sandy Bridge still fails to vsync properly and still suffers from graphics corruption (esp. in Gnome Shell). The new Atoms are equipped with a DX10/GL3 PowerVR GPU - but guess what. Intel delayed them due to 'driver issues' and finally announced they will only support DX9 (no OpenGL!) and 32-bit Windows. No Linux, no 64-bit, just 32-bit Windows.

                    And that's why you don't buy Intel GPUs: their drivers suck. At least with AMD you know you'll get decent support on Linux (with open-source drivers - fglrx sucks) and great support on Windows. No such guarantees with Intel.
                    Hey, you are right! *Happy* Thank you for helping me.



                    • #40
                      Originally posted by kobblestown View Post
                      Dude, it just doesn't work like that! You've probably been confused by Hyper-Threading. With HT each core can handle multiple threads simultaneously. But in fact, it cannot. It can only handle a single thread in any given cycle (I guess that goes for each pipeline unit individually). It only switches from thread to thread when, for instance, one of the threads is waiting for data to arrive, or in a round-robin fashion when more threads are ready for execution. But it cannot move a thread to a different core. It doesn't matter if you think it's a good idea or not. There's no processor that I know of that is equipped with such hardware. That's what the OS is for.
                      No man, I have done courses in operating systems development and I know fairly well what I'm talking about. I have not been following the development of x86 hardware internals since the Pentium/Am5x86 days though, which is why I'm speaking in generic terms.
                      I don't mean HT. HT as seen in Core 2 hardware is, as you described correctly, an attempt to expose double the number of cores to the external interfaces in order to saturate I/O and then let the internals do the job. This approach is a win most of the time, if the hardware is capable of managing itself precisely, and a loss when the cores are forced to work on completely different tasks with high I/O and low instruction load, OR when they do lots of JMPs - all of this results in cache thrashing; under HT it is cache thrashing plus each core having only half of the usable cache, which may actually slow down execution compared to running without HT (half the cores). I think Phoronix has already covered this (how Intel/AMD scale).

                      The part I mean is this:

                      Branch prediction and memory execution and ordering - the CPU's internal scheduling logic. I didn't mean "scheduler" as it is classically understood, belonging within OS kernel logic. It is far lower level, but it does group instructions by memory space and throw them at individual cores/modules layer-wise until the cores are at 90%.

                      Optimally it should monitor the load at the lowest level and quickly issue reordering. This would allow specific cores to be loaded until they are at 90% of capacity, and only then move work onto other cores, which sleep in the meantime.
                      That would mean the unneeded cores can sleep - less energy consumption. The same logic is used within GPUs; "zero load", or whatever AMD marketing calls it. I have already said they should really use their GFX experience in the CPU segment.
                      On Bulldozer, that would mean the modules get loaded precisely, decreasing the load spread. Call it "intelligent instruction grouping" if you want.
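
                      A toy user-space sketch of that "load one core up to ~90% before waking the next" idea - purely illustrative, with made-up numbers, and nothing any current CPU does in hardware:

                      ```c
                      /* Toy illustration of a consolidate-before-spreading placement
                       * policy: fill the lowest-numbered core up to a threshold before
                       * using the next one, so the idle cores can stay asleep. */
                      #include <stdio.h>

                      #define NCORES    4
                      #define THRESHOLD 90        /* percent of capacity before spilling */

                      static int load[NCORES];    /* current load per core, in percent */

                      static int place_task(int cost)
                      {
                          /* Prefer the first core that stays under the threshold... */
                          for (int c = 0; c < NCORES; c++) {
                              if (load[c] + cost <= THRESHOLD) {
                                  load[c] += cost;
                                  return c;
                              }
                          }
                          /* ...otherwise fall back to the least-loaded core. */
                          int best = 0;
                          for (int c = 1; c < NCORES; c++)
                              if (load[c] < load[best])
                                  best = c;
                          load[best] += cost;
                          return best;
                      }

                      int main(void)
                      {
                          int tasks[] = { 30, 40, 20, 25, 50, 10 };
                          for (unsigned i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++)
                              printf("task %u (cost %d%%) -> core %d\n",
                                     i, tasks[i], place_task(tasks[i]));
                          return 0;
                      }
                      ```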



                      • #41
                        Originally posted by bridgman View Post
                        That's not a policy or anything, just a technical constraint. Each GPU generation builds on the one before, so we had to start with the oldest unsupported parts and work forward until we caught up with the release of new hardware.

                        It took ~4 years, but then again there were 6 generations of hardware to support, more if you include the KMS rewrite for 4 earlier generations.
                        John, guys, I understand your approach, but it is not my approach. If you take a certain approach, then you apply a whole different method, and the same method can be seen as the least efficient or the most efficient depending only on the point of view.
                        That said, you develop the legacy "tail" hardware and slowly approach the "head of the fish". But your company sells the "head of the fish", and that head is supported only via the proprietary driver.
                        So your approach implies the following strategy:
                        1) you depend on the proprietary driver and cannot remove it - otherwise your sales of the "head" will stop, and hence support for the drivers as a whole stops
                        2) your proprietary driver will stay more advanced than the open-source one, forever; see 1.
                        3) you will throw huge human resources into a segment that does not touch open source at all and is largely uninteresting for non-enterprise, high-volume consumers/retailers
                        4) you will never manage to utilize the complex architectures seen on the top models efficiently in open source, because you simply lack the human resources - which are gained only from selling those cards
                        5) this means you are not interested in selling your cards to be used with open-source solutions

                        It goes completely differently with Intel. Intel prioritizes development on the "head of the fish", and only then, if ever, the tail. This gives exactly the situation where their legacy hardware is unusable under Linux, but they have fast-paced open development for top-notch current hardware. I want to buy the newest hardware and use it with open source - this is why Intel's approach is better for me.

                        Even if I bought a "200% higher performance" AMD APU, it would reach its "200%" performance only via the proprietary driver, and even then it is widely known that you prioritize DirectX way more than OpenGL.
                        So under Linux, your "200% APU" becomes a "50% APU". Plus currently less efficient power management and a worse performance-per-core ratio. This is why only Intel stays in the running.

                        Of course, if I went the high-GPU-performance route, it would be different - only because Intel does not make high-performance GPU hardware. Then we would have a clash between the two evils of red and green.
                        Your DirectX love, and the shorter support windows for core packages (glibc, kernel, X.Org) and for hardware generations, pretty much sign the decision, yet again, not in your favour.

                        John, you are a wonderful, patient, consistent man; but it's all about the company's approach, not charisma.


                        Let's talk about my recent purchases. In the past year I have bought a used GTX 260 instead of an HD 57xx, a brand-new 560 Ti instead of a 6970 for a friend of mine, and an i3 instead of a 41xx/A6. Total cost, I think, around 1000€. For Linux hardware.
                        You see, Linux people do go shopping, and they shop for hardware which supports their OS.
                        It is common practice for companies to detect market developments and adapt early - not to follow after it is already too late, look at outdated statistics, or try to influence buyers' decisions by pushing them to do what the mass market does (because each buyer is influenced by the exact same approach, recursively). You buy because everybody buys, because people like you buy - so you have no choice but to follow. That is a very, very dirty society; it stinks of monopolies, and I don't invest money there; I have my own head.
                        Last edited by crazycheese; 01-23-2012, 02:52 PM.



                        • #42
                          Originally posted by Tgui View Post
                          I had an Intel 4500MHD in my old HTPC, and now an HD2000 in my current core i5 HTPC. Both work fine, can play basic 3D games and are fast enough to render HD video at 1080P. These machines were/are up for months at a time as well.

                          I've had a number of ATi cards and had nothing but pain with their open and closed source drivers. It got to the point I just went out and bought an NVIDIA card for my workstation.

                          I like Intel's GPUs. For me they pretty much just work in a machine with limited graphics needs.

                          Though it's obviously trolling: even if the "INTEL FAIL inside" graphics are 50% slower than "AMD!", they're still 50% faster than what my needs dictate.
                          Wow, wow, I have a rather similar yet different experience.
                          My experience was that fglrx crashed just from switching TTYs, and the open-source driver did 1-5 fps.
                          Within a year, the open-source driver went up to 40-50 fps (out of a possible 300); I'm talking about raw performance here, not perceived performance or "enough for you anyway".

                          But, you know, investing 3-4 days in research, hunting for a specific GPU, and overpaying for a rare card of an outdated generation - because the open-source driver was developed from the tail and could not support newer, cheaper, more efficient hardware - that was already a warning sign...

                          I wouldn't say Intel's graphics solution is miserable crap. Newest stack, complete use of the underlying hardware, performance on the level of an 8800-9800 GT with proprietary drivers. The open source I've been waiting for from AMD came from a different company... yeah...



                          • #43
                            Originally posted by crazycheese View Post
                            John, guys, I understand your approach, but it is not my approach. If you take a certain approach, then you apply a whole different method, and the same method can be seen as the least efficient or the most efficient depending only on the point of view.
                            I actually think what's going on is a mixup between "what we did to catch up" and "what we are doing now". I agree completely that this is not necessarily obvious since Alex did a lot of work helping the transition to KMS and that *did* involve working on older hardware, but if you think of KMS as the exception rather than the rule this will all make a lot more sense.

                            Originally posted by crazycheese View Post
                            That said, you develop the legacy "tail" hardware and slowly approach the "head of the fish". But your company sells the "head of the fish", and that head is supported only via the proprietary driver.
                            That only applied while we were catching up. We came fairly close to catching up in time for SI, and would have if the delta between NI and SI had been comparable to the delta between previous generations; we expect to be substantially caught up by the next generation. Most of our focus now is on new hardware, although as we learn things we often go back and apply fixes to older hardware at the same time.

                            Originally posted by crazycheese View Post
                            So your approach implies the following strategy:
                            1) you depend on the proprietary driver and cannot remove it - otherwise your sales of the "head" will stop, and hence support for the drivers as a whole stops
                            2) your proprietary driver will stay more advanced than the open-source one, forever; see 1.
                            3) you will throw huge human resources into a segment that does not touch open source at all and is largely uninteresting for non-enterprise, high-volume consumers/retailers
                            4) you will never manage to utilize the complex architectures seen on the top models efficiently in open source, because you simply lack the human resources - which are gained only from selling those cards
                            5) this means you are not interested in selling your cards to be used with open-source solutions
                            I don't understand #1 - are you just saying that work on the proprietary driver uses resources which could help the open source driver? If so then I agree, but again please note that those resources would not be enough to let the open source driver meet the market needs which the proprietary driver satisfies.

                            Re: #2, yes but the only way to prevent it would be to get rid of the proprietary driver so that there would be nothing to compare with.

                            Re: #3, yes but then we would need to walk away from a very important market which *nobody* supports with open drivers.

                            Re: #4, maybe (although one could argue that it's optimization work rather than underlying architecture that makes the difference) but funding the implementation of complex architectures / optimizations requires sales from the entire PC market, not just the Linux portion -- being able to share code and investment across the entire market is the main reason that proprietary drivers even exist.

                            Re: #5, I don't agree with your conclusion. If you said something more like "we are not willing to walk away from the workstation market even if doing so would allow us to offer an attractive open source solution more quickly" then there might be some truth to that. However, you need to remember that it was *never* our plan to write the drivers exclusively ourselves. If you get a chance please re-read the comments we made back in 2007.

                            Originally posted by crazycheese View Post
                            It goes completely differently with Intel. Intel prioritizes development on the "head of the fish", and only then, if ever, the tail. This gives exactly the situation where their legacy hardware is unusable under Linux, but they have fast-paced open development for top-notch current hardware. I want to buy the newest hardware and use it with open source - this is why Intel's approach is better for me.
                            Not different at all -- that's what we are doing as well and have been for a while, although note that the developers funded by embedded are working on the areas considered most important for the embedded market, which typically includes "recent" chips as well as "newest".
                            Last edited by bridgman; 01-23-2012, 04:41 PM.



                            • #44
                              Don't you think that OSS drivers are a bit pointless when they do not provide H.264/VC-1 acceleration? It does not matter much whether it's via VDPAU or VA-API, but it should be there.



                              • #45
                                Originally posted by Kano View Post
                                Don't you think that OSS drivers are a bit pointless when they do not provide H.264/VC-1 acceleration? It does not matter much whether it's via VDPAU or VA-API, but it should be there.
                                No, although it would be a nice addition.
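
                                For what it's worth, whether a driver advertises H.264/VC-1 decode is easy to check from user space. A minimal sketch, assuming Linux, libva and an X11 session (a listed profile could in principle also come from an encode entrypoint, so treat this as a first-order check only):

                                ```c
                                /* Sketch: ask the VA-API driver which profiles it advertises
                                 * and look for H.264 and VC-1. Link with -lva -lva-x11 -lX11. */
                                #include <stdio.h>
                                #include <stdlib.h>
                                #include <X11/Xlib.h>
                                #include <va/va.h>
                                #include <va/va_x11.h>

                                int main(void)
                                {
                                    Display *x11 = XOpenDisplay(NULL);
                                    if (!x11) {
                                        fprintf(stderr, "no X display\n");
                                        return 1;
                                    }

                                    VADisplay va = vaGetDisplay(x11);
                                    int major, minor;
                                    if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS) {
                                        fprintf(stderr, "vaInitialize failed\n");
                                        return 1;
                                    }

                                    int max = vaMaxNumProfiles(va);
                                    VAProfile *profiles = malloc(max * sizeof(*profiles));
                                    int num = 0;
                                    vaQueryConfigProfiles(va, profiles, &num);

                                    int h264 = 0, vc1 = 0;
                                    for (int i = 0; i < num; i++) {
                                        if (profiles[i] == VAProfileH264High)    h264 = 1;
                                        if (profiles[i] == VAProfileVC1Advanced) vc1 = 1;
                                    }
                                    printf("VA-API %d.%d: H.264 High %s, VC-1 Advanced %s\n",
                                           major, minor, h264 ? "yes" : "no", vc1 ? "yes" : "no");

                                    free(profiles);
                                    vaTerminate(va);
                                    XCloseDisplay(x11);
                                    return 0;
                                }
                                ```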

