Radeon Driver Enables Full 2D Acceleration For HD 7000


  • #21
    Originally posted by Ancurio View Post
    Please, for the love of deities, don't feed the trolls...
    Oh, I am sorry!


    @bridgman
    That's good to know... but you should also understand that this is still inefficient. The plain scenario from two years ago - a Linux user wants to buy a video card and is still confronted with more trouble when going the AMD route - is still the reality. Fix power management, improve 3D performance to 25%, take our money if you need to. How much do you get from each Windows installation that you polish that driver so fanatically? If, say, every Linux user paid you out of their own pocket to reach nearly the same performance and feature set you offer *for free* under Windows, would you think about managing your open driver more efficiently? Just start a poll... anywhere... even on the Ubuntu forums, as an official AMD representative, and see how your potential buyers react. Intel started investing in its open-source driver recently, and voila, they punched through the market share of both the red and green teams like through butter. I bet the hardware sales covered the expenses, since they are continuing.



    • #22
      Originally posted by liam View Post
      Speaking of Glamor, I'd like to see some power consumption numbers that compare the various 2D schemes. In particular, I'd like to see a W/frame value for each scheme. I suspect the Intel driver blows everything else away.
      Well, mostly I would think whatever hardware has the lowest idle power usage would probably win. So yeah, Intel is likely the leader here. Especially since they don't have to send info across the wire and can have everything reside directly in main memory. 2D workloads just aren't difficult enough to push the hardware much.
      Hmm, has anyone else wondered why the Intel driver is so much better than the AMD driver? The Intel team isn't that big (perhaps bigger than the AMD team, but not bigger than the AMD + Red Hat team + community contributors).
      It's a lot bigger. I'm not sure why you don't think so, but it is. Also, a lot of the Red Hat and community contributors only work on it part time, while the Intel devs work full time on only their own hardware.
      I wonder if the architectural path Intel has chosen is the smarter one, or if they are going to require major changes if they want to reach higher performance levels...
      Well, they don't have discrete cards at all. So yeah, that's a simpler architecture that will obviously have less power.



      • #23
        Originally posted by Ancurio View Post
        Please, for the love of deities, don't feed the trolls...

        On another note, something I never quite understood was how you can provide separate 2D acceleration,
        when the OpenGL model is completely 3D oriented (i.e. you have to emulate 2D with orthographic projections, etc.).
        Could someone explain please? =O
        It depends on what you mean by "2D". Classic 2D engines in old chips basically did three things:
        1. draw solid filled rectangles
        2. copy rectangles
        3. draw lines

        All of those are simple enough to perform on a 3D engine: 1 is just drawing a solid filled quad, 2 is just drawing a textured quad, and 3 is just drawing a line.
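
        As a rough illustration of that mapping, here is a minimal, hypothetical sketch in plain fixed-function OpenGL (not the driver's actual code path, and the function names are made up): an orthographic projection makes pixel coordinates line up with window coordinates, and each classic 2D operation then becomes a trivial draw call.

        /* Hypothetical sketch: the three classic 2D operations expressed
         * through a 3D API. Fixed-function GL is used only for brevity. */
        #include <GL/gl.h>

        /* Set up a 1:1 pixel-to-unit orthographic projection. */
        void setup_2d(int width, int height)
        {
            glViewport(0, 0, width, height);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0, width, height, 0, -1, 1);   /* y grows downward, like X11 */
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

        /* 1. Solid fill: a flat-colored quad. */
        void fill_rect(int x, int y, int w, int h, float r, float g, float b)
        {
            glDisable(GL_TEXTURE_2D);
            glColor3f(r, g, b);
            glRecti(x, y, x + w, y + h);
        }

        /* 2. Copy/blit: a textured quad sampling the source pixmap,
         *    assuming the source has already been uploaded as `tex`. */
        void copy_rect(GLuint tex, int dx, int dy, int w, int h)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, tex);
            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2i(dx,     dy);
            glTexCoord2f(1, 0); glVertex2i(dx + w, dy);
            glTexCoord2f(1, 1); glVertex2i(dx + w, dy + h);
            glTexCoord2f(0, 1); glVertex2i(dx,     dy + h);
            glEnd();
        }

        /* 3. Line draw: a single GL line primitive. */
        void draw_line(int x1, int y1, int x2, int y2)
        {
            glDisable(GL_TEXTURE_2D);
            glBegin(GL_LINES);
            glVertex2i(x1, y1);
            glVertex2i(x2, y2);
            glEnd();
        }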

        Where it gets complicated is where "2D" APIs start to diverge from what can be done on classic 2D hardware. Unfortunately, just emulating classic 2D engine functionality on the 3D engine does not perform well with modern "2D" APIs like RENDER. RENDER supports things like transforms (scaling, rotation, keystone, etc.) and alpha blending, which require a 3D engine to implement. Modern 3D hardware requires a compiler to properly generate the shaders needed for all the RENDER options, and before you know it you end up needing a driver stack almost as complicated as the GL driver. Additionally, RENDER semantics were developed for software rendering, so in many cases they do not map easily onto a 3D engine designed for other APIs (GL or DX).

        So you can either build and maintain two separate driver stacks, one for RENDER and one for GL, or you can layer RENDER on top of GL. It's not perfect, but no hardware is designed for RENDER. Applications should be using the APIs the hardware is designed for, namely GL, and over time we've seen more and more applications move to GL. It's not worth the effort to maintain an ultra-tuned, device-specific RENDER stack if you have limited time and resources; it's never going to perform as well as GL.
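
        Continuing the hypothetical sketch above, this is roughly what a RENDER-style composite (a picture transform plus PictOpOver alpha blending) turns into when layered on top of GL. Again, fixed-function GL and an invented function name are used only for illustration; a real driver has to generate shaders for the many RENDER format and operator combinations.

        /* Hypothetical sketch: a transformed, alpha-blended composite as a
         * blended textured quad. RENDER uses premultiplied alpha, so
         * PictOpOver maps to glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). */
        #include <GL/gl.h>

        void composite_over(GLuint src_tex, float dst_x, float dst_y,
                            float w, float h, float angle_deg, float scale)
        {
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, src_tex);

            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

            /* The RENDER picture transform becomes a modelview matrix. */
            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glTranslatef(dst_x, dst_y, 0);
            glRotatef(angle_deg, 0, 0, 1);
            glScalef(scale, scale, 1);

            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(0, 0);
            glTexCoord2f(1, 0); glVertex2f(w, 0);
            glTexCoord2f(1, 1); glVertex2f(w, h);
            glTexCoord2f(0, 1); glVertex2f(0, h);
            glEnd();

            glPopMatrix();
            glDisable(GL_BLEND);
        }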



        • #24
          Originally posted by crazycheese View Post
          ...
          You can still be a troll even if you use the oss drivers. For proof, look at your own posts.



          • #25
            Originally posted by liam View Post
            Hmm, has anyone else wondered why the Intel driver is so much better than the AMD driver? The Intel team isn't that big (perhaps bigger than the AMD team, but not bigger than the AMD + Red Hat team + community contributors).
            It is definitely much bigger. Intel pays over 20 developers to work on graphics drivers. With AMD it's maybe 4 or 5, I don't remember. Intel's developers seem to be much more active as well.



            • #26
              Originally posted by smitty3268 View Post
              You can still be a troll even if you use the oss drivers. For proof, look at your own posts.
              Yes, they are becoming less and less demanding. Still... we are far from there yet. http://openbenchmarking.org/result/1...RA-1212216CR57
              No, I don't regret selling it.



              • #27
                Originally posted by crazycheese View Post
                + The Nvidia driver works, it is the best-supported proprietary driver, and it brought 3D to Linux.
                - But it's closed source and conflicts with the GPL libre license; it is written ONLY for corporate customers, with minor adaptations to fit a general audience; it lacks features compared to the Windows driver; it does not use the advantages of the Linux kernel; and it is a pain in the butt to integrate (and Linux is designed this way for a reason). And when Nvidia says it's over, it's over; when Nvidia says you don't get this functionality, you don't get it.
                Basically, your quarrel is with the license. I wonder how often you actually have to deal with it that you cry rivers over it.
                Besides Optimus, I couldn't name any missing features compared to Windows, and Optimus support is blown way out of proportion. One feature I particularly like is getting simultaneous driver releases for both Linux and Windows.



                • #28
                  Originally posted by smitty3268 View Post
                  Well, mostly I would think whatever hardware has the lowest idle power usage would probably win. So yeah, Intel is likely the leader here. Especially since they don't have to send info across the wire and can have everything reside directly in main memory. 2D workloads just aren't difficult enough to push the hardware much.
                  Idle power wouldn't be an issue, since benchmarks usually have a defined endpoint. Also, I want the W/frame numbers.
                  As for 2D loads not being difficult enough, complicated vector images with filters applied will bring most PCs to their knees.



                  Originally posted by smitty3268 View Post
                  It's a lot bigger. I'm not sure why you don't think so, but it is. Also, a lot of the Red Hat and community contributors only work on it part time, while the Intel devs work full time on only their own hardware.
                  Looking over the recent commit history for the xorg driver, Chris Wilson is the only name I see.
                  Intel employs 22 devs (not including release managers and QA), but they do tons of X work (I'd imagine they are, by far, the biggest X contributor).
                  For their 3D driver, I count 12 contributors to the i915 driver over the last six months, but not all of them are from Intel (Marek, for instance, unless he was hired recently).
                  For xorg radeon I count 7 devs over the last 5 months.
                  For mesa radeon I count 10 devs for the last two months.

                  Originally posted by smitty3268 View Post
                  Well, they don't have discrete cards at all. So yeah, that's a simpler architecture that will obviously have less power.
                  I don't understand your point here. AMD also makes non-discrete GPUs, but the architecture should be pretty similar to their discrete GPUs (obviously memory management would be quite different, though).



                  • #29
                    Originally posted by brent View Post
                    It is definitely much bigger. Intel pays over 20 developers to work on graphics drivers. With AMD it's maybe 4 or 5, I don't remember. Intel's developers seem to be much more active as well.
                    A number of those Intel engineers, like Keith Packard, seem to be oriented more toward the whole stack than toward a particular driver. I gave a simple accounting of commits by author in my post to smitty above.
                    Of course, that says nothing about activity.



                    • #30
                      Originally posted by liam View Post
                      For their 3D driver, I count 12 contributors to the i915 driver over the last six months, but not all of them are from Intel (Marek, for instance, unless he was hired recently).
                      Heh, I don't even have an Intel IGP. I have recently made small changes in all gallium drivers while improving the gallium interface. I think I made about the same number of changes in each driver.

