What can I expect from 3D radeonhd?

  • #11
    I think EXA works better than Glucose would on the radeon driver, at any rate. Glucose probably will not be widely used, because once things get implemented on Gallium, Render acceleration can just be implemented generically on top of that, instead of having to mess with an OpenGL layer, too.

    • #12
      Agreed. I guess it's possible for a 2d-over-GL or 2d-over-Gallium driver to be better if the underlying driver had a better shader compiler or something, but the shaders are pretty simple and it seems unlikely that anything would beat a straightforward EXA implementation as long as the full API was supported.

      I think the big argument for Glucose (or an equivalent over Gallium) is this: if you compare "one set of code runs on all GL drivers" Glucose acceleration against "easy in principle but nobody had time" EXA drivers, Glucose wins every time. That's actually a pretty fair point if you look at how few GPUs have full EXA implementations.
      Last edited by bridgman; 17 June 2008, 12:48 AM.
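      To make the point about a "straightforward EXA implementation" concrete: it amounts to filling in a small table of hooks that the X server's EXA layer calls for solid fills, copies and Render composites. Below is a minimal, illustrative sketch of that shape; the exa* entry points and ExaDriverRec fields are the real X.Org EXA API, while the rhd*-prefixed callbacks are hypothetical stand-ins for hardware-specific code, and the memory-layout fields a real driver must also set are omitted.

      #include "exa.h"

      /* Hypothetical solid-fill hooks; the Copy and Composite hook
       * families follow the same Prepare/Op/Done pattern. */
      static Bool rhdPrepareSolid(PixmapPtr pPixmap, int alu,
                                  Pixel planemask, Pixel fg)
      {
          /* Set up the engine for a solid fill; returning FALSE tells
           * EXA to fall back to software for this operation. */
          return FALSE;
      }

      static void rhdSolid(PixmapPtr pPixmap, int x1, int y1, int x2, int y2)
      {
          /* Emit the fill command for one rectangle. */
      }

      static void rhdDoneSolid(PixmapPtr pPixmap)
      {
          /* Flush the command stream once all rectangles are submitted. */
      }

      static Bool rhdExaInit(ScreenPtr pScreen)
      {
          ExaDriverPtr exa = exaDriverAlloc();

          if (!exa)
              return FALSE;

          exa->exa_major = EXA_VERSION_MAJOR;
          exa->exa_minor = EXA_VERSION_MINOR;

          exa->PrepareSolid = rhdPrepareSolid;
          exa->Solid        = rhdSolid;
          exa->DoneSolid    = rhdDoneSolid;
          /* PrepareCopy/Copy/DoneCopy and the CheckComposite/
           * PrepareComposite/Composite/DoneComposite hooks would be
           * wired up here in the same way. */

          return exaDriverInit(pScreen, exa);
      }

      The Composite hooks are where the "pretty simple" shaders mentioned above come in: CheckComposite/PrepareComposite decide whether a given Render request can be handled in hardware, and anything they refuse falls back to software.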

      • #13
        True, that. I think Render over Gallium may be a better idea than Render over OpenGL, though, because it gives finer-grained control and makes one less of a slave to the state machine. However, if someone had time to write a Gallium driver, they probably had time for EXA, so there may be no point (witness nouveau, radeon someday, intel). Older, less common drivers may just be stuck with XAA (e.g. savage).

        • #14
          Can you really blame them, though? Maybe if S3 or SiS or any one of these smaller graphics companies gets their shit together and releases a truly competitive product, then it may be a good idea to update the driver to a more recent code base. Until then, though, the majority of development effort should go towards ATi, nVidia, and Intel. These are the three biggest companies to date, and they should receive the most attention.
          Last edited by duby229; 18 June 2008, 03:59 PM.

          • #15
            Originally posted by bridgman
            There are lots of improvements which could happen with the current 5xx 3d implementation (you can read what nh_ and MostAwesomeDude are doing on #radeon or #radeonhd irc at radeonhd.org); lots of changes in just the last couple of days.

            As to "how far can it go" it's probably approaching the point where it makes more sense to jump onto Gallium and build there, but that's not a hard, clear line -- just a point where you start to suspect that you're inventing something which Gallium already has.

            I expect 5xx will go a lot further, certainly to GL 2.x, just not sure it makes sense to get there on the current code base. Once we get some 6xx 3d to the same level that 5xx is today (yesterday, I guess; it's further today) I expect we'll mostly jump onto memory manager and Gallium.
            Can you tell me if you will have initial support for Xen in the memory manager? Being able to partition the graphics card into a virtual half would be amazing.

            • #16
              Originally posted by l337programmer
              Can you tell me if you will have initial support for Xen in the memory manager? Being able to partition the graphics card into a virtual half would be amazing.
              I don't think that such things will happen in the near future. On the other hand, I know that some of the people working on virtualization are trying to find out whether they could abstract the video card through Gallium, to enable acceleration in the virtual environment whatever the underlying card is.

              • #17
                Originally posted by glisse
                I don't think that such things will happen in the near future. On the other hand, I know that some of the people working on virtualization are trying to find out whether they could abstract the video card through Gallium, to enable acceleration in the virtual environment whatever the underlying card is.
                I've done several hours of research on this matter, and apparently with an IOMMU, most of the memory problem is solved. This is an interesting read.

                This guy has two nVidia 7800GTXs installed and running two separate instances of UT2004 at the same time on one physical machine. His DomU and Dom0 are both FC5. I'm 90% certain his machine doesn't have an IOMMU, however.

                I was thinking a software IOMMU could be implemented in the Xen virtual 3D device, so the XP drivers have something to interface with on systems that don't have a hardware one. Unfortunately I am not a very skilled coder, or I would at least take the time to read through the code and see what I could do myself.
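                For reference, the two-GPU setup described above would have relied on static PCI passthrough of the second card rather than any partitioning of a single GPU. A rough sketch of what that looked like with the Xen tools of that era is below; the PCI address 0000:01:00.0 is hypothetical, and the exact parameter spelling varied between Xen and kernel versions.

                # Dom0: hide the GPU from the host so pciback can claim it
                # (boot parameter for a pciback-enabled Dom0 kernel)
                pciback.hide=(0000:01:00.0)

                # DomU configuration file: hand the hidden device to the guest
                pci = [ '0000:01:00.0' ]

                Without an IOMMU the guest can still DMA into arbitrary host memory through the passed-through card, which is exactly the gap a "software IOMMU" in the virtual device would have to cover.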
