Intel Just Released A Crazy Fast Acceleration Architecture

  • #51
    allquixotic: sounds like something specific to your system (or specific GPU, or...) - I think most people are seeing higher performance even on slower hardware.

    elanthis: I don't think there is a plan to expose the Gallium3D interface as a "stable API" - hardware layers usually need to evolve continuously in order to stay efficient on new hardware, and I think that is the plan for Gallium3D. That argues for a "linked-in" model rather than sharing a single copy per system and exposing it from Mesa or elsewhere.

    My mailbox isn't full - I got a "mailbox full" message a week or two ago and cleaned out some space, so it should have been fine recently.



    • #52
      Originally posted by allquixotic View Post
      I would love to get 60 fps. Even 30 fps would be a dream come true. For me, I can't seem to get more than 12-15 FPS in any 3D application at all. Even glxgears gives me 30 fps, and I can't think of anything simpler except rendering a single triangle. I have swap buffers wait set to off. And I don't exactly have a mid-range box; until the HD6990 came around, the HD5970 was the fastest single-card solution on the planet. But right now it's performing about as fast as an r200 card (maybe slower).
      Something is very, very wrong; my 5670 easily gives a couple of thousand FPS in glxgears, and most games are very much playable (perhaps with a lower resolution than normal and some effects turned off, but still playable).

      I wouldn't mind higher performance, of course, but overall r600g seems to be doing pretty well.
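
      For reference, the "swap buffers wait" can also be toggled from the application side. A minimal sketch in C, assuming a current GLX context and a Mesa driver that exposes GLX_MESA_swap_control (context setup and error handling omitted):

      /* Ask the driver not to wait for vblank before swapping.
       * Whether this is honored depends on the driver and its settings. */
      #include <GL/glx.h>
      #include <stdio.h>

      typedef int (*swap_interval_mesa_fn)(unsigned int interval);

      static void disable_swap_wait(void)
      {
          swap_interval_mesa_fn swap_interval =
              (swap_interval_mesa_fn)glXGetProcAddress(
                  (const GLubyte *)"glXSwapIntervalMESA");

          if (swap_interval)
              swap_interval(0);   /* 0 = do not wait for vblank */
          else
              fprintf(stderr, "GLX_MESA_swap_control not available\n");
      }

      With the wait off, glxgears should report well above the refresh rate on any remotely modern card; if it stays pinned near 30 fps, something else is throttling it.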



      • #53
        Originally posted by MostAwesomeDude View Post
        The pros (Dave and Alex) informed me that 2D/3D switches are too slow, and that I should not bother turning on the 2D hardware, so I didn't. We made a similar decision about hardware fog units.
        I'm surprised: when I was writing video drivers years ago (not for ATI) there really wasn't much 2D-specific hardware in the chip; in the early days about 95% of the units were shared between 2D and 3D. Even the fast solid fill hardware was probably used to clear the Z-buffer and framebuffer in 3D.

        If I remember correctly, by the time I stopped writing video drivers the only piece of 2D-specific hardware left in our chip was a fast rectangular read which avoided the overhead of using the texture units for blits (no filtering, scaling, etc, just raw pixel data from RAM).
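
        For anyone who hasn't written one, a "raw" rectangular copy really is just bytes moving row by row - a sketch in C, assuming 32-bit pixels and a per-surface pitch in bytes (the names are illustrative, not from any real driver):

        #include <stdint.h>
        #include <string.h>

        /* Copy a width x height block of 32-bit pixels: no filtering, no scaling. */
        static void blit_rect(uint8_t *dst, size_t dst_pitch,
                              const uint8_t *src, size_t src_pitch,
                              int width, int height)
        {
            for (int y = 0; y < height; y++)
                memcpy(dst + (size_t)y * dst_pitch,
                       src + (size_t)y * src_pitch,
                       (size_t)width * 4);   /* 4 bytes per pixel */
        }

        The dedicated read unit essentially did the source side of that memcpy in hardware, bypassing the texture samplers entirely.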

        There was some overhead from switching between 2D and 3D rendering because they were different drivers with their own hardware contexts, but no significant overhead from mixing 2D and 3D operations in a single driver. Having separate 2D and 3D cores never made sense to us.



        • #54
          Originally posted by movieman View Post
          There was some overhead from switching between 2D and 3D rendering because they were different drivers with their own hardware contexts, but no significant overhead from mixing 2D and 3D operations in a single driver. Having separate 2D and 3D cores never made sense to us.
          MostAwesomeDude is talking about older parts, basically 2005 and earlier (although the r5xx generation lasted a while longer). Starting with r6xx everything is done on the 3D core and there is no 2D core. There was some transitional hardware and emulation microcode in the early r6xx parts, but we never used it, and it was removed in r7xx anyway.
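
          As a rough analogy at the OpenGL level (not actual driver code), a 2D solid fill on a chip with no 2D core ends up as a 3D-engine operation - here a scissored clear, assuming a current GL context:

          #include <GL/gl.h>

          /* Fill the rectangle (x, y, w, h) with a solid color using the 3D engine. */
          static void fill_rect_3d(int x, int y, int w, int h,
                                   float r, float g, float b)
          {
              glEnable(GL_SCISSOR_TEST);
              glScissor(x, y, w, h);          /* clip the clear to the rectangle */
              glClearColor(r, g, b, 1.0f);
              glClear(GL_COLOR_BUFFER_BIT);   /* the clear runs on the 3D core */
              glDisable(GL_SCISSOR_TEST);
          }

          Blits work much the same way: a textured quad drawn by the 3D core stands in for the old 2D engine.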



          • #55
            Originally posted by movieman View Post
            I'm surprised: when I was writing video drivers years ago (not for ATI) there really wasn't much 2D-specific hardware in the chip; in the early days about 95% of the units were shared between 2D and 3D. Even the fast solid fill hardware was probably used to clear the Z-buffer and framebuffer in 3D.

            If I remember correctly, by the time I stopped writing video drivers the only piece of 2D-specific hardware left in our chip was a fast rectangular read which avoided the overhead of using the texture units for blits (no filtering, scaling, etc, just raw pixel data from RAM).

            There was some overhead from switching between 2D and 3D rendering because they were different drivers with their own hardware contexts, but no significant overhead from mixing 2D and 3D operations in a single driver. Having separate 2D and 3D cores never made sense to us.
            On the chips that had 2D engines (on newer hardware, there is no 2D engine), the 2D and 3D engines did indeed share some hardware. IIRC, the main issue was that there were separate caches for 2D and 3D, and internally the hardware had to switch modes when going between 2D and 3D because some of the hardware was shared. They were still programmed synchronously, though, via the same interface. In the Intel example from this article, the 3D engine and BLT engine are completely asynchronous, so the driver must deal with synchronizing buffers shared between them.
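
            A hypothetical sketch of that bookkeeping from the driver's side - every name here (engine_t, bo_t, emit_fence, wait_fence) is invented for illustration and is not the real Intel interface:

            #include <stdint.h>

            typedef enum { ENGINE_NONE, ENGINE_3D, ENGINE_BLT } engine_t;

            typedef struct {
                engine_t last_writer;   /* engine that last wrote this buffer */
                uint32_t fence;         /* fence signalled when that write completes */
            } bo_t;

            static uint32_t next_fence = 1;

            /* Stand-ins for the real hardware hooks. */
            static uint32_t emit_fence(engine_t e) { (void)e; return next_fence++; }
            static void wait_fence(engine_t e, uint32_t f) { (void)e; (void)f; /* stall e until f signals */ }

            /* Before an engine touches a shared buffer, wait out the other engine's write. */
            static void sync_buffer(bo_t *bo, engine_t user)
            {
                if (bo->last_writer != ENGINE_NONE && bo->last_writer != user)
                    wait_fence(user, bo->fence);
            }

            /* After an engine writes a shared buffer, record who wrote it and when. */
            static void mark_written(bo_t *bo, engine_t writer)
            {
                bo->last_writer = writer;
                bo->fence = emit_fence(writer);
            }

            With synchronously programmed 2D/3D engines none of this is needed, since commands complete in submission order.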



            • #56
              And every couple of months the Intel folks release some 'new architecture' before the last one has even been finalized/stabilized.



              • #57
                Originally posted by agd5f View Post
                IIRC, the main issue was that there were separate caches for 2D and 3D, and internally the hardware had to switch modes when going between 2D and 3D because some of the hardware was shared.
                Mmm, true, I seem to remember we had some issues with cache coherency between the rectangular read unit and the texture units. I think we probably flushed the texture cache when they might overlap, but I'm getting hazy on the details as the years go by.
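
                Roughly the rule that ends up in the driver - a hypothetical sketch, with rect_t and flush_texture_cache() invented purely for illustration:

                #include <stdbool.h>

                /* Half-open rectangles: [x0, x1) x [y0, y1). */
                typedef struct { int x0, y0, x1, y1; } rect_t;

                static bool rects_overlap(rect_t a, rect_t b)
                {
                    return a.x0 < b.x1 && b.x0 < a.x1 &&
                           a.y0 < b.y1 && b.y0 < a.y1;
                }

                static void flush_texture_cache(void) { /* would poke the hardware here */ }

                static void blit_with_coherency(rect_t blit_dst, rect_t texture_src)
                {
                    if (rects_overlap(blit_dst, texture_src))
                        flush_texture_cache();   /* avoid stale texels after the blit */
                    /* ... emit the rectangular read / blit ... */
                }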



                • #58
                  Originally posted by energyman View Post
                  And every couple of months the Intel folks release some 'new architecture' before the last one has even been finalized/stabilized.
                  For instance?



                  • #59
                    Intel Linux

                    Intel on Linux does not have any closed-source "super duper" drivers like nVidia and AMD do, so performance patches should be Intel's first priority.

                    After all, most low-end computers come with Intel graphics built in.

                    We Intel users don't have any option other than the open-source drivers.

                    Cheers



                    • #60
                      Intel's OSS drivers are awesome, though. I switched from a dedicated ATI card with Catalyst to Intel and it's been a far better experience, even with much less powerful graphics.

