A few questions about video decode acceleration

  • #51
    Windows drivers are kind of messy because a large amount of the code goes in-kernel. Hopefully Windows 7 will fix that, but you have to expect the Windows graphical system to be less stable than the Linux graphical system.

    • #52
      "Hmmm. Any chance you could post the log somewhere ?"
      I'll see if I can get access to the box again.

      "Tearing like "big ugly diagonal lines across the image" or tearing like "video not synced to vertical refresh" ? What vertical refresh rate are you running, and do you have vsync enabled (sorry I don't remember the exact option string)."
      It's sort of hard to explain: it looks like it's synced to vsync, but then again not. It's not easy to explain, but it's definitely not right. I did enable vsync and the related options. 60 Hz flat-panel output.

      "Agreed; once you have the driver working on a distro for a few versions it will often tend to stay working, but it takes a bunch of testing/fixing to get there in the first place."
      I don't believe it's about getting it working on distributions and then it keeps working; I believe it's about getting it working with specific software versions, like X and the kernel. Distributions change all the time, but the drivers really shouldn't care, and nvidia's doesn't. On Gentoo or Slackware, for instance, I can compile heaps of different versions of xorg myself and it works with nvidia; I wish the same were true of fglrx (well, actually I don't much care, I just want the free drivers, and then this problem goes away instantly).

      "I'm not sure. It used to with some drivers; don't know if FGLRX cares today. I believe that we set some defaults at install time depending on which card you have, ie 5xx and above cards default to textured video while earlier cards default to video overlay, there could be more serious ones as well."
      Now you are scaring me... this should really not be done at build time, it _WILL_ cause problems.

      "Our crash numbers on Windows tend to be lower than most (frequently lower than anyone if you consider market share); not sure where the numbers are published but they probably make an interesting read."
      Yes, as I said, something like 20-25% of crashes on Vista, while nvidia was 40%+. It's a significant difference, but the numbers are still WAY too high for me to be comfortable... Intel was a great deal lower than this.

      • #53
        Yeah, I always find this funny. The X world is moving code into the kernel to make things more stable, while the Windows world thinks things are less stable because so much is in the kernel.

        There's no question that having the code in-kernel simplifies a lot of code paths and avoids some really awkward situations where the kernel needs usermode to do something. On the other hand, bad code in the kernel is worse than bad code in usermode.

        That said, since the whole X stack needs to run with root-ish privileges anyway, I guess it doesn't make much difference.

        • #54
          Originally posted by Redeeman
          "Hmmm. Any chance you could post the log somewhere ?"
          I'll see if I can get access to the box again.
          Thanks.

          Originally posted by Redeeman
          "Tearing like "big ugly diagonal lines across the image" or tearing like "video not synced to vertical refresh" ? What vertical refresh rate are you running, and do you have vsync enabled (sorry I don't remember the exact option string)."
          It's sort of hard to explain: it looks like it's synced to vsync, but then again not. It's not easy to explain, but it's definitely not right. I did enable vsync and the related options. 60 Hz flat-panel output.
          Yeah, this is the hardest thing about video problems; we pretty much have to drive to your house to see what is really going on ;(

          Originally posted by Redeeman
          "Agreed; once you have the driver working on a distro for a few versions it will often tend to stay working, but it takes a bunch of testing/fixing to get there in the first place."
          I don't believe it's about getting it working on distributions and then it keeps working; I believe it's about getting it working with specific software versions, like X and the kernel. Distributions change all the time, but the drivers really shouldn't care, and nvidia's doesn't. On Gentoo or Slackware, for instance, I can compile heaps of different versions of xorg myself and it works with nvidia; I wish the same were true of fglrx (well, actually I don't much care, I just want the free drivers, and then this problem goes away instantly).
          I think there are a couple of factors here. One is that we follow the DRI conventions for our drivers while NVidia basically carries a complete self-contained stack over, so there is a heap of distro-to-distro variance that we have to deal with and they don't. There are some downsides to their approach, of course, which Matthew has mentioned in other posts.

          The other is that we have power management stuff woven through the driver which (today) the free driver does not. One of the most obvious distro-specific issues is atieventsd, which needs to point to some other file which apparently is in different places on different distros.

          Originally posted by Redeeman
          "I'm not sure. It used to with some drivers; don't know if FGLRX cares today. I believe that we set some defaults at install time depending on which card you have, ie 5xx and above cards default to textured video while earlier cards default to video overlay, there could be more serious ones as well."
          Now you are scaring me... this should really not be done at build time, it _WILL_ cause problems.
          Not build time, just install time. These are things which go in amdpcsdb and/or conf.
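          To make the distinction concrete, here is a minimal sketch in C of the kind of install-time decision involved. The names here (pcs_write, "VideoAdaptor") are made up for illustration, not the real amdpcsdb keys or installer code; the point is just that the per-card default gets chosen once at install time rather than being baked in at build time.

          /* Hypothetical install-time default selection (illustrative names only). */
          #include <stdio.h>

          enum chip_family { PRE_R500, R500_OR_LATER };

          /* Stand-in for writing a key/value pair into a persistent config store. */
          static void pcs_write(const char *key, const char *value)
          {
              printf("set %s = %s\n", key, value);
          }

          static void set_video_defaults(enum chip_family family)
          {
              if (family == R500_OR_LATER)
                  pcs_write("VideoAdaptor", "TexturedVideo"); /* 5xx and above */
              else
                  pcs_write("VideoAdaptor", "VideoOverlay");  /* earlier cards */
          }

          int main(void)
          {
              /* chip family detected at install time, not hard-coded at build time */
              set_video_defaults(R500_OR_LATER);
              return 0;
          }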

          Originally posted by Redeeman
          "Our crash numbers on Windows tend to be lower than most (frequently lower than anyone if you consider market share); not sure where the numbers are published but they probably make an interesting read."
          Yes, as I said, something like 20-25% of crashes on Vista, while nvidia was 40%+. It's a significant difference, but the numbers are still WAY too high for me to be comfortable... Intel was a great deal lower than this.
          Our 2d doesn't crash either.

          • #55
            >> Is there a chart somewhere that shows which chips have which features

            > We're working on one so we can contribute some testing for the open source
            > driver. We're producing it as an internal document but I think it would be
            > useful as a wiki page as well.

            It would be useful for this chart to be on a web page somewhere:

            (a) Owners of chip 'x' could watch to see when feature 'y' becomes
            available.

            (b) Users shopping for a new chip could see which model has the
            features they want/need.

            >>> If we EOL the closed driver and focus resources on the open source driver
            >>> you are going to get a very nice open source driver but you are *not*
            >>> going to get the features and performance of the Windows driver. Ever.

            >> Ignoring wintel features that are useless/stupid, why not?

            > Simple. By having common code between OSes we are able to get the benefit of
            > development work done for those other OSes. If we had a Linux-specific open
            > source driver then it would only get the benefit of Linux-specific development
            > resources, which (because of market share realities) have to be much smaller.
            > Having a much smaller development team means fewer features & less performance.

            The open source code should be common across OSes as much as possible. Stuff
            in userland can be mostly portable across OSes. Kernel device driver code will
            tend to be more specific, as internal kernel interfaces are different between
            the *BSDs, OpenSolaris, and Linux, and then you have OS-X, Mach, Plan-9, etc.
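            As a rough illustration of that split (the function names below are hypothetical, not any real driver interface): keep the bulk of the logic in portable userland code and confine the OS differences to a thin shim, roughly like this:

            /* Illustrative only: portable userland logic over a thin OS-specific shim.
             * os_open_gpu()/os_submit() stand in for whatever each kernel exposes
             * (DRM ioctls on Linux, different interfaces on the BSDs/OpenSolaris). */
            #include <stdio.h>

            #ifdef __linux__
            static int os_open_gpu(void)           { return 0; /* e.g. open a DRM device */ }
            static int os_submit(const void *cmds) { (void)cmds; return 0; /* e.g. a DRM ioctl */ }
            #else
            static int os_open_gpu(void)           { return 0; /* BSD/Solaris equivalent */ }
            static int os_submit(const void *cmds) { (void)cmds; return 0; }
            #endif

            /* Portable part: command-buffer building, state tracking, shader handling... */
            static int draw_something(void)
            {
                unsigned int cmds[4] = { 0 };  /* build a (dummy) command buffer */
                return os_submit(cmds);        /* only this call differs per OS */
            }

            int main(void)
            {
                if (os_open_gpu() != 0)
                    return 1;
                return draw_something();
            }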

            >> If the R600/R700 UVD isn't going to be documented, then put the R600/R700 at
            >> the bottom of the list, and concentrate on documenting *everything* (video decode,
            >> 3D, power saving modes, etc.) for Rage through R500, and on assisting developers
            >> with those chips. Once the open source drivers properly support these chips,
            >> then the users can go out and buy the chips. Then go back and try to find a
            >> solution to the UVD problem.

            > The R6xx family has the same video decode hardware as R5xx *plus* UVD, so I had
            > assumed that users would want it on the list as well since it can do all the
            > same video tasks as 5xx. What do you think ?

            I am not suggesting removing any chips from the list. I observe bits and pieces
            of docs coming out for various chips. No one chip has everything documented yet.
            Users shopping for a new chip still have no chips they can buy that have all
            features supported by open source drivers. So my suggestion is: pick one chip
            and document *everything* for that chip, and assist the developers getting open
            source code written for that chip. Next, pick a 2nd chip and document *everything*
            for that chip, repeat until all chips are completely documented. It has been a
            few months and you haven't figured out how to document UVD without getting a chair
            thrown at you, so I suggest putting the UVD chips (6 & 7) at the bottom of the
            list so that thinking about a solution for UVD doesn't hold up the other chips.

            • #56
              Originally posted by Dieter
              The open source code should be common across OSes as much as possible. Stuff in userland can be mostly portable across OSes. Kernel device driver code will tend to be more specific, as internal kernel interfaces are different between the *BSDs, OpenSolaris, and Linux, and then you have OS-X, Mach, Plan-9, etc.
              Agreed, the open source drivers run across *nix OSes fairly easily. I was talking about common code across Linux and OSes with much larger market shares, where the use of common closed-source code allows us to offer Linux users more in the way of features and performance than the *nix desktop market would support otherwise.

              Originally posted by Dieter
              I am not suggesting removing any chips from the list. I observe bits and pieces of docs coming out for various chips. No one chip has everything documented yet. Users shopping for a new chip still have no chips they can buy that have all features supported by open source drivers. So my suggestion is: pick one chip and document *everything* for that chip, and assist the developers getting open source code written for that chip. Next, pick a 2nd chip and document *everything* for that chip, repeat until all chips are completely documented.
              Sorry, I should have said "bottom of the list" not "off the list". The issue is that a lot of IP is common between chip generations (eg the 5xx and 6xx have relatively similar display engines) so once we have done the work to let us release 5xx display we can also release 6xx display info with very little extra work.

              Because of this, we end up rolling out documentation by "area of functionality" rather than "chip", although for 5xx we have either released info or code samples (or can use already working driver code, eg. 2d acceleration) for everything except one of the video decode blocks (IDCT). Alex has been experimenting with 5xx power management for a while (using AtomBIOS calls in the radeon driver) but that seems to be largely done now.

              Since the 3d engine info already released for 5xx is also used for video render acceleration and the motion comp portion of video decode acceleration we figured that 6xx 3d info would benefit more users in more ways than the IDCT block.

              Originally posted by Dieter
              It has been a few months and you haven't figured out how to document UVD without getting a chair thrown at you, so I suggest putting the UVD chips (6 & 7) at the bottom of the list so that thinking about a solution for UVD doesn't hold up the other chips.
              Actually we haven't even started looking at UVD yet

              The plan was to bring up driver functionality in four stages :

              - display/modesetting using AtomBIOS
              - 2d acceleration based on existing radeon code
              - 3d acceleration based on existing radeon/drm/mesa code
              - video acceleration

              Once Alex wrote Textured Video acceleration code earlier this year it became possible for video acceleration to progress in parallel with 3d acceleration, rather than not being able to start video until 3d was finished. Right now the last bits to be documented will be 6xx+ power management and the last bits of video acceleration (the IDCT block and possibly UVD).

              The main deviation from the plan you are suggesting is that we are prioritizing 6xx 3d documentation over 5xx IDCT documentation, since the 6xx 3d info can help in many more areas (3d, 2d, video render and some of video decode).
              Last edited by bridgman; 08 June 2008, 03:12 PM.

              • #57
                > The issue is that a lot of IP is common between chip
                > generations (eg the 5xx and 6xx have relatively similar
                > display engines) so once we have done the work to let
                > us release 5xx display we can also release 6xx display
                > info with very little extra work.

                That makes sense, but I get the impression that 6xx
                is more than "very little extra work".

                Are there any estimates of when open source drivers will
                have XvMC support? With NTSC broadcasts going away in
                February, adding ATSC tuners to computers makes sense,
                allowing DVR functionality, but we need a way to watch
                the resulting mpeg2ts files. Speaking of which, what
                is the maximum resolution of ATI's s-video output?
                Is it limited to 720x480, or can it do, say, 1200x480
                or 1600x480?

                • #58
                  Originally posted by Dieter
                  That makes sense, but I get the impression that 6xx (display) is more than "very little extra work".
                  My understanding from Dave & Alex was that 5xx and 6xx pretty much came up together. Other than the more recent HD3xxx parts I think the AtomBIOS calls are not much different for 5xx and 6xx. We were also able to put the initial register spec manuals out within a day or two of each other IIRC. I added (display) to your comment to reflect the earlier discussion -- if you weren't thinking display then I probably agree, 6xx 3d *is* a bunch more work.

                  Originally posted by Dieter
                  Are there any estimates of when open source drivers will have Xvmc support? With NTSC broadcasts going away in
                  February, adding ATSC tuners to computers makes sense, allowing DVR functionality, but we need a way to watch the resulting mpeg2ts files.
                  I think it's just a matter of developers having time to work on it. The main thing you need for XvMC is motion compensation (MC), which is done on the 3d engine anyways. The 5xx 3d documentation package included the special rounding modes etc. we use for MPEG2 MC, so I guess work could start any time. The IDCT hardware is next on the list after 6xx 3d info, so with luck we should be able to get that info out in a month or so.
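                  For anyone wondering what the MC step actually is, here is a stripped-down sketch in plain C of the core operation (integer-pel forward prediction only; real MPEG2 MC also needs the half-pel averaging with those special rounding modes, plus field/frame handling). It is just an illustration of the math the 3d engine gets asked to do, not driver code:

                  /* Motion compensation for one 8x8 block: fetch the prediction from the
                   * reference frame at the motion-vector offset, add the IDCT residual,
                   * and clamp to 8 bits. */
                  #include <stdint.h>

                  static uint8_t clamp255(int v)
                  {
                      return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
                  }

                  void mc_block_8x8(const uint8_t *ref, int ref_stride,  /* reference frame */
                                    const int16_t *residual,             /* 8x8 IDCT output */
                                    uint8_t *dst, int dst_stride,        /* destination frame */
                                    int mv_x, int mv_y)                  /* motion vector */
                  {
                      for (int y = 0; y < 8; y++)
                          for (int x = 0; x < 8; x++) {
                              int pred = ref[(y + mv_y) * ref_stride + (x + mv_x)];
                              dst[y * dst_stride + x] = clamp255(pred + residual[y * 8 + x]);
                          }
                  }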

                  Originally posted by Dieter
                  Speaking of which, what is the maximum resolution of ATI's s-video output? Is it limited to 720x480, or can it do, say, 1200x480 or 1600x480?
                  Alex has been experimenting with the TV outputs when time permits so he may be able to jump in here. I think we have only used it at 720x480 so far.

                  Since S-Video uses the same colour subcarrier as composite video (~200 subcarrier cycles across the active scanline) I doubt that higher horizontal resolutions would give much improvement in picture quality although I guess it might help with text display.
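                  Rough numbers behind that estimate, using the usual NTSC figures (~3.58 MHz colour subcarrier, ~52.6 us of active line); the exact active-line time is an assumption here:

                  /* Back-of-envelope check: colour subcarrier cycles per active NTSC line. */
                  #include <stdio.h>

                  int main(void)
                  {
                      double subcarrier_hz = 3.579545e6; /* NTSC colour subcarrier */
                      double active_line_s = 52.6e-6;    /* approximate active line time */
                      printf("cycles per active line: %.0f\n", subcarrier_hz * active_line_s);
                      /* prints ~188, i.e. roughly 200 chroma cycles across the picture */
                      return 0;
                  }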
                  Last edited by bridgman; 08 June 2008, 10:26 PM.

                  • #59
                    So basically everything except UVD ought to be functional through r600 by the end of the year? That would be great. My next card will be from ATI, too.

                    • #60
                      That seems pretty safe, as long as there is some existing XvMC code we can use as a starting point. I'm hoping we can be largely caught up with Gallium and the next generation GPUs by then as well.

                      I have a nagging worry that we may still be fiddling with tvout at the end of the year, though.
                      Last edited by bridgman; 08 June 2008, 11:40 PM.