A few questions about video decode acceleration


  • glisse
    replied
    Originally posted by Redeeman View Post
Why the f*** would power management be a problem?! Who cares if you tell the world how to power down your chip?

Anyway, I am quite concerned about that. What are examples of such lines already drawn with 3d: things considered "can't release" even though you have "released enough for a good driver"?

Please bottom-line it for me: which features do the chips have that you, to this date (or for later hardware), have been unable to give specs on? Even if good drivers providing a good user experience can be made with fewer "features" documented, I'd sure like to know what it is that AMD has deemed itself unable to tell us how to use.

The more I hear about the released specs, the more I get the feeling they aren't going to help create anything but a second-class citizen feature-wise (and no, I don't count DRM as a feature).
Did you ever notice that today's industry focus is on power management? Everywhere you see the paradigm of watt efficiency. The competition is now about delivering more computational power while consuming fewer watts. So obviously power management is sensitive; any trick AMD uses can be an advantage over other manufacturers. And the world we live in is just "cruel": if you ever tell your competitors what you do, don't expect them to congratulate you on the cleverness of your solution, but rather to copy it or take advantage of that knowledge. I am not against emulation; as a matter of fact, this is one of the things I love. It's just that, sadly, today you have to protect knowledge.

On the 3d side I have fairly solid experience with radeon hardware, and I can tell you that if we ever want to achieve speed and functionality parity with the closed source driver, we had better start funding a massive crew of engineers. The fact is that open source people are not geniuses, nor more clever than the closed source engineers. It's just the same brand of human, eating the same kind of food and drinking the same kind of beer.

In the end I am sure that the open source driver will reach somewhere around 80%-90% of the speed of the closed source one. And since we will limit rendering to the refresh rate, if the 10% gap is just frames you can't see, then as a result we will be on par with the closed one. To me this doesn't look like a second-class citizen.

I think that AMD is mostly hiding stuff like performance registers, which are helpful only if you have that large crew of engineers I was talking about. And if you look at the released specs you can often spot gaps here and there, make wild guesses from context, and with a little more effort find out what is missing.



  • Redeeman
    replied
Originally posted by bridgman View Post
Honestly, if we took that kind of black/white approach we would not have been able to release 3d information either. What we do instead is put together a team of technical and legal experts then pick through the IP and try to draw a dotted line between "the stuff you need to write a good driver" and "the stuff that we can't release" so we can make everyone happy. For some of the blocks (2d) it's pretty easy. For others (3d) it's a lot harder but we were able to do it successfully.

    Video and power management are probably the hardest of all, but we're definitely going to release enough to make for a great user experience. I just don't know exactly where the line is going to be yet.
Why the f*** would power management be a problem?! Who cares if you tell the world how to power down your chip?

Anyway, I am quite concerned about that. What are examples of such lines already drawn with 3d: things considered "can't release" even though you have "released enough for a good driver"?

Please bottom-line it for me: which features do the chips have that you, to this date (or for later hardware), have been unable to give specs on? Even if good drivers providing a good user experience can be made with fewer "features" documented, I'd sure like to know what it is that AMD has deemed itself unable to tell us how to use.

The more I hear about the released specs, the more I get the feeling they aren't going to help create anything but a second-class citizen feature-wise (and no, I don't count DRM as a feature).
    Last edited by Redeeman; 06-07-2008, 04:41 PM.



  • duby229
    replied
    Originally posted by bridgman View Post
    I'm sure it is unintentional, but you are posting things I didn't say then calling me a liar for saying them.

What I said (and please go back and check my posts if you have any doubts) was that licensed (aka certified) players check to see if they are running in a secure environment, and will not play (or will constrict the quality) if that secure environment is not present.

    Right now the certified players and secure environments only exist on Windows, at least for HD/BD. I did not say anything about not being able to play on Linux, in fact I said multiple times that the DRM hardware would *not* prevent you from being able to play protected content on Linux unless the player app and OS were working together to stop you.
Ok, so then the question becomes: how does this have anything at all to do with Linux? How does this mean that we need a closed driver? You said it yourself, not me. You said that we need a closed driver to enable a secure environment. I didn't say that, you did. There really is only one way to understand that...

If you don't have any DRM support enabled in the closed driver, then why do you use a secure environment as an argument that you need a closed driver?



  • Redeeman
    replied
I don't see why performance in the open driver cannot match that of the closed one. Sure, features like crappy shit DRM might not be there, but who gives a rat's ass? Certainly not anyone actually shelling out money to you (except if Hollywood execs are doing it?).

What you should do is just focus your resources on a free driver.

Are you saying that the 3d performance that radeonhd eventually gets won't be nearly as good as on Windows?



  • duby229
    replied
    Originally posted by bridgman View Post
    Holy mixed signals, Batman !!

    All we heard for years was two messages :

    1. We want the Linux driver to have all the same features and performance of the Windows driver.

    2. We don't want you to open source fglrx and we don't want you to write the open source driver. Just give us the register specs and the community will write the driver.

    If we EOL the closed driver and focus resources on the open source driver you are going to get a very nice open source driver but you are *not* going to get the features and performance of the Windows driver. Ever.

    At the start of the project I talked to a lot of developers and users, and it seemed pretty clear that there were two largely non-overlapping sets of users.

    One group felt that an open driver was what mattered, and they were quite willing to live with a "reasonable" feature and performance delta against the Windows driver as long as the driver would let them reliably run everyday tasks, including light gaming and typical (non-workstation) 3d apps. They expected the driver to work with upstream code and bleeding-edge distros.

The second group expected feature and performance parity with Windows (at minimum), and primarily worked with mainstream distros, either Ubuntu for consumers or RHEL/SLED for enterprise. For this group, sharing code with other OSes is the only practical way to deal with both the high feature/performance expectations and Linux's relatively low market share as a desktop OS (available info tends to suggest around 1/2 percent, although I acknowledge it's probably higher than that).
I don't know who you talked to at the beginning of these projects, but I can tell you for certain that the open source drivers will support all the features Linux needs. The real question is how soon or late they will be in getting to that point... I think it is clear that there is such a thing as bloat; some people call it feature creep. I'd be willing to guess that more than half of the "features" on Windows are either not needed, because the problem that feature was designed to solve doesn't exist on Linux, or not needed because their purpose is to serve corporate interests.

This is all just my one single opinion, but I'll promise you that a whole lot of people who aren't responding here are nodding their heads in agreement.

So the bottom line is that I can live with a closed driver, but that closed driver simply cannot and never will serve any purpose better than an open driver can. In the end the open drivers will have taken longer to develop and will perform slower... Why? Because ATi has wasted its time, money, and power on a closed driver that has --no-- benefit to anyone, not even itself.



  • bridgman
    replied
    Originally posted by duby229 View Post
    No I'm just saying that you should drop the closed driver completely, and EOL that code base for linux right now. Today. Then make one of the open source drivers the officially supported driver by AMD. Then give that open source driver, whichever one you prefer, 100% of your attention on Linux. I'm not saying that you have to devote extra resources, simply give it the same resources that you already have allocated.
    Holy mixed signals, Batman !!

    All we heard for years was two messages :

    1. We want the Linux driver to have all the same features and performance of the Windows driver.

    2. We don't want you to open source fglrx and we don't want you to write the open source driver. Just give us the register specs and the community will write the driver.

    If we EOL the closed driver and focus resources on the open source driver you are going to get a very nice open source driver but you are *not* going to get the features and performance of the Windows driver. Ever.

    At the start of the project I talked to a lot of developers and users, and it seemed pretty clear that there were two largely non-overlapping sets of users.

    One group felt that an open driver was what mattered, and they were quite willing to live with a "reasonable" feature and performance delta against the Windows driver as long as the driver would let them reliably run everyday tasks, including light gaming and typical (non-workstation) 3d apps. They expected the driver to work with upstream code and bleeding-edge distros.

The second group expected feature and performance parity with Windows (at minimum), and primarily worked with mainstream distros, either Ubuntu for consumers or RHEL/SLED for enterprise. For this group, sharing code with other OSes is the only practical way to deal with both the high feature/performance expectations and Linux's relatively low market share as a desktop OS (available info tends to suggest around 1/2 percent, although I acknowledge it's probably higher than that).



  • bridgman
    replied
    Originally posted by Ex-Cyber View Post
    So if I'm understanding things right, the decode-specific hardware is "tainted" by secure path stuff that can't be publicly documented, but some decoding work could still be offloaded (albeit less efficiently) to shader units. Is that about right?
    It's actually a bit better than that. The 6xx family has the same decode hardware as all the previous generations (IDCT/MC, primarily for MPEG2) and I'm pretty sure we will be able to open that. Some of the 6xx chips add a new decode block, the UVD, optimized for H.264 and VC-1 and I'm not sure we will be able to open *that* up.

What will be available with pretty high confidence is:

    - render acceleration (scaling, colour space conversion etc..) using shaders (aka Xv)

    - decode acceleration for MPEG2 using the dedicated IDCT block and shaders for MC, same as we use in the Windows driver (aka XvMC)

    - decode acceleration for H.264 and VC-1, possibly using the IDCT block for IDCT, definitely using the shaders for MC, and using a mix of software and shaders for the rest

    So, in summary, all of the decode hardware in previous generations of GPUs is carried forward to 6xx and I expect it will all be available to open source developers. The only "at risk" block is the UVD which was added to most of the 6xx chips.
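For reference, the colour space conversion step in the Xv render path above is just per-pixel arithmetic mapping video YCbCr samples to RGB. Below is a rough Python sketch using the standard BT.601 studio-range coefficients; the exact coefficients and clamping the real shader code uses may well differ.

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one BT.601 studio-range YCbCr sample to 8-bit RGB.

    This is the per-pixel arithmetic an Xv colour-space-conversion
    shader performs (illustrative only; the driver's shader may use
    different coefficients or full-range inputs).
    """
    c = 1.164 * (y - 16)      # expand studio-range luma (16..235)
    d = cb - 128.0            # centre the chroma components
    e = cr - 128.0
    clamp = lambda v: max(0, min(255, int(round(v))))
    r = clamp(c + 1.596 * e)
    g = clamp(c - 0.392 * d - 0.813 * e)
    b = clamp(c + 2.017 * d)
    return r, g, b
```

On the GPU the same three dot products run for every output pixel, which is why this step is a natural fit for the shader units.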

BTW all of the decode hardware from all of the HW vendors has contained protection logic for at least 8 years, and probably more. The only difference is that I'm pretty sure we can separate the decode and protection info for the IDCT/MC hardware, but not so sure we can do that for UVD.
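To make the IDCT/MC split concrete: the dedicated IDCT block evaluates the 8x8 inverse discrete cosine transform on each block of dequantised coefficients, and the shaders then handle motion compensation. A naive Python version of that transform follows (the direct double sum, purely to show the maths; real decoders and the hardware use a fast factorised form).

```python
import math

def idct_8x8(coeffs):
    """Naive 8x8 inverse DCT as used in MPEG-2 style decoding.

    `coeffs` is an 8x8 list of dequantised DCT coefficients;
    returns the 8x8 block of reconstructed spatial samples.
    Direct O(n^4) evaluation, for illustration only.
    """
    def alpha(u):
        return math.sqrt(0.5) if u == 0 else 1.0

    out = [[0.0] * 8 for _ in range(8)]
    for x in range(8):
        for y in range(8):
            s = 0.0
            for u in range(8):
                for v in range(8):
                    s += (alpha(u) * alpha(v) * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[x][y] = s / 4.0
    return out
```

A DC-only input block reconstructs to a flat block, which is an easy sanity check on the normalisation.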
    Last edited by bridgman; 06-07-2008, 03:56 PM.



  • bridgman
    replied
    Originally posted by duby229 View Post
But saying that we Linux users wouldn't be able to play back protected content is a flat-out lie.
    I'm sure it is unintentional, but you are posting things I didn't say then calling me a liar for saying them.

What I said (and please go back and check my posts if you have any doubts) was that licensed (aka certified) players check to see if they are running in a secure environment, and will not play (or will constrict the quality) if that secure environment is not present.

    Right now the certified players and secure environments only exist on Windows, at least for HD/BD. I did not say anything about not being able to play on Linux, in fact I said multiple times that the DRM hardware would *not* prevent you from being able to play protected content on Linux unless the player app and OS were working together to stop you.
    Last edited by bridgman; 06-07-2008, 03:57 PM.



  • duby229
    replied
    Originally posted by Ex-Cyber View Post
    So if I'm understanding things right, the decode-specific hardware is "tainted" by secure path stuff that can't be publicly documented, but some decoding work could still be offloaded (albeit less efficiently) to shader units. Is that about right?

    That would mean EOLing it before the open drivers have the features and performance/compatibility tweaks expected by workstation customers. ATI/AMD would be stupid to do that.
That is the same understanding that I have as well. Honestly, though, I think that is a decent compromise. I'd rather have the shaders decoding video than use tainted hardware.

As far as EOLing the closed drivers goes: the sooner they do it, the sooner they can re-allocate existing resources to the open drivers. I understand that ATi is doing the best that it can under these circumstances, but they can certainly improve the circumstances by EOLing the closed drivers as soon as possible. The sooner they do it, the better off they'll be.



  • Ex-Cyber
    replied
    So if I'm understanding things right, the decode-specific hardware is "tainted" by secure path stuff that can't be publicly documented, but some decoding work could still be offloaded (albeit less efficiently) to shader units. Is that about right?

    Originally posted by duby229 View Post
    EOL that code base for linux right now. Today.
    That would mean EOLing it before the open drivers have the features and performance/compatibility tweaks expected by workstation customers. ATI/AMD would be stupid to do that.
    Last edited by Ex-Cyber; 06-07-2008, 01:21 PM.

