Mixing open and closed source

  • #46
    Originally posted by bridgman View Post
    Hold on there buddy

    We are *not* the founders of DRM. We are the companies that are required to implement it in order to sell chips. Intel developed HDCP because DRM on the outputs was a requirement from content providers in order to play HD or BD on PCs. Macrovision and (I think) CGMS-A came from outside the PC industry. The rest of the standards (CSS, AACS etc.) came from content providers and consumer electronics groups.
    Sorry, I was confusing this with TCG and TCPA. I think very few users know what they are and what the real problem with this stuff is. You're right about DRM instead, which (strangely) is a Microsoft patent. For anyone who doesn't know what TCG and TCPA are: they're the same organization, and Wikipedia has articles about it http://en.wikipedia.org/wiki/Trusted_Computing_Group and about the chip that "should protect and not invade our rights" called the TPM http://en.wikipedia.org/wiki/Trusted_Platform_Module


    Originally posted by bridgman View Post
    Bingo


    The problem with starting fresh for 2d and 3d is that you effectively duplicate the code even though the HW is largely unchanged. Better IMO to have code splits based on hardware splits as you say.
    Yep, but I was suggesting that the base code should be shared, and that the split code would be loaded by this core module based on identification of the board. It should be easier to use and also to maintain. Also, maybe some devs who see code doing something in a bad way would rewrite it better; this means the new codebase would also clean out the old "brutal" code present in the old codebase.
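
    As a concrete illustration of that "core module plus split modules" idea, here is a minimal sketch in C. All names and PCI IDs here are made up for illustration, so don't read it as the real driver's structure: the core probes the board's PCI device ID and dispatches to per-family code, while everything common stays shared.

    Code:
    #include <stdio.h>

    struct chip_ops {
        const char *family;
        int (*init_display)(void);
        int (*init_accel)(void);
    };

    static int r4xx_display(void) { return 0; }  /* 4xx display path  */
    static int r5xx_display(void) { return 0; }  /* 5xx display path  */
    static int common_accel(void) { return 0; }  /* shared 2d/3d path */

    static const struct {
        unsigned short device_id;  /* PCI device ID read from the board */
        struct chip_ops ops;
    } table[] = {
        { 0x5460, { "RV370", r4xx_display, common_accel } },
        { 0x71c5, { "RV530", r5xx_display, common_accel } },
    };

    static const struct chip_ops *probe(unsigned short device_id)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (table[i].device_id == device_id)
                return &table[i].ops;
        return NULL;  /* unknown board: bail out rather than guess */
    }

    int main(void)
    {
        const struct chip_ops *ops = probe(0x71c5);
        if (!ops)
            return 1;
        printf("detected %s\n", ops->family);
        ops->init_display();  /* family-specific code */
        ops->init_accel();    /* shared code          */
        return 0;
    }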

    Originally posted by bridgman View Post
    We did that at the start of the project. That's how we ended up with separate code for display and common code for everything else. I even have slides
    Have you shown them to the devs?!
    I'm joking about this.

    Originally posted by bridgman View Post
    Here's where it gets tricky. The display block is totally different between 4xx and 5xx, so it needs new code, but display implemented the way we want is the smallest part of the driver stack. Since display was implemented first it seems a lot more black-and-white now, but once 2d, 3d and video get in the amount of unchanged hardware will be bigger than the amount of different hardware.
    Wouldn't the split modules help more with code cleanliness?! Also, if the various different chips are maintained in separate small modules, it's faster to find out where the problems are when modifying things. I'm for the split approach, since I think many small modules are more efficient in memory use as well as in code maintenance. Also, if we know that older boards cannot get better code, then it would be better to split them into modules so that those modules wouldn't be touched by new code written for the newer boards. Porting the new code to the old modules would be simple, since you'd only need to test the new modules on older boards and see what happens. That should help you avoid problems like the one with FireGL boards not working: if they had their own dedicated modules from older releases, they would still work even if new code written for the newer boards would have prevented them from working.
    But if the base settings and initialization of the board are different, then this doesn't apply anymore.

    Originally posted by bridgman View Post
    Actually, IGPs are not much different. The memory management setup is a bit different (which is a huge pain if you are RE-ing but otherwise not a big deal) and the 3d engines need vertex processing in software, but everything else is the same.
    So what they really need is just to reset the hw registers and set up the correct ones (which is what Catalyst seems to do on Windows at startup) to work better?!

    Originally posted by bridgman View Post
    It's harder than that. Everything below the decoding would need to be closed source and tamper proofed. Between that and the closed source 3d everyone seems willing to consider, there isn't much left.
    I haven't said that you should do it right away. I was pointing out that having the same core module for fglrx and radeon at startup would mean that in the future the two drivers could somehow share things, if AMD came to consider that a good road to walk down. What would the problem be with an fglrx module that does only 3d accel, UVD and the other closed bits, but uses the open source 2d module instead of fglrx's own?! Obviously this won't work on newer boards that emulate 2d via 3d. Also, since the drm is already present in the kernel, why not use the Xorg drm directly for fglrx too?! Is it so different?! Having this might also get drm code better supported in the kernel itself, and could make Linus consider some middle way between supporting these "life intrusion" mechanisms like DRM and not supporting them at all. This is what happened when SELinux tried to get into the kernel and a new security layer was created. The same could happen, under the right circumstances, here. A user, for example, would not have this compiled by default, but if he wanted, he could compile the modules and watch BD and other content on his PC.
    I'm talking about this because I'm planning to build a multimedia home-TV center to attach to my LCD TV, and I'd like to include a BD player so I can take advantage of full-HD films. But to do that I'd need to use Windows and pay a lot of money for Windows tools, which do what they want and not what I want them to do, while on Linux I have MythTV and a lot of other useful stuff, I wouldn't have to pay a single cent unless I wanted to donate to the various projects, and I'd have the apps configured and working the way I like.

    • #47
      Originally posted by givemesugarr View Post
      Yep, but I was suggesting that the base code should be shared, and that the split code would be loaded by this core module based on identification of the board. It should be easier to use and also to maintain. Also, maybe some devs who see code doing something in a bad way would rewrite it better; this means the new codebase would also clean out the old "brutal" code present in the old codebase.
      On the first implementation I expect the split code will be tiny and the base code will be 90+%. Most of the cruft is in the base code that doesn't change from one chip to another. I figure we're going to want to do a fresh implementation from scratch once Gallium stabilizes anyways, so why do it twice?

      Originally posted by givemesugarr View Post
      Also, if we know that older boards cannot get better code, then it would be better to split them into modules so that those modules wouldn't be touched by new code written for the newer boards. Porting the new code to the old modules would be simple, since you'd only need to test the new modules on older boards and see what happens. That should help you avoid problems like the one with FireGL boards not working: if they had their own dedicated modules from older releases, they would still work even if new code written for the newer boards would have prevented them from working.
      I agree -- if there aren't resources to test and fix issues on older boards then separate modules may be the way to go, even if they contain largely duplicated code. Downside of this is that you stop enhancing support for older boards and the owners come looking for you with flaming torches and pitchforks. If you always push changes from one module to the other then having separate modules is a waste of time. This issue is tough enough with fglrx but I don't think open source drivers should leave the older chips behind, particularly when 90% of the code will be the same. Fair question though.

      Again, if I thought there would be significantly different code for the different ASIC generations then I would be arguing for separate modules. It's not that we don't understand where separate modules can be useful, it's that so much of the code *is* common across the generations.

      Originally posted by givemesugarr View Post
      But if the base settings and initialization of the board are different, then this doesn't apply anymore.
      Drivers for the current mesa architecture have a lot of code that isn't specific to the ASIC. One of the things Gallium brings is that the drivers are a lot more "pure", ie they contain relatively more chip-specific stuff and relatively less common stuff. I figure that's the time to clean house.

      Originally posted by givemesugarr View Post
      (IGP) So what they really need is just to reset the hw registers and set up the correct ones (which is what Catalyst seems to do on Windows at startup) to work better?!
      That's one part. The other part is that someone has to write the code that makes up for the lack of vertex shaders. Dave Airlie did the initial work and got 3d running on RS4xx, but more work is still needed.
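
      To make the "vertex processing in software" point concrete, here is a rough sketch (illustrative only, not the actual Mesa/drm code) of what the driver has to do on IGPs that lack vertex shaders: transform every vertex with the modelview-projection matrix on the CPU before the hardware rasterizes.

      Code:
      typedef struct { float x, y, z, w; } vec4;

      /* Multiply one vertex by a column-major 4x4 matrix (OpenGL layout). */
      static vec4 transform(const float mvp[16], vec4 v)
      {
          vec4 o;
          o.x = mvp[0]*v.x + mvp[4]*v.y + mvp[8]*v.z  + mvp[12]*v.w;
          o.y = mvp[1]*v.x + mvp[5]*v.y + mvp[9]*v.z  + mvp[13]*v.w;
          o.z = mvp[2]*v.x + mvp[6]*v.y + mvp[10]*v.z + mvp[14]*v.w;
          o.w = mvp[3]*v.x + mvp[7]*v.y + mvp[11]*v.z + mvp[15]*v.w;
          return o;
      }

      /* What a vertex shader would do per vertex happens on the CPU
       * instead; the transformed buffer then goes to the rasterizer,
       * which the IGP does have. */
      static void process_vertices(const float mvp[16], vec4 *verts, int n)
      {
          for (int i = 0; i < n; i++)
              verts[i] = transform(mvp, verts[i]);
      }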

      Originally posted by givemesugarr View Post
      I haven't said that you should do it right away. I was pointing out that having the same core module for fglrx and radeon at startup would mean that in the future the two drivers could somehow share things, if AMD came to consider that a good road to walk down. What would the problem be with an fglrx module that does only 3d accel, UVD and the other closed bits, but uses the open source 2d module instead of fglrx's own?! Obviously this won't work on newer boards that emulate 2d via 3d.
      Simple. Protecting the decoded HD/BD images requires DRM code in the display and kernel drivers, not just the video decode/render bits. That doesn't leave much to open source.

      Originally posted by givemesugarr View Post
      Also, since the drm is already present in the kernel, why not use the Xorg drm directly for fglrx too?! Is it so different?! Having this might also get drm code better supported in the kernel itself, and could make Linus consider some middle way between supporting these "life intrusion" mechanisms like DRM and not supporting them at all. This is what happened when SELinux tried to get into the kernel and a new security layer was created. The same could happen, under the right circumstances, here. A user, for example, would not have this compiled by default, but if he wanted, he could compile the modules and watch BD and other content on his PC.
      The kernel DRM will need a full implementation of TTM at minimum (including management of video memory and migration between pools) before it can support our client code. A lot of the features added by TTM & Gallium have been in proprietary drivers for a while. That's one of the reasons I'm saying "let's not have this discussion now".
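
      For anyone wondering what that involves, here is a hedged sketch of the memory-management idea behind TTM: buffer objects with a current placement that can be migrated between pools. The types and names below are invented for illustration; the real TTM interfaces look quite different.

      Code:
      #include <stdlib.h>
      #include <string.h>

      enum pool { POOL_VRAM, POOL_GTT };  /* on-card memory vs. system pages */

      struct buffer_object {
          size_t size;
          enum pool placement;
          void *pages;  /* stand-in for the real backing storage */
      };

      /* Migration is conceptually a copy plus bookkeeping; the real
       * kernel code also has to fence against the GPU and fix up the
       * GPU page tables. */
      static int migrate(struct buffer_object *bo, enum pool dst)
      {
          if (bo->placement == dst)
              return 0;
          void *mem = malloc(bo->size);  /* "allocate in the target pool" */
          if (!mem)
              return -1;                 /* pool full: caller must evict  */
          memcpy(mem, bo->pages, bo->size);
          free(bo->pages);
          bo->pages = mem;
          bo->placement = dst;
          return 0;
      }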

      If it was just "leaving stuff out" that would be OK, but the stuff that stays in pretty much needs to be double-implemented depending on what lower level functions are present. We also need to tamper-proof any of the secure code, and that means everything below needs to be tamper-proofed as well.

      If you believe in separate code for separate functions, that argues for separate implementations of the secure/nonsecure paths, and once you do *that* you end up with an open driver which is the collection of nonsecure modules, and a closed driver which is the collection of secure modules. Plus or minus 20%, anyways.

      We went through many of these discussions internally, including consideration of various open/closed hybrids, before settling on the current "two driver" plan. I think there are places where we could use open source effectively in fglrx, particularly installation and initialization/startup, but going much further than that starts to put real constraints on what we can do with fglrx in the future.
      Last edited by bridgman; 02-03-2008, 02:24 PM.

      • #48
        My fingers are getting tired. Can we stop talking about DRM for a while?

        • #49
          Originally posted by bridgman View Post
          On the first implementation I expect the split code will be tiny and the base code will be 90+%. Most of the cruft is in the base code that doesn't change from one chip to another. I figure we're going to want to do a fresh implementation from scratch once Gallium stabilizes anyways, so why do it twice?
          Correct observation. This means that for the moment the right thing is to continue adding features, and to do a full cleanup once Gallium has stabilized.

          Originally posted by bridgman View Post

          I agree -- if there aren't resources to test and fix issues on older boards then separate modules may be the way to go, even if they contain largely duplicated code. Downside of this is that you stop enhancing support for older boards and the owners come looking for you with flaming torches and pitchforks. If you always push changes from one module to the other then having separate modules is a waste of time. This issue is tough enough with fglrx but I don't think open source drivers should leave the older chips behind, particularly when 90% of the code will be the same. Fair question though.
          Well, a workaround for this split vision might be to keep the various older modules ready and insert them when there are compatibility problems. For example, you release a new version, but for some reason this new version no longer works on the RS400 series. At that point you ship the last module that worked fine for those chipsets under the name rs400, and the driver would use that one for them and the new one for the other chips. This could be a workaround for the problems that split, duplicated code would introduce.
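
          A tiny sketch of how that fallback could look (entirely hypothetical; the module names and versions are made up): the loader consults a per-family override table before picking the current module.

          Code:
          #include <stdio.h>
          #include <string.h>

          /* Families with a known regression map to their last-good module. */
          static const struct {
              const char *family;
              const char *known_good;
          } overrides[] = {
              { "rs400", "radeon-rs400-1.7" },
          };

          static const char *pick_module(const char *family, const char *current)
          {
              for (size_t i = 0; i < sizeof overrides / sizeof overrides[0]; i++)
                  if (strcmp(overrides[i].family, family) == 0)
                      return overrides[i].known_good;
              return current;  /* no regression recorded: use the new module */
          }

          int main(void)
          {
              printf("%s\n", pick_module("rs400", "radeon-2.0"));  /* old module */
              printf("%s\n", pick_module("r300",  "radeon-2.0"));  /* new module */
              return 0;
          }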

          Originally posted by bridgman View Post
          Again, if I thought there would be significantly different code for the different ASIC generations then I would be arguing for separate modules. It's not that we don't understand where separate modules can be useful, it's that so much of the code *is* common across the generations.
          Well, this is good to hear, since it means the released docs could also be helpful for older boards, and the process of implementing new features in radeonhd could be somewhat faster thanks to already-written code.


          Originally posted by bridgman View Post
          Simple. Protecting the decoded HD/BD images requires DRM code in the display and kernel drivers, not just the video decode/render bits. That doesn't leave much to open source.
          Well, maybe in the future someone could think of a solution to this problem...

          Originally posted by bridgman View Post
          The kernel DRM will need a full implementation of TTM at minimum (including management of video memory and migration between pools) before it can support our client code. A lot of the features added by TTM & Gallium have been in proprietary drivers for a while. That's one of the reasons I'm saying "let's not have this discussion now".

          If it was just "leaving stuff out" that would be OK, but the stuff that stays in pretty much needs to be double-implemented depending on what lower level functions are present. We also need to tamper-proof any of the secure code, and that means everything below needs to be tamper-proofed as well.

          If you believe in separate code for separate functions, that argues for separate implementations of the secure/nonsecure paths, and once you do *that* you end up with an open driver which is the collection of nonsecure modules, and a closed driver which is the collection of secure modules. Plus or minus 20%, anyways.

          We went through many of these discussions internally, including consideration of various open/closed hybrids, before settling on the current "two driver" plan. I think there are places where we could use open source effectively in fglrx, particularly installation and initialization/startup, but going much further than that starts to put real constraints on what we can do with fglrx in the future.
          OK, now I've understood why merging the actual base for fglrx and radeon isn't possible, but from how you've described it, when Gallium arrives that will most likely happen, and maybe at that point we'll be able to switch from radeon to fglrx without much pain.
          I'd like to thank you for the time you've spent around here answering our questions and helping us better understand how things work inside the development process, and what to expect from AMD in the near term and in the long term.

          • #50
            Originally posted by Kano View Post
            Well, decoding is possible using ffmpeg -- mplayer can use the built-in or an external ffmpeg. Also there is a patch to use CoreAVC as the decoder.

            http://code.google.com/p/coreavc-for-linux/

            No decoder is hw accelerated. But not only decoding would be nice to have; encoding too. Currently mpeg4 encoding is fast enough (though it could always be faster) but h264 is really slow.

            Maybe somebody will implement it for CUDA (NVIDIA 8 series); that would be a possibility.

            Well, I have tried CoreAVC but it doesn't work for me:
            mplayer crashes as soon as you use it.
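
            For reference, the software decoding path Kano describes boils down to something like the sketch below, using libavcodec's old pre-AVPacket API (roughly as it existed at the time; details vary by version, so treat the exact calls as a sketch). All the work happens on the CPU, which is exactly why unaccelerated h264 playback is so heavy.

            Code:
            #include <libavcodec/avcodec.h>

            int decode_h264(const uint8_t *buf, int buf_size)
            {
                avcodec_register_all();

                AVCodec *codec = avcodec_find_decoder(CODEC_ID_H264);
                if (!codec)
                    return -1;  /* no h264 decoder compiled in */

                AVCodecContext *ctx = avcodec_alloc_context();
                if (avcodec_open(ctx, codec) < 0)
                    return -1;

                AVFrame *frame = avcodec_alloc_frame();
                int got_picture = 0;

                /* Pure CPU work: parse the bitstream, reconstruct a frame. */
                avcodec_decode_video(ctx, frame, &got_picture, buf, buf_size);

                if (got_picture) {
                    /* frame->data[0..2] now hold the decoded YUV planes */
                }

                avcodec_close(ctx);
                av_free(frame);
                av_free(ctx);
                return got_picture ? 0 : -1;
            }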

            • #51
              Originally posted by bridgman View Post
              My fingers are getting tired. Can we stop talking about DRM for a while?

              Wrong thread, but yes, I agree.

              • #52
                Here is my two cents. I have a seven-year-old box and I am still using it. Basically I only watch TV rips and browse the web, so it is OK. However, since most of the TV rips I watch these days are in H264 720p format, my box can't do the job anymore. I don't care about BD/HD-DVD, but I do want a hardware decoder for unprotected H264 content. I think other people on this forum have already said this: we only want H264 decoding... Bridgman, if it is OK to release the H264 part of UVD, please release it. If a closed source driver is the only possible choice, please please make that driver decode unprotected H264 content too, please!

                • #53
                  Originally posted by doubledr View Post
                  Here is my two cents. I have a seven-year-old box and I am still using it. Basically I only watch TV rips and browse the web, so it is OK. However, since most of the TV rips I watch these days are in H264 720p format, my box can't do the job anymore. I don't care about BD/HD-DVD, but I do want a hardware decoder for unprotected H264 content. I think other people on this forum have already said this: we only want H264 decoding... Bridgman, if it is OK to release the H264 part of UVD, please release it. If a closed source driver is the only possible choice, please please make that driver decode unprotected H264 content too, please!
                  First there's the need to support hw decoding in fglrx at all. For example, I'm not able to use my board's hw decoding capabilities (DivX 5 compliant movies) with fglrx, but I am able to with the radeon driver. So I suspect that hw decoding either isn't enabled in fglrx or isn't enabled for my board. I haven't tried the other boards since I don't use them often. I also haven't seen anything in the release notes about hw video decoding capabilities in fglrx.

                  • #54
                    Originally posted by bridgman View Post
                    I agree with the split, although lwn seems to be a mix of developers and slashdot-style ranters. The devs are fine (and you can see them trying to keep discussion on track) but for example if you look at the Intel article there was more ranting than reasoned discussion there.
                    Hm... I just went back and read through the comments on the Intel article:

                    http://lwn.net/Articles/267436/

                    I can see what you mean about some of the comments having a high level of emotion, and it must feel bad to you personally when people say things like "ATI, and later AMD, have been promising to release hardware documentation repeatedly, for over 8 years. ... They've been mocking us...".

                    Obviously no-one at ATI was ever in the business of "mocking" customers, so this is inaccurate and inflammatory language.

                    On the other hand, that comment, and every comment that I looked at in that discussion, contains at least a little bit of substantive relevant fact. That particular one includes this news article: "October 21, 1999: [...] ATI announces that it will be releasing 3D programming information for its video adapters". Mentioning the fact that ATI announced that on that date isn't just ranting -- it's also pointing out something that is highly relevant to some of us -- ATI's "track record" of following through on releases that it has publicly promised.

                    So, while that post and one or two others in that discussion are fraught, none of the posts that I looked at were devoid of relevant facts. Contrast this with post #41 in this thread, which offers a strongly held opinion but no added facts.

                    It must be interesting to ATI/AMD that out of two dozen people who posted on that thread, eight of them publicly exclaimed that they were eager to replace their current discrete graphics components with Intel components because of this news, and one mentioned that he had recently purchased an integrated graphics component from Intel because of their tradition of open source. I wonder if there is any way that ATI/AMD could find out if people like those eight posters follow through with this intent: how can you tell who buys what graphics components, and how can you tell why?

                    (I can tell you personally that when my brother bought a new laptop last year I advised him to choose the model with Intel integrated graphics, specifically because of Intel's stellar track record of open source drivers.)

                    Regards,

                    Zooko

                    • #55
                      Originally posted by Zooko View Post
                      (I can tell you personally that when my brother bought a new laptop last year I advised him to choose the model with Intel integrated graphics, specifically because of Intel's stellar track record of open source drivers.)

                      Regards,

                      Zooko
                      But then you have to look at the performance of the hardware and ask yourself whether it will be adequate for your needs. For most people the answer will be hell no. I'm not trying to diminish your closing statement any, just pointing out that Intel's graphics performance is much less than sub-par.

                      • #56
                        Originally posted by duby229 View Post
                        But then you have to look at the performance of the hardware and ask yourself whether it will be adequate for your needs. For most people the answer will be hell no. I'm not trying to diminish your closing statement any, just pointing out that Intel's graphics performance is much less than sub-par.
                        Yep -- you are right of course. My answer to that question has apparently been different than yours. For myself over the last decade or so, for my son's computer three years ago, for my wife's workstation this year, and for my brother's laptop last year, the answer was always that the performance of the most-open-source hardware was adequate. As I mentioned, until the radeonhd announcement this was the Radeon 9250 (using the open source 'radeon' driver) in discrete components, and in integrated components it was the Intel chips.

                        From my particular perspective, the strategic situation is that Intel was doing a good job of being open source but a terrible job on performance, whereas ATI was doing a good job on performance and a terrible job on open source. Recently ATI (now AMD) suddenly started doing better on open source, and Intel, with the aforementioned release of its graphics programming manuals, went from "good" to "just about perfect" on the open source front. Looking into the future, Intel apparently has plans to jump to new levels of performance and generality with its Larrabee project, and AMD is continuing to make progress on better and better open source support, thanks to the efforts of folks like John Bridgman here.

                        Now, if Intel can deliver competitive new products while maintaining its current excellent level of openness, despite the DRM pressures that John Bridgman has described here, or if AMD can overcome those DRM pressures to achieve the kind of openness that Intel currently has, then we'll have a real winner in the little niche market that I live in.

                        Regards,

                        Zooko Wilcox-O'Hearn

                        • #57
                          Mentioning the fact that ATI announced that on that date isn't just ranting -- it's also pointing out something that is highly relevant to some of us -- ATI's "track record" of following through on releases that it has publicly promised.
                          That was a perfect example. What the poster missed, and others corrected them on, was that we *did* release 3d information for the then current chips (R100/R200), and that information was used to write a good part of the current "radeon" driver and the corresponding drm & mesa bits.

                          If someone wanted to complain that we trailed off after the R300 (when producing and releasing docs became exponentially more difficult) and only restarted recently, that would be true, but what you read is usually much worse.

                          • #58
                            What I wonder, though -- what I don't get -- is why it wouldn't be possible to have DRM support in an OSS driver. I suppose because everybody could just take that code, modify it to their own needs, and then have it unprotected, i.e. 'grab-able'. I was always under the impression that if you do something properly, it can be open and secure. Security through obscurity never worked...

                            It really is only a temporary thing anyway, as the protection gets cracked sooner or later (just like with DVD, and just as has already been done with HD-DVD/BD).

                            I know we all agree that DRM really only exists to annoy customers: those who want to circumvent it know how to do so anyway, and those who don't care buy their media anyway.

                            I fully understand why AMD has to support it, no questions there. You want to feed your kids, and others pretty much make you do it.

                            It's just a shame to have developer resources split across all these things.


                            All in all, I'm glad audio DRM is slowly going away. It's hurting the consumers, the ones who actually buy it.
                            Whether the same will happen for video... maybe people are becoming more aware. In the old days we always fast-forwarded past the copyright warnings, so no issues there; on copied VCR tapes you would have already skipped them and never noticed. With DVDs it became annoying that you couldn't skip them. Was it annoying to hackers, to kids who download their stuff, to people who copy them? They didn't care. It only annoyed the people who actually bought the thing.

                            Anyway, sorry to have ranted about DRM again. I know your fingers were getting tired as it is; I joined this discussion late.


                            Here's hoping that one day we'll have one solid, mightily performing open source driver with a closed source DRM module (fine, it would be a rebuilt driver with the DRM module embedded) that makes everyone happy

                            P.S. Thanks, John, for giving us feedback. I can only imagine how many hours per day are burned keeping you from real work, but it's AMD's company image you are strongly improving. It's geeks like us who make purchasing decisions for a lot of people, if that's the part that matters. But having a community of geeks speaking highly of you and supporting you should be worth more than just hard cash earned... Thanks from all of us.

                            • #59
                              I don't care about the ability to decode DRM'd crap.

                              I just want decent hardware video decoding for un-DRM'd stuff.

                              No offense, but I don't know one person who owns something outside of an Xbox that has an optical drive for HD playback. Not one. Yet I know tons of people who watch various unencrypted h264, XviD, Theora, MPEG-4, etc. files on their computers. The only, and I mean only, DRM'ed format I see people using is DVD, which everyone and their brother circumvents anyhow, whether to make backups or to skip the warnings, ads, and trailers.

                              You guys are just part of a collusive anti-consumer industry, which must suck.

                              Personally, instead of modularity, I think you should just do one open source driver that works right, instead of all this monkey business with something like three drivers, none of which actually works well.

                              I do appreciate you guys' efforts, and I know hearing this stuff must be frustrating and discouraging, but there are those of us who own ATI cards that, no matter what driver we use, basically function at like 60% of what they should, because with every driver there is some stupid trade-off based around stuff like this discussion.

                              Just give me working 3D, decent video playback for unprotected video, and monitor autoconfiguration that actually works properly, all in one driver, and I'll be happy. Until you've done that, worrying about DRM is putting the cart before the horse.

                              • #60
                                Originally posted by mgc8 View Post
                                Why would Crysis (or any other game for that matter) on Linux have an issue with the open driver? Am I missing something here?
                                I'm not QUITE sure what he was getting at with the comment about Crysis. In all honesty, if the drivers perform, then as a developer on a consulting contract for a publisher, I don't think anyone gives a flip about whether the driver is proprietary or not. R200 support works pretty well -- well enough to support some of the titles we've done in the past. It's open. It works. fglrx currently doesn't support some of the functionality that is present in all the other drivers and that is relevant for the games in progress. It works, but only sort of, and really well only on select GPUs. It's closed. The lack of FBO support is obnoxious because we'd LIKE to rid ourselves of the pbuffer-style programming needed for the dynamic texture support some of the games we're porting require.

                                THAT, folks, is what a studio cares about.
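
                                For readers who haven't met it, the FBO point above is about GL_EXT_framebuffer_object: render-to-texture in a few calls inside the current context, where pbuffers need a separate context and copies. A minimal sketch, with error handling and cleanup omitted:

                                Code:
                                #define GL_GLEXT_PROTOTYPES
                                #include <GL/gl.h>
                                #include <GL/glext.h>

                                /* Create a texture we can render into; returns the FBO id, 0 on failure. */
                                GLuint make_render_target(int w, int h, GLuint *tex_out)
                                {
                                    GLuint tex, fbo;

                                    glGenTextures(1, &tex);
                                    glBindTexture(GL_TEXTURE_2D, tex);
                                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                                                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

                                    glGenFramebuffersEXT(1, &fbo);
                                    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
                                    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                                              GL_TEXTURE_2D, tex, 0);

                                    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
                                            != GL_FRAMEBUFFER_COMPLETE_EXT)
                                        return 0;      /* driver rejected this attachment combination */

                                    *tex_out = tex;    /* draw calls now land in this texture */
                                    return fbo;
                                }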
