X.Org 7.8 Isn't Actively Being Pursued

  • #41
    Originally posted by Luke View Post
    Some say AMD's sudden interest in the open driver implies an effort to emulate Intel's strategy
    of closed driver for Windoze, separate open driver for Linux.
    Just curious, what is this "sudden interest" you're talking about? Work on UVD (and radeonSI) started in 2011, and power management even before that.

    • #42
      OK, make that "sudden results"

      Originally posted by bridgman View Post
      Just curious, what is this "sudden interest" you're talking about? Work on UVD (and radeonSI) started in 2011, and power management even before that.
      Are you referring to work inside AMD to get the code "cleaned," cleared, and released, or to parallel efforts like the old-style power management and VDPAU decode on the shaders? I had been using old-style "profile" power management since late spring 2012.

      The UVD code drop, like the power management code drop, was something I wasn't really counting on. Everyone said it was held up behind corporate lawyers, and nobody knew if it would ever come out. Then these two long-desired items came out a month apart. I don't know whether someone told programmers to find and strip out every last piece of third-party patented DRM hooks that could cause issues, told third-party copyright holders to either give permission or never get another contract/job from AMD, or what, but the close timing of two code drops that had both been bottled up behind lawyers just seemed to me like someone decided to get damned serious.

      Those code drops didn't change OpenGL performance on older cards that boot to full speed, but they sure as hell made power management easier to use, and automatic power management made VDPAU practical to use: setting a "low" manual profile didn't leave enough power to play a 1080p video on the UVD block, yet was enough for all desktop activities. I don't have any APU machines, but I saw Phoronix stories that made them sound like Nvidia cards when running open drivers, due to old-style PM not working combined with booting to low GPU clocks, just like Fermi on Nouveau. Now all that has changed.

      It would have taken longer, maybe a lot longer, to reverse engineer both UVD and power management with no prior knowledge of the algorithms the hardware had been designed to use. The previous power management worked, but in dynpm mode it flickered the display. It also used more power in low mode than the new way does with dpm, probably because it supported only clock setting and not clock gating. I'm curious how long people would have waited for that support from AMD before someone threw in the towel and put in the long hours it would no doubt have taken to make old-school dynpm work. As for the UVD block, I remember predictions that DRM code would make it permanently unreleasable, and that only shader decoding was likely to ever be supported. Now even Nouveau is getting video decode support.
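For reference, the two interfaces contrasted above (old profile-based PM vs. the new dpm) are both exposed through sysfs on radeon hardware. A minimal sketch of inspecting whichever one your kernel provides; the card index and the assumption that these exact nodes exist on your system are mine, not from the post:

```python
import os

# Standard radeon sysfs location; "card0" is an assumed index.
DEVICE = "/sys/class/drm/card0/device"

# Nodes of interest (old interface: power_method/power_profile;
# new interface: power_dpm_state). Availability depends on kernel and GPU.
PM_NODES = ["power_method", "power_profile", "power_dpm_state"]

def read_pm_state(device=DEVICE):
    """Return a dict of whichever PM sysfs nodes exist; {} if none do."""
    state = {}
    for name in PM_NODES:
        path = os.path.join(device, name)
        if os.path.exists(path):
            with open(path) as f:
                state[name] = f.read().strip()
    return state

# On a machine without a radeon GPU this simply returns {}.
print(read_pm_state())
```

With the old interface you would write `profile` to `power_method` and then `low`/`mid`/`high` to `power_profile`; with dpm (merged around kernel 3.11, initially behind `radeon.dpm=1`) the `power_dpm_state` node takes over, which is what made the automatic behaviour described above possible.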

      • #43
        Originally posted by Luke View Post
        Are you referring to work inside AMD to get the code "cleaned," cleared, and released, or to parallel efforts like the old-style power management and VDPAU decode on the shaders? I had been using old-style "profile" power management since late spring 2012.

        The UVD code drop, like the power management code drop, was something I wasn't really counting on. Everyone said it was held up behind corporate lawyers, and nobody knew if it would ever come out. Then these two long-desired items came out a month apart. I don't know whether someone told programmers to find and strip out every last piece of third-party patented DRM hooks that could cause issues, told third-party copyright holders to either give permission or never get another contract/job from AMD, or what, but the close timing of two code drops that had both been bottled up behind lawyers just seemed to me like someone decided to get damned serious.

        Those code drops didn't change OpenGL performance on older cards that boot to full speed, but they sure as hell made power management easier to use, and automatic power management made VDPAU practical to use: setting a "low" manual profile didn't leave enough power to play a 1080p video on the UVD block, yet was enough for all desktop activities. I don't have any APU machines, but I saw Phoronix stories that made them sound like Nvidia cards when running open drivers, due to old-style PM not working combined with booting to low GPU clocks, just like Fermi on Nouveau. Now all that has changed.

        It would have taken longer, maybe a lot longer, to reverse engineer both UVD and power management with no prior knowledge of the algorithms the hardware had been designed to use. The previous power management worked, but in dynpm mode it flickered the display. It also used more power in low mode than the new way does with dpm, probably because it supported only clock setting and not clock gating. I'm curious how long people would have waited for that support from AMD before someone threw in the towel and put in the long hours it would no doubt have taken to make old-school dynpm work. As for the UVD block, I remember predictions that DRM code would make it permanently unreleasable, and that only shader decoding was likely to ever be supported. Now even Nouveau is getting video decode support.
        I'm sure he'll respond to you himself, but Bridgman has told me in the past (or rather, I probably just read it somewhere on this forum) that it wasn't so much about lawyers as it was about code review. AMD has a number of developers that it pays directly to work on the radeon driver, plus there are independent developers who help a lot. This is all new code - none of it was ever proprietary - so it had to be peer reviewed before release so that everyone involved could get their opinions in on how it should work. That took some time to put together. Meanwhile, there were a lot of other things going on, like learning the hardware and getting documentation into a format that could be released. In addition, everyone always expected that the end result would culminate in same-day launch support in the OSS drivers. They are mostly caught up on that, but still have a ways to go.

        I do think that, internally, the guys working on the OSS drivers have a much stronger leg to stand on recently. I think that is going to result in very good things for all of AMD's (Linux) customers.
        Last edited by duby229; 13 August 2013, 12:16 AM.

        • #44
          Originally posted by bridgman View Post
          Just curious, what is this "sudden interest" you're talking about? Work on UVD (and radeonSI) started in 2011, and power management even before that.
          Didn't you notice that you got a few new colleagues in the last few months?

          Intel has a special Android version that uses Mesa. I don't know if Luke means the same, but I'm pretty certain that AMD wants to go the same route. Intel does the Android/Mesa work anyway, so why not strengthen AMD's Mesa drivers? I'm sure that's cheaper than porting Catalyst to Android.

          • #45
            Originally posted by Vim_User View Post
            You still get it backwards, seems that you are resistant to learning. The Wayland developers are those with years of experience in developing display servers, so they are the pros, while the Mir developers are those with no experience, hence the amateurs.
            realize that a display server is not exactly rocket science
            rather, it's like putting together individual low- and high-level pieces, envisioning the objects (classes/methods) and data structures (in this case, to support windows and their manipulation) as you would for any other software problem - something that anyone who has studied software engineering 101, and is given time to analyze the use case, the requirements and the APIs to use, can do

            OTOH, go and look for yourself how many professional positions in the software industry require one to know and be proficient with methodologies such as Agile, XP, SCRUM, Test-Driven Development and the like, or with languages such as (e/)C++, Java, C# and so on - and how much software is written in C that is both *new* (ie other than GNU code, core *nix code - stemming from a legacy culture even when new - or straight legacy codebases mandating the choice of language) and *professional* (written with a proper design and QA process)
            in the real world (when there's no kernel to be developed and you can choose your language and tools freely), C is often looked at with suspicion, because it's the choice of hobbyists hacking together half-working code without a design or QA process (maybe without even knowing what QA means, much less about writing tests before code - "tests? what are tests?") - in my field it often bears an image of unprofessionalism
            and yet the Wayland developers have chosen to perpetuate the use of C, and not to use a development methodology that would make the code both more modern and closer to persistent correctness (because tests being part of the codebase help a great deal to avoid regressions)

            judging from them (seemingly) neither knowing nor applying current software development methodologies, and sticking to old-fashioned tools and design concepts (previously the display server separated from the window manager separated from the shell, now the display server still separated from the shell - protocols needing to be extensible, as if the requirements for something like a desktop were not finite and known a priori...), one may also call them amateurs (though I won't, out of respect for them being paid developers at large companies for quite some time)
            but then, it would also be quite an insult to call someone an amateur who doesn't have an equally big name yet knows how to do his job - how can you question the professionalism of someone you don't know, in his own field?

            • #46
              Did you just equate good testing and development methodologies with some languages? Completely ignoring other aspects that matter with languages, such as tools, performance, memory use, and usability of libraries? In a post calling others amateurs?

              • #47
                Originally posted by silix View Post
                realize that a display server is not exactly rocket science
                rather, it's like putting together individual low- and high-level pieces, envisioning the objects (classes/methods) and data structures (in this case, to support windows and their manipulation) as you would for any other software problem - something that anyone who has studied software engineering 101, and is given time to analyze the use case, the requirements and the APIs to use, can do
                Well the thing is, the reason why it's "not rocket science" is because the Wayland devs have spent the better part of 5 years doing the groundwork, developing the graphics stack, making it ready for a modern display system. Now that work is done, it's easier for Mir devs to jump on board with their own solution that builds on all the work done by the actual professionals. Without that work done by Wayland devs, Mir wouldn't even be possible. The Mir devs wouldn't even know what to do if Wayland hadn't shown them the Way.

                OTOH, go and look for yourself how many professional positions in the software industry require one to know and be proficient with methodologies such as Agile, XP, SCRUM, Test-Driven Development and the like, or with languages such as (e/)C++, Java, C# and so on - and how much software is written in C that is both *new* (ie other than GNU code, core *nix code - stemming from a legacy culture even when new - or straight legacy codebases mandating the choice of language) and *professional* (written with a proper design and QA process)
                in the real world (when there's no kernel to be developed and you can choose your language and tools freely), C is often looked at with suspicion, because it's the choice of hobbyists hacking together half-working code without a design or QA process (maybe without even knowing what QA means, much less about writing tests before code - "tests? what are tests?") - in my field it often bears an image of unprofessionalism
                and yet the Wayland developers have chosen to perpetuate the use of C, and not to use a development methodology that would make the code both more modern and closer to persistent correctness (because tests being part of the codebase help a great deal to avoid regressions)
                The Linux kernel is written in C. Go ask Linus Torvalds what he thinks of C++... and "test-driven" development is not a miracle cure, it's just one method of development, not necessarily any better or worse than other methods.

                Just because something is used in the "professional software world" by big, proprietary software houses doesn't make it better. The so-called professional coders are not always superior. When Microsoft was forced to become a contributor to the Linux kernel due to their Hyper-V code, the Microsoft "professional" coders at first couldn't meet the high quality standards set by the kernel developers. They were amazed at how disciplined and strict the Linux kernel's quality requirements were, as they hadn't had to deal with such requirements in their own work.

                judging from them (seemingly) neither knowing nor applying current software development methodologies, and sticking to old-fashioned tools and design concepts (previously the display server separated from the window manager separated from the shell, now the display server still separated from the shell - protocols needing to be extensible, as if the requirements for something like a desktop were not finite and known a priori...), one may also call them amateurs (though I won't, out of respect for them being paid developers at large companies for quite some time)
                but then, it would also be quite an insult to call someone an amateur who doesn't have an equally big name yet knows how to do his job - how can you question the professionalism of someone you don't know, in his own field?
                Seems like someone's been reading Shuttleworth's blog... firstly no, Wayland doesn't require your display server to be separated from your window manager or shell. Wayland places no such requirements - you can implement them all in one big blob if you want to. Wayland makes very few demands on the design of your software - all it asks is that you speak the protocol correctly; after that you can write your compositor whichever way you want, decide how to allocate buffers (server/client side), use whatever backend you want (EGL, pixman, Android)... Secondly, modularity is a good thing. Putting everything in one big chunk just creates a single point of failure, and makes the system less customizable - if your monolithic beast of a display system happens to crash, then the shell, window manager and display server all crash at the same time.

                Thirdly, heck yes protocols need to be extensible. Oh, who's ever going to need more than 640K of memory, that's just preposterous... we never know what will happen in the future - the IT field is very volatile that way - and by allowing extension of the protocol when needed, the Wayland devs prepare for that future and avoid having to go through this whole mess again in another 5 years. That's called thinking ahead.

                For that matter, Mir doesn't even have a protocol; it's just whatever is needed to communicate with Unity, with no promise of a stable API, which makes it unfeasible for anyone except Canonical to use - not much of an improvement there. Wayland still promises a stable API and backwards compatibility, so that developers of all DEs can be assured that the rug won't get pulled from under their feet at some point in the future.
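The extensibility being argued over here works through versioned interface definitions. A sketch of what a hypothetical extension could look like, following the format of the real wayland.xml; the protocol, interface and request names below are invented for illustration:

```xml
<!-- Hypothetical extension: all names are invented; only the format
     mirrors the real wayland.xml protocol definitions. -->
<protocol name="example_screenshot">
  <interface name="example_screenshooter" version="1">
    <!-- Clients bind an interface at the highest version both sides
         support, so later versions can add requests and events without
         breaking older clients - this is the "extensible" part. -->
    <request name="capture">
      <arg name="output" type="object" interface="wl_output"/>
      <arg name="buffer" type="object" interface="wl_buffer"/>
    </request>
    <event name="done"/>
  </interface>
</protocol>
```

A compositor that doesn't advertise the interface simply never lists it in the registry, and clients fall back gracefully - which is how new capabilities ship without a flag day.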

                • #48
                  Originally posted by curaga View Post
                  Did you just equate good testing and development methologies with some languages?
                  learn to read better...
                  methodologies and programming languages as in things (NOT the only ones, but some indeed) that define your level of professionalism as long as you are capable of leveraging them - and you're rejected if you're not, when you apply for a position in the field that requires them
                  Completely ignoring other aspects, such as tools, performance, memory use, and usability of libraries that matter with languages?
                  factors that matter little in the above context
                  but now that you mention them, note that C++, Java or C# tools are generally more advanced than what's available for what is regarded as the "desert island choice" of programming languages (in fact, it's the one you use to write software for architectures for which no other compiler is available)...
                  In a post calling others amateurs?
                  no, in a post replying to someone calling others amateurs on a questionable basis...
                  you are not Keith Packard or another long-time Xorg developer (-> you don't know how to design a display server) -> you're an amateur
                  which is flawed logic on so many levels

                  you can be the maintainer of a 30-year-old crufty legacy codebase - that says you're an expert programmer, sure
                  but does knowing everything about display technology from 30 years ago *automatically* mean you are the only one who can design a display server using modern technology, and everyone else is an amateur? NO - if anything, it means that, as professional as you are, you risk falling into the trap of applying old concepts and tools to the modern one, more than someone who starts afresh (a risk you have to be aware of, and very careful to avoid)
                  development of a UI display server is, again, not rocket science, and a modern display server actually ought to be as streamlined (yet functional) as possible - that puts the development of such a thing within the reach of many people, granted they know software design concepts and methodologies in general

                  this means that Mir developers are not necessarily amateurs - if anything, the premise of them working with C++ and TDD would by itself suggest otherwise

                  • #49
                    Originally posted by silix View Post
                    you can be the maintainer of a 30-year-old crufty legacy codebase - that says you're an expert programmer, sure
                    but does knowing everything about display technology from 30 years ago *automatically* mean you are the only one who can design a display server using modern technology, and everyone else is an amateur? NO - if anything, it means that, as professional as you are, you risk falling into the trap of applying old concepts and tools to the modern one, more than someone who starts afresh (a risk you have to be aware of, and very careful to avoid)
                    development of a UI display server is, again, not rocket science, and a modern display server actually ought to be as streamlined (yet functional) as possible - that puts the development of such a thing within the reach of many people, granted they know software design concepts and methodologies in general

                    this means that Mir developers are not necessarily amateurs - if anything, the premise of them working with C++ and TDD would by itself suggest otherwise
                    Yes, having to introduce modern display technology into a 30-year-old codebase puts you in a very good place to know how to write a new codebase for that technology.
                    You seem to be confusing display technology and programming technology.

                    Wayland developers are experts in display technology, while Mir devs are not (and no, it's not trivial). It has very little to do with what tools they program with.
                    Both are certainly quite literate about programming. TDD is nice, but it's not the end all be all solution for programming.
                    C is a sane choice for an IPC protocol, C++ is adequate too. It doesn't say much about the project.

                    • #50
                      Originally posted by dee. View Post
                      Well the thing is, the reason why it's "not rocket science" is because the Wayland devs have spent the better part of 5 years doing the groundwork, developing the graphics stack, making it ready for a modern display system.
                      nope, it's not rocket science because it is about maintaining a scene graph, performing an input-redraw loop, moving some surfaces around in screen-relative coordinates, maybe devising a mechanism of sorts for applications to handle drag and drop, some hooks for input methods (ooohhh), and that's mostly it (it's not even about doing complex math for drawing curves or rendering scenes to surfaces, because what's in the rectangular surface/window is entirely up to the client)

                      it's something that almost any software engineer (starting with those working on games and game engines) could be tasked with
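The loop described above (a scene graph plus an input-redraw cycle) can be caricatured in a few lines. A minimal sketch; every name here is invented for illustration, and no real compositor is this simple:

```python
# Toy compositor core: a scene graph (a list of surfaces in stacking order)
# driven by an input-then-redraw loop. All names are invented; this only
# illustrates the structure the post describes, not any real compositor API.

class Surface:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y   # screen-relative position
        self.damaged = True                       # needs repainting

class Compositor:
    def __init__(self):
        self.scene = []                           # back-to-front stacking order

    def map_surface(self, surface):
        self.scene.append(surface)

    def handle_input(self, event):
        # e.g. a window drag: move the surface and mark it damaged
        surface, dx, dy = event
        surface.x += dx
        surface.y += dy
        surface.damaged = True

    def redraw(self):
        # composite damaged surfaces back-to-front, then clear their damage
        drawn = [s.name for s in self.scene if s.damaged]
        for s in self.scene:
            s.damaged = False
        return drawn

comp = Compositor()
win = Surface("terminal", 10, 10)
comp.map_surface(win)
comp.redraw()                       # initial paint
comp.handle_input((win, 5, -3))     # a drag event moves the surface
print(comp.redraw(), win.x, win.y)  # → ['terminal'] 15 7
```

What the sketch deliberately leaves out - buffer management, per-client protocol state, GPU submission, input routing across overlapping surfaces - is exactly where the real engineering effort goes, which is the other side of this argument.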
                      Now that work is done, it's easier for Mir devs to jump on board with their own solution that builds on all the work done by the actual professionals. Without that work done by Wayland devs, Mir wouldn't even be possible. The Mir devs wouldn't even know what to do if Wayland hadn't shown them the Way.
                      stop with this already...
                      you'd be right if Mir were reusing code from weston - but unless you prove otherwise, it's not
                      so the only thing they have in common is that they are both compositing display servers doing away with X, which is quite vague
                      moreover, there are at least two legacy-free (X11-free) GUI servers, based on the mechanism of compositing, that predate Mir by a decade and Wayland by 5 years, and exploiting GPU capabilities (using the GL pipeline to render the desktop) has been talked about in the Linux scene since at least 2004

                      moreover, Wayland in itself is nothing special technically - doing away with X's separate processes and round trips was the most logical thing, and anyone would have done it much earlier than Wayland - the problem has always been more with ecosystem inertia due to the mass of X-based software

                      had Wayland not existed, it's not like nothing else would have been born - someone else would have taken on the duty of porting compositing from OS X and Longhorn to the free software world, under a different project name
                      The Linux kernel is written in C. Go ask Linus Torvalds what he thinks of C++...
                      don't need to ask, I know perfectly well - so what? his words on C++ are utterly biased, anything but pragmatic
                      do I need to take them as dogma? no thank you... read this http://warp.povusers.org/OpenLetters...oTorvalds.html and come back
                      and "test-driven" development is not a miracle cure, it's just one method of development, not necessarily any better or worse than other methods.
                      it's worse than others in that it's not as fun, and you have to write twice (or more) as much code: the actual code, plus code for intended behaviour, plus (ideally) code to check against unintended behaviour (or that wrong states or inputs are handled correctly)
                      it's better in that you can rest assured that you have a consistent and working code base (granted all tests pass) at every iteration - and, by breaking down individual features into tasks and subtasks, and implementing each atomically with the minimal possible code, a project may build up features quicker
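The cycle being described - write the test for intended behaviour first, then the minimal code that makes it pass - looks like this in miniature. The `clamp` function and its tests are invented examples, not from any project mentioned in the thread:

```python
# Test-driven development in miniature: the tests below are written first
# and fail ("red"), then the minimal implementation makes them pass ("green").
# clamp() is an invented example, not code from any project in this thread.

import unittest

def clamp(value, lo, hi):
    """Minimal code written only after the tests below existed."""
    return max(lo, min(hi, value))

class TestClamp(unittest.TestCase):
    # tests for intended behaviour
    def test_within_range_passes_through(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_out_of_range_is_clamped(self):
        self.assertEqual(clamp(-3, 0, 10), 0)
        self.assertEqual(clamp(42, 0, 10), 10)

    # "code to check against unintended behaviour": degenerate bounds
    def test_degenerate_range(self):
        self.assertEqual(clamp(7, 3, 3), 3)

if __name__ == "__main__":
    unittest.main(exit=False, argv=["tdd"])
```

The point both sides of the argument touch on is visible even at this scale: the tests roughly double the code, but they also pin the behaviour down so a later rewrite of `clamp` can't silently regress.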
                      Just because something is used in the "professional software world" by big, proprietary software houses doesn't make it better.
                      first, TDD is a development methodology - in itself it's orthogonal to scale (business vs hobbyist) - in fact, it may benefit an individual developer more, because an individual may not have enough resources to devote to QA (which may well require an order of magnitude more effort) after hacking together the code
                      second, it doesn't make it worse, either - if anything, it's better for you to know about it if you apply for a position with certain companies
                      third, in several contexts the scale is what defines the level of professionalism involved with something, in the eyes of IT and non-IT people - as a matter of fact, "world class" is how a quality product is usually labeled, in contrast to "amateurish"...
                      fourth, mostly the same criteria apply to proprietary as well as free/open source software (openness and/or ethos are characteristics orthogonal to methodology)
                      The so-called professional coders are not always superior.
                      not always inferior, either..
                      When microsoft was forced to become a contributor to the linux kernel due to their hyper-v code, the microsoft "professional" coders at first couldn't fulfill the high quality standards set by the kernel developers. They were amazed at how disciplined and strict quality requirements the Linux kernel had, as they hadn't had to deal with such requirements in their own work.
                      the Linux kernel has very big commercial interests behind it; it's not really in the same league as what the Wayland project (born in the spare time of a single employee) has been for most of its life and still, to a certain extent, is
                      Seems like someone's been reading Shuttleworth's blog...
                      I don't make assumptions about why you are the way you are; please don't make assumptions about what I read, either

                      firstly no, Wayland doesn't require your display server to be separated from your window manager or shell. Wayland places no such requirements - you can implement them all in one big blob if you want to. Wayland makes very few demands on the design of your software - all it asks is that you speak the protocol correctly; after that you can write your compositor whichever way you want, decide how to allocate buffers (server/client side), use whatever backend you want (EGL, pixman, Android)...

                      Secondly, modularity is a good thing. Putting everything in one big chunk just creates a single point of failure
                      so, make that one robust (and it's not impossible) and the whole is robust
                      with separate components you have to make sure A, B, and the protocol interconnecting them are robust, first individually (unit) and then when put together (integration) - separate components actually mean higher complexity and failure probability, leading to longer debug times...
                      (SW development 101)
                      btw, interesting how you don't have a problem with the kernel being precisely one big chunk (megabytes, in some configurations) of privileged code...
                      and makes the system less customizable
                      that may be, but it's not inherent
                      if your monolithic beast of a display system happens to crash, then the shell, window manager and display server all crash at the same time.
                      not that the display server, the compositor and the shell being separate processes under X helped much in that regard... if the X server crashed, all clients were terminated too and the session restarted, with your previous work lost...
                      the problem is not the probability of a server crash - any software will crash sooner or later, unless it's so simple and thoroughly verified that it won't - the problem lies in how the crash is handled
                      but that is a matter orthogonal to whether the server is integrated or not - and graceful recovery can also be implemented for a server that integrates the shell
                      Thirdly, heck yes protocols need to be extensible.
                      ideally, the window manager / display server is transparent to applications, and these would run without being aware of it or of each other (like processes running in a virtually flat memory space without being aware of each other or of the kernel scheduling them)
                      even if that's not the case and some interaction between application and server is involved (at least for details such as input event forwarding, drag and drop, and window min/max/resize), you do realize that there's only so much protocol needed to accommodate a desktop, don't you?
                      so please give a valid reason why extensibility is needed (leading to multiple code paths in the server AND in applications/toolkits, to fall back on in the case of missing extensions on either side or mismatched protocol versions) that is not a vague "eh, but the future, you don't know..."
                      For that matter, Mir doesn't even have a protocol<...>
                      considering the above - that the GUI should be as transparent to applications as possible - and the fact that Mir merges the WM with the shell, there's little need for anything beyond a Mir-specific API for stuff like theming, input methods, the notification area, the WM and DnD...
                      Wayland, OTOH, needs a protocol, having to support a different architecture (with possibly interchangeable shells, input methods and so on)
                      unfeasible for anyone except Canonical to use
                      not having a protocol to develop against doesn't mean something is infeasible, it just means that the target is shifted - instead of developing at the protocol level for something that your application shouldn't know about, as you'd do for Wayland (to develop a compositor other than weston, out-of-process input method modules, or shells), you develop a Mir plugin, or a theme, or customize Mir itself
