X.Org 7.8 Isn't Actively Being Pursued
-
OK, make that sudden results
Originally posted by bridgman View Post:
Just curious, what is this "sudden interest" you're talking about? Work on UVD (and radeonSI) started in 2011, and power management even before that.
The UVD code drop, like the power management code drop, was something I wasn't really counting on. Everyone said it was held up behind corporate lawyers; nobody knew if it would ever come out. Then these two long-desired items came out about a month apart. I don't know whether someone told programmers to find and strip out every last piece of third-party patented DRM hooks that could cause issues, told third-party copyright holders to either give permission or never get another contract/job from AMD, or what, but the close timing of two code drops that had both been bottled up behind lawyers seemed to me like someone decided to get damned serious.
Those code drops didn't change OpenGL performance on older cards that boot to full speed, but they sure as hell made power management easier to use, and automatic power management made VDPAU practical to use: setting a "low" manual profile didn't leave enough power to play a 1080p video on the UVD block, yet was enough for all desktop activities. I don't have any APU machines, but I saw Phoronix stories that made them sound like Nvidia cards when running open drivers, due to old-style PM not working combined with booting to low GPU clocks, just like Fermi on Nouveau. Now all that has changed.
It would have taken longer, maybe a lot longer, to reverse engineer both UVD and power management with no prior knowledge of the algorithms the hardware had been designed to use. The previous power management worked, but in dynpm mode it flashed the display. It also used more power in low mode than the new DPM code does, probably because it only supported clock setting and not clock gating. I'm curious how long people would have waited for that support from AMD before someone threw in the towel and put in the long hours it would no doubt have taken to make old-school dynpm work. As for the UVD block, I remember predictions that DRM code would make it permanently unreleasable, and that only shader decoding was likely to ever be supported. Now even Nouveau is getting video decode support.
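For reference, the two generations of radeon power management being compared here are both driven from sysfs; a sketch (assuming card0, and that the exact files available depend on kernel version and whether the kernel was booted with DPM enabled):

```shell
# Old-style "profile" power management (pre-DPM kernels):
#   echo profile > /sys/class/drm/card0/device/power_method
#   echo low     > /sys/class/drm/card0/device/power_profile
# New DPM interface (kernel 3.11+, e.g. booted with radeon.dpm=1):
#   echo battery > /sys/class/drm/card0/device/power_dpm_state
card=/sys/class/drm/card0/device
if [ -f "$card/power_dpm_state" ]; then
    # battery / balanced / performance
    cat "$card/power_dpm_state"
else
    echo "no radeon dpm interface found"
fi
```

Writes to these files require root; the DPM path manages clocks automatically, which is what removed the need to pick a manual profile before playing video.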
-
Originally posted by Luke View Post:
Are you referring to work inside AMD to get the code "cleaned," cleared, and released, or to parallel efforts like the old-style power management and VDPAU decode on the shaders? I was using old-style "profile" power management since late spring 2012.
I do think that internally the guys working on the OSS drivers have a much stronger leg to stand on recently. I think that is going to result in very good things for all of AMD's (Linux) customers.
Last edited by duby229; 13 August 2013, 12:16 AM.
-
Originally posted by bridgman View Post:
Just curious, what is this "sudden interest" you're talking about? Work on UVD (and radeonSI) started in 2011, and power management even before that.
Intel has a special Android version that uses Mesa. I don't know if Luke means the same thing, but I'm pretty certain AMD wants to go the same route. Intel does the Android/Mesa work anyway, so why not strengthen AMD's Mesa drivers? I'm sure that's cheaper than porting Catalyst to Android.
-
Originally posted by Vim_User View Post:
You still get it backwards; seems that you are resistant to learning. The Wayland developers are those with years of experience in developing display servers, so they are the pros, while the Mir developers are those with no experience, hence the amateurs.
Rather, it's like putting together individual low- and high-level pieces, and envisioning objects (classes/methods) and data structures (in this case, to support windows and their manipulation) as you would for any other software problem. Anyone who has studied software engineering 101, and is given time to analyze the use case, the requirements, and the APIs involved, can do it, for this problem as for any other.
OTOH, go and look for yourself how many professional positions in the software industry require one to know and be proficient with methodologies such as Agile, XP, Scrum, or Test-Driven Development, or with languages such as (e/)C++, Java, or C#, and how much software is written in C that is both *new* (i.e., other than GNU code, core *nix code stemming from a legacy culture even when new, or straight legacy codebases, where the choice of language is mandated) and *professional* (written with a proper design and QA process).
In the real world (when there's no kernel to be developed and you can choose your language and tools freely), C is often looked at with suspicion, because it's the choice of hobbyists hacking together half-working code without a design or QA process (maybe without even knowing what QA means, much less about writing tests before code: "tests? what are tests?"). In my field it often bears an image of unprofessionalism.
And yet the Wayland developers have chosen to perpetuate the use of C, and not to use a development methodology that would make the code both more modern and closer to persistent correctness (tests that are part of the codebase help a great deal to avoid regressions).
Judging from them (seemingly) neither knowing nor applying current software development methodologies, and sticking to old-fashioned tools and design concepts (previously the display server was separated from the window manager and from the shell; now the display server is still separated from the shell, and protocols need to be extensible as if the requirements for something like a desktop were not finite and known a priori...), one might also call them amateurs (though I won't, out of respect for their having been paid developers at large companies for quite some time).
But then, it would also be quite an insult to call someone an amateur who doesn't have an equally big name yet knows how to do his job. How can you question the professionalism of someone you don't know, in his own field?
-
Did you just equate good testing and development methodologies with particular languages? Completely ignoring other aspects that matter with languages, such as tools, performance, memory use, and the usability of libraries? In a post calling others amateurs?
-
Originally posted by silix View Post:
Realize that a display server is not exactly rocket science.
Rather, it's like putting together individual low- and high-level pieces, and envisioning objects (classes/methods) and data structures (in this case, to support windows and their manipulation) as you would for any other software problem. Anyone who has studied software engineering 101, and is given time to analyze the use case, the requirements, and the APIs involved, can do it, for this problem as for any other.
OTOH, go and look for yourself how many professional positions in the software industry require one to know and be proficient with methodologies such as Agile, XP, Scrum, or Test-Driven Development, or with languages such as (e/)C++, Java, or C#, and how much software is written in C that is both *new* (i.e., other than GNU code, core *nix code stemming from a legacy culture even when new, or straight legacy codebases, where the choice of language is mandated) and *professional* (written with a proper design and QA process).
In the real world (when there's no kernel to be developed and you can choose your language and tools freely), C is often looked at with suspicion, because it's the choice of hobbyists hacking together half-working code without a design or QA process (maybe without even knowing what QA means, much less about writing tests before code: "tests? what are tests?"). In my field it often bears an image of unprofessionalism.
And yet the Wayland developers have chosen to perpetuate the use of C, and not to use a development methodology that would make the code both more modern and closer to persistent correctness (tests that are part of the codebase help a great deal to avoid regressions).
Just because something is used in the "professional software world" by big, proprietary software houses doesn't make it better. The so-called professional coders are not always superior. When Microsoft was forced to become a contributor to the Linux kernel because of their Hyper-V code, Microsoft's "professional" coders at first couldn't meet the high quality standards set by the kernel developers. They were amazed at how disciplined and strict the Linux kernel's quality requirements were, as they hadn't had to deal with such requirements in their own work.
Judging from them (seemingly) neither knowing nor applying current software development methodologies, and sticking to old-fashioned tools and design concepts (previously the display server was separated from the window manager and from the shell; now the display server is still separated from the shell, and protocols need to be extensible as if the requirements for something like a desktop were not finite and known a priori...), one might also call them amateurs (though I won't, out of respect for their having been paid developers at large companies for quite some time).
But then, it would also be quite an insult to call someone an amateur who doesn't have an equally big name yet knows how to do his job. How can you question the professionalism of someone you don't know, in his own field?
Thirdly, heck yes, protocols need to be extensible. "Oh, who's ever going to need more than 640K of memory, that's just preposterous..." We never know what happens in the future; the IT field is very volatile that way, and by preparing for that future, by allowing extension of the protocol when needed, the Wayland devs avoid having to go through this whole mess again in another 5 years. That's called thinking ahead.
For that matter, Mir doesn't even have a protocol; it's just whatever is needed to communicate with Unity, with no promise of a stable API, which makes it unfeasible for anyone except Canonical to use. Not much of an improvement there. Wayland, on the other hand, promises a stable API and backwards compatibility, so that developers of all DEs can be assured the rug won't get pulled from under their feet at some point in the future.
-
Originally posted by curaga View Post:
Did you just equate good testing and development methodologies with particular languages?
Methodologies and programming languages are among the things (NOT the only ones, but some indeed) that define your level of professionalism, as long as you are capable of leveraging them; and you're rejected if you are not, when you apply for a position in the field that requires them.
Completely ignoring other aspects that matter with languages, such as tools, performance, memory use, and the usability of libraries?
But now that you mention them, note that C++, Java, or C# tools are generally more advanced than what's available for what is regarded as the "desert island choice" of programming languages (in fact, it's the language you use to write software for architectures for which there's no other compiler available)...
In a post calling others amateurs?
"You are not Keith Packard or some other long-time Xorg developer -> you don't know how to design a display server -> you're an amateur."
Which is flawed logic on so many levels.
You can be the maintainer of a 30-year-old crufty legacy codebase; that says you're an expert programmer, sure.
But does knowing everything about display technology from 30 years ago *automatically* mean you are the only one who can design a display server using modern technology, and everyone else is an amateur? NO. If anything, it means that, as professional as you are, you risk falling into the trap of applying old concepts and tools to the modern one, more than someone who starts afresh (a risk you have to be aware of, and very careful to avoid).
Development of a UI display server is, again, not rocket science, and a modern display server actually ought to be as streamlined (yet functional) as possible. That puts the development of such a thing within the reach of quite a few people, granted they know software design concepts and methodologies in general.
This means that Mir developers are not necessarily amateurs; the premise of them working with C++ and TDD would say otherwise by itself.
-
Originally posted by silix View Post:
You can be the maintainer of a 30-year-old crufty legacy codebase; that says you're an expert programmer, sure.
You seem to be confusing display technology and programming technology.
Wayland developers are experts in display technology, while Mir devs are not (and no, it's not trivial). It has very little to do with what tools they program with.
Both are certainly quite literate about programming. TDD is nice, but it's not the be-all and end-all of programming.
C is a sane choice for an IPC protocol, C++ is adequate too. It doesn't say much about the project.
-
Originally posted by dee. View Post:
Well the thing is, the reason why it's "not rocket science" is because the Wayland devs have spent the better part of 5 years doing the groundwork, developing the graphics stack, making it ready for a modern display system.
It's something that almost any software engineer (starting with those working on games and game engines) could be tasked with.
Now that work is done, it's easier for Mir devs to jump on board with their own solution that builds on all the work done by the actual professionals. Without that work done by Wayland devs, Mir wouldn't even be possible. The Mir devs wouldn't even know what to do if Wayland hadn't shown them the Way.
You'd be right if Mir were reusing code from Weston, but unless you prove otherwise, it's not.
So the only thing they have in common is that they are both compositing display servers doing away with X, which is quite vague.
Moreover, there are at least two legacy-free (non-X11) GUI servers based on the mechanism of compositing that predate Mir by a decade and Wayland by 5 years, and exploiting GPU capabilities (using the GL pipeline to render the desktop) has been in the talk in the Linux scene since at least 2004.
Moreover, Wayland in itself is nothing special technically; doing away with X's separate processes and round trips was the most logical thing anyone would have done, much earlier than Wayland. The problem has always been more with ecosystem inertia due to the mass of X-based software.
Had Wayland not existed, it's not like nothing else would have been born; someone else would have taken on the duty of porting compositing from OS X and Longhorn to the free software world, under a different project name.
The Linux kernel is written in C. Go ask Linus Torvalds what he thinks of C++...
Do I need to take that as dogma? No thank you... read this http://warp.povusers.org/OpenLetters...oTorvalds.html and come back.
and "test-driven" development is not a miracle cure, it's just one method of development, not necessarily any better or worse than other methods.
It's better in that you can stay assured that you have a consistent and working codebase (granted all tests pass) at every iteration; and, by breaking down individual features into tasks and subtasks and implementing each atomically with the minimal possible code, a project may build up features quicker.
Just because something is used in the "professional software world" by big, proprietary software houses doesn't make it better.
Second, it doesn't make it worse, either; if anything, it's better for you to know about it if you apply for a position with certain companies.
Third, in several contexts scale is what defines the level of professionalism involved with something, in the eyes of IT and non-IT people alike; as a matter of fact, "world class" is how a quality product is usually labeled, in contrast to "amateurish"...
Fourth, mostly the same criteria apply to proprietary as well as free/open source software (openness and/or ethos are characteristics orthogonal to the methodology).
The so-called professional coders are not always superior.
When Microsoft was forced to become a contributor to the Linux kernel because of their Hyper-V code, Microsoft's "professional" coders at first couldn't meet the high quality standards set by the kernel developers. They were amazed at how disciplined and strict the Linux kernel's quality requirements were, as they hadn't had to deal with such requirements in their own work.
Seems like someone's been reading Shuttleworth's blog...
Firstly no, Wayland doesn't require your display server to be separated from your window manager or shell.
Wayland places no such requirements; you can implement them all in one big blob if you want to. Wayland makes very few demands on the design of your software. All it asks is that you speak the protocol correctly; after that you can write your compositor whichever way you want, decide how to allocate buffers (server/client side), use whatever backend you want (EGL, pixman, Android)...
Secondly, modularity is a good thing. Putting everything in one big chunk just creates a single point of failure
With separate components you have to make sure A, B, and the protocol interconnecting them are robust, first individually (unit) and then when put together (integration); separate components actually mean higher complexity and failure probability, leading to longer debug times...
(Software development 101.)
BTW, it's interesting how you don't have a problem with the kernel being precisely a big chunk (>1 GB in some cases) of privileged code...
and makes the system less customizable
If your monolithic beast of a display system happens to crash, then the shell, window manager and display server all crash at the same time.
The problem is not the probability of a server crash (any software will crash sooner or later, unless it's so simple and thoroughly verified that it won't); the problem lies in how the crash is handled.
But that is orthogonal to whether the server is integrated or not, and graceful recovery can also be implemented for a server that integrates the shell.
Thirdly, heck yes protocols need to be extensible.
Even if that's not the case and some interaction between application and server is involved (at least for details such as input event forwarding, drag and drop, and window min/max/resize), you do realize that there's only so much protocol needed to accommodate a desktop, don't you?
So please give a valid reason why extensibility is needed (leading to multiple code paths in the server AND in applications/toolkits, to fall back in the case of missing extensions on either side or a mismatched protocol version) that is not a vague "eh, but the future, you don't know..."
For that matter, Mir doesn't even have a protocol<...>
Wayland, OTOH, needs a protocol, having to support a different architecture (with possibly interchangeable shells, input methods, and so on).
unfeasible to anyone except Canonical to use