X.Org 7.8 Isn't Actively Being Pursued

  • dee.
    replied
    Originally posted by silix View Post
    realize that a display server is not exactly rocket science
    rather, it's like putting together individual low- and high-level pieces, and envisioning objects (classes/methods) and data structures (in this case, to support windows and their manipulation) as you would for any other software problem. It's something that anyone who has studied software engineering 101, and is given time to analyze the use case, the requirements and the APIs to use, can do for this problem as for any other.
    Well, the thing is, the reason it's "not rocket science" is that the Wayland devs have spent the better part of 5 years doing the groundwork, developing the graphics stack and making it ready for a modern display system. Now that that work is done, it's easier for the Mir devs to jump on board with their own solution that builds on all the work done by the actual professionals. Without that work by the Wayland devs, Mir wouldn't even be possible. The Mir devs wouldn't even know what to do if Wayland hadn't shown them the Way.

    Originally posted by silix View Post
    OTOH, go and look for yourself how many professional positions in the software industry require one to know and be proficient with methodologies such as Agile, XP, Scrum, Test-Driven Development and the like, or with languages such as C++, Java, C#, or ..., and how much software is written in C that is both *new* (i.e. other than GNU, core *nix code - which stems from a legacy culture even when new - or straight legacy codebases, where the choice of language is mandated) and *professional* (written with a proper design and QA process)
    in the real world (when there's no kernel to be developed and you can choose your language and tools freely), C is often looked at with suspicion, because it's the choice of hobbyists hacking together half-working code without a design or QA process (maybe without even knowing what QA means, much less about writing tests before code - "tests? what are tests?"); in my field it often bears an image of unprofessionalism
    and yet the Wayland developers have chosen to perpetuate the use of C, and not to use a development methodology that would make the code both more modern and closer to persistent correctness (tests that are part of the codebase help a great deal to avoid regressions)
    The Linux kernel is written in C. Go ask Linus Torvalds what he thinks of C++... And "test-driven" development is not a miracle cure; it's just one method of development, not necessarily any better or worse than the others.

    Just because something is used in the "professional software world" by big, proprietary software houses doesn't make it better. The so-called professional coders are not always superior. When Microsoft was forced to become a contributor to the Linux kernel because of their Hyper-V code, the Microsoft "professional" coders at first couldn't meet the high quality standards set by the kernel developers. They were amazed at how disciplined and strict the Linux kernel's quality requirements were, as they hadn't had to deal with such requirements in their own work.

    Originally posted by silix View Post
    judging from them not (seemingly) knowing or applying current software development methodologies, and sticking to old-fashioned tools and design concepts (previously the display server separated from the window manager separated from the shell, now the display server still separated from the shell - protocols needing to be extensible, as if the requirements for something like a desktop were not finite and known a priori...), one may also call them amateurs (though I won't, out of respect for their having been paid developers at large companies for quite some time)
    but then, it'd also be quite an insult to call someone an amateur who doesn't have an equally big name yet knows how to do his job - how can you question the professionalism of someone you don't know, in his own field?
    Seems like someone's been reading Shuttleworth's blog... Firstly, no, Wayland doesn't require your display server to be separated from your window manager or shell. Wayland places no such requirements - you can implement them all in one big blob if you want to. Wayland makes very few demands on the design of your software - all it asks is that you speak the protocol correctly (a minimal sketch of what that looks like follows below); after that you can write your compositor whichever way you want, decide how to allocate buffers (server side or client side), and use whatever backend you want (EGL, pixman, Android)... Secondly, modularity is a good thing. Throwing everything into one big chunk just creates a single point of failure and makes the system less customizable - if your monolithic beast of a display system happens to crash, then the shell, window manager and display server all crash at the same time.

    Thirdly, heck yes protocols need to be extensible. Oh, who's ever going to need more than 640K of memory, that's just preposterous... We never know what the future holds; the IT field is very volatile that way, and by preparing for that future - by allowing the protocol to be extended when needed - the Wayland devs avoid having to go through this whole mess again in another 5 years. That's called thinking ahead.
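
    A minimal sketch of what "speaking the protocol" means in practice (illustrative only, though the calls are the real libwayland-client API): the compositor advertises every global interface it supports - core or extension - with a version number, and a client simply enumerates them. This is also where extensibility lives: a new extension is just one more advertised global.

        /* Illustrative sketch: about the smallest Wayland client there is.
           It connects to the compositor and lists every global interface
           the compositor advertises (extensions included), each with its
           version number. Build with: cc demo.c -lwayland-client */
        #include <stdio.h>
        #include <wayland-client.h>

        static void registry_global(void *data, struct wl_registry *registry,
                                    uint32_t name, const char *interface,
                                    uint32_t version)
        {
            (void)data; (void)registry;
            printf("global %u: %s (version %u)\n", name, interface, version);
        }

        static void registry_global_remove(void *data,
                                           struct wl_registry *registry,
                                           uint32_t name)
        {
            (void)data; (void)registry; (void)name;
        }

        static const struct wl_registry_listener registry_listener = {
            .global = registry_global,
            .global_remove = registry_global_remove,
        };

        int main(void)
        {
            struct wl_display *display = wl_display_connect(NULL);
            if (!display) {
                fprintf(stderr, "no Wayland display found\n");
                return 1;
            }
            struct wl_registry *registry = wl_display_get_registry(display);
            wl_registry_add_listener(registry, &registry_listener, NULL);
            wl_display_roundtrip(display); /* wait for the advertisements */
            wl_display_disconnect(display);
            return 0;
        }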

    For that matter, Mir doesn't even have a protocol; it's just whatever is needed to communicate with Unity, with no promise of a stable API, which makes it unfeasible for anyone except Canonical to use - not much of an improvement there. Wayland still promises a stable API and backwards compatibility, so that the developers of all DEs can be assured that the rug won't be pulled out from under their feet at some point in the future.
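
    And that backwards-compatibility promise has a concrete mechanism behind it. A hedged follow-up sketch (a drop-in replacement for the registry_global callback in the example above; wl_shm is just a convenient interface to demonstrate with): a client binds each global at the lowest version both sides speak, and silently ignores globals it doesn't recognize, which is why a newer compositor doesn't break an older client.

        /* Sketch of version negotiation: drop-in replacement for the
           registry_global callback in the previous example. Also needs
           <string.h>, and assumes a `struct wl_shm *shm = NULL;` whose
           address is passed as the listener's user data instead of NULL. */
        #define SHM_VERSION_SUPPORTED 1  /* highest wl_shm version we speak */

        static void registry_global(void *data, struct wl_registry *registry,
                                    uint32_t name, const char *interface,
                                    uint32_t version)
        {
            struct wl_shm **shm = data;

            if (strcmp(interface, "wl_shm") == 0) {
                /* Bind at the lower of what we support and what the
                   compositor advertises, so an old client keeps working
                   against a newer compositor. */
                uint32_t v = version < SHM_VERSION_SUPPORTED
                                 ? version : SHM_VERSION_SUPPORTED;
                *shm = wl_registry_bind(registry, name, &wl_shm_interface, v);
            }
            /* Globals this client has never heard of fall through and are
               ignored: new protocol extensions don't break old clients. */
        }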

  • curaga
    replied
    Did you just equate good testing and development methodologies with particular languages, completely ignoring the other aspects that matter when choosing a language, such as tools, performance, memory use, and the usability of libraries? In a post calling others amateurs?

  • silix
    replied
    Originally posted by Vim_User View Post
    You still get it backwards; it seems you are resistant to learning. The Wayland developers are the ones with years of experience in developing display servers, so they are the pros, while the Mir developers are the ones with no experience, hence the amateurs.
    realize that a display server is not exactly rocket science
    rather, it's like putting together individual low- and high-level pieces, and envisioning objects (classes/methods) and data structures (in this case, to support windows and their manipulation) as you would for any other software problem. It's something that anyone who has studied software engineering 101, and is given time to analyze the use case, the requirements and the APIs to use, can do for this problem as for any other.

    OTOH, go and look for yourself how many professional positions in the software industry require one to know and be proficient with methodologies such as Agile, XP, Scrum, Test-Driven Development and the like, or with languages such as C++, Java, C#, or ..., and how much software is written in C that is both *new* (i.e. other than GNU, core *nix code - which stems from a legacy culture even when new - or straight legacy codebases, where the choice of language is mandated) and *professional* (written with a proper design and QA process)
    in the real world (when there's no kernel to be developed and you can choose your language and tools freely), C is often looked at with suspicion, because it's the choice of hobbyists hacking together half-working code without a design or QA process (maybe without even knowing what QA means, much less about writing tests before code - "tests? what are tests?"); in my field it often bears an image of unprofessionalism
    and yet the Wayland developers have chosen to perpetuate the use of C, and not to use a development methodology that would make the code both more modern and closer to persistent correctness (tests that are part of the codebase help a great deal to avoid regressions)

    judging from them not (seemingly) knowing or applying current software development methodologies, and sticking to old-fashioned tools and design concepts (previously the display server separated from the window manager separated from the shell, now the display server still separated from the shell - protocols needing to be extensible, as if the requirements for something like a desktop were not finite and known a priori...), one may also call them amateurs (though I won't, out of respect for their having been paid developers at large companies for quite some time)
    but then, it'd also be quite an insult to call someone an amateur who doesn't have an equally big name yet knows how to do his job - how can you question the professionalism of someone you don't know, in his own field?
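
    (On the "tests that are part of the codebase" point above: the mechanics of a regression test are simple enough to sketch in a few lines of C. Everything here - the chomp function and the bug it guards against - is invented for illustration, not taken from Wayland or any other project.)

        /* Hypothetical in-tree regression test: once a bug is fixed, a
           check like this keeps it from quietly coming back. All names
           are invented for illustration. */
        #include <assert.h>
        #include <string.h>

        /* Unit under test: strips trailing newlines in place. */
        static void chomp(char *s)
        {
            size_t n = strlen(s);
            while (n > 0 && s[n - 1] == '\n')
                s[--n] = '\0';
        }

        int main(void)
        {
            char a[] = "title\n";
            char b[] = "";      /* imagined regression: empty input once
                                   caused an out-of-bounds read */

            chomp(a);
            assert(strcmp(a, "title") == 0);

            chomp(b);           /* must handle the empty string safely */
            assert(b[0] == '\0');

            return 0;
        }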

  • Awesomeness
    replied
    Originally posted by bridgman View Post
    Just curious, what is this "sudden interest" you're talking about? Work on UVD (and radeonSI) started in 2011, and power management even before that.
    Didn't you notice that you got a few new colleagues in the last few months?

    Intel has a special Android version that uses Mesa. I don't know if Luke means the same thing, but I'm pretty certain that AMD wants to go the same route. Intel does the Android/Mesa work anyway, so why not strengthen AMD's Mesa drivers? I'm sure that's cheaper than porting Catalyst to Android.

  • duby229
    replied
    Originally posted by Luke View Post
    Are you referring to work inside AMD to get the code "cleaned," cleared, and released, or to parallel efforts like the old-style power management and VDPAU decode on the shaders? I had been using old-style "profile" power management since late spring 2012.

    The UVD code drop, like the power management code drop, was something I wasn't really counting on. Everyone said it was held up behind corporate lawyers; nobody knew if it would ever come out. Then these two long-desired items came out about a month apart. I don't know whether someone told programmers to find and strip out every last piece of third-party patented DRM hooks that could cause issues, told third-party copyright holders to either give permission or never get another contract or job from AMD, or what, but the close timing of two code drops that had both been bottled up behind lawyers seemed to me like someone had decided to get damned serious.

    Those code drops didn't change OpenGL performance on older cards that boot to full speed, but they sure as hell made power management easier to use - and automatic power management made VDPAU practical to use, as setting a "low" manual profile didn't leave enough power to play a 1080p video on the UVD block, yet was enough for all desktop activities. I don't have any APU machines, but I saw Phoronix stories that made them sound like Nvidia cards when running open drivers, due to old-style PM not working combined with booting to low GPU clocks - just like Fermi on Nouveau. Now all that has changed.
    I'm sure he'll respond to you himself, but Bridgman has told me in the past (or rather, I probably just read it somewhere on this forum) that it wasn't so much about lawyers as about code review. AMD has a number of developers that they pay directly to work on the radeon driver, plus independent developers who help a lot. This is all new code; none of it was ever proprietary. So it had to be peer reviewed before release, so that everyone involved could weigh in on how it should work, and that took some time to put together. Meanwhile there was a lot else going on, like learning the hardware and getting documentation into a format that could be released. On top of that, everyone has always expected the end result to culminate in same-day launch support in the OSS drivers; they are mostly caught up on that, but still have a ways to go.

    I do think the guys working on the OSS drivers internally have a much stronger leg to stand on these days, and I think that is going to result in very good things for all of AMD's (Linux) customers.
    Last edited by duby229; 13 August 2013, 12:16 AM.

  • Luke
    replied
    OK, make that "sudden results"

    Originally posted by bridgman View Post
    Just curious, what is this "sudden interest" you're talking about? Work on UVD (and radeonSI) started in 2011, and power management even before that.
    Are you referring to work inside AMD to get the code "cleaned," cleared, and released, or to parallel efforts like the old-style power management and VDPAU decode on the shaders? I had been using old-style "profile" power management since late spring 2012.

    The UVD code drop, like the power management code drop, was something I wasn't really counting on. Everyone said it was held up behind corporate lawyers; nobody knew if it would ever come out. Then these two long-desired items came out about a month apart. I don't know whether someone told programmers to find and strip out every last piece of third-party patented DRM hooks that could cause issues, told third-party copyright holders to either give permission or never get another contract or job from AMD, or what, but the close timing of two code drops that had both been bottled up behind lawyers seemed to me like someone had decided to get damned serious.

    Those code drops didn't change OpenGL performance on older cards that boot to full speed, but they sure as hell made power management easier to use - and automatic power management made VDPAU practical to use, as setting a "low" manual profile didn't leave enough power to play a 1080p video on the UVD block, yet was enough for all desktop activities. I don't have any APU machines, but I saw Phoronix stories that made them sound like Nvidia cards when running open drivers, due to old-style PM not working combined with booting to low GPU clocks - just like Fermi on Nouveau. Now all that has changed.

    It would have taken longer, maybe a lot longer, to reverse engineer both UVD and power management with no prior knowledge of the algorithms the hardware had been designed to use. The previous power management worked, but in dynpm mode it flickered the display. It also used more power in low mode than the new way does with dpm, probably because it supported only clock setting and not clock gating. I'm curious how long people would have waited for that support from AMD before someone threw in the towel and put in the long hours it would no doubt have taken to make old-school dynpm work. As for the UVD block, I remember predictions that DRM code would make it permanently unreleasable, and that only shader decoding was likely ever to be supported. Now even Nouveau is getting video decode support.
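
    For readers who never used it: the old-style profile power management described above was driven through the radeon driver's sysfs files. A rough sketch of how the "low" profile was selected (assumes the Radeon is card0, and needs root):

        /* Sketch of the old-style radeon "profile" power management
           interface: two writes to sysfs. Assumes the Radeon is card0;
           run as root. */
        #include <stdio.h>

        static int sysfs_write(const char *path, const char *value)
        {
            FILE *f = fopen(path, "w");
            if (!f) {
                perror(path);
                return -1;
            }
            fprintf(f, "%s\n", value);
            return fclose(f);
        }

        int main(void)
        {
            /* old-style static profiles: default, auto, low, mid, high */
            sysfs_write("/sys/class/drm/card0/device/power_method", "profile");
            sysfs_write("/sys/class/drm/card0/device/power_profile", "low");

            /* The new dpm method replaces this pair with power_dpm_state
               (battery / balanced / performance) and reclocks on its own,
               which is what made the "low" vs 1080p-playback tradeoff
               described above go away. */
            return 0;
        }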

  • bridgman
    replied
    Originally posted by Luke View Post
    Some say AMD's sudden interest in the open driver implies an effort to emulate Intel's strategy: closed driver for Windoze, separate open driver for Linux.
    Just curious, what is this "sudden interest" you're talking about? Work on UVD (and radeonSI) started in 2011, and power management even before that.

  • Luke
    replied
    What world do I live in?

    Originally posted by johnc View Post
    I love you guys, but sometimes I wonder what world you live in.
    One in which I have directly verified the claims I made concerning the current performance of the Radeon driver on my own system, and enjoyed the fruits of those two huge code dumps by AMD. Some say AMD's sudden interest in the open driver implies an effort to emulate Intel's strategy: closed driver for Windoze, separate open driver for Linux. When RadeonSI gets as good as R600g, I would not want to be an Nvidia salesman...

  • Vim_User
    replied
    Originally posted by johnc View Post
    I love you guys, but sometimes I wonder what world you live in.
    In the world in which Red Hat tells their customers that they will move to Wayland in the future (my guess: RHEL 8 will have it as the default), and the Red Hat customers with their expensive workstation cards ask Nvidia/AMD for drivers, who will simply deliver what their customers need. Once there are Wayland drivers, you will get Mir drivers automatically.

  • duby229
    replied
    Originally posted by johnc View Post
    I love you guys, but sometimes I wonder what world you live in.
    I only disagree with him in that Nvidia will never support an OSS driver; they will cling to their proprietary driver for as long as they possibly can. Otherwise I agree with him completely.

    If you have any AMD card or APU that is supported by r600g, or any Intel APU, you get damn good support right now. That's not a dream world; it's a fact. If you want well-supported hardware on Linux, then you have to buy well-supported hardware.
    Last edited by duby229; 12 August 2013, 07:51 PM.
