If Or When Will X12 Actually Materialize?


  • jrch2k8
    replied
    Originally posted by siride View Post
    Better would be to just leave it as it is and have the toolkits be done client-side and the rendering server-side. Then you wouldn't have to jump through hoops and have parts of the system on the wrong side of the client/server boundary.

    I see server-side toolkit ideas pop up from time to time when people talk about X. First of all, no other OS uses server-side toolkits, so that should say something right there. Secondly, server-side toolkits introduce policy and application-level details into the server in a non-generic way. The server is supposed to just render and demultiplex input (and interface with hardware to do all that). Now it has to deal with things like buttons, accessibility, colors and themes, etc. Now, instead of being able to just update the Qt libraries, you also have to update the X server. And while you can have side-by-side installations of toolkit libraries, you can't have two instances of the X server running to support old and new apps. No more Qt3 apps alongside Qt4 apps. Either the toolkit API must remain extremely stable and backwards-compatible over time, or you just lose the apps that run on the older version of the toolkit. Some people propose some way whereby toolkit code is uploaded into the server. It's obvious that this is a ridiculous solution and not worth trying.

    So once again, we are left with the reality that X really is okay in its fundamental architecture. Tweaking and upgrading, rather than a wholesale rewrite or reworking is the way to go.
    Well mate, that's fine for other things, but not for a networked render server.

    Remember, no matter how nasty your code gets for this or that toolkit, all of them go through the Xlib API in the end, because the Xorg server is the one that renders. So it makes a lot more sense to transmit the raw Xorg core protocol calls: besides being faster in theory (if properly implemented), it's toolkit-independent.
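
    A minimal sketch of that point in plain Xlib (the remote host name and window geometry are placeholder assumptions): the very same core requests drive a local or a remote server, and the server does the actual rendering, with no toolkit involved. Launch it with DISPLAY=remotehost:0 and the calls simply travel over the wire.

        /* build: cc xcore_demo.c -lX11 */
        #include <X11/Xlib.h>
        #include <stdio.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);  /* honors DISPLAY, local or remote */
            if (!dpy) {
                fprintf(stderr, "cannot open display\n");
                return 1;
            }
            int scr = DefaultScreen(dpy);
            Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                             10, 10, 200, 100, 1,
                                             BlackPixel(dpy, scr),
                                             WhitePixel(dpy, scr));
            XSelectInput(dpy, win, ExposureMask);
            XMapWindow(dpy, win);

            for (;;) {
                XEvent ev;
                XNextEvent(dpy, &ev);
                if (ev.type == Expose) {
                    /* A core protocol draw request; the server renders it. */
                    XDrawLine(dpy, win, DefaultGC(dpy, scr), 10, 50, 190, 50);
                    XFlush(dpy);
                }
            }
        }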

    Doing this at the toolkit level would be hell, because at worst you'd have to render everything twice, since it would be really hard to intercept the Xorg API calls. So in the end you would need:

    * a specific rendering code path per toolkit
    * that rendering path would also have to render through the Xorg API at the same time if you want an image on your screen
    * an input-device event handler per toolkit
    * a protocol that checksums the data sent to the clients and back, to be sure everything is rendered as it should be


    and many other things which are already in X, so it makes zero sense to do this at the toolkit level when it would be far easier to just update the Xorg network rendering system.

    In this particular case it's better to plan a major future overhaul of the network rendering system inside Xorg than to go crazy trying it at the toolkit level.

    The most important overhauls, I think, would be:

    * a faster way of checksumming the data; it exists now and it works, but the penalty is massive
    * making this extension aware of newer extensions like Damage, XRender, Composite, multi-input, etc. (last time I checked it wasn't, but who knows what's been done since); see the sketch after this list
    * encryption; OK, SSH does the trick, but an SSL3/TLS solution should be more lightweight
    * better compression of the transmitted data, and improving the latency, which is barbaric
    * changes to be more Mesa-aware, especially GLSL rather than just plain OpenGL, and maybe Gallium too
    * the ability to have a secure login system, maybe like Heimdal Kerberos, for more enterprise-grade security, or PAM, or both
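
    As a sketch of the Damage point above (a stripped-down monitor, assuming libXdamage is present; error handling is minimal): it tracks which rectangles of the screen actually changed, which is exactly the data a smarter network protocol would compress and ship instead of full frames.

        /* build: cc damage_demo.c -lX11 -lXdamage */
        #include <X11/Xlib.h>
        #include <X11/extensions/Xdamage.h>
        #include <stdio.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            int ev_base, err_base;
            if (!dpy || !XDamageQueryExtension(dpy, &ev_base, &err_base)) {
                fprintf(stderr, "no display or no DAMAGE extension\n");
                return 1;
            }
            /* Report every changed rectangle of the root window. */
            XDamageCreate(dpy, DefaultRootWindow(dpy), XDamageReportRawRectangles);

            for (;;) {
                XEvent ev;
                XNextEvent(dpy, &ev);
                if (ev.type == ev_base + XDamageNotify) {
                    XDamageNotifyEvent *d = (XDamageNotifyEvent *)&ev;
                    /* Only this rectangle would need to cross the network. */
                    printf("dirty: %dx%d+%d+%d\n",
                           d->area.width, d->area.height, d->area.x, d->area.y);
                }
            }
        }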


    Anyway, these are discussions for the future, because X12 or anything big isn't going to happen soon; the Mesa/Gallium/DDX/KMS work is more important for now.



  • fabiank22
    replied
    Originally posted by V!NCENT View Post
    80% of all Linux kernel devs are paid by companies. Take Btrfs, for example, or PulseAudio, KMS and GEM... This was on /. a while ago.
    Actually, that depends on your definition of "company". Linus, for example, is paid by a "company" which is funded by other companies, but he can still pretty much do whatever he wants, and accept or reject proposals even from the people who indirectly pay him.

    There are a lot of cases where solutions from companies with money involved get ignored (Novell/RadeonHD; Intel has had a lot of kernel patches rejected in the past; VIA...) if they aren't good solutions.

    Also, Novell has had weeks in the past where it paid devs to improve whatever they wanted, and Google accepts proposals it doesn't profit from directly (on the other hand, it gets a first shot at hiring young talent).



  • V!NCENT
    replied
    Originally posted by fabiank22 View Post
    Well, the problem is that what the toolkit wants is not necessarily what you want. Qt is commercial software, and thus motivated by the fact that they have to deliver something that works for their clients. That makes them very different from most of the people working on other parts of Linux, like the kernel, where your aim is to deliver the most elegant/maintainable implementation, or you just plainly want your shit to work and don't care that much about things you don't use (Flash, anyone?).
    80% of all Linux kernel devs are paid by companies. Take Btrfs, for example, or PulseAudio, KMS and GEM... This was on /. a while ago.



  • fabiank22
    replied
    Originally posted by drag View Post
    Very few applications target X11 directly anymore. They all use toolkits.

    Port the toolkit and you port the application.

    And all toolkits worth giving a shit about are designed to be portable in the first place. GTK runs on Windows and OS X, natively. Qt is even more portable than that.

    So the solution is not to rework the layers... it's to eliminate them.

    Just throw it all away. Get rid of everything lower than GTK/Qt/etc. and then have the toolkits work directly on Wayland.
    Well, the problem is that what the toolkit wants is not necessarily what you want. Qt is commercial software, and thus motivated by the fact that they have to deliver something that works for their clients. That makes them very different from most of the people working on other parts of Linux, like the kernel, where your aim is to deliver the most elegant/maintainable implementation, or you just plainly want your shit to work and don't care that much about things you don't use (Flash, anyone?).

    D-Bus is a very good example. KDE/Qt went with the implementation that worked best for their users. And it's pretty cool software too, something which just beats its predecessors into the dust and made me very excited when it was first announced. However, they didn't really care about networking/remoting. Which is okay for me, and probably okay for a lot of people, but a pain in the ass for the people who want it. Now, with the layers, they can at least say: hey, let's work around it, or hey, let's use the old stuff directly. But once you only have A toolkit (and you only have one, because GNOME and KDE share their implementations on freedesktop) that's not going to happen without reinventing the wheel.

    Which is not to say I don't agree with your statement; I just don't think any of the layers, whether it's the toolkits or the graphics stack, are at a point where they can handle this. We'll see how GNOME manages the 3.0 transition and what KDE will be up to with Nepomuk, but for now I think waiting gains you far more.



  • phtpht
    replied
    Originally posted by Ex-Cyber View Post
    As I understand it, the main desire behind Wayland was to have a simple playground for testing various techniques with the new Linux graphics stack (KMS/DRI2). I'm pretty sure it's meant to complement Xorg, not replace it.
    Doesn't seem that way from the hype around it.



  • movieman
    replied
    Originally posted by Wingfeather View Post
    Perhaps I'm being obtuse, but why on Earth would you run an X server on a machine with no display adapter? Isn't displaying things the very point of the X server?
    The server is the X client, and the X server runs on the client, so to speak.

    In other words, you run an X app on the server which talks to the X server on your desktop machine which does have a graphics card.
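
    A tiny sketch of that inversion ("mydesktop" is a hypothetical host name): this program runs on the big remote machine, yet its windows appear on the desk in front of you, because the X server lives there.

        #include <X11/Xlib.h>

        int main(void)
        {
            /* Equivalent to launching the app with DISPLAY=mydesktop:0 */
            Display *dpy = XOpenDisplay("mydesktop:0");
            if (!dpy)
                return 1;   /* desktop unreachable or not authorized */
            /* ... create windows and draw; it all shows up on mydesktop ... */
            XCloseDisplay(dpy);
            return 0;
        }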



  • Wingfeather
    replied
    Originally posted by nanonyme
    How the heck would you use a server with no display adapter card...
    Perhaps I'm being obtuse, but why on Earth would you run an X server on a machine with no display adapter? Isn't displaying things the very point of the X server?



  • nanonyme
    replied
    Originally posted by bridgman View Post
    There are a number of systems around which perform most or all of the rendering on the server side then push bitmaps down to the client.
    Right. But assuming it's fully CPU-rendered and there might be tons of people using the same server over remote X at the same time, that doesn't exactly sound very fast to me unless we're talking about really simple graphical apps.



  • bridgman
    replied
    Originally posted by nanonyme View Post
    How the heck would you use a server with no display adapter card with remote X if graphics weren't fully drawn on client-side?
    There are a number of systems around which perform most or all of the rendering on the server side then push bitmaps down to the client. X terminals are often referred to as one example of thin-client systems, but some of the systems out there have *really* thin clients.
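
    A rough sketch of that model (window/GC setup omitted; the helper name push_frame and a 24-bit TrueColor visual are assumptions): the application renders a frame with the CPU into plain memory, then ships the finished pixels to whatever X server the user sits at.

        #include <X11/Xlib.h>
        #include <stdlib.h>

        void push_frame(Display *dpy, Window win, GC gc, int w, int h)
        {
            char *pixels = malloc((size_t)w * h * 4);  /* CPU-rendered frame */
            /* ... the software renderer fills `pixels` here ... */

            XImage *img = XCreateImage(dpy, DefaultVisual(dpy, DefaultScreen(dpy)),
                                       24, ZPixmap, 0, pixels, w, h, 32, 0);
            /* The whole bitmap crosses the wire; this is the bandwidth
             * cost the thread worries about. */
            XPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h);
            XFlush(dpy);
            XDestroyImage(img);  /* also frees `pixels` */
        }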



  • siride
    replied
    Originally posted by jrch2k8 View Post
    Using, for example, the software rasterizer from Gallium to render offscreen and sending the partially processed data or the fully rendered frame to the client over the networked Xorg, to either finish the render or just display the frame on the screen. In that case you theoretically don't need a GPU, so it's possible.

    Another method would be to send the raw requests to a networked Xorg, so that the other Xorg does the render on-screen with whatever hardware you have. Of course, this would need some sort of initialization protocol, at least to inform the base Xorg server which screen size and aspect ratio you want, and which features your hardware supports (for example, whether your card can use EXA and Composite/XRender). In that case you would also need to be able to send GLX commands, including shaders.

    If you want to render on more than one computer, it would be nice to overhaul the Damage extension so you can save bandwidth by sending only the modifications to the original frame; that's for the case of multiple inputs with each user having a different desktop. Multicast could help too if you only want to work on one PC and just watch on many PCs; that way you avoid multiple renders.
    Better would be to just leave it as it is and have the toolkits be done client-side and the rendering server-side. Then you wouldn't have to jump through hoops and have parts of the system on the wrong side of the client/server boundary.

    I see server-side toolkit ideas pop up from time to time when people talk about X. First of all, no other OS uses server-side toolkits, so that should say something right there. Secondly, server-side toolkits introduce policy and application-level details into the server in a non-generic way. The server is supposed to just render and demultiplex input (and interface with hardware to do all that). Now it has to deal with things like buttons, accessibility, colors and themes, etc. Now, instead of being able to just update the Qt libraries, you also have to update the X server. And while you can have side-by-side installations of toolkit libraries, you can't have two instances of the X server running to support old and new apps. No more Qt3 apps alongside Qt4 apps. Either the toolkit API must remain extremely stable and backwards-compatible over time, or you just lose the apps that run on the older version of the toolkit. Some people propose some way whereby toolkit code is uploaded into the server. It's obvious that this is a ridiculous solution and not worth trying.

    So once again, we are left with the reality that X really is okay in its fundamental architecture. Tweaking and upgrading, rather than a wholesale rewrite or reworking is the way to go.
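
    For reference, the offscreen software-rendering idea quoted above would look roughly like this with Mesa's off-screen interface, OSMesa (a minimal sketch; the buffer size is a placeholder and whether your Mesa build ships OSMesa is an assumption):

        /* build: cc osmesa_demo.c -lOSMesa */
        #include <GL/osmesa.h>
        #include <GL/gl.h>
        #include <stdlib.h>

        int main(void)
        {
            const int w = 640, h = 480;
            unsigned char *frame = malloc((size_t)w * h * 4);

            OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
            if (!ctx || !OSMesaMakeCurrent(ctx, frame, GL_UNSIGNED_BYTE, w, h))
                return 1;

            /* Ordinary GL calls, executed entirely by the software rasterizer. */
            glClearColor(0.0f, 0.0f, 0.3f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            glFinish();

            /* `frame` now holds the rendered RGBA pixels: ready to diff,
             * compress and push over the network, no GPU involved. */
            OSMesaDestroyContext(ctx);
            free(frame);
            return 0;
        }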
