Experimental Code Published For Virtual CRTCs
Hi, I am the author of the original dri-devel mailing list post that Phoronix picked up to create this article, and also the principal author of the Virtual CRTC code. Some posts I am reading come very close to the original objective of this code. In the next few posts I'll respond to some comments and questions and hopefully clarify things and/or provide some useful information.
-
Originally posted by rohcQaH: Yes, I was talking about a GPU accelerated X server. Xvnc is the next best thing, but software rendering is slow, even more so if your CPU is busy with VNC compression.
Regarding the compression, you have to "pay" for it in either case. What Virtual CRTC really buys you is the ability to eliminate the extra X server in the equation, which is the main performance bottleneck. On a related note, this article http://www.virtualgl.org/About/Background provides a nice overview of the limitations of various VNC-like solutions. They have their own solution based on hooking the OpenGL library. Virtual CRTC essentially eliminates all the problems discussed in that article.
-
Originally posted by PreferLinux: I've looked for stuff about Xinerama with 3D, but couldn't find much. I'll have another look.
Yes, I'm running three monitors on NVIDIA cards, which only support two monitors each.
Bad news for you is that we currently don't support NVIDIA. The changes necessary to the nouveau driver would be straightforward (one can use the Radeon patches that we uploaded as an example), but we have not gotten around to implementing them (yet).
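To give a rough idea of the shape such a change takes, here is a heavily simplified sketch of registering extra connector-less CRTCs in a KMS driver. The names vcrtc_funcs and register_virtual_crtcs are made up for illustration, and the stubbed callbacks gloss over everything interesting; the actual Radeon patches on dri-devel are the real reference.

/*
 * Illustrative sketch only: register additional "virtual" CRTCs that
 * have no physical encoder/connector behind them. All names here are
 * hypothetical; see the Radeon patches on dri-devel for the real code.
 */
#include <linux/slab.h>
#include "drmP.h"
#include "drm_crtc_helper.h"

static void vcrtc_destroy(struct drm_crtc *crtc)
{
	drm_crtc_cleanup(crtc);
	kfree(crtc);
}

static const struct drm_crtc_funcs vcrtc_funcs = {
	.set_config = drm_crtc_helper_set_config,
	.destroy    = vcrtc_destroy,
	/*
	 * A real implementation would also wire up .page_flip and
	 * friends so that a flip forwards the framebuffer to the
	 * attached CTD (Compression/Transmission Device) instead of
	 * real scanout hardware.
	 */
};

static int register_virtual_crtcs(struct drm_device *dev, int count)
{
	int i;

	for (i = 0; i < count; i++) {
		struct drm_crtc *crtc;

		crtc = kzalloc(sizeof(*crtc), GFP_KERNEL);
		if (!crtc)
			return -ENOMEM;
		/* Registers the CRTC with the DRM core like any other. */
		drm_crtc_init(dev, crtc, &vcrtc_funcs);
	}
	return 0;
}

The point of the sketch is that, as far as the DRM core and userspace mode setting are concerned, a virtual CRTC is just another CRTC; the driver-specific work is in routing the scanned-out frames somewhere other than a physical encoder.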
-
Originally posted by rohcQaH: It'd also be useful to create X servers with virtual CRTCs and expose them via a VNC or RDP server. Great for headless machines. Getting them to start X and create a proper framebuffer without any monitors attached is somewhat painful.
Now if someone added multiseat support so that multiple X servers could share a single GPU, you could have multiple accelerated X servers running on one GPU at the same time, and some of them could drive virtual machines. That might be pretty useful.
-
... or think about this application: you create some virtual CRTCs, you run the desktop with a bunch of graphical applications on them (or a full-screen game), and you attach the CRTCs to the V4L2CTD driver. Since that essentially connects your framebuffer to the /dev/video0 device, you can open your favorite instant messenger (e.g. Skype) and tell it that that's your "webcam". So you Skype out your direct-rendered desktop to your friend. We tried exactly that test with Skype, but Skype requires a YUV format that our V4L2CTD does not implement yet. Once we get it working, we'll push patches to the driver, and then this kind of application will be possible.
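For the curious, the consumer side is plain V4L2. Below is a minimal sketch of an application reading frames from the CTD device; the device node /dev/video0, read() I/O support, and the 1280x720 YUYV format are assumptions for illustration, not a statement of what V4L2CTD actually implements today.

/*
 * Minimal V4L2 consumer sketch: negotiate a format and read one frame.
 * Real applications typically use mmap'ed streaming buffers instead of
 * read(), and must check the capabilities the driver reports.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	struct v4l2_format fmt;
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	fmt.fmt.pix.width = 1280;
	fmt.fmt.pix.height = 720;
	/* Skype-style consumers want a YUV format such as YUYV; the
	 * driver may adjust width/height/format to what it supports. */
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
	fmt.fmt.pix.field = V4L2_FIELD_NONE;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
		perror("VIDIOC_S_FMT");
		return 1;
	}

	void *frame = malloc(fmt.fmt.pix.sizeimage);
	if (!frame)
		return 1;

	/* Each read() returns one frame scanned out by the virtual CRTC. */
	ssize_t n = read(fd, frame, fmt.fmt.pix.sizeimage);
	printf("got %zd bytes (%ux%u)\n", n,
	       fmt.fmt.pix.width, fmt.fmt.pix.height);

	free(frame);
	close(fd);
	return 0;
}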
-
re: re: naming
Originally posted by agd5f: CRTCs and display hw blocks in general have never been linked to drawing or any other processing hw block on the GPU. The CRTC has always been, and continues to be, the part of the display hardware that generates the display timing. I don't see where the confusion is. CRTC has never had anything to do with drawing.
I mentioned drawing only because the term "CRTC" was conceived in the era of video display controllers, when there was no such thing as a "video card" and the CRTC itself was a simplified analogue of a video coprocessor, a rudimentary GPU of its time. It is not that it has since been separated from the drawing hardware; there was no drawing hardware to be found. Everything was done on the CPU, and at best the controller would take data from RAM and generate the proper analogue signal for the CRTs of the day. Of course it is separate from any computational logic.
The confusion is in the name itself. The term "CRTC" is meaningless because the hardware it names is no longer a simple thing that takes data from system RAM and pumps it out to an analogue CRT. It is not about CRTs, or any analogue signal, or even any particular type of video output at all.
It is a generic video output controller with no ties to any particular encoder or to the origin of its input. The name is simply incorrect; it no longer represents anything meaningful.
If it is just about generating timing, why is it not called a "timing generator"? And this Virtual CRTC does not look like it is just about timing either; it seems to be mostly about taking data from the framebuffer and putting it somewhere other than an encoder on the video card. So why is "CRTC" in its name if it is not about CRTs or even timing? That is the confusing part.
PS: great blog post about it. Spammers got their way with it, though.
Originally posted by ihadzic: Bad news for you is that we currently don't support NVIDIA. The changes necessary to the nouveau driver would be straightforward (one can use the Radeon patches that we uploaded as an example), but we have not gotten around to implementing them (yet).
Of course, Intel's video hardware is pretty pathetic for any advanced usage, and most NVIDIA card owners are fanboys in love with their proprietary drivers, but still.
-
@ihadzic: thanks for the detailed explanations. It seems that you have the right ideas, and enough vision to come up with a framework that doesn't just solve one use case, but enables a myriad of different useful and useless-but-cool things at the same time. My hat's off to you, and I'm looking forward to the day when your patches hit my desktop.
One more question Google couldn't answer: does V4L give userspace direct access to GPU buffers, or will it move all pixel data to CPU memory first? I'm thinking about starting a new X server on a virtual CRTC and having the result simply composited into your main X server (goodbye, Xephyr kludges!), or using the GPU for image compression prior to sending data via VNC or another streaming protocol. Copying everything to CPU memory and then back to the GPU sounds like something one would wish to avoid in these cases, but you'd also want to avoid a new kernel driver for every single application.
Originally posted by dfx.: [..] CRTC [..]
It's not a term the end-user needs to know about. It's a term used by devs. Who cares if it's correct; as long as everyone knows what they're talking about, it serves its purpose.
Originally posted by dfx.: Nouveau and Intel devs really should implement this ASAP.
As far as I can see, these patches haven't hit mainline DRM yet, thus there's no rush to implement anything. The initial reactions are pretty mixed, and no technical review has been done yet. Implementing an interface that's still subject to change is rarely a good idea.
-
Originally posted by rohcQaH: It's not a term the end-user needs to know about. It's a term used by devs. Who cares if it's correct; as long as everyone knows what they're talking about, it serves its purpose.
I also used to think of F/OSS software as mostly neat and clean solutions, in contrast to closed stuff. The proprietarists' "good enough" saying is not good enough for me anymore.
Originally posted by rohcQaH: As far as I can see, these patches haven't hit mainline DRM yet, thus there's no rush to implement anything. The initial reactions are pretty mixed, and no technical review has been done yet. Implementing an interface that's still subject to change is rarely a good idea.
They may even come up with a way to implement common scaling options across the OSS video drivers while they're at it, who knows.
-
Originally posted by rohcQaH: One more question Google couldn't answer: does V4L give userspace direct access to GPU buffers, or will it move all pixel data to CPU memory first? [..]
For what you want to do, I am not sure that V4L2CTD is the best device to attach to a virtual CRTC (you can, but it would not be the most efficient solution). It sounds like it would be best to loop the virtual CRTC back to the GPU itself. I haven't thought about that much, but I have a vague picture of how I could make it work. You would also need multiseat support to run two X servers on the same GPU, which is a different topic for a different thread.