Vulkan Wayland Compositors Are Nearing Reality


  • zxy_thf
    replied
    Originally posted by shmerl View Post
    So now they won't need GBM and EGLstreams anymore?
    If the system has a GPU that supports Vulkan.

  • CommunityMember
    replied
    Originally posted by shmerl View Post

    If I remember correctly, Nvidia can't support DMA-BUF ....
    It would appear that you may have forgotten the article from around 50 days ago about nVidia implementing DMA_BUF: https://www.phoronix.com/scan.php?pa...UF-Wayland-KDE although the details were a bit sparse.

  • mangeek
    replied
    Originally posted by bofh80
    WHY we need 3d for desktop?

    It's more efficient and faster to use the 3D hardware and software than to use the CPU and traditional framebuffers. Crazy as it sounds, it's less work to have each app, the UI libraries, and the window compositor just shoot stuff over into the 3D stack and 'show it' than it is to ask the CPU to stop and handle what should be on the screen and schlep 32 megabytes over to video memory 60 times a second. If you think all this 3D biz seems inefficient, consider what 2 gigabytes a second of transfers from CPU to GPU would do to the rest of your system.

    The 3D hardware and software needed to render a 4K desktop smoothly is already on 'everything', and it's optimized to hold 'textures' (windows) and blast them out to the display. Why not put it to use to lighten the load on the CPU? I'm pretty sure that even if you strapped a Raspberry Pi's GPU to a Core i7, the Raspberry Pi GPU would 'feel' smoother for regular desktop stuff than a 100% 2D/CPU stack. Plus, the -second- you want eye candy like transparency or animation, you're doomed on the CPU. Try a macOS instance out in QEMU-KVM if you want to feel how awful that is.
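The back-of-the-envelope math behind those numbers checks out; a quick sketch (assuming a 4K display at 32 bits per pixel):

```python
# Rough bandwidth math for pushing a full 4K framebuffer from CPU to GPU.
width, height = 3840, 2160      # 4K UHD resolution
bytes_per_pixel = 4             # 32-bit RGBA
refresh_hz = 60

frame_bytes = width * height * bytes_per_pixel
print(f"one frame: {frame_bytes / 2**20:.1f} MiB")                   # ~31.6 MiB
print(f"per second: {frame_bytes * refresh_hz / 2**30:.2f} GiB/s")   # ~1.85 GiB/s
```

So a single full-screen copy is the "32 megabytes" above, and doing it 60 times a second is the "2 gigabytes a second".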

  • siyia
    replied
    If only one day I could run sway + Vulkan on the desktop, it would be the best ever! Latency under Vulkan in wayfire is a lot lower compared to sway, which feels sluggish at times.
    Last edited by siyia; 20 January 2021, 11:17 PM.

  • intelfx
    replied
    Originally posted by bofh80
    Well, that's nice and all. Great work, kudos all around.
    However, would someone put me out of my misery and tell me WHY we need 3D for the desktop?
    What's wrong with the fastest path, the CPU, like we've been doing for years (with or without buggy 2D accel)?
    And I don't mean the software-emulated compositor. Unless you're going to tell me it's as fast and responsive.
    Is it just the resolutions we use these days, or what?
    (I don't care for stupid fancy effects; I'm looking for some technical reasons beyond eye candy.)

    Considering that most of you are using a composited desktop, I expect a lot of considered responses.
    Because the CPU is not the fastest path anymore. Count the pixels on a 4K screen, multiply by bpp, then multiply by some fixed factor to account for multiple layers of textures in compositing, and compare that with RAM throughput and CPU frequency × IPC. You'll be surprised.

    TL;DR: yes, we've been doing this for years. If you are content with a single 1024x768 display with windows drawn in immediate mode (so basically no protection against unresponsive apps), feel free to continue doing that. But copying huge blocks of pixels back and forth is literally the worst possible use of CPU time and RAM bandwidth.
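That recipe can be sketched numerically; the overdraw factor and the RAM throughput figure below are illustrative assumptions, not measurements:

```python
# Compositing traffic vs. memory bandwidth:
# pixels * bytes-per-pixel * layer factor * refresh rate, vs. RAM throughput.
pixels = 3840 * 2160            # 4K display
bpp = 4                         # 32-bit colour
overdraw = 3                    # assumed: each pixel touched by ~3 stacked layers
refresh_hz = 60
ram_gbps = 25.0                 # assumed: rough dual-channel DDR4 figure, GB/s

traffic_gbps = pixels * bpp * overdraw * refresh_hz / 1e9
print(f"compositing traffic: {traffic_gbps:.1f} GB/s "
      f"({100 * traffic_gbps / ram_gbps:.0f}% of {ram_gbps} GB/s RAM bandwidth)")
```

Roughly 6 GB/s of pure pixel shuffling, before the CPU does any actual work — and a CPU blit both reads and writes that memory, so the real cost is higher still.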
    Last edited by intelfx; 20 January 2021, 11:57 PM.

  • oiaohm
    replied
    Originally posted by bofh80
    Well, that's nice and all. Great work, kudos all around.
    However, would someone put me out of my misery and tell me WHY we need 3D for the desktop?
    What's wrong with the fastest path, the CPU, like we've been doing for years (with or without buggy 2D accel)?
    And I don't mean the software-emulated compositor. Unless you're going to tell me it's as fast and responsive.
    Is it just the resolutions we use these days, or what?
    (I don't care for stupid fancy effects; I'm looking for some technical reasons beyond eye candy.)

    Considering that most of you are using a composited desktop, I expect a lot of considered responses.
    Something to be aware of: modern GPUs have basically no dedicated 2D acceleration hardware. The Glamor stuff in X11 is about using the 3D parts of the GPU for 2D acceleration.

    CPUs are not designed to shove image data around at speed the way GPUs are.

  • Mangix
    replied
    For more information about this: https://youtu.be/4PflCyiULO4?t=28058

    I for one welcome Vulkan taking over everything :B.

  • polarathene
    replied
    Originally posted by bofh80
    however, would someone put me out of my misery and tell me WHY we need 3d for desktop?
    What's wrong with the fastest path, cpu? like we've been doing for years.
    Is it just the resolutions we use these days or what?
    Even if you're not fond of any eye candy, depending on the DE it would make sense to prefer the more performant option for supporting those who do want it.

    What's your definition of eye candy, though? You can have transition effects that are rather subtle, be that scaling transforms or very brief fade-ins, or accessibility features like zooming into a region of the display that follows the mouse, highlighting where the cursor is, or inverting the display for those with visual impairments. Not everything that benefits from the GPU is a stupid fancy effect.

    The CPU has to blit each individual pixel and track dirty regions for updates, versus letting the GPU stack a bunch of layers and decide what's visible to paint/update a texture that everything is composited into. I'm not a DE/compositor dev, mind you, but I've done some similar stuff in the past with GPU/CPU updates for apps/games.
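A minimal sketch of the dirty-region bookkeeping a CPU path has to do; the names here are illustrative, not from any real compositor:

```python
# Toy dirty-rectangle tracking: only re-blit regions that changed,
# instead of copying the whole framebuffer every frame.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def union(self, other: "Rect") -> "Rect":
        """Smallest rectangle covering both rectangles."""
        x0 = min(self.x, other.x)
        y0 = min(self.y, other.y)
        x1 = max(self.x + self.w, other.x + other.w)
        y1 = max(self.y + self.h, other.y + other.h)
        return Rect(x0, y0, x1 - x0, y1 - y0)

def damage_to_blit(damages: list[Rect]) -> Rect:
    """Merge all damaged regions into one bounding rectangle to blit."""
    merged = damages[0]
    for d in damages[1:]:
        merged = merged.union(d)
    return merged

# Two small 10x10 updates merge into one 110x110 blit rather than a
# full-screen copy — but the CPU still has to track all of this itself.
print(damage_to_blit([Rect(0, 0, 10, 10), Rect(100, 100, 10, 10)]))
```

A GPU compositor skips most of this bookkeeping: the layers live in video memory and the hardware decides what to repaint.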

    CPU performance gets worse as resolution scales, while the GPU can stay very efficient. So yes, moving to 4K displays and up, you'd rather not waste CPU resources, for one. GPUs excel at handling image data, whereas a CPU is more general-purpose and doesn't parallelize as well; it's a similar story with baking 2D textures or rendering 2D frames in 3D graphics for film/games.

    Why do you think CPU is the fastest path? CPUs can do certain tasks faster than GPUs, but image manipulation isn't one of them.

    3D, by the way, is just a bunch of triangles filled with pixel information; in today's graphics those are often very small triangles, or big triangles with plenty of pixels. A triangle may be skewed at an angle and represent more pixels than the space it renders into, so another term is often used: texels, the pixels the triangle's texture actually represents. Texels may be rendered at a different scale from screen pixels, like resizing an image, depending on how the triangle is projected to 2D, so they end up at a larger or smaller ratio to actual pixels. This was easier to see in games from 10-20 years ago: walk right up to a 3D object and its surface gets blurry/pixelated, because it lacks the pixel data to represent detail that close (a memory/performance trade-off back then); modern games can carry much more texture information.

    So if you take away one dimension for those triangles, you get 2D. The third dimension is still used to layer/stack the triangles on top of one another, so call that what you will: 2D, 2.5D, whatever. We usually work with rectangular shapes on the desktop, and a rectangle is a simple two triangles, a quad. You can map an image onto that, add a transparency mask for rounded corners and such, and be happy with it, though it's not as efficient as subdividing into more triangles where you can optimize the textures further. I don't entirely know how compositors go about it, and the individual UI toolkits may play a role somewhere, but every button, icon, or letter of text can be two triangles with a texture. Text is a good example: you can have many instances of the same letters, but texture-wise you only need one texture with all the letters in it, and some software draws a bunch of quads with offsets that each reference a portion of that texture to build up lines of text.
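The glyph-atlas trick described above can be sketched like this; the 16x16 ASCII atlas layout and the function names are made up for illustration:

```python
# Build textured quads for a line of text from a single glyph-atlas texture.
# Each character becomes one quad (two triangles) whose UV coordinates point
# at its cell in the atlas; the atlas here is a hypothetical 16x16 ASCII grid.
ATLAS_COLS = 16
CELL = 1.0 / ATLAS_COLS  # each glyph occupies a CELL x CELL region of UV space

def glyph_uv(ch: str) -> tuple[float, float]:
    """Top-left UV of a character's cell in the atlas."""
    code = ord(ch)
    return (code % ATLAS_COLS) * CELL, (code // ATLAS_COLS) * CELL

def layout_text(text: str, x: float, y: float, size: float):
    """One quad per character: (x, y, w, h, u, v) — screen position plus atlas UV."""
    quads = []
    for i, ch in enumerate(text):
        u, v = glyph_uv(ch)
        quads.append((x + i * size, y, size, size, u, v))
    return quads

# "AA" reuses the same atlas cell twice — one texture, many cheap quads.
for q in layout_text("AA", 0.0, 0.0, 12.0):
    print(q)
```

The point is that the letter bitmap is uploaded once; each occurrence on screen is just six vertices referencing it.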

    When you move or resize windows, on the CPU you are blitting fixed pixels based on position; it gets more complicated with sub-pixels, where positions may round to the nearest pixel so that movement or resizing sort of looks like snapping. It's less of an issue the higher the pixel density is, but this is something GPUs handle quite nicely. It's been a while, so my knowledge here could be a bit outdated, but I assume it's still another slowdown for CPU vs GPU. Not something you'd consider eye candy, but definitely something you'd notice.

    ---

    TL;DR

    The CPU is not the fastest path. GPUs are great at manipulating a lot of pixels, and 3D is generally broken down into many triangles; those can still be arranged into shapes for 2D usage, where the third dimension is depth for layering (e.g. windows over other windows, context menus, etc.).

  • shmerl
    replied
    Originally posted by CommunityMember View Post

    Vulkan uses DMA_BUF, which the nVidia driver is on the road to support. And eventually the entire GBM/EGLStreams issue will no longer be interesting except for those that need a reason to hate (and you know who you are).
    If I remember correctly, Nvidia can't support DMA-BUF due to the GPL "symbol poisoning" their blob would cause if they did. So I'm not sure this can be fixed in any way besides Nvidia upstreaming their driver and cooperating properly with kernel developers.

  • CommunityMember
    replied
    Originally posted by shmerl View Post
    So now they won't need GBM and EGLstreams anymore?
    Vulkan uses DMA_BUF, which the nVidia driver is on the road to support. And eventually (and eventually is not tomorrow, but neither is it years away) the entire GBM/EGLStreams issue will no longer be interesting except for those that need a reason to hate (and you know who you are).
    Last edited by CommunityMember; 20 January 2021, 09:52 PM.
