Louvre Is A New C++ Library Helping To Build Wayland Compositors


  • juxuanu
    replied
Is there a library to simplify creating a Wayland *client*? All the fuss seems to be around compositors.



  • jacob
    replied
Originally posted by user1:

    If you mean the bad choice was the overly minimalistic design of the Wayland protocol, I agree, but..



You don't seem to realize that there will never be a standard Wayland implementation / display server. That's just never going to happen. Big projects like GNOME and KDE already have their own Wayland display server implementations, so I just don't see them scrapping years of work that has been put into Mutter and KWin Wayland support in favor of some mythical "standard" Wayland display server.
Never say never. GNOME and KDE were both heavily invested in their ORBit and DCOP IPC systems and happily threw them out when D-Bus came along. They had their own audio servers and moved to PulseAudio (and now PipeWire). They had their own session management logic and readily switched to systemd (at least GNOME did, not sure about KDE). So if there were a Wayland display server foundation that does what they need, I don't see why they wouldn't eventually adopt it if it ultimately allowed them to reduce their development effort.



  • mrg666
    replied
It is good to have fragmentation. It is not a problem for open source; it is a strength. I am sure the ones who don't like fragmentation will be arguing with each other about which projects to eliminate. It is free software, after all. Thankfully, there is no way to tell anyone to stop what they are doing.



  • MadCatX
    replied
Originally posted by Kjell:

Huh? The stress test measures the FPS when drawing a large number of surfaces with each compositor. If you scroll down, there's another chart with raw GPU and CPU usage as well as a power usage metric, which better explains the results due to multithreading.

Make an issue report if you feel like something is missing from the test?
    The source code contains a few shell scripts and I assume they were used to collect the benchmarking results. If you take a look at how these scripts operate, it should be apparent that the CPU and GPU utilization values are not collected correctly. Both of them are collected as a single data point, not as an average over the entire benchmark run. The CPU usage of the compositor is sampled *after* the LBenchmark program has exited. The exact state when the GPU data is collected depends on timing. The average FPS is the only value that is measured correctly.

    Fixing this would not be exactly trivial (especially the GPU usage) and the benchmark is probably not supposed to be a rigorous performance evaluation anyway.
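For illustration, sampling throughout the run and averaging (as opposed to a single after-the-fact snapshot) could look something like the sketch below. This is my own hypothetical sketch, not the benchmark's actual scripts; the function name, interval, and use of `ps` are all my assumptions:

```shell
#!/bin/sh
# Hypothetical sketch: sample a process's %cpu repeatedly while it runs
# and report the mean, instead of reading it once after the run ends.
avg_cpu() {
    pid=$1
    interval=${2:-1}
    total=0
    n=0
    while kill -0 "$pid" 2>/dev/null; do
        # Caveat: ps %cpu is itself cputime/realtime over the process's
        # whole lifetime, so a finer-grained tool (pidstat, /proc stat
        # deltas) would be needed for true per-interval usage.
        cur=$(ps -p "$pid" -o %cpu= | tr -d ' ')
        [ -n "$cur" ] && { total=$(awk "BEGIN{print $total+$cur}"); n=$((n+1)); }
        sleep "$interval"
    done
    [ "$n" -gt 0 ] && awk "BEGIN{printf \"%.2f\n\", $total/$n}"
}

# Example: average a short-lived process's CPU usage at 1 s intervals.
sleep 2 &
avg_cpu $! 1
```

Even this crude loop avoids the timing-dependence problem, since the reported number no longer hinges on when a single sample happens to be taken.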



  • JMB9
    replied
Originally posted by Kemosabe:

    One of the reasons may be: C is simply not suitable for the 21st century anymore.
Well, so Linux is written in ... oops, C ... is there any OS currently technically superior to Linux?
And all the effort concerning Rust is done to get a few drivers written by ... a larger group of people ...
the same reasoning that led to the usage of Java (and formerly BASIC) ...

So looking at currently used code in important cases, we see basically C, COBOL, Fortran ...
and after a gap a huge crowd of programming languages (more than 2,000, if countable at all;
see "The Mess We're In" by Joe Armstrong: https://www.youtube.com/watch?v=lKXe3HUG2l4).
But maybe someone who really knows better could shed some light on programming language usage.
    And a lot of code is used behind closed doors ...

Personally, I like having C++ libraries for all purposes - as, for my taste, C++ would (after C and Fortran
used in former times) be the most attractive (games, GUIs, ...) - while Rust is really in its infancy and
moving quite fast. But Linux may be a reason that it matures sooner rather than later ... we will see.

I think that every technical task without at least two alternatives is a problem.
Would anyone be happy if only GNOME existed? Choice is important for experts.
Only beginners are happy not to have any choice, or not to have plenty of options - as
experts have to tailor everything to their needs to work in the most effective way
possible (i.e. {nearly} no mouse or touch screen).



  • ehopperdietzel
    replied
    Hello, I am the developer of Louvre, and I would like to express my gratitude to Michael Larabel for sharing information about the project.

    Creating a Wayland compositor is a challenging task, and one of the motivations behind Louvre was to simplify this process, enabling more individuals to contribute and innovate in this domain. While I could have contributed to the development of wlroots, I was a novice at that time and struggled to find comprehensive documentation on wlroots. Consequently, I decided to start Louvre from scratch to gain a deeper understanding of Wayland and its inner workings.

    Today, I believe it's beneficial to have various alternatives and approaches to developing compositors. Each project or library can introduce innovative designs that contribute to the overall evolution of compositor architecture.

Regarding the benchmark, I want to address some questions. I used intel_gpu_top to measure GPU consumption, using its -s option to specify the frequency of the GPU usage snapshots. The duration of each benchmark iteration corresponds to the time interval over which the average results are obtained. I also took care to shut down any other desktop environments to keep the results accurate.

For CPU consumption, I configured all three compositors to use a single CPU core with the taskset command. To measure CPU usage, I employed the `ps -p PID -o %cpu` command. According to the documentation (https://man7.org/linux/man-pages/man1/ps.1.html), this command calculates "CPU time used divided by the time the process has been running (cputime/realtime ratio), expressed as a percentage." So if I am not wrong, this implies that it provides more than just a snapshot of CPU usage.
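The man page wording can be checked empirically. In this sketch (my own illustration, not part of the benchmark), a process does a short burst of work and then idles; because %cpu is cputime divided by the elapsed time since the process started, the reported figure typically keeps falling after the work stops, rather than tracking current usage:

```shell
#!/bin/sh
# Illustration of ps %cpu semantics: a lifetime average, not an
# instantaneous rate. Spawn a process that burns CPU briefly, then sleeps.
sh -c 'i=0; while [ "$i" -lt 50000 ]; do i=$((i+1)); done; sleep 5' &
pid=$!
sleep 1
first=$(ps -p "$pid" -o %cpu= | tr -d ' ')
sleep 2
second=$(ps -p "$pid" -o %cpu= | tr -d ' ')
echo "after 1s: ${first}%  after 3s: ${second}%"
# Once cputime stops accruing, the cputime/realtime ratio only declines,
# so the value read depends on *when* the reading is taken relative to
# the workload - it is more than a snapshot, but it is history-weighted.
```

This cuts both ways for the benchmark: a single late reading still averages in the busy period, but it also dilutes it with any idle time before and after.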



  • Kemosabe
    replied
Originally posted by geearf:
Why not improve wlroots instead?
    One of the reasons may be: C is simply not suitable for the 21st century anymore.



  • kpedersen
    replied
Originally posted by user1:
You don't seem to realize that there will never be a standard Wayland implementation / display server. That's just never going to happen..
No, I absolutely understand. You can see this in practice in this very forum thread. The closest we will probably get is an Xlib API translation library once Xwayland becomes broken.

I think the Wayland ecosystem is actively damaging to Linux, and people who aren't simply Steam DRM-platform gamers or casual web browsers will become more and more frustrated as a result.

Luckily, Xorg is still being maintained, and its size is barely bigger than future Wayland compositors (many of which are no longer being maintained), so it will certainly be around throughout our lifetimes (I am not sure most existing Wayland compositors will be).
    Last edited by kpedersen; 17 November 2023, 02:00 PM.



  • zexelon
    replied
Originally posted by Vistaus:

    Why do people always act like this is a Linux thing only? Do you know how many duplicate apps and efforts there are on Windows, Android, etc.?
Apps, yes... but not in the core OS. Windows basically has 2.5 file systems (FAT, NTFS, and ReFS (basically NTFS with COW)). It also has only one window manager... one desktop environment and one file manager. Yes, you can override some of these (e.g. replacing the shell), but even the replacements basically use the existing internal Windows infrastructure.

    The Linux ecosystem however is fractured, duplicated and chaotic right down to the kernel level.

In my opinion, both have their place, and it's a matter of "market", if you will. Linux targets people who need absolute tuning for their use case and can filter the chaos into a cogently operating platform for each individual use case. Microsoft targets a specific use case (desktop... or simple-to-operate servers) with an extremely curated and refined feature set that does not work in a great many situations... and is also cost-prohibitive to scale with.



  • Kjell
    replied
Originally posted by MadCatX:

    The results are, unfortunately, mostly bogus because neither CPU nor GPU usage is sampled throughout the entire run and averaged. The benchmark only snapshots a single number taken at some time, possibly even after the benchmarking program finished. The only number that is probably correct is the FPS. Just FTR, I ran the benchmark on an ancient Ivy Bridge laptop running KDE 5.27.9 with one 4K screen and one 1600x900 hooked up. With 32 subsurfaces KWin manages 50 FPS on the smaller screen and 30 FPS on the 4K screen but this is probably because HD4000 cannot drive 4K screens at more than 30 Hz refresh rate.
Huh? The stress test measures the FPS when drawing a large number of surfaces with each compositor. If you scroll down, there's another chart with raw GPU and CPU usage as well as a power usage metric, which better explains the results due to multithreading.

Make an issue report if you feel like something is missing from the test?

