KVM Virtualization With Linux 6.9 Brings More Optimizations For Intel & AMD

  • #11
Going pure Linux has a steep learning curve, as I noticed after I trashed my Windows 11.

    I use Ubuntu because it has the out-of-the-box experience: I can watch TV with VLC without hacking netfilters, and Steam works well at the moment, as do Lutris and more.

    BUT the things you want to tweak quickly become overwhelming, and you need to read up on a lot. I wanted to use the amd_pstate driver instead of acpi_cpufreq, so now I need to know which kernel parameter to set and find the setting that suits me: amd_pstate=active? No, not my thing, only the powersave and performance governors work. amd_pstate=guided? Sounds good but does not work well. So you go with the standard amd_pstate=passive, which works pretty well.

    Then you need to select a governor; luckily schedutil is the default on Ubuntu, so no problem there.

    Then zram and zswap: zswap.enabled=1 zswap.compressor=zstd zswap.zpool=z3fold, and then you need to load the correct modules into your initramfs. Phew, that worked after five tries.
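    For reference, a hedged sketch of how those settings end up on the kernel command line and how to check what actually took effect afterwards (the GRUB file path assumes a standard Ubuntu install; sysfs paths follow the usual amd_pstate/zswap layout):

    ```shell
    # /etc/default/grub (edit, then: sudo update-grub && reboot)
    #   GRUB_CMDLINE_LINUX_DEFAULT="... amd_pstate=passive zswap.enabled=1 zswap.compressor=zstd zswap.zpool=z3fold"

    # After rebooting, verify what the kernel actually applied:
    cat /sys/devices/system/cpu/amd_pstate/status 2>/dev/null    # active / passive / guided
    grep -r . /sys/module/zswap/parameters/ 2>/dev/null          # enabled, compressor, zpool
    ```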

    And so on. What I want to say here is that Linux is still far away from a good desktop experience.

    I always wonder why there is no device manager project that can scan all the /proc and /sys files and provide a simple user interface. I think I am fairly techie, but I needed to read a lot of forum posts and make a lot of attempts just to figure out how the CPU frequency scaling driver works. Of course that does not happen on Windows, because you cannot even see it.

    Ah, and that includes QEMU/KVM and so on; I use virt-manager.
    Last edited by erniv2; 24 March 2024, 09:13 PM.



    • #12
      Dual-booting an existing Windows install via virt-manager without an image file: been there, done that. Windows sees it as if you pulled your SSD and put it in a new machine (different hardware), so it triggers all sorts of minor headaches (requires reactivation, you have to "recover" your MS account if you signed into that junk, etc.), but aside from that, it handles well.

      Windows VMs without 2+ graphics cards: everyone is slowly working on this. There's work on an accelerated software Windows display driver (again), and Intel is providing SR-IOV on all their current hardware, which will make things a lot nicer very soon. (Intel SR-IOV will let you split your Intel GPU up into many virtual devices and attach them to VMs, sharing the 3D acceleration around. I'm excited!)

      I actually added an Intel A770 to my big AMD workstation, specifically to be ready for SR-IOV and the Xe driver. It runs very nicely, I'd consider using Intel GPUs for everything if they were a little more performant. As is, having RDNA3 for the host and A770 SR-IOV for all the VMs should be very nice later this year.
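      For what it's worth, SR-IOV virtual functions are created through the standard PCI sysfs interface. A hedged sketch (the address 0000:00:02.0 is a typical Intel iGPU slot and is an assumption; GPU SR-IOV may additionally need driver-specific module options):

      ```shell
      # Illustrative only: create two virtual functions on an SR-IOV capable device.
      # 0000:00:02.0 is an assumed PCI address; check lspci for your GPU.
      DEV=/sys/bus/pci/devices/0000:00:02.0
      cat "$DEV/sriov_totalvfs"              # how many VFs the device supports
      echo 2 | sudo tee "$DEV/sriov_numvfs"  # create two VFs to attach to VMs
      lspci | grep -i -e vga -e display      # the new virtual functions should show up
      ```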



      • #13
        Originally posted by Forge View Post
        Dual-booting an existing Windows install via virt-manager without an image file: been there, done that. Windows sees it as if you pulled your SSD and put it in a new machine (different hardware), so it triggers all sorts of minor headaches (requires reactivation, you have to "recover" your MS account if you signed into that junk, etc.), but aside from that, it handles well.

        Windows VMs without 2+ graphics cards: everyone is slowly working on this. There's work on an accelerated software Windows display driver (again), and Intel is providing SR-IOV on all their current hardware, which will make things a lot nicer very soon. (Intel SR-IOV will let you split your Intel GPU up into many virtual devices and attach them to VMs, sharing the 3D acceleration around. I'm excited!)

        I actually added an Intel A770 to my big AMD workstation, specifically to be ready for SR-IOV and the Xe driver. It runs very nicely, I'd consider using Intel GPUs for everything if they were a little more performant. As is, having RDNA3 for the host and A770 SR-IOV for all the VMs should be very nice later this year.
        Thanks here as well. You are right, the re-configuring back and forth could be an issue, or at least something to be aware of. I pretty much *have* to have Windows available at work. That said, I am considering other approaches too; I'm not against running virtualized all the time instead of needing to boot from bare metal.

        On a side note, I am getting more and more into booting into Linux as my "daily driver," and I recently installed Cosmic Desktop via Copr on Fedora. Even as "pre-alpha" software, it's looking great. I was hoping for something that hits a sweet spot between a full-blown DE and a lightweight tiling window manager. You can toggle between floating and tiling modes per workspace, and both are implemented very well. System76 is doing a great job as far as I can tell. To be clear, Cosmic is a full desktop; it just doesn't seem too bloated and offers a good traditional desktop paradigm with some "power" features on top. I was able to figure things out quickly and configure it to my liking in no time.

        I'd like to be able to boot into Linux and spin up Windows "as needed." On that note, with my work desktop I use the Intel iGPU because that is all I need, but there is a discrete Nvidia card in there as well, so I should be able to do graphics pass-through to keep that part performant for virtualized stuff. Good times ahead the way I see it!
        Last edited by ehansin; 25 March 2024, 08:54 AM.



        • #14
          Originally posted by erniv2 View Post
          Going pure Linux has a steep learning curve, as I noticed after I trashed my Windows 11.

          I use Ubuntu because it has the out-of-the-box experience: I can watch TV with VLC without hacking netfilters, and Steam works well at the moment, as do Lutris and more.

          BUT the things you want to tweak quickly become overwhelming, and you need to read up on a lot. I wanted to use the amd_pstate driver instead of acpi_cpufreq, so now I need to know which kernel parameter to set and find the setting that suits me: amd_pstate=active? No, not my thing, only the powersave and performance governors work. amd_pstate=guided? Sounds good but does not work well. So you go with the standard amd_pstate=passive, which works pretty well.

          Then you need to select a governor; luckily schedutil is the default on Ubuntu, so no problem there.

          Then zram and zswap: zswap.enabled=1 zswap.compressor=zstd zswap.zpool=z3fold, and then you need to load the correct modules into your initramfs. Phew, that worked after five tries.

          And so on. What I want to say here is that Linux is still far away from a good desktop experience.

          I always wonder why there is no device manager project that can scan all the /proc and /sys files and provide a simple user interface. I think I am fairly techie, but I needed to read a lot of forum posts and make a lot of attempts just to figure out how the CPU frequency scaling driver works. Of course that does not happen on Windows, because you cannot even see it.

          Ah, and that includes QEMU/KVM and so on; I use virt-manager.
          How about you make one?

          It would be nice to have a window with devices grouped by category, plus the driver in use next to each one, or some other useful info. It doesn't sound that difficult to code; most of this is accessible from bash.
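          As a rough illustration of how little plumbing such a tool needs, here is a minimal sketch (not an existing project) that walks the standard sysfs layout and prints each device with the driver bound to it:

          ```python
          # Sketch of the "device manager" idea: enumerate devices per bus from sysfs
          # and report the bound driver. Works on any Linux system with /sys mounted.
          import os

          def list_devices(sysfs_root="/sys/bus"):
              """Return (bus, device, driver) tuples for every device under sysfs_root."""
              results = []
              if not os.path.isdir(sysfs_root):
                  return results  # not on Linux, or sysfs not mounted
              for bus in sorted(os.listdir(sysfs_root)):
                  devdir = os.path.join(sysfs_root, bus, "devices")
                  if not os.path.isdir(devdir):
                      continue
                  for dev in sorted(os.listdir(devdir)):
                      driver_link = os.path.join(devdir, dev, "driver")
                      # The "driver" entry is a symlink into /sys/bus/<bus>/drivers/<name>
                      if os.path.islink(driver_link):
                          driver = os.path.basename(os.readlink(driver_link))
                      else:
                          driver = "(none)"
                      results.append((bus, dev, driver))
              return results

          if __name__ == "__main__":
              for bus, dev, driver in list_devices():
                  print(f"{bus:12} {dev:28} {driver}")
          ```

          A real tool would add human-readable names (e.g. from the PCI/USB ID databases) and a GUI on top, but the data gathering really is this simple.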



          • #15
            Originally posted by erniv2 View Post
            And so on, what i want to say here is that linux is far away from a good desktop expierience.
            Curiously, all your items above are not related to desktop usage at all but to deep-dive system tinkering.

            Which is difficult but, unlike on Windows, at least possible on Linux.

            Originally posted by erniv2 View Post
            ofc that does not hapen in windows cause you cant even see it.
            Exactly.
            You have the option of learning things and doing things that are impossible on Windows.
            However, you don't have to. You also have the same option you have on Windows: ignore the fact that a systems engineer can do things you have not even heard of.




            • #16
              Originally posted by ehansin View Post
              Since we are on the subject of running Windows guests with QEMU/KVM, probably a good time to ask a question I have been meaning to ask:

              If I install Windows on "bare metal" (as I would for a single-boot Windows machine), and then install Linux on a second SSD and set up dual-boot, can I run that initial Windows installation from within my Linux installation? Meaning, not install Windows within QEMU/KVM onto a QCOW2 disk image file? I get there could be driver and licensing issues jumping between machines (physical vs. virtual), but I think I can solve most of that.

              This would be useful in a work situation where I need Windows installed on "bare metal" but can also do a supplemental Linux install. Then, when running Linux, I could still attend to Windows-based work stuff.
              While you can do this, and I accidentally have, it will corrupt the bare metal installation's drivers and you will have to repair them when trying to boot the bare metal installation again. The best thing to do is create a VM and use a dedicated drive, adding its PCI controller to enable true raw passthrough. For example, my primary Windows 10 VM uses a dedicated NVMe drive whose controller is in its own IOMMU group, and I added it as a PCI host device in Virtual Machine Manager. It appears as "0000:04:00.0 Phison Electronics Corporation E12 NVMe Controller", and I simply set it as the primary boot device in "Boot Options."
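              For reference, the libvirt domain XML that virt-manager generates for such a PCI host device looks roughly like this (the address mirrors the example above and is purely illustrative; substitute your controller's address from lspci):

              ```xml
              <hostdev mode='subsystem' type='pci' managed='yes'>
                <source>
                  <!-- host PCI address of the NVMe controller (illustrative) -->
                  <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
                </source>
              </hostdev>
              ```

              The key point is that the whole controller, not just a partition, is handed to the guest, so the guest's NVMe driver talks to the hardware directly.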



              • #17
                Originally posted by polarathene View Post

                Hello! This series of changes spanning across multiple repositories introduce support for 3d accelerated virtiogpu windows guests. Wglgears window is rendered with wgl on virgl and window below i...


                I'm only subscribed, I don't follow the activity or try it. Still seems to be actively worked on.
                Thank you for this information polarathene, I wasn't aware it was still being actively developed. While it's still not ready for prime time, it's a difficult task and it's heartening to see that max8rr8 is working so diligently on it.

                By the way, at the end of "Known Issues" max8rr8 requests help with a WDDM issue, so I hope anyone with knowledge about it can help. Here's his statement:

                "Kernel-mode driver does not implement preemption, and I am very confused about how to implement it in WDDM. VioGpu3D disables preemption systemwide to work around the lack of a preemption implementation, but this is not ideal. Would appreciate some help."
                Last edited by muncrief; 25 March 2024, 12:16 PM.



                • #18
                  Originally posted by muncrief View Post
                  While you can do this, and I accidentally have, it will corrupt the bare metal installation's drivers and you will have to repair them when trying to boot the bare metal installation again. The best thing to do is create a VM and use a dedicated drive, adding its PCI controller to enable true raw passthrough. For example, my primary Windows 10 VM uses a dedicated NVMe drive whose controller is in its own IOMMU group, and I added it as a PCI host device in Virtual Machine Manager. It appears as "0000:04:00.0 Phison Electronics Corporation E12 NVMe Controller", and I simply set it as the primary boot device in "Boot Options."
                  Thanks for the heads up and ideas here, much appreciated!



                  • #19
                    Originally posted by muncrief View Post
                    it will corrupt the bare metal installation's drivers and you will have to repair them when trying to boot the bare metal installation again.
                    I'm not sure if you just explained it poorly, but no, booting a "live" install inside a VM will not "corrupt" any drivers. Windows should detect unfamiliar hardware and build a new hardware profile. You will know this happens because the startup splash stays up longer and generally says "detecting hardware" or something similar (it's been a long while since I saw it, so I can't remember the exact wording). When you reboot "live" into that same install, it will do this process again, but after a hardware profile is built, Windows will select it at boot time. This was designed to support docking stations and the like; if you are actually having drivers damaged or removed by a new hardware profile, something is wrong with your setup.

                    Passing through an NVMe device *is* very nice and highly performant, yes. I do this often, and am working on getting SR-IOV working on an A770 I just added, which is *lovely* for performant VMs. I'm especially looking forward to Windows, macOS, and possibly ChromeOS using this. Linux-on-Linux VMs can do this without SR-IOV or other tricks, just by using virtio devices for graphics.
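                    For Linux guests, the relevant piece of the libvirt domain XML is just the virtio video model (fragment, assuming a libvirt/QEMU guest; 3D acceleration additionally requires GL to be enabled on the graphics element):

                    ```xml
                    <!-- libvirt domain XML fragment (illustrative) -->
                    <video>
                      <model type='virtio'>
                        <acceleration accel3d='yes'/>  <!-- VirGL; pair with gl enabled on <graphics> -->
                      </model>
                    </video>
                    ```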



                    • #20
                      Yes, Windows has no problem with new hardware, outside of activation, which you can ignore because Windows works without activation forever. PS: For running the bare metal Windows from Linux, as a simple user, I just used ordinary VirtualBox.
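                      As a hedged sketch, VirtualBox does this with a raw-disk VMDK that simply points at the physical disk (the device path and filename below are assumptions; you also need read/write access to the device):

                      ```shell
                      # Create a VMDK wrapper around the physical disk holding the Windows install.
                      # /dev/nvme0n1 is an assumed device; double-check with lsblk first.
                      VBoxManage internalcommands createrawvmdk \
                          -filename "$HOME/VirtualBox VMs/win-raw.vmdk" \
                          -rawdisk /dev/nvme0n1
                      # Then attach win-raw.vmdk to a VM as its disk in the VirtualBox GUI.
                      ```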
