QEMU 4.2 Released With Many Improvements For Linux Virtualization


  • #21
    Originally posted by timofonic View Post

    What about using an Intel CPU instead of an AMD one?
    The reason to use Ryzen is the cores-per-dollar advantage. Modern Intel CPUs do support nested virtualization, and have for a few generations now. There are some serious memory-performance gains with Ryzen 2, TR4, and Epyc 7002 as well, versus anything from Intel in the same class, thanks to the new branch predictor and the reduced NUMA zones. There's no real performance advantage to PCIe 4.0 quite yet, but that will appear over the next year. I expect Intel to counter by jumping directly to PCIe 5.0, but with the hardware you can buy today, AMD is pwning Intel for all but a handful of corner-case workloads that have nothing to do with hypervisor-based virtualisation.
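
    If you want to check whether your own KVM host already exposes nested virtualisation, here's a quick Python sketch - it just reads the stock kvm_intel/kvm_amd module parameters, nothing QEMU-specific:

        from pathlib import Path

        # kvm_intel reports Y/N, kvm_amd reports 1/0, depending on kernel version.
        for mod in ("kvm_intel", "kvm_amd"):
            nested = Path("/sys/module") / mod / "parameters" / "nested"
            if nested.exists():
                print(f"{mod}: nested = {nested.read_text().strip()}")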

    If you already own Intel? It's a first-class citizen in Qemu/KVM and VMWare. VirtualBox only finally got around to supporting nested HVM a few days ago.
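
    For reference, flipping nested HVM on for an existing VirtualBox guest is a single VBoxManage call - a minimal sketch, where "my-guest" is a placeholder VM name:

        import subprocess

        # Enable nested hardware virtualisation for a VirtualBox guest.
        # "my-guest" is a placeholder name; the flag needs a 6.x VirtualBox.
        subprocess.run(
            ["VBoxManage", "modifyvm", "my-guest", "--nested-hw-virt", "on"],
            check=True,
        )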

    VMWare has supported every release of Windows from '98 onward pretty much immediately, with performance in the 80%+ range for most apps other than 3D, since 1999, on both AMD and Intel. I ran WinXP in VMWare with ZoneAlarm, no AV, restore points turned off, but using snapshots to achieve the same result... and I still do that with Win10 and MacOS, currently on Intel hardware at home. Worry less about Intel/AMD or Qemu/VirtualBox/VMWare - get yourself a large amount of RAM. I firmly believe 32GB is the entry point if you want good performance while running any non-trivial workload in a guest plus your host, or in two or more guests.
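
    To make the snapshots-instead-of-restore-points trick concrete, here's a minimal sketch of a throwaway guest launch on the QEMU/KVM side - the image name and RAM/CPU sizes are placeholders, and it assumes a KVM-capable host:

        import subprocess

        # -snapshot sends all disk writes to a temporary overlay that is
        # discarded on exit, so the base image is never modified.
        subprocess.run([
            "qemu-system-x86_64",
            "-enable-kvm",          # KVM acceleration
            "-m", "8G",             # guest RAM; leave headroom for the host
            "-smp", "4",            # virtual CPUs
            "-snapshot",            # throw away disk writes on exit
            "-hda", "win10.qcow2",  # placeholder disk image
        ], check=True)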

    Or if you're starting out on a server, Proxmox is a brilliant starting point. Heck, it's even a great starting point for a desktop OS if you're comfortable installing Debian from a text-mode installer and you like the idea of having an HTML5 web app as your virtualisation GUI, being able to mix and match KVM guests and LXC containers, live-migrating guests between your host nodes, getting a zero-cost virtual KVM-over-IP setup for your KVM guests, and can live without hardware 3D acceleration in your KVM guests (maybe you can turn it on manually after the fact... never tried that).
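
    As a taste of the live-migration bit, moving a running guest between Proxmox nodes is one qm call - a sketch, where the VM ID 100 and node name pve2 are placeholders:

        import subprocess

        # Live-migrate VM 100 to cluster node "pve2"; must be run on a
        # Proxmox node that is part of the same cluster.
        subprocess.run(["qm", "migrate", "100", "pve2", "--online"], check=True)
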
    Last edited by linuxgeex; 18 December 2019, 06:28 AM.



    • #22
      Originally posted by linuxgeex View Post

      VMWare is the long-term incumbent, with vSphere. Anyone who complains about VMWare's performance isn't using it right, or they're just using Player and thinking that alone is indicative of the paid product. Every other virtualisation project is basically playing catch-up with VMWare. Full disclosure, I've been a VMWare licensee since 1999.

      QEMU has achieved very good performance since KVM was adopted into the kernel. It's also benefited a lot from VirtualBox, VMWare, and Xen kernel contributions, which mostly improved QEMU's network and display performance and relatively recently brought USB and PCIe passthrough. These days QEMU can do everything that VMWare can do on a host-by-host basis, but it lacks the infrastructure for massive deployment and management that VMWare has.
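
      To give a rough idea of what that PCIe passthrough looks like on the QEMU side, a minimal sketch - the PCI address and disk image are placeholders, and it assumes the device is already bound to the vfio-pci driver with the IOMMU enabled on the host:

          import subprocess

          # Hand a host PCIe device straight to the guest via VFIO.
          # 0000:01:00.0 is a placeholder address; the host needs IOMMU
          # support enabled and the device bound to vfio-pci beforehand.
          subprocess.run([
              "qemu-system-x86_64",
              "-enable-kvm",
              "-m", "4G",
              "-device", "vfio-pci,host=0000:01:00.0",
              "-hda", "guest.qcow2",  # placeholder disk image
          ], check=True)
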
      RHEV/oVirt is the KVM/QEMU-based alternative to vSphere (clustered hypervisors, live migration, vGPU, etc.). I'm interested to see if you can mix it with Gluster to get something like vSAN...

