QEMU 0.13 Final Is Ready With New Features

  • #11
    Originally posted by gilboa View Post
    Please type the following on your host:
    $ brctl show

    Plus, what's your qemu command line?
    Code:
    [root@spaceman ~]# brctl show
    bridge name     bridge id               STP enabled     interfaces
    br0             8000.02a20f46e11e       no              vnet3
    tor0            8000.0030671a7817       no              eth1
    virbr0          8000.000000000000       yes
    I just add this (change MAC) to my configs for guests:
    Code:
    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='00:16:3e:1a:b3:4a'/>
    </interface>
    I haven't configured directly from virsh, but from the manpage all you need to do is use attach-interface.
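    For the virsh route, a minimal sketch of the manpage's attach-interface subcommand (the guest name "myguest" and the MAC are placeholders, not from this thread):

```shell
# Hot-plug a bridged NIC into a running guest; detach-interface reverses it.
virsh attach-interface myguest bridge br0 --mac 00:16:3e:1a:b3:4a
virsh detach-interface myguest bridge --mac 00:16:3e:1a:b3:4a
```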



    • #12
      OK. As far as I can see, your eth0 (real NIC) isn't shared with the VM, hence there's no need for a tap device.
      In order to share a single hardware NIC with one or more VMs, you'll need to use a bridge-tap setup.
      oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
      oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
      oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
      Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.
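      As a rough sketch, a manual bridge-tap setup along these lines (assuming eth0 is the shared NIC, DHCP on the LAN, and the bridge-utils/tunctl tools; the disk image and MAC are placeholders) could look like:

```shell
# Move the host's address from eth0 to a new bridge.
brctl addbr br0
ifconfig eth0 0.0.0.0 up
brctl addif br0 eth0
dhclient br0

# Create a tap device for the guest and enslave it to the bridge.
tunctl -t tap0
ifconfig tap0 up
brctl addif br0 tap0

# Point QEMU at the tap.
qemu-kvm -m 1024 -hda guest.img \
    -net nic,macaddr=00:16:3e:00:00:01 -net tap,ifname=tap0,script=no
```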



      • #13
        E.g. (Argh! 1 minute edit limit...!):

        Code:
        $ brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.00e081b0fbae       no              eth0
                                                                tap20
        br1             8000.00e081b0fbaf       no              eth1
                                                                tap21
        br2             8000.000e2e7c2ef4       no              eth2
                                                                tap22
                                                                tap32
        br3             8000.00effeef32fe       no
        br4             8000.00effeef33fe       no
        br5             8000.00effeef34fe       no
        pan0            8000.000000000000       no



        • #14
          Originally posted by gilboa View Post
          OK. As far as I can see, your eth0 (real NIC) isn't shared with the VM, hence there's no need for a tap device.
          In order to share a single hardware NIC with one or more VMs, you'll need to use a bridge-tap setup.
          I don't understand why you would want to share the physical device. The simplest solution would seem to be:

          1) Create a bridge device (e.g. br0)
          2) Assign an IP address to it
          3) Set up NAT between the bridge and the internet device (e.g. eth0)
          4) Attach guests to the bridge; the vnetX devices will be created automatically
          5) Have guests use br0's IP address as the gateway.
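          As a hedged sketch, those five steps come down to something like this (the 192.168.100.0/24 subnet and interface names are example assumptions):

```shell
# 1) + 2) Create the bridge and assign it an address.
brctl addbr br0
ifconfig br0 192.168.100.1 netmask 255.255.255.0 up

# 3) NAT between the bridge subnet and the internet device.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE

# 4) Guests attached to br0 (e.g. via <source bridge='br0'/>) get
#    their vnetX taps enslaved to the bridge automatically.
# 5) Inside each guest, set 192.168.100.1 as the default gateway.
```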

          This setup works perfectly for me. I SSH into the host and can connect to my guests, and the internet is available for updates, etc.

          I agree that tap/tun isn't easy to set up. I tried setting it up for a different project a few months ago, and there is practically zero documentation that doesn't relate to OpenVPN.



          • #15
            Originally posted by jbrown96 View Post
            I don't understand why you would want to share the physical device. The simplest solution would seem to be:
            There are a number of reasons to give the VM tap/bridge-based access to the host NIC:
            1. Maintaining VM access to the host NIC (e.g. red and DMZ firewalls sitting on the internet-connected host NIC).
            2. Maintaining VM support for promiscuous access to the network (read: recording traffic).
            3. Better performance (using NAT will reduce performance by 1-10%, depending on the host and guest configurations).

            I can continue...

            - Gilboa



            • #16
              Originally posted by gilboa View Post
              There are a number of reasons to give the VM tap/bridge-based access to the host NIC:
              1. Maintaining VM access to the host NIC (e.g. red and DMZ firewalls sitting on the internet-connected host NIC).
              2. Maintaining VM support for promiscuous access to the network (read: recording traffic).
              3. Better performance (using NAT will reduce performance by 1-10%, depending on the host and guest configurations).

              I can continue...

              - Gilboa
              Then don't NAT. Attach your host NIC to a bridge. Use the bridge address for host connections, and the guests don't need any configuration.

              1. I don't understand what you mean. Any firewall will be sitting on the host (i.e. iptables) or upstream somewhere. The traffic won't be firewalled any differently, no matter where the connection gets bridged (unless you purposely misconfigure the host's firewall).
              2. This is a valid argument, but doesn't make much sense in a real-world environment. Just use the host to capture traffic.
              3. See my recommendation to bridge the host NIC.

              The idea of better automatic configuration is fine, but it's only ever going to work for common use patterns. The three situations you outlined above are niche and are unlikely to ever be supported in an easy-to-configure manner. Besides the first point, which I don't understand, the other two are sufficiently niche to warrant manual configuration in almost every circumstance.
              #2 would be a special-purpose, likely temporary request for diagnosing a problem, so it would definitely require manual configuration. It's also likely to be done at the host level, since the problem is most likely in the guest networking itself.
              Performance issues would only arise with very high numbers of connections and/or low-latency applications. Iptables NAT has no throughput problems, but under high load (and on low-end hardware) it could have trouble matching related connections. That's not going to happen on virtual desktops. It would be most likely to happen on web servers, but then we're talking about professionals configuring virtual machines, and manual configuration is expected.

              Niche problems require niche solutions, get over it.



              • #17
                Originally posted by jbrown96 View Post
                Niche problems require niche solutions, get over it.
                ... If you consider performance problems or raw network problems niche, be my guest.
                -However-, your rude answer has no place in this forum.

                - Gilboa



                • #18
                  Originally posted by gilboa View Post
                  ... If you consider performance problems, or raw network problems niche, be my guest.
                  -However-, your rude answer has no place in this forum.

                  - Gilboa
                  Looking back through your posts shows just as many "rude" responses. You are not a moderator, and this is the internet, so get off your high horse.

                  The situations where performance would be an issue are in professional applications where manual configuration is expected. NAT is easy and recommended for casual use of VMs because it doesn't take much configuration and can be largely automated since it doesn't make significant changes that can unexpectedly break things.

                  I still fail to see why you would want to use a virtual machine to diagnose network problems. It's like asking a doctor to diagnose a patient from photographs: yes, it can be done, but no self-respecting professional would prefer it. You're adding another layer of abstraction to a problem that is already difficult to diagnose because of layered abstractions. Furthermore, VM hosts are single-purpose machines: they host virtual guests. Therefore, any networking problem is clearly interwoven between the guests, host, and network. Using host-based tools fits nicely into the middle and provides the best method of diagnosis.

                  The fact is that bridged networking works very well and is not difficult to set up manually. It cannot be done automatically on a general desktop or server because it will likely break network connectivity, especially if NetworkManager is used. NAT is a much saner default than advanced bridging.
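                  For what it's worth, on a Red Hat-style distribution that manual bridge setup can be made persistent with a pair of ifcfg files (the DHCP choice and file contents here are illustrative assumptions):

```shell
# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
```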



                  • #19
                    Originally posted by jbrown96 View Post
                    ...and this is the internet, so get off your high horse.
                    ... *

                    - Gilboa
                    * I wonder what made you type the rest of your comment... Oh well.



                    • #20
                      Originally posted by gilboa View Post
                      ... *

                      - Gilboa
                      * I wonder what made you type the rest of your comment... Oh well.
                      Because I thought you might have an interest in discussing the topic, rather than trolling. I was wrong. I'm ignoring your posts.
