[lug] kvm drive i/o - nfs vs direct mount

Lee Woodworth blug-mail at duboulder.com
Tue Apr 5 19:49:42 MDT 2011


On 04/05/2011 03:33 PM, Michael J. Hammel wrote:
> I'm running an Ubuntu 9.10 image under KVM on F13.  I mount a directory
> in the guest from the host via NFS and perform builds from the guest on
> that mounted directory.  Performance is very bad.  Unpacking the Linux
> kernel source takes over an hour.  
> 

When I switched an LVM-based VM to use VIRTIO_NET, network speed roughly doubled
compared to the emulated network card. SCP copies can run as fast as 30 MBytes/s
for large files.

I don't use libvirt or such to configure or launch the VM, so here are some
things to compare with your setup (this is an AMD64):

   Guest:
       Linux kernel config settings (2.6.38):
           CONFIG_KVM_GUEST=y
           CONFIG_HAVE_KVM=y
           CONFIG_PARAVIRT_GUEST=y
           CONFIG_PARAVIRT=y
           CONFIG_PARAVIRT_CLOCK=y
           CONFIG_VIRT_TO_BUS=y
           CONFIG_VIRTIO_BLK=m
           CONFIG_VIRTIO_NET=m
           CONFIG_VIRTUALIZATION=y
           CONFIG_VIRTIO=m
           CONFIG_VIRTIO_RING=m
           CONFIG_VIRTIO_PCI=m
           CONFIG_VIRTIO_BALLOON=m
       kernel modules:
           virtio_blk              4871  1
           virtio_pci              5914  0
           virtio_ring             3657  4 virtio_net,virtio_blk,virtio_pci
           virtio                  3357  4 virtio_net,virtio_blk,virtio_pci
           virtio_net             11424  0

       # ethtool -i eth0
       driver: virtio_net
       version:
       firmware-version:
       bus-info: virtio1

   Host:
       Linux kernel config (2.6.38):
           CONFIG_HAVE_KVM=y
           CONFIG_HAVE_KVM_IRQCHIP=y
           CONFIG_HAVE_KVM_EVENTFD=y
           CONFIG_KVM_APIC_ARCHITECTURE=y
           CONFIG_KVM_MMIO=y
           CONFIG_KVM_ASYNC_PF=y
           CONFIG_KVM=m
           CONFIG_KVM_AMD=m
           CONFIG_VIRT_TO_BUS=y
           CONFIG_VIRTUALIZATION=y
           CONFIG_VIRTIO=m
           CONFIG_VIRTIO_RING=m
           CONFIG_VIRTIO_PCI=m
           CONFIG_VIRTIO_BALLOON=m
       kernel modules:
           tun                    15011  5 vhost_net
           vhost_net              17501  0
           kvm_amd                40568  3
           kvm                   210379  1 kvm_amd

   qemu networking options on the host command for
   starting the VM:
       ext_net="-net nic,vlan=0,model=virtio,macaddr=02:aa:bb:cc:dd:ee"
       ext_net="$ext_net -net tap,vlan=0,ifname=ext_tap,script=no,downscript=no"

       dmz_net="-net nic,vlan=1,model=virtio,macaddr=02:ff:gg:hh:ii:jj"
        dmz_net="$dmz_net -net tap,vlan=1,ifname=dmz_tap,script=no,downscript=no"

   This host setup bridges tap devices with real ethernet devices. The external
   ethernet device eth2 is bridged with ext_tap. The DMZ ethernet device eth3
   is bridged with dmz_tap. The ethX devices have no addresses assigned
   to them on the host; the host-visible address is assigned to the bridge.

   Make sure to keep all the ethernet MACs distinct across the host and guest sides.
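
   The tap/bridge plumbing described above can be sketched roughly like this
   (a sketch only, assuming bridge-utils and iproute2; the bridge name br_ext
   and the example address are my placeholders, not from the original setup):

```shell
# Sketch of bridging a persistent tap with the external NIC (run as root).
# br_ext and 192.168.1.2/24 are made-up names/values -- adjust to your network.
ip tuntap add dev ext_tap mode tap      # persistent tap handed to qemu
brctl addbr br_ext                      # bridge joining the tap and eth2
brctl addif br_ext eth2
brctl addif br_ext ext_tap
ip addr add 192.168.1.2/24 dev br_ext   # host-visible address lives on the bridge
ip link set eth2 up
ip link set ext_tap up
ip link set br_ext up
```

   The same pattern repeats for eth3/dmz_tap on a second bridge.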

> Is there a way to mount the host's disk directly into the guest?  I've
> tried the following snippet in the /etc/libvirt/qemu/<image>.xml but it
> doesn't appear to show up when the guest is running:
> 
> <disk type='block' device='disk'>
>   <source dev='/dev/sdb1'/>
>   <target dev='vdb' bus='scsi'/>
                           ^^^^
Seems like you are asking for an emulated SCSI host/disk setup. Is that
what you really want?
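
If you want a virtio disk instead, something along these lines should give
the guest a /dev/vdb (a sketch only -- I don't use libvirt, so check it
against the libvirt domain XML documentation):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sdb1'/>
      <target dev='vdb' bus='virtio'/>
    </disk>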

> </disk>
> 
> The drive is a SATA drive at /dev/sdb that is currently mounted on the
> host as /home2 for the ext4 partition on /dev/sdb1.  I also tried a
> source dev of "/dev/sdb" in the xml but that didn't change anything.
> 

On Gentoo, I switch some LVM volumes between the host & guest. Only one OS
can have the volume mounted at any time. VIRTIO_BLK helps here: no
emulated controller is used. Below, change /dev/mapper/xxxx to /dev/sdb1
for your case. (Caveat: the guest & host kernels must both support
the same VIRTIO_BLK implementation, as the kernel config notes say it is not
a stable interface. Make both kernel versions the same if possible.)

    Host-side qemu options for making the volume available as a complete
    disk in the VM (the guest needs to have the virtio_blk driver compiled in
    or loaded as a module):

       tmp_disk="-drive if=virtio,media=disk,format=raw,file=/dev/mapper/big_temp"


    On the guest, these appear as /dev/vdX, starting with 'a' for the first
    -drive option (i.e. the first -drive on the host becomes /dev/vda in the
    guest). Subsequent -drive options continue with /dev/vdb, /dev/vdc, ...
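
    That ordering can be illustrated with a small shell helper (purely
    illustrative; drive_name is my own made-up function, not a qemu tool):

```shell
# Map the 0-based position of a -drive option on the qemu command line
# to the guest device name it will appear as.
drive_name() {
    letters=abcdefghijklmnopqrstuvwxyz
    printf '/dev/vd%s\n' "$(printf '%s' "$letters" | cut -c $(( $1 + 1 )))"
}
drive_name 0    # first  -drive  -> /dev/vda
drive_name 1    # second -drive  -> /dev/vdb
```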

    On the host the mount looks like (for an ext4 file system):
       mount -t ext4 /dev/mapper/big_temp /mnt/extra_temp
    On the guest:
       mount -t ext4 /dev/vda /mnt/extra_temp
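
    Since only one side may mount the volume at a time, the handoff looks
    roughly like this (a sketch using the example paths above; the ordering
    is the important part):

```shell
# Host side: release the filesystem before the VM touches it.
umount /mnt/extra_temp
# ...start the VM with the -drive option above; inside the guest:
#     mount -t ext4 /dev/vda /mnt/extra_temp
# After the guest unmounts and the VM exits, reclaim it on the host:
mount -t ext4 /dev/mapper/big_temp /mnt/extra_temp
```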

I don't know how the Ubuntu tools set things up, so you will have to experiment.
On the guest, run ls /dev/vd* to see if VIRTIO_BLK devices are already set up. Another
check is to use lsmod and see if virtio_blk is loaded.

HTH
