[lug] kvm drive i/o - nfs vs direct mount

Michael J. Hammel mjhammel at graphics-muse.org
Tue Apr 5 15:33:02 MDT 2011


I'm running an Ubuntu 9.10 image under KVM on F13.  I mount a directory
in the guest from the host via NFS and perform builds from the guest on
that mounted directory.  Performance is very bad.  Unpacking the Linux
kernel source takes over an hour.  

Is there a way to mount the host's disk directly into the guest?  I've
tried the following snippet in the /etc/libvirt/qemu/<image>.xml but it
doesn't appear to show up when the guest is running:

<disk type='block' device='disk'>
  <source dev='/dev/sdb1'/>
  <target dev='vdb' bus='scsi'/>
</disk>

The drive is a SATA drive at /dev/sdb that is currently mounted on the
host as /home2 for the ext4 partition on /dev/sdb1.  I also tried a
source dev of "/dev/sdb" in the xml but that didn't change anything.
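One inconsistency I notice in my own snippet (this is a guess on my part): a target dev of "vdb" normally pairs with bus='virtio', not 'scsi'. A version with matching device name and bus, which I haven't verified, would look like:

```xml
<disk type='block' device='disk'>
  <!-- raw passthrough of the host block device (assumed driver settings) -->
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdb1'/>
  <!-- 'vdb' implies the virtio bus; the guest would see it as /dev/vdb -->
  <target dev='vdb' bus='virtio'/>
</disk>
```

I gather the partition also shouldn't stay mounted as /home2 on the host while the guest has it, since mounting an ext4 filesystem in two places at once risks corruption.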

If directly accessing the host's drive/partition from the guest isn't
possible, does anyone have suggestions for improving the guest's NFS
and/or network performance?
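For reference, here is the sort of tuned NFS mount entry I was planning to
experiment with in the guest's /etc/fstab (the host address and the option
values are guesses on my part, not something I've benchmarked):

```
# 192.168.122.1 is the default libvirt host bridge address (assumed here)
192.168.122.1:/home2  /mnt/home2  nfs  rw,hard,intr,tcp,nfsvers=3,rsize=32768,wsize=32768  0  0
```

Larger rsize/wsize and TCP transport are the usual first things to try,
but I don't know yet whether they help in this setup.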
-- 
Michael J. Hammel <mjhammel at graphics-muse.org>



