[lug] kvm drive i/o - nfs vs direct mount
David Ahern
dsahern at gmail.com
Tue Apr 5 17:44:41 MDT 2011
On 04/05/11 17:29, Michael J. Hammel wrote:
> On Tue, 2011-04-05 at 15:53 -0600, David Ahern wrote:
>> Are you using virtio for networking? What kind of CPU do you have for
>> the host?
>
> Yes, virtio is used for networking in the guest configuration. The CPU
> on the host is a quad core AMD. The guest has been given access to all
> four cores.
>
>> Any idea where the slowdown is? try running 'perf top' in both.
>
> Not yet. I think it's NFS itself, but can't be certain yet. What am I
> looking for in perf top? The host (Fedora) appears to spend most of its
> time in "native_safe_halt [kernel.kallsyms]", but that wasn't while the
> unpacking was happening. I'll try it again tonight while the build is
> running.
If I/O in the guest is slow, it has to be spending cycles somewhere. I
would expect top in both the guest and the host to show some kind of load.
From there perf can show you where the time is going. I suggest starting
with 'perf top' and seeing what pops up. From there you can run 'perf
record -ga' for some period of time during the slowness to get stack traces.
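A rough sketch of that workflow on the host (the output file name and the
30-second window are just illustrative, and assume perf is installed):

    # live view of the hottest kernel/user symbols
    perf top

    # system-wide samples with call graphs during the slow period
    perf record -ga -o perf-host.data -- sleep 30

    # browse the recorded stack traces
    perf report -i perf-host.data

Running the same thing inside the guest lets you compare where each side
is burning time.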
If vmstat is showing a lot of context switches then you can use the
context switch event (or a trace event, depending on the age of the
kernel/perf you are running), e.g., 'perf record -e cs -c 1 -ga'.
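A minimal sketch of that approach; the sleep duration and sort keys are
just illustrative:

    # baseline: watch the cs column for a few seconds
    vmstat 1 5

    # record a stack trace on every context switch, system-wide
    perf record -e cs -c 1 -ga -- sleep 30

    # see which processes/code paths are scheduling out most often
    perf report --sort comm,dso,symbol

On older perf versions the tracepoint form (e.g., -e sched:sched_switch)
may be needed instead of the 'cs' software event.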
>
> FYI - Ubuntu's default install doesn't include perf and apt-get doesn't
> know about perf. Geez. Does it include *anything* useful to a
> developer? Ugh. Any idea what package it's in?
>
I've never run Ubuntu, so I can't help you there.