[lug] Comparing Clouds; A trivial test.
Glenn Murray
glenn.murray at gmail.com
Fri Oct 25 23:13:01 MDT 2013
Hi Maxwell,
This is very interesting, though I have to wonder how repeatable it is.
You mentioned Google Compute Engine, but I don't see a test for it. Did you
do one? Also, was your C program using more than one processor? You didn't
mention the clock speed on your control.
Thanks,
Glenn
On Fri, Oct 25, 2013 at 11:02 PM, Maxwell Spangler <
maxlists at maxwellspangler.com> wrote:
> I spent some time working with Amazon, Rackspace and Google Compute Engine
> clouds this week in order to port a little Linux script I'm working on.
>
> I decided to do a simple, somewhat trivial experiment to compare cloud
> quality at the very low end. I wanted to learn how each cloud would handle
> the smallest of activities. Let me stress that large, sophisticated cloud
> applications may see very different results.
>
> So I wrote a simple program in C to do several billion simple addition
> computations. No storage, no networking, no system calls. This is all
> about how much time my cloud's virtual machine gets to do its job.
>
> Control: Debian 7.2 virtual machine in KVM virtual machine on my local
> workstation. The workstation is a 2010-era 4-core AMD Phenom II CPU with
> 12G of RAM and no other significant workloads.
>
> root at debian72:~# time ./cputest
> real 134m15.134s [2.23 hours]
> user 134m9.707s
> sys 0m0.020s
> ------------------------------
>
> First test: Rackspace. Unknown server with an AMD Opteron 4170 HE
> "Lisbon" processor at 2.1 GHz. Similar to a 6-core version of my Phenom II.
>
> [root at rackfree ~]# time ./cputest
> real 192m41.034s [3.21 hours]
> user 192m8.815s
> sys 0m2.852s
>
> Not bad! Let's assume I'm on a shared machine with other VMs competing for
> time, cluttering up the CPU caches and causing context switches, while the
> hypervisor handles IRQ activity for other guests' network and I/O calls.
>
> ------------------------------
>
> Second test: Amazon AWS. Unknown server with Intel Sandy Bridge E5-2650
> CPU @ 2.0 GHz.
>
> 89501.70user 6.95system 24:55:26elapsed 99%CPU (0avgtext+0avgdata
> 1440maxresident)k
> 0inputs+0outputs (0major+124minor)pagefaults 0swaps
>
> OUCH! The first time I attempted this, it still hadn't finished nearly a
> day later, and my connection got dropped. So I ran it again under 'nohup'
> and caught the output.
>
> The classic Unix 'sar' utility catches what's going on. I hadn't seen the
> "%steal" column before, but this was a perfect case where you'd want to
> monitor it. From the man page:
>
> "%steal  Percentage of time spent in involuntary wait by the virtual CPU
> or CPUs while the hypervisor was servicing another virtual processor."
>
>
> Linux 3.4.62-53.42.amzn1.x86_64 (ip-999-999-999-999) 10/24/2013 _x86_64_(1 CPU)
>
> 09:52:58 PM CPU %user %nice %system %iowait %steal %idle
> 09:53:58 PM all 11.58 0.00 0.01 0.00 88.40 0.00
> 09:54:58 PM all 24.46 0.00 0.03 0.00 75.51 0.00
> 09:55:58 PM all 7.62 0.00 0.00 0.00 92.38 0.00
> 09:56:58 PM all 13.71 0.00 0.00 0.00 86.29 0.00
> 09:57:58 PM all 12.00 0.00 0.02 0.00 87.99 0.00
>
> ------------------------------
>
>
> This is exploratory testing of using a cloud for small workloads, not
> rigorous scientific testing.
>
> However, it's a simple and easy way to make observations about using a
> cloud resource instead of something you control:
>
> * Some cloud resources will definitely be over-committed and your
> performance will vary greatly.
>
> * Two similar virtual machine sizes on two different cloud providers may
> provide vastly different results.
>
> I hope you enjoyed this. I did!
>
> Cheers,
>
> --
> Maxwell Spangler
> ========================================================================
> Linux System Administration / Virtualization / Development / Computing
> Services
> Photography / Graphics Design / Writing
> Fort Collins, Colorado
> http://www.maxwellspangler.com
>
> _______________________________________________
> Web Page: http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: irc.hackingsociety.org port=6667 channel=#hackingsociety
>