[lug] Modern day benchmarking and sizing

Davide Del Vento davide.del.vento at gmail.com
Tue Jul 13 13:51:39 MDT 2010


I won't make the virtual vs. dedicated server comparison myself, but I
think the analogy works.

I would compare CPU with GPU: for a few hundred dollars and a
standard power supply, you can have a 1-teraflop GPU, something that
in "raw power" is about 20 times faster than the fastest CPU
available! Compared to the GPU, those 20 CPUs would "suck" an
incredible amount of power from the grid and of money from your
wallet. So, is the GPU better? Well, it depends on what you have to
do. The GPU has quite a lot of this raw computing power, but it is a
huge pain with memory access. So much of a pain that the design of
algorithms must be completely changed: instead of saving intermediate
results in memory and using them later, you toss them away and
recompute them from scratch if you need them later, since the
computing power is there and good memory latency/bandwidth is not.
Run your old CPU program on the GPU, and it will likely perform 10
times slower, not 20 times faster... But if you are writing a new
one, and the problem "fits" the GPU's strengths, then you are ready
to go! Compare truly GPU-driven OpenGL with a CPU-simulated one and
you'll know what I mean: you'd really need several CPUs to match that
speed.
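
The store-vs-recompute tradeoff above can be shown with a toy sketch
(plain Python, not actual GPU code; the functions and workload are
made up for illustration): both versions compute the same answers, but
one keeps an intermediate table in memory while the other recomputes
the intermediate value at every use, trading extra arithmetic for
fewer memory accesses.

```python
import math

def stored(xs):
    # CPU-style: compute sin(x) once, keep it in a table, reuse it.
    table = [math.sin(x) for x in xs]
    return [s * s + 2 * s for s in table]

def recomputed(xs):
    # GPU-style: never keep the intermediate; recompute sin(x) at each
    # use.  More arithmetic, but no memory traffic for the table.
    return [math.sin(x) * math.sin(x) + 2 * math.sin(x) for x in xs]

xs = [0.1 * i for i in range(10)]
# Both strategies give the same results; only the cost profile differs.
assert all(abs(a - b) < 1e-12 for a, b in zip(stored(xs), recomputed(xs)))
```

On real GPU hardware the same idea shows up as recomputing values per
thread rather than staging them through slow global memory.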

You cannot ignore the details of the problem you are solving, period.

HTH,
Dav

On Tue, Jul 13, 2010 at 13:32, Maxwell Spangler
<maxlists at maxwellspangler.com> wrote:
> On Tue, 2010-07-13 at 11:17 -0600, Stephen Queen wrote:
>
>> My question was general. I will have more than one application in the
>> future. The question is how a person goes about determining what
>> cpu/main board is best for their app, not whether this particular cpu
>> is appropriate for this particular app. It is probably too general a
>> question to ask on a mailing list and expect a reasonable answer.
>
> This is a general question, but a very important one in the era of
> virtualization that we're entering.
>
> In the past, when users were dissatisfied with performance, we threw
> faster hardware at the problem.  But we reached ceilings in how fast we
> could get cpu, memory and disk systems to perform and we ended up with
> lots of power-hungry, heat-producing servers taking up a lot of space.
>
> The trend being promoted for the future is to have those
> high-performance systems available but used only when needed, so that
> applications can be consolidated onto a few servers in non-peak periods
> and expanded to many servers during demanding periods.  This lets
> machines be powered off to save energy, reduces cooling needs, and
> replaces bulky single- and multi-processor systems with multi-core
> systems that may take up less physical space.
>
> But in order to work with systems dynamically, you need to understand
> more than ever how an application works and how to monitor it for
> satisfactory performance.  If you can give it only what it needs yet
> still achieve satisfying performance, you can consolidate more servers
> onto fewer machines.  But if you do that too aggressively, you starve
> the applications of resources and fail to satisfy your users.
>
> So in this way I see attempting to choose between Atom processors and
> desktop/server processors as similar to choosing between virtual
> machines and dedicated machines.  It's easy just to throw as much power
> as you can afford at the problem, but there are tremendous benefits to
> being efficient when you do it right.
>
> I approach it like this:
>
> 1. Understand the basic technology: is this app a file server, a web
> server, a database server?  A general idea of what the app is sets your
> perspective for further investigations.
>
> 2. What are the technical details of it?  Is it an interpreted Python
> program or a compiled C program?  Does it use a lot of disk activity?
> Does it manipulate huge chunks of memory?  These details point the way
> to success; for example, if you need to manipulate 6GB graphics images,
> skip 32-bit processors and go straight to 64-bit processors that can
> address that amount of memory easily.
>
> 3. What does the community know about it?  One of the biggest changes in
> computing in the last ten years is the move to collaborate openly, by
> default.  So find out what others already know, such as guidelines for
> sizing and which attempts fail (so you can skip trying them).  But don't
> trust these as absolutes; they are just things to consider.
>
> 4. Finally, do your own benchmarking and testing.  The only benchmark
> results that will ever really matter are those of the problem you're
> trying to solve.  So apply the ideas from steps 1-3, benchmark different
> configurations, and share with the community to revise your tests.
> Repeat until you reach satisfactory results, if possible.
>
> I'm eager to hear what others have to say on this.
>
> --
> Maxwell Spangler
> ========================================================================
>        Linux, Unix and Database Administration
>        Currently: Boulder, Colorado
>        LinkedIn: http://www.linkedin.com/in/maxwellspangler
>
>
>
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: irc.hackingsociety.org port=6667 channel=#hackingsociety
>
