[lug] Huge stack size--any reason to avoid?

Lori Reed lorireed at lightning-rose.com
Tue Oct 17 19:58:33 MDT 2006


Vince Dean wrote:

> I'm working with some C++ code which puts very large 
> data structures on the stack, large enough that we have 
> to increase the allowable stack size to 512 megabytes:
> 
>   ulimit -Ss 512000

eep!

> Are there any practical arguments against putting
> big data structures on the stack?  

Are you writing C-style code or idiomatic C++? C++ methodology favors 
large objects created on the heap. What's described here concerns me a bit.

> We are running Suse
> Linux on an Itanium system.  Am I likely
> to run into a Unix or Linux desktop or server machine 
> where it is not possible to set the stack so large?

Sure, you betcha! :)

Any machine with less than a GB of RAM, for example.

It may be possible to run on a fairly small machine using swap space, 
but you're going to get a major performance hit.

But based on your email addy, it seems you're developing a fairly 
specialized piece of software and there's no reason you can't specify 
the machine specs.

> In the end, it is all (virtual) memory, with the
> stack being allocated at one end of the address
> space and the heap on the other end.  Is there any
> reason for me to be concerned about this coding style?

A half gig of stack or a half gig of heap, it doesn't seem to make much 
difference to me, but perhaps you could redefine the problem to use 
multiple computers in a distributed net application. Many small machines, 
each working on a chunk of the problem, may be a better solution. (think 
SETI at home, for example)

BTW, if you're still in the design phase, or possibly even the 
implementation phase, I may be available to help.

Lori



