[lug] Huge stack size--any reason to avoid?
Vince Dean
vdean at ucar.edu
Tue Oct 17 16:54:03 MDT 2006
I'm working with some C++ code that puts very large
data structures on the stack, large enough that we have
to increase the allowable stack size to 512 megabytes
(the ulimit value is in kilobytes):
ulimit -Ss 512000
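For concreteness, the pattern looks roughly like this (a
made-up sketch, not our actual code; the names and sizes
are illustrative):

    #include <cstddef>

    // Illustrative only: a large fixed-size array declared as a
    // local variable, so it occupies the function's stack frame.
    void process()
    {
        const std::size_t N = 4096;
        double grid[N][N];   // 4096 * 4096 * 8 bytes, about 128 MB

        for (std::size_t i = 0; i < N; ++i)
            for (std::size_t j = 0; j < N; ++j)
                grid[i][j] = 0.0;
        // ... computations on grid ...
    }   // grid is reclaimed automatically when process() returns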
The developers point out that variables on the stack are
managed automatically, minimizing the risk of
memory management errors. If the lifetime of the
objects is such that they can be managed conveniently
on the stack, I can't disagree with that argument, but
something still makes me uncomfortable.
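Their argument, as I understand it, comes down to this
contrast (again a sketch of mine, not their code):

    // Reclaimed automatically, even if an exception
    // propagates out of the function.
    void stack_version()
    {
        double buf[1000000];   // about 8 MB on the stack
        // ... use buf ...
    }

    // Leaked if the delete[] is forgotten, or skipped by an
    // early return or a thrown exception.
    void heap_version()
    {
        double* buf = new double[1000000];
        // ... use buf ...
        delete[] buf;
    }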
I understand that there are many cases where the sequence
of creating and destroying objects makes it essential to
manage memory dynamically on the heap. This is not such
a case. Stack allocation is perfectly reasonable here.
The only issue is the very large size.
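(An example of the kind of case I mean, where the object has
to outlive the function that creates it; the Dataset type is
hypothetical:)

    #include <string>

    struct Dataset {   // hypothetical type, for illustration
        explicit Dataset(const std::string& path) : path_(path) {}
        std::string path_;
    };

    // The object must survive after load() returns, so it cannot
    // live in load()'s stack frame; it has to come from the heap.
    Dataset* load(const std::string& path)
    {
        return new Dataset(path);   // caller owns it and must
    }                               // eventually delete it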
Are there any practical arguments against putting
big data structures on the stack? We are running SUSE
Linux on an Itanium system. Am I likely
to run into a Unix or Linux desktop or server machine
where it is not possible to set the stack so large?
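One variation I have considered, instead of relying on
ulimit in a wrapper script, is having the program inspect
and raise the limit itself at startup with
getrlimit/setrlimit (a rough sketch; error handling
abbreviated):

    #include <sys/resource.h>
    #include <cstdio>

    int main()
    {
        const rlim_t wanted = 512UL * 1024 * 1024;   // 512 MB in bytes

        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) != 0) {
            std::perror("getrlimit");
            return 1;
        }

        // Raise the soft limit if the hard limit permits it;
        // only root may raise the hard limit itself.
        if (rl.rlim_cur != RLIM_INFINITY && rl.rlim_cur < wanted &&
            (rl.rlim_max == RLIM_INFINITY || rl.rlim_max >= wanted)) {
            rl.rlim_cur = wanted;
            if (setrlimit(RLIMIT_STACK, &rl) != 0)
                std::perror("setrlimit");
        }

        // ... rest of the program; the main thread's stack can
        // now grow up to the new soft limit ...
        return 0;
    }

As far as I know this only affects how far the main thread's
stack may grow; threads created with pthreads fix their stack
size when they are created.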
In the end, it is all (virtual) memory, with the
stack allocated at one end of the address space
and the heap at the other. Is there any
reason for me to be concerned about this coding style?
Thanks,
Vince
--
Vince Dean
University of Colorado
Center for Lower Atmospheric Studies
3450 Mitchell Lane, Rm FL0-2514
Boulder, CO 80301
Phone: (303) 497-8077
Email: vdean at ucar.edu