[lug] Huge stack size--any reason to avoid?
Zan Lynx
zlynx at acm.org
Wed Oct 25 14:18:28 MDT 2006
On Wed, 2006-10-25 at 13:32 -0600, Lee Woodworth wrote:
> There is also the issue of blowing out the CPU on-board cache.
> Your cache miss rate could be way higher with a fragmented
> stack than with normal-sized stack frames.
As they say, don't optimize prematurely. Profile first. OProfile is a
pretty neat tool for Linux. Intel makes some great stuff with VTune.
I suspect that the stack isn't fragmented at all. A stack can't really
fragment, since frames are pushed and popped in strict order. The big
memory use is probably allocated somewhere high up the call stack, like
in "main()", and then just sits there.
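To make that concrete, something along these lines (just a sketch; the
names and sizes are made up):

#include <string.h>

#define BIG_SIZE (64 * 1024 * 1024)     /* 64 MB -- needs a raised stack limit (ulimit -s) */

static void do_work(char *buf, size_t len)
{
    memset(buf, 0, len);                /* touch the pages so they actually fault in */
}

int main(void)
{
    char big_buffer[BIG_SIZE];          /* one big allocation in main()'s frame */

    do_work(big_buffer, sizeof big_buffer);
    return 0;
}

That buffer is one contiguous chunk that lives for the whole run;
nothing about it is fragmented.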
malloc/new is much worse for fragmentation. You cannot deallocate a bit
out of the middle of a stack, after all, but you can punch holes in the
middle of the heap with "free()".
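For example (again, only a sketch with made-up sizes):

#include <stdlib.h>

int main(void)
{
    char *blocks[100];
    char *big;
    int i;

    for (i = 0; i < 100; i++)
        blocks[i] = malloc(64 * 1024);  /* 100 chunks of 64K */

    for (i = 0; i < 100; i += 2)
        free(blocks[i]);                /* free every other one */

    /* The heap now has 64K holes scattered through it.  A larger
     * request probably can't reuse them and has to grow the heap. */
    big = malloc(128 * 1024);

    free(big);
    for (i = 1; i < 100; i += 2)
        free(blocks[i]);
    return 0;
}

Nothing like that can happen to a stack; everything comes off the top
in order.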
Unless you mean something like physical / virtual address fragmentation.
That can happen. The Linux fix for that is to grab your memory with
mmap() on a file in hugetlbfs. On systems supporting hugetlbfs, a few
4MB VM pages instead of a loose pile of 4K pages can greatly improve
performance of large-memory applications.
/usr/src/linux/Documentation/vm/hugetlbpage.txt has the details.
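A rough sketch of what that looks like, assuming hugetlbfs is already
mounted at /mnt/huge and some huge pages have been reserved through
/proc/sys/vm/nr_hugepages (the mount point and file name here are just
examples):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define LENGTH (16UL * 1024 * 1024)     /* must be a multiple of the huge page size */

int main(void)
{
    int fd;
    void *addr;

    fd = open("/mnt/huge/bigbuf", O_CREAT | O_RDWR, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) {
        perror("mmap");
        unlink("/mnt/huge/bigbuf");
        return 1;
    }

    /* addr is now backed by huge pages: far fewer TLB entries needed. */

    munmap(addr, LENGTH);
    unlink("/mnt/huge/bigbuf");
    close(fd);
    return 0;
}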
(Whee! IA-64 supports up to 256MB pages. wow.)
--
Zan Lynx <zlynx at acm.org>