[lug] Process dying without core dump
Alan Robertson
alanr at bell-labs.com
Wed Sep 15 13:50:32 MDT 1999
Sean Reifschneider wrote:
>
> On Wed, Sep 15, 1999 at 01:03:16PM -0600, Alan Robertson wrote:
> >The symptoms are that one process of 4 (all forked w/o exec) dies
> >without leaving a core file.
>
> Are you sure that you have the limits set up to LEAVE a core file?
>
> sylvia:nospam$ ulimit -a
> core file size (blocks) 1000000
> [...]
Looks like I do:
core file size (blocks) 1000000
Or, running from sudo (like I usually do):
> [alanr at just 6538]$ sudo sh -c ulimit -a
> unlimited
(Though I notice that without quotes around 'ulimit -a', sh -c runs plain
ulimit and drops the -a, so that "unlimited" is just the file-size limit;
sudo sh -c 'ulimit -a' would show all of them, including core.)
I tried looking in /proc, but didn't see anything there that obviously
told me what the running process's actual ulimit was...
> Some people like to set that to 0 as a default to prevent leaving core
> files laying around.
>
> You might want to try making a version that doesn't fork, so you can
> run from the command-line and see what's happening, and/or start
> catching signals and report that (the latter probably being the
> better idea).
It has to fork. It needs 4 processes. The one that's dying isn't the
parent of the others; it's one of the children. But catching signals
sounds better. Is there a way to *force* a core dump? For example,
would abort(3) work? I suppose I could also just block all signals, do a
pause(2), and wait for some human to come look me over...
Thanks!
___________________________________________________
Alan Robertson mailto:alanr at bell-labs.com
http://www.henge.com/~alanr/
___________________________________________________