[lug] Redhat doesn't support users that compile their own kernels.
Ed Hill
ed at eh3.com
Sun Nov 4 15:33:20 MST 2001
Hi D. Stimits,
Ok, I spent a little time looking at your bug report and description
and have some things to report:
1) The script that you were complaining about is a "SysV-style" init
script. Unfortunately, I cannot locate a specification on the web
specifically for "SysV-style" init scripts. I'm sure that AT&T
has (or had) some sort of spec, but I can't find one (for free,
anyway) on the web.
2) We are dealing with a Linux box, so it's appropriate to consult the
Linux Standard Base (LSB). Admittedly, the LSB is evolving and
Red Hat, at this time (AFAIK), makes no guarantees concerning LSB
compliance. That said, LSB is a good first-cut description of
how things *should* work. You can see the spec for init scripts
at:
http://www.linuxbase.org/spec/gLSB/gLSB/iniscrptact.html
3) According to the LSB v1.0.1-011030, there is *NO* requirement
that any of the init scripts print an "[ OK ]" or "[FAILED]"
message as you demand. There *ARE* requirements about the exit
status each script returns, but the output you want on stdout is
clearly not a requirement (see the sketch after this list).
4) Further anecdotal evidence of correctness: Of the 52 SysV-style
init scripts (/etc/rc.d/init.d/) in one of my recent installs of
RH v7.2, I ran the following:
grep "exit 0" /etc/rc.d/init.d/* | gawk -F ':' '{print $1}' | uniq | wc -l
to verify that 42 out of 52 contain "exit 0" clauses. Of course,
the ipchains and iptables scripts were among the 42.
So the vast majority of the init scripts can, in theory at least,
exit without ever emitting an '[ OK ]' or '[FAILED]' statement
to stdout. This appears to be normal and perfectly acceptable
behavior.
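As a cross-check of that count: grep's "-l" option prints each matching
filename only once, so the simpler one-liner

  grep -l "exit 0" /etc/rc.d/init.d/* | wc -l

should give the same 42 without the gawk/uniq steps.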
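And to make item 3 concrete (the sketch promised above), a script can
meet the LSB exit-status requirements without ever writing a word to
stdout. Something along these lines -- a made-up skeleton with paths
from memory, not an actual Red Hat script:

  #!/bin/sh
  # Hypothetical SysV-style init script: the exit status carries the
  # result; nothing obliges it to print "[  OK  ]" or "[FAILED]".
  case "$1" in
    start)
      /sbin/ipchains-restore < /etc/sysconfig/ipchains
      exit $?                  # 0 on success, non-zero on failure
      ;;
    stop)
      /sbin/ipchains -F
      exit $?
      ;;
    status)
      /sbin/ipchains -L -n
      exit $?
      ;;
    *)
      echo "Usage: $0 {start|stop|status}" >&2
      exit 1
      ;;
  esac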
So the short answer is: the behavior you describe *AIN'T* a bug!
And the long answer is: read the LSB spec and then let's talk about
compliance! ;-)
Admittedly, the spec is fast-evolving and Red Hat does not (yet, AFAIK)
make any guarantees concerning compliance. But they probably *will* in
future versions if enough customers demand it...
hth,
Ed
ps - I *will* be demanding LSB compliance and I hope others will, too!
As a coder who wants things to easily configure/install/run on the
maximum number of Linux distros, I'm a big fan of the LSB!
D. Stimits wrote:
> Ed Hill wrote:
>
>>D. Stimits wrote:
>>
>>
>>>They seem to be refusing to consider this unless I run a RH kernel,
>>>which I cannot do since I have XFS. However, other people have also
>>>looked at this, and the scripts do not properly check return values.
>>>Since I'm not much of a bash programmer, and since I can't prove this is
>>>the same thing under any kernel without reformatting my root partition,
>>>I'm not going to tell this guy to check return values. It irks me when I
>>>suggest someone check return values for error, and they give me a reply
>>>to the effect that "we don't need to check return values, you must've
>>>done something wrong". This was not a kernel issue, and has nothing to
>>>do with redhat versus non-redhat kernels, but it will successfully bury
>>>the issue. Oh well, end of rant.
>>>
>>>
>>Ok, I have some bash experience and, if you describe the problem
>>to me in greater detail (i *think* i understand at this point,
>>but i wanna be sure) I'll try to whip up a patch tonight.
>>Basically, you say that the ipchains/iptables startup scripts
>>will bomb if the *other* kernel module is already loaded?
>>
>>Please email me off-list with the commands that you used and
>>any other helpful info you can provide...
>>
>
> Normally one can work with the following commands, which are similar to
> what init does automatically during bootup:
> /etc/rc.d/init.d/ipchains stop
> /etc/rc.d/init.d/ipchains start
> /etc/rc.d/init.d/ipchains status
> /etc/rc.d/init.d/ipchains restart
>
> And during bootup, if ipchains is scheduled to be started, it should
> print a message with a green "[OK]" or a red "[FAILED]", depending on
> whether the service succeeded or not. If ipchains fails due to bad rules
> in the config files, it correctly indicates with a red failed message.
> On the other hand, if it fails because iptables was loaded, or because
> no support was programmed into the kernel for ipchains, it does not
> display any message at all. No note of failure. I found one has to
> manually run the real "ipchains -L -n" to see if it works; the
> script's "OK" and "FAIL" messages are not reliable under conditions
> where the kernel does not support ipchains.
>
> So as a basic test, one could manually unload ipchains modules, then
> load iptables, to mutually exclude ipchains loading. You'll see an error
> message if you then run "ipchains -L -n". If you instead do
> "/etc/rc.d/init.d/ipchains start", I believe you will find no red FAIL
> message. About a month or two back, someone else from BLUG said he
> looked closely at the scripts, and determined that a return value was
> not being checked at a location that was critical to this. The goal
> would be to have a red "FAIL" message even if it fails due to iptables
> module denying the load of ipchains module. I know a bit of bash
> scripting, but this is beyond my current expertise to debug.
>
> The way I discovered it was when someone generated login attempts on
> various vulnerable ports that had been denied by ipchains rules. I
> discovered that I'd been having similar login attempts that should not
> have occurred, so I checked my ipchains. This is when I discovered the
> problem, and I had to go through a lot of effort to be certain the system
> had not been rooted. None of this would have happened if the iptables
> module had not loaded when I didn't want it to; or if the failure had
> been noted during bootup messages.
>
> D. Stimits, stimits at idcomm.com
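For reference, here is roughly the kind of return-value check being asked
for above. It is only a sketch from memory -- the real script, the command
paths, and the echo_success/echo_failure helpers (normally pulled in from
/etc/rc.d/init.d/functions) may differ on an actual RH box -- so treat it
as a starting point for a patch, not the patch itself:

  # To reproduce the silent failure: rmmod ipchains, modprobe ip_tables,
  # then run "/etc/rc.d/init.d/ipchains start".

  . /etc/rc.d/init.d/functions     # provides echo_success / echo_failure

  start() {
      echo -n "Applying ipchains firewall rules: "
      /sbin/ipchains-restore < /etc/sysconfig/ipchains
      RETVAL=$?
      if [ $RETVAL -eq 0 ]; then
          echo_success             # green  [  OK  ]
      else
          echo_failure             # red [FAILED], even if the module won't load
      fi
      echo
      return $RETVAL
  }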
--
Edward H. Hill III, PhD
Post-Doctoral Researcher | Emails: <ed at eh3.com>, <ehill at mines.edu>
Division of ESE | URL: http://www.eh3.com
Colorado School of Mines | Phone: 303-273-3483
Golden, CO 80401 | Fax: 303-273-3311