[lug] Redhat doesn't support users that compile their own kernels.

D. Stimits stimits at idcomm.com
Sun Nov 4 16:55:00 MST 2001


Ed Hill wrote:
> 
> Hi D. Stimits,
> 
> Ok, I spent a little time looking at your bug report and description
> and have some things to report:
> 
>    1) The script that you were complaining about is a "SysV-style" init
>       script.  Unfortunately, I cannot locate a specification on the web
>       specifically for "SysV-style" init scripts.  I'm sure that AT&T
>       has (or had) some sort of spec, but I can't find one (for free,
>       anyway) on the web.
> 
>    2) We are dealing with a Linux box so its appropriate to consult the
>       Linux Standard Base (LSB).  Admittedly, the LSB is evolving and
>       Red Hat, at this time (AFAIK), makes no guarantees concerning LSB
>       compliance.  That said, LSB is a good first-cut description of
>       how things *should* work.  You can see the spec for init scripts
>       at:
> 
>         http://www.linuxbase.org/spec/gLSB/gLSB/iniscrptact.html
> 
>    3) According to the LSB v1.0.1-011030, there is *NO* requirement
>       that any of the init scripts return an "[  OK  ]" or "[FAILED]"
>       as you demand.  There *ARE* requirements about the return types
>       of the scripts, but the output sent to stdout as you desire is
>       clearly not a requirement.

In this case the init script does have an OK/FAILED indicator. While
no standard requires it, I consider that since RH added it, it should
work correctly: once the indicator is there, it *MUST* not lie. As it
stands, it reports correctly under some conditions and stays silent
under others. That isn't an LSB failure; it is a Red Hat bug. I suppose
I'll have to learn more about why the init script reports a failure
when the cause is a misconfigured ipchains rule, but not when the cause
is missing kernel support (most often the iptables module blocking the
load of the ipchains module).
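
To illustrate the kind of bug I mean, here is a sketch (not Red Hat's
actual script; I'm assuming the echo_success/echo_failure helpers from
/etc/rc.d/init.d/functions, and "ipchains-restore" stands in for
whatever the real script runs to load the rules):

    # Broken pattern: the exit status of the restore command is never
    # examined, so a module-load failure goes unreported.
    start() {
        ipchains-restore < /etc/sysconfig/ipchains
        echo_success                # claims success no matter what
        echo
    }

    # Honest pattern: check the return value before reporting.
    start() {
        if ipchains-restore < /etc/sysconfig/ipchains; then
            echo_success            # [  OK  ]
        else
            echo_failure            # [FAILED]
            echo
            return 1
        fi
    }

If the real script follows the first pattern anywhere along the path
that loads the module, that would explain the silent failure.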

> 
>    4) Further anecdotal evidence of correctness:  Of the 52 Sys-V-style
>       init scripts (/etc/rc.d/init.d/) in one of my recent installs of
>       RH v7.2, I ran the following:
> 
>         grep "exit 0" /etc/rc.d/init.d/* | gawk -F ':' '{print $1}' | uniq | wc -l
> 
>       to verify that 42 out of 52 contain "exit 0" clauses.  Of course,
>       the ipchains and iptables scripts were among the 42.
> 
>       So the vast majority of the init scripts can, in theory at least,
>       exit without ever emitting an '[  OK  ]' or '[FAILED]' statement
>       to stdout.  This appears to be normal and perfectly acceptable
>       behavior.
> 
> So the short answer is:  the behavior you describe *AIN'T* a bug!

It isn't an LSB bug. It *is* a Red Hat bug that they ship a
success/fail indicator that only answers correctly under some
conditions, not all. RH chose to add this as one of their selling
points, but they didn't finish the job, and "correct some of the time"
just is not acceptable when the failure can result in your machine
being rooted, your data destroyed, or the box being used for someone
else's illegal purposes (within minutes or hours of connecting to the
Internet). When this happened to me, I was lucky: I later discovered
there had been all kinds of script-kiddie attempts, and the only reason
I wasn't rooted was that I also had current packages against those
particular exploits. Red Hat's script is a liability because it led me
to believe nothing was wrong.
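
In the meantime, the only check I trust is to ask ipchains directly
after boot instead of believing the boot messages; something like this
(a trivial sketch, using the same "ipchains -L -n" test described
further down):

    # ipchains -L exits nonzero when the kernel has no ipchains
    # support, so the exit status is the honest indicator.
    if ipchains -L -n > /dev/null 2>&1; then
        echo "ipchains rules are active"
    else
        echo "WARNING: ipchains is NOT loaded" >&2
    fi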

> 
> And the long answer is:  read the LSB spec and then lets talk about
> compliance!  ;-)
> 
> Admittedly, the spec is fast-evolving and Red Hat does not (yet, AFAIK)
> make any guarantees concerning compliance.  But they probably *will* in
> future versions if enough customers demand it...
> 
> hth,
> Ed
> 
> ps - I *will* be demanding LSB compliance and I hope others will, too!
>       As a coder who wants things to easily config/install/run on the
>       maximum number of linux distros, I'm a big fan of the LSB!

My frustration is that Red Hat won't fix its "optional" OK/FAIL
indicator. I might expect a call for Windows XP support on browser
issues to be rejected because I have the Gimp installed (an unrelated
GNU package; MS agreements are being changed to void all warranty if
any GNU products are installed), but the kernel version has nothing to
do with this particular issue. Red Hat is rejecting it as a bug not
because it isn't a bug, but because they won't work with a system that
has a non-Red Hat (read: buggy or insecure, with everything including
the kitchen sink supported) kernel. It does not matter what kernel is
installed; it has nothing to do with the problem, but Red Hat is acting
like Microsoft now. Sure, it won't make your machine crash; all it can
do is result in a rooted machine, loss of all data, and snooping of
personal information, e.g., credit card transactions, passwords,
illegal storage of other people's mp3s, participation in DDoS attacks,
and so on... but I think those are important. Red Hat's extension to
the init scripts is faulty. I wish I knew more about bash debugging,
and about the difference between a failure due to bad rules and a
failure due to an ipchains module that won't load (apparently Red Hat
doesn't understand this either).
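
If anyone with more bash experience wants to dig in, I'd guess the
place to start is reproducing the module conflict by hand and then
tracing the script (module names are my assumption; I haven't verified
the exact set on a stock 7.2 kernel):

    # Free the netfilter hooks from ipchains, then claim them with
    # iptables so the ipchains module can no longer load.
    /etc/rc.d/init.d/ipchains stop
    modprobe -r ipchains 2>/dev/null
    modprobe ip_tables

    # Trace the init script to find the unchecked return value.
    bash -x /etc/rc.d/init.d/ipchains start 2>&1 | less

    # Compare against what the kernel itself reports.
    ipchains -L -n; echo "ipchains exit status: $?"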

D. Stimits, stimits at idcomm.com

> 
> D. Stimits wrote:
> 
> > Ed Hill wrote:
> >
> >>D. Stimits wrote:
> >>
> >>
> >>>They seem to be refusing to consider this unless I run a RH kernel,
> >>>which I cannot do since I have XFS. However, other people have also
> >>>looked at this, and the scripts do not properly check return values.
> >>>Since I'm not much of a bash programmer, and since I can't prove this is
> >>>the same thing under any kernel without reformatting my root partition,
> >>>I'm not going to tell this guy to check return values. It irks me when I
> >>>suggest someone check return values for error, and they give me a reply
> >>>to the effect that "we don't need to check return values, you must've
> >>>done something wrong". This was not a kernel issue, and has nothing to
> >>>do with redhat versus non-redhat kernels, but it will successfully bury
> >>>the issue. Oh well, end of rant.
> >>>
> >>>
> >>Ok, I have some bash experience and, if you describe the problem
> >>to me in greater detail (i *think* i understand at this point,
> >>but i wanna be sure) I'll try to whip up a patch tonight.
> >>Basically, you say that the ipchains/iptables startup scripts
> >>will bomb if the *other* kernel module is already loaded?
> >>
> >>Please email me off-list with the commands that you used and
> >>any other helpful info you can provide...
> >>
> >
> > Normally one can work with the following commands, which are similar to
> > what init does automatically during bootup:
> > /etc/rc.d/init.d/ipchains stop
> > /etc/rc.d/init.d/ipchains start
> > /etc/rc.d/init.d/ipchains status
> > /etc/rc.d/init.d/ipchains restart
> >
> > And during bootup, if ipchains is scheduled to be started, it should
> > print a message with a green "[OK]" or a red "[FAILED]", depending on
> > whether the service succeeded or not. If ipchains fails due to bad rules
> > in the config files, it correctly indicates with a red failed message.
> > On the other hand, if it fails because iptables was loaded, or because
> > no support was programmed into the kernel for ipchains, it does not
> > display any message at all. No note of failure. I found one has to
> > manually run the real "ipchains -L -n" to see if it runs or not; the
> > script's "OK" and "FAIL" are not reliable under conditions where the
> > kernel does not support ipchains.
> >
> > So as a basic test, one could manually unload ipchains modules, then
> > load iptables, to mutually exclude ipchains loading. You'll see an error
> > message if you then run "ipchains -L -n". If you instead do
> > "/etc/rc.d/init.d/ipchains start", I believe you will find no red FAIL
> > message. About a month or two back, someone else from BLUG said he
> > looked closely at the scripts, and determined that a return value was
> > not being checked at a location that was critical to this. The goal
> > would be to have a red "FAIL" message even if it fails due to iptables
> > module denying the load of ipchains module. I know a bit of bash
> > scripting, but this is beyond my current expertise to debug.
> >
> > The way I discovered it was when someone generated login attempts on
> > various vulnerable ports that had been denied by ipchains rules. I
> > discovered that I'd been having similar login attempts that should not
> > have occurred, so I checked my ipchains. This is when I discovered the
> > problem; I had to go through a lot of effort to be certain the system
> > had not been rooted. None of this would have happened if the iptables
> > module had not loaded when I didn't want it to; or if the failure had
> > been noted during bootup messages.
> >
> > D. Stimits, stimits at idcomm.com
> 
> --
> Edward H. Hill III, PhD
> Post-Doctoral Researcher    |  Emails:   <ed at eh3.com>, <ehill at mines.edu>
> Division of ESE             |  URL:      http://www.eh3.com
> Colorado School of Mines    |  Phone:    303-273-3483
> Golden, CO  80401           |  Fax:      303-273-3311
> 
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug


