[lug] Steganography (was: newbie question - rc.sysinit)

Chris Riddoch socket at peakpeak.com
Sat Jul 14 00:09:14 MDT 2001


"D. Stimits" <stimits at idcomm.com> writes:
> > Chris Riddoch wrote:...
> > ...But the problem is, you've got to have the code that
> > will perform the steganography somewhere in the kernel. That could be
> > analyzed to see what exactly the kernel does in order to get the
> > signature.
> 
> Getting the signature gathering code, if it could be altered in place in
> memory, would defeat it unfortunately.

That's the whole problem with the idea of these signatures. It doesn't
*matter* where or how the signature and key have been hidden,
because the kernel *will* need to access it.  If the kernel can access
it, so can anything else that's watching what the kernel is doing.

> The main concern I'd have is getting the private signature. If you
> manage some means to defeat actual overwrite of the kernel on the
> drive and the boot sector, then any crack would be successful only
> until the next reboot.

Not necessarily. If you manage to track down what the kernel is doing
when it loads the module, through each and every step, then the hacker
grabs the signature (which has been revealed by examining the running
kernel) and simply signs her module with the same key you took such
pains to protect.  Puts it in the appropriate place, changes some
files, and now the module looks just as secure as all the rest because
it was signed with the very same signature.  No reboots necessary.

> Interesting to me is that there has been recent kernel devel list
> talk about what would be required to install new kernels without
> rebooting...nobody really wants to go through the pain of making
> that possible, so I doubt it would ever happen, but it would make
> for interesting security problems.

The running kernel doesn't need to be changed. It just needs to be
examined, so the hacker can find the key and signature.

> ...like most security, [read-only memory] is one in a series of
> difficulties to overcome, and no one measure is ever enough to be
> guaranteed. Having to first find, and then alter, code that is
> buried in memory, is a significant barrier to script kiddies (and
> with steganography, the location of burial would be different from
> one kernel to the next, possibly even from one reboot to the next).

Well, there are things that could help with that, actually.

If the functions that find and extract the key from wherever it is
in your system are known, then you first look up those functions in the
running kernel's System.map to get a pointer to the code that finds
the key.  In fact, if you'd like to make this steganography a feature
of the mainstream kernel, chances are that everybody would be using
the *same code* to perform the steganography.  The only difference
between one kernel and another, when running, would be the parameters
that are given to the code that finds the key...

Unless you're planning on writing different code to hide it for each
kernel build, or even self-modifying code that would be different in
each kernel build.  If you do the same thing twice, it's not really
steganography, and people can find the code that does it.

And even in that case, the hacker breaks out one of those little
hardware emulators like Bochs or VMware and runs it on the
self-modifying code to see what happens.  Or, if you've got time, sit
around and watch the appropriate areas of memory until the kernel
loads a module.  It'll reveal its true form just as it gets the key.

All you need to know is what the steganography code does when it is
run, to get the key.

But God help you if you're planning to write kernel-level
self-modifying code. ;)

> > Well, the more security you have, the more you encroach on usability.
> > The kind of security we're talking about here would rather severely
> > damage the sort of usability you'd expect on a desktop.
> 
> It is worse than that if you consider that without modules, even the
> services that exist as compiled in, would make the kernel so huge that
> you then get boot difficulties from sheer size.

Well, are you going to be building sound support into a machine that
needs to be this secure?  I'd expect not.  If the hacker has got root,
there's a problem anyway.

> To flash the bios, you have to sit at the console. You can't do it with
> root privilege from a remote login. Very very few of all cracked
> machines (that I've heard about) in the news (or just in security group
> postings) involve someone physically entering the room and flashing your
> bios (which is no longer a software issue).

Don't be so sure that physical access is necessary. Most BIOSes are
software-updatable these days. Mine is, and I bet yours is, too. No
hardware twiddling. A quick Google search found this:

http://www.firmware.com/support/bios/flashchp.htm
   "Flash chips are a relatively new type of non-volatile memory chip
   that can be erased and reprogrammed without being removed from the
   system they are used in....This allows motherboard manufacturers to
   make BIOS updates available electronically (such as through FTP
   sites...)  instead of requiring chips to be removed and replaced."

> > 1) Take firewalling, traffic analysis, security updates, and kernel
> > updates seriously.
> 
> This by itself will stop most every script kiddie in the world. Probably
> even discourage quite a few moderately capable crackers.

And is probably the best solution, for most cases.

> > 3) Unplug yourself from the network.
> 
> Not practical for many situations. It is another case of security
> through crippling capabilities (in some strange way, it is a denial of
> service).

Well, a lot of basic security techniques are.  Turning off any servers
you know you don't need could be seen as "crippling capabilities".
For a long time (and it might still be the case, though I think it's
changed), a default RedHat installation had quite a few things open to
the world.  Telnet, even, at one point.

I suppose it's a crippling capability to not let myself telnet into my
machines, but I don't know of any machines I use that don't have ssh.

And if I were in some kind of top-secret lab, I really wouldn't blame
the admins if the Important Computers weren't hooked up to the 'net.

> Hmm. That makes me wonder. If a stealth module can hide itself, why not
> make an intentional stealth module that does the pgp signature stuff?
> Make the kernel lie about its presence, turn the stealth module against
> the crackers ("fight fire with fire").

Actually, *that* makes me wonder.  If a stealth module is somewhere in
kernel space, the sysadmin could run a program as root, investigate
the kernel memory, and check for discrepancies between what the
kernel is doing and what it reports it's doing.

I argue, again: *Nothing*, white hat or black hat, can hide from a raw
scan of memory.

> > The best way to hide it would be to send a request to a machine whose
> > only purpose is to check if the kernel is allowed to load the
> > module. It could even *copy* the module to the other computer so it
> > could check the actual module, which would then send a message back
> > granting or denying the ability.  The whole exchange would have to be
> > done in some way to guarantee that each computer is, and is doing,
> > what it claims.
> 
> That seems overly complicated to implement without a lot of holes.

Maybe so.  But the principle I'm applying here is this: "Minimize and
distribute damage. If someone has root on one of your machines, they
shouldn't be controlling your world."

Hmmm. And I was just wondering, this afternoon, what on earth the
purpose of DCOM, CORBA, SOAP, et al. is, if all they do is break the
"application" up into pieces that talk to each other over the network.
I think I just answered my question.

> But it does make me wonder about something else. It would be
> interesting if an independent system could be made to run as a sort
> of watchdog, simultaneously, on the same machine, based on a CD rom
> system that can't be overwritten. A ghost in the machine to watch
> over it.

Well, this is the idea behind chroot(), and virtualizing.  But why put
the watchdog on the same machine where it has a chance of being
discovered for what it is?

> > The basic rule is, you can't trust a computer if it's been rooted.
> 
> One of the goals of better module security is to stop the root in the
> first place.

Correct me if I'm wrong, but someone should have to either (1) be root
to run modprobe, or (2) be able to replace a module with another,
which requires root permissions.  This suggests to me that it's
something *else*, like a buffer overflow or an otherwise buggy network
daemon, or in the remote case, a kernel bug in the network stack, that
would let someone get root in the first place.

Or perhaps if Bind were rewritten in a language that doesn't allow
buffer overflows, the number of successful rooting attempts would
halve. ;)

> And if you can't stop it, then make it much harder to hide the root
> kit. Right now there are a *lot* of machines out there which are
> rooted and the owners don't even know it.

And that's what Tripwire and similar tools are for.  Except that
Tripwire is made useless if it's tricked into seeing something it's
not, which would be made possible by this infamous hidden kernel
module.

Hmm. Perhaps if a root-owned program could analyze the system call
trace as it goes into the kernel at the same time as Tripwire is
running to look for discrepancies... And maybe that could be made part
of tripwire...

> > The other thing is, none of this is necessary for handling your
> > average script kiddie.  It's the determined person whose only target
> > is your computer that you need to be worried about.
>
> I see so many attempts on my machine that I have to disagree.

I don't see any determined people trying to attack my router with
anything other than scripts.  (Heh. Of course, that might change, now
that I'm participating in this thread, and now that I've said that.)

> You have to worry about both. The greatest effort would be to stop a
> determined attacker, but if you have an insecure machine, chances
> are you'll be rooted in a day on many domains (take cable modem
> domains for an example there).

*IF* you have an insecure machine.  My router runs the most basic
debian system, sshd, and pppd.  Very easy to keep up to date, and
minimal cracks to allow people in.

> It isn't unusual that within 10 seconds after bringing my modem
> up I see buffer overflow attempts on port 515 (printer vulnerability),
> or some form of test against 53 or 111. And the scripts these days can
> be quite thorough against known vulnerabilities.

Sure. But that just depends on whether your network daemons are
current.  apt-get update, apt-get upgrade.  Or krud2date, depending on
your flavor of choice.

And here's my proposal: have a look at the "capabilities" attribute in
the kernel.  It's rather un-Unix-like, but it does away with the idea
that root can do everything: instead, some programs can be granted
specific root abilities while still letting users run them safely.
The Unix Way is, in principle, not so bad, but the whole idea could be
changed...

User A is allowed to tweak with modules.  No other users are.  User B
is allowed to listen on sockets on ports below 1024.  Your network
daemons run as user B.  Cracker gets in on a buffer overflow, ends up
as user B, and can't tweak with modules because user B can't do that.
Only user A can do that.  All the hacker can do is, well, listen on
sockets.  (And whatever is allowed by all users)

The idea is, you assign more specific labels to the privileges of the
users on the system in order to minimize the scope of access provided
by a breach, and to grant some access to normal users without letting
them take over the server.  At least, that's my understanding of it.
Someone correct me if I'm misrepresenting it.  I haven't actually
taken the Operating Systems class yet.

--
Chris Riddoch         |  epistemological
socket at peakpeak.com   |  humility


