[lug] Steganography (was: newbie question - rc.sysinit)

Chris Riddoch socket at peakpeak.com
Sun Jul 15 02:50:03 MDT 2001


"D. Stimits" <stimits at idcomm.com> writes:
> I was under the impression (maybe falsely) that if the cmos was set to
> require a password, and if it also was set with virus protection against
> boot sector alterations, that it would not be modifiable without local
> access.

Well, the code that checks for both the password and the virus
protection is in the bios software.  A modified version of the bios
software might conveniently lack those features.
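
Just to make that concrete, here's a toy sketch (plain C, invented for
this message, nothing from any real BIOS) of why a check that lives
only in software guarantees nothing: the gate is an ordinary branch,
and a patched copy of the firmware can simply leave it out.

#include <stdio.h>
#include <string.h>

/* Toy illustration only: a password "gate" done in software is just a
 * branch.  A modified firmware image could swap check_password() for
 * one that always returns 1, and the protection is gone. */
static int check_password(const char *entered, const char *stored)
{
    return strcmp(entered, stored) == 0;   /* the entire "protection" */
}

int main(void)
{
    const char *stored = "s3cret";         /* hypothetical stored password */

    if (!check_password("guess", stored)) {
        puts("Access denied");             /* a patched image just skips this */
        return 1;
    }
    puts("Booting...");
    return 0;
}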

> It sounds like even this is easily defeated? In theory the code
> to protect, when virus and pass are set to be required, would also
> require physically cutting the battery backup to the cmos if the pass is
> ever forgotten. If other access is possible, then it seems the bios
> password and virus portions are defective.

Well, there's a trade-off. If you're a company whose main interest is
in selling a lot of bioses, you can afford to spend less development
time on debugging if you know your users can upgrade to a less (or
differently) buggy version without needing a replacement chip to shove
into the board.  Flash-upgradable bioses allow for mediocre bioses, for
one thing... if a bios burned into ROM is broken, you're pretty much
out of luck, but if a flashable bios is broken, you're expected to
accept it as a fact of life and upgrade.  This doesn't apply to bioses
alone, but I suppose I'm showing my cynicism, aren't I?

> Some filesystems do support ACL's (access control lists) for extended
> mandatory access, but that too is part of the kernel...if the kernel
> does not want to honor it, it won't help.

And the distribution matters too. I don't know whether the current
distributions ship the userspace tools for setting such ACLs, or build
the necessary support into their kernels.
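
For what it's worth, the userspace side of ACLs looks roughly like
this; a sketch using the POSIX ACL library (libacl, linked with -lacl),
which I'm assuming is available - I haven't checked which distributions
actually ship it or enable the kernel side.

#include <stdio.h>
#include <sys/types.h>
#include <sys/acl.h>          /* POSIX ACL interface (libacl) */

/* Sketch: read and print a file's access ACL, roughly what a getfacl
 * utility does.  Needs a filesystem mounted with ACL support and a
 * kernel built with it; compile with -lacl. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    acl_t acl = acl_get_file(argv[1], ACL_TYPE_ACCESS);
    if (acl == NULL) {
        perror("acl_get_file");            /* e.g. fs mounted without ACLs */
        return 1;
    }

    char *text = acl_to_text(acl, NULL);   /* human-readable form */
    if (text != NULL) {
        printf("%s", text);
        acl_free(text);
    }
    acl_free(acl);
    return 0;
}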

> It seems that there is a need to arm the kernel with an absolute
> shield section that out of single user mode cannot be modified or
> viewed. Once you have that, the options increase dramatically.

Once you have that, root is no longer root. The other thing is, the
purpose of the kernel is to provide services for userspace - if the
kernel can't be configured at runtime, which is what an "absolute
shield" would mean, you couldn't load modules, couldn't use /proc,
couldn't set firewalling rules, couldn't change the networking
configuration, or use X, or use USB devices... or anything else that
involves system calls... which, now that I think of it, would include
any kind of I/O.  And *that* would make the computer entirely useless.
On the other hand, your userspace programs couldn't interact with the
kernel at all, so you'd have pretty effective security.  About as good
as unplugging from the network.
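
To put the "any kind of I/O" point another way: even the most trivial
output is a request to the kernel.  A minimal sketch (Linux, using the
raw syscall interface, nothing exotic assumed):

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* Even a one-line "hello" is a system call - write(2).  If userspace
 * couldn't talk to the kernel at all, this (and every other form of
 * I/O) would be impossible. */
int main(void)
{
    const char msg[] = "hello via the kernel\n";

    /* The same thing write(1, msg, ...) does, spelled out as a raw
     * system call to make the kernel's involvement explicit. */
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);
    return 0;
}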

> If this couldn't be done, perhaps there would be a way to
> transparently do strong encryption of that block of memory, making
> observation useless.

Again, the same old problem: it has to get decrypted somewhere. When
it is, you're back to the same question I brought up before... where
is the software that does the decryption, and where is the key it
uses - both of which can be found with a debugger.
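
A toy example of what I mean (the "cipher" here is just XOR, purely
for illustration): whatever scheme you pick, at some point the key and
the decrypted bytes sit in ordinary process memory, where anyone who
can attach a debugger can read them.

#include <stdio.h>
#include <string.h>

/* Toy "encryption" (XOR), only to illustrate the point: while the
 * program runs, the key and the plaintext both live in plain memory,
 * so a debugger attached to the process can recover them. */
static void xor_buf(unsigned char *buf, size_t len, unsigned char key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;
}

int main(void)
{
    unsigned char key = 0x5a;                  /* the key, in memory */
    unsigned char data[] = "attack at dawn";   /* the plaintext, in memory */
    size_t len = strlen((char *)data);

    xor_buf(data, len, key);                   /* "encrypted" */
    xor_buf(data, len, key);                   /* decrypted again to use it */

    printf("%s\n", data);                      /* observable plaintext */
    return 0;
}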

> With encrypted memory, there could still be problems with someone
> modifying it;

Or corrupting, or decrypting.

> it would be interesting if cpu's came with hardware support of
> mandatory memory encryption for specified blocks of ram.

That strikes me, intuitively, as a difficult proposition.  I can't
really explain why just now, since I'd like to sit back and dig through
a book I have on the kernel and my old notes on assembly language, but
I'm quite sure it would be much more complicated than doing it in
software.

The problems might be:

Where is the key and how does the CPU determine whether a given
piece of code should be able to access it?

Can the encrypted area of memory be read or written to without the use
of the encryption access functions (is it in main memory where
programs or hardware devices could access and examine it)?

Is the algorithm hard-wired into the CPU?  If so, how could the
algorithm be changed, if flaws are discovered or a different algorithm
is needed?

How would you handle sharing keys between processors on a
multi-processor system?

The plaintext has to sit somewhere before it's encrypted and after it's
decrypted in order to be used... how would you prevent access to it
during those windows?

How much more circuitry would be added to a CPU in order to facilitate
this?  How would the demands of encryption affect pipelining and
performance?

How would you prevent access to encrypted data that's been swapped out
to disk?
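
On that last point, the closest thing you can do in software today is
to pin sensitive buffers in RAM so the kernel never writes them to the
swap device; a sketch using mlock(2) (it may need appropriate
privileges or a sufficient memlock limit, and it does nothing about
someone reading the live process's memory):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Partial mitigation for the swap problem: mlock(2) keeps the pages
 * holding `key` in physical RAM, so they are never swapped to disk. */
int main(void)
{
    unsigned char key[32];

    if (mlock(key, sizeof(key)) != 0) {        /* may fail without privileges */
        perror("mlock");
        return 1;
    }

    /* ... fill key[] with secret material and use it ... */

    memset(key, 0, sizeof(key));               /* wipe before unlocking */
    munlock(key, sizeof(key));
    return 0;
}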

I'm sure there are more problems, but these are things I'd want to know
about any plan for including encryption at the CPU level.

Hmmm. Maybe I should get some sleep tonight.

--
Chris Riddoch         |  epistemological  | Send spam to: uce at ftc.gov
socket at peakpeak.com   |  humility         | 


