[lug] Steganography (was: newbie question - rc.sysinit)

D. Stimits stimits at idcomm.com
Sun Jul 15 23:23:57 MDT 2001


Ok, this is big and long; ignore it now if you don't like technical and
subjective fiction. I'll try to avoid further long responses.

Chris Riddoch wrote:
> 
> "D. Stimits" <stimits at idcomm.com> writes:
> > I was suggesting that a modified version should not be possible to
> > install remotely, if virus and password protection are enabled.
> 
> That'd be a good idea, but how should the bios know whether the user
> is sitting in front of the computer?  The bios can realistically only
> know that it is being changed - Bios manufacturers usually give .EXE
> files on their websites, meaning you usually have to have Windows
> installed to update the bios, but linux software can do it too.
> Someone could just as easily break into a Windows machine and run
> bios-updating software.

The normal procedure if one forgets the password is to cut the battery
power, which clears the cmos settings. I was under the impression,
though, that update floppies had to be in the floppy drive *during*
bootup, like a boot disk. I had assumed that an update would fail if run
from within windows or any other running program. If that is not the
case, I can't understand why a wide range of far more destructive virus
payloads don't exist. In fact, I thought the main reason password and
boot sector virus protection were introduced to the bios was for just
that very reason. How many bioses can be updated while windows or
another o/s is running, as opposed to from a floppy that takes over
during boot?

> 
> The bios can't really even "know" that it's being updated - it's just
> a spot of flash memory that gets executed, much less make any attempt
> to determine the validity of the change.

But the bios can know about the boot sequence, and make some parts of
itself visible only during hardware detection. My assumption is that
there is something different about the POST and subsequent bios
bootstrap. Certainly cutting power from the battery backup is something
that requires physical presence. So I don't think it is a question of
the bios "knowing" that it is being updated, so much as requiring that
whatever triggers a flash update can only happen during the bootstrap,
and is no longer possible once bootstrap is done.

> 
> > But if the bios does not honor this protection, except from the
> > normal "hit DEL key during boot" sort of access, then it is a big
> > problem.
> 
> Well, the idea of this protection is to keep people from rebooting
> computers that they're standing in front of and twiddling with the
> BIOS settings.  It's not perfect - if you're not in front of the
> computer, you replace the bios.  If you are, you open it up and yank
> the jumper that resets the bios to its original configuration.
> 
> > My question is more like this: If a modern bios has password and
> > virus protection engaged, must the o/s also provide password to
> > alter these things?
> 
> Through what interface?

Through any interface capable of writing to the bios code areas. If I'd
designed the bios, I'd disable updates right in the chips that hold the
bios whenever a pass is set. The bios should act as a firewall against
incoming changes to its own code, and refuse them if the pass is set and
no pass has been accepted. It does make me wonder. I may write to some
of the motherboard manufacturers and ask them this question.

> 
> > Or does the bios only enforce this when entered via the "hit DEL key
> > during boot" phase?
> 
> That's my understanding, yes.

That sucks :(

> 
> > It seems logical that a bios should be able to block updates from
> > purely software means, but then again, manufacturers often don't
> > care about the logic of those situations, only the cost and quick
> > shipping.
> 
> The convenience of being able to update the bios through software does
> seem to overrule the security you'd get from the alternative.  It
> would be great if my 486 router could boot off a CD, if it had a
> CD-drive, but its bios can't be updated by software and doesn't
> support bootable CDs anyway.  Sucks to be my router.
> 
> > My direction is towards whether a normal bios can password protect
> > against the o/s.
> 
> That would be hard to do, considering how it works:
> 
> From http://www.wimsbios.com/HTML1/faq.html#q32:
> 
>    When you turn on your computer, several events occur automatically:
> 
>    1. The CPU "wakes up" (has power) and reads the x86 code in the BIOS chip.
> 
>    2. The code in the BIOS chip runs a series of tests, called POST
>    for Power On Self Test ...[skipping details of POST]...
> 
>    3. During POST, the BIOS compares the system configuration data
>    obtained from POST with the system information stored on a CMOS
>    memory chip located on the motherboard. (This CMOS chip, which is
>    updated whenever new system components are added, contains the
>    latest information about system components)
> 
> If you're asking for the CPU to disallow modification of the BIOS,
> that's a change in the CPU, not a change in the BIOS.  This FAQ makes
> it sound like the BIOS makes use of the CPU to execute its code,
> rather than being a self-contained co-processor like I assumed it was.
> I guess I need to re-think my understanding of BIOS interrupt
> routines as well, then...

I suspect it takes a good understanding of each chipset too, not just
the bios software, firmware, or hardware.

> 
> If somebody has resources on this, (which would also enlighten us to
> certain aspects of the kernel) please provide links.  Google isn't
> helping me much, because I don't know what exactly to look for.

I suspect each motherboard behaves differently.

> 
> > If that is the case, then the bios could not be altered without
> > either flashing it, or giving the pass.
> 
> This suggests that the BIOS would be treated as though some function
> inside it should be called before the instruction to overwrite any
> area of the BIOS is executed, and the return value of the function
> would specify that it is or isn't allowed.  Problem is, the return
> value would either be left in a register of the CPU, like most
> instructions do (but could be changed by a simple "MOV 1, AX" or
> wherever), or in a particular location of the BIOS memory (chicken and
> egg problem), or someplace else (where else could be protected)?

I would assume that if the cpu ignores the error and tries to write
anyway, the write itself would return failure. Thus no chicken/egg
dilemma. If the password is set, I would think the natural point at
which writes become blocked is when the MBR of a non-floppy drive is
found, or else when control is otherwise transferred to a partition. Or
something roughly along those lines.

> 
> Perhaps there could be a register in the CPU that could only be
> modified by the instruction that verifies the password, but that
> brings up a whole mess of other problems.

Can't it just act as a hard-wired device on a bus, so that unless it is
in a writeable condition, it cannot be written to? It's odd how far a
simple question can go about whether a bios password really protects
anything...the whole issue is rather complicated.

> 
> How *would* it authenticate, anyway?

I would assume it just has a register like any other bus device,
possibly a serial port. There would be a handshake scenario: ask the
bios for write access; the bios checks for a pass in the proper location
and returns a "no pass entered" failure value; the program then places a
pass in the right register or serial port and asks again, at which point
the bios returns either success or invalid pass. If it returns success,
some sort of write interface becomes active and the write can be
asserted. Somewhere in the cmos is a simple area of ordinary ram
(battery backed) which contains a plain text copy of the acceptable
pass.
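
Purely as an illustration of the handshake I'm imagining, here is a
small C++ sketch. Every name, status value, and "register" in it is
invented; no real bios or flash controller exposes an interface like
this.

// Hypothetical sketch of the ask/deny/present-pass/ask-again handshake.
#include <iostream>
#include <string>
#include <utility>

enum class Status { NoPassEntered, InvalidPass, WriteEnabled };

class FlashGate {
public:
    explicit FlashGate(std::string storedPass) : stored_(std::move(storedPass)) {}

    // Step 1: ask for write access with no credentials presented.
    Status requestWrite() const {
        return stored_.empty() ? Status::WriteEnabled : Status::NoPassEntered;
    }

    // Step 2: place a pass in the "register" and ask again.
    Status requestWrite(const std::string& pass) {
        if (stored_.empty() || pass == stored_) {
            writeEnabled_ = true;
            return Status::WriteEnabled;
        }
        return Status::InvalidPass;
    }

    // The write itself also fails unless the handshake succeeded,
    // so there is no chicken/egg problem.
    bool write(const std::string& newImage) {
        if (!writeEnabled_) return false;
        image_ = newImage;
        return true;
    }

private:
    std::string stored_;       // plain text pass in battery-backed ram, per the guess above
    std::string image_;        // stands in for the flash contents
    bool writeEnabled_ = false;
};

int main() {
    FlashGate gate("s3cret");
    std::cout << int(gate.requestWrite()) << "\n";         // NoPassEntered
    std::cout << int(gate.requestWrite("wrong")) << "\n";  // InvalidPass
    std::cout << int(gate.requestWrite("s3cret")) << "\n"; // WriteEnabled
    std::cout << gate.write("new bios image") << "\n";     // succeeds (prints 1)
}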

> 
> > It sounds like the password is only mandatory from the "hit DEL
> > key during boot" sort of physical access...that linux, win, or other
> > o/s's, completely bypass the bios protections during normal
> > operations (e.g., without a floppy during boot, instead during
> > regular system run once bootup is complete).
> 
> I'm not convinced that bios protections really exist once the OS is
> loaded - I remember a snippet of a conversation once at BLUG in which
> someone said, "Linux ignores the BIOS for that, the BIOS is usually
> wrong."  Which suggests that the BIOS doesn't actually have much
> control over operating systems, in practice.

Aha! But the trick here is that you are bypassing the bios to access
non-bios hardware, like the hard drive. I doubt that it is possible to
bypass the bios to write to the bios itself. This is the big question I
am wondering about: can the bios deny access to itself under some
circumstances, presumably after some sort of password code is activated
inside of it?

> 
> > From what has been said, it should be easy for root to completely
> > wipe out the bios, even if password protection is set, since the
> > kernel can bypass and alter bios memory at will (previously I had
> > assumed that the bios could block updates via password, other than
> > when power is cycled during physical access).
> 
> Yes, that's my understanding.  I suspect it isn't limited to
> kernel-level code, even.

This would be quite worrisome to me...I already do not open email from
windows, and I firewall it. But the ability to go in and modify the bios
itself is the ability to destroy hard drives and other parts of the
system. Until now I was fairly certain that setting a bios pass and
virus protection would stop both bios modification and MBR modification.
I know from accidental testing that running lilo will fail on some
systems when the password is left on. The question then becomes, "how
easy is it to ignore the error and write to the MBR anyway?" (at least
for the boot sector virus protection portion, which can be made active
without requiring a pass).

> 
> > > > Some filesystems do support ACL's (access control lists) for extended
> > > > mandatory access, but that too is part of the kernel...if the kernel
> > > > does not want to honor it, it won't help.
> > >
> > > And the distribution. I don't know if the current distributions
> > > include the userspace software for setting such configurations, or
> > > have capabilities built into their kernels.
> >
> > ACL's are great for fine-grain control of security on a non-compromised
> > system. They also provide a way to further refine just who is given
> > access, which is a kind of security in itself, but I consider it totally
> > inadequate once the kernel itself has been replaced or altered.
> 
> Naturally.  Again, if they manage to get root, they can do anything
> they want, and nothing can really be trusted.

Strong ACL support, such as in the NSA linux version mentioned earlier,
can be close to a guarantee *if and only if* the kernel is not modified.
So I suppose combining ACL with kernel protection could be quite
effective. I think if I ever put up a gateway or router, separate from
the rest of my setup, I'd install the NSA modified linux, probably with
XFS filesystem (it has strong support for ACL's).

> 
> > It seems I inadequately described "absolute shield". I did not mean
> > to say the whole kernel, nor all of memory. What I did imply is that
> > it would be nice if, during single user stage, various portions of
> > memory could be altered to be readable and modifiable ONLY from the
> > code that runs and resides within that memory...
> 
> If the *memory* is set to be unreadable, the CPU can't get to it to
> run it.  Linux permissions don't really line up with the permissions
> of the hardware - in order to execute code, the CPU has to be able to
> get to it.  If your program is set executable but not readable, that
> just prevents other programs from examining its contents on a
> filesystem level, it doesn't prevent the kernel from loading it.

This is the big point...it would require a modified cpu. The memory
would be unreadable *except* by code that runs from within the memory
itself; that code would have access to its own internal state. One could
specify what interfaces are available for function calls into the
protected memory. You could modify and read the memory, but *only* to
the extent that public interfaces are made available. If you had one
function to compare a presented password to a stored pass, it could
return true or false, but the process and data within the memory would
not be visible to anything outside. There are cpus now that almost
support this...cpus that allow multiple concurrent threads within
themselves...a single cpu that acts like SMP. It would require a further
modification to allow private interfaces to portions of memory. It would
also require private registers and so on...everything a minimal cpu
environment needs, in a separate, segregated area. This could be a
logical extension of current cpu tech if security really gets to be a
big enough issue. Intel is already talking about simultaneous
multithreading (multiple processes with concurrent execution in a single
cpu) for their next generation of cpu, while Alphas have been doing it
for quite a while.
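
A rough software analogy of what I mean, in C++ (the class is only a
stand-in; the point is that the hardware, not the compiler, would
enforce the "private" part):

// One public interface: compare a presented pass to the stored pass.
// Nothing else inside the protected region is visible from outside.
#include <iostream>
#include <string>

class ProtectedRegion {
public:
    explicit ProtectedRegion(std::string pass) : storedPass_(pass) {}

    // Callers get a true/false answer and nothing more.
    bool checkPassword(const std::string& presented) const {
        return presented == storedPass_;
    }

private:
    std::string storedPass_;   // never readable from outside the region
};

int main() {
    ProtectedRegion region("opensesame");
    std::cout << region.checkPassword("guess") << "\n";       // 0
    std::cout << region.checkPassword("opensesame") << "\n";  // 1
}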

> 
> Besides if your list of installed modules is what you want to protect,
> (to prevent against modules modifying kernel memory and placing
> themselves in the kernel, basically doing the work of insmod itself)
> you'd have to allow the memory to be modifiable when a new module is
> being *legitimately* loaded.  At which point, the rogue
> module-installer could take that moment to install itself in the
> legitimate module's place, by interrupting the execution of the
> module-loading code.

It just seems that there needs to be a more general mechanism for
internal protection. Once upon a time, there were people who suggested
that good programming should not need protected memory. But complexity
makes it a good idea. Since security has become such a vicious problem,
it seems like it is time for hardware to help with the security tools.
In this case, the idea is that the memory is only modifiable by itself,
that it is 100% private from the rest of the system, except for
interfaces that are registered during bootup...once you are satisfied
that your security is in place, you assert a pin to activate it, and
from then on there is no more access to it except through the
pre-allocated functions (which are themselves hidden from view for
internal purposes).
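
As a sketch of that bootup lifecycle (purely hypothetical, software
only): interfaces are registered while the system is still trusted, then
the "assert" seals the set for good:

// Register interfaces during "bootup", then seal; after sealing, the
// only way in is to call a pre-registered function.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <utility>

class SealedRegistry {
public:
    using Fn = std::function<int(int)>;

    bool registerInterface(const std::string& name, Fn fn) {
        if (sealed_) return false;            // too late once the pin is asserted
        table_[name] = std::move(fn);
        return true;
    }

    void assertPrivacyPin() { sealed_ = true; }   // one-way until "reboot"

    int call(const std::string& name, int arg) const {
        auto it = table_.find(name);
        return it == table_.end() ? -1 : it->second(arg);
    }

private:
    std::map<std::string, Fn> table_;
    bool sealed_ = false;
};

int main() {
    SealedRegistry box;
    box.registerInterface("double", [](int x) { return 2 * x; });
    box.assertPrivacyPin();
    std::cout << box.call("double", 21) << "\n";   // 42: registered before the assert
    std::cout << box.registerInterface("evil", [](int) { return 0; }) << "\n";  // 0: refused
}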

In terms of module signing, the big argument is that one can bypass the
code if it can be read and analyzed. I'm suggesting hardware enforced
hiding of this code; it would be possible to see what data is passed to
it, and what it returns, but there would be no possibility of seeing how
it did what it does. Nor would there be a possibility of overwriting it.

> 
> And if you don't want the memory to be modifiable *ever* after the
> system leaves single-user mode, you might as well not enable loadable
> modules, and modify the existing pieces of the kernel that currently
> can only be installed as modules to allow them to be built in.

This would not stop loadable modules from being loaded. Only designated
portions of memory become unmodifiable. I'm suggesting that the portion
doing the disk driving and signature checking would be protected. This
means you can still load modules if the private code does the job...some
other software would have to request the load, through a published
interface. The location it loads into could be elsewhere, or it too
could be in a private area; but only the private code could load
anything within a private area. To say it is an absolute shield means
only an absolute shield against direct outside manipulation...code that
is private could still manipulate its own private content.

> 
> > In a way I guess you could say it is inspired by Alpha cpu's and
> > other cpu's that can run simultaneous threads directly in the cpu,
> > almost like a single chip SMP system.
> 
> This seems rather off-topic from the subject of securing one's system,
> but pipelining and branch prediction does this to some degree in a lot
> of architectures - including x86.

I suppose it has a lot in common with that, as a kind of dedicated
sub-process. Simultaneous, concurrent multithreading does exist now for
some cpus, where the various threads are independent. I guess there is
an extreme advantage in cache efficiency when done this way.

> 
> > Now if an area like this was the hard disk driver, and requests
> > could be sent to it, or if the area monitored an outside request
> > pool, you could still feed it garbage, but it would have the right
> > to refuse. Further, any attempt to write to it directly should be
> > 100% denied.
> 
> > This is what I mean by an absolute shield on some small subset of
> > ram. Modules and all else would work just fine. Module loader code
> > could be placed in such an area as well, so it could hide what it
> > does...no strace or other means could view its internals, only its
> > talking to the outside world could be snooped.
> 
> Now this is where you'll run into some other problems. The CPU can
> always send to devices directly - it's up to what's running on the CPU
> to decide whether it should or not.  And on the operating system
> level, keep in mind that the OS doesn't *prevent* programs from
> accessing hardware directly.  It just prevents programs run by
> non-root users: have a look at the definition and comments for
> sys_iopl() in arch/i386/kernel/ioport.c, for an example.

It would require modification to the cpu to do it. Basically it would be
an extension of the protected memory schemes that already exist. But it
would give the private process, at the time it boots, the right to hog
access to memory (and probably an irq); only the code in the private
area could give up its right to that irq or memory; anything else trying
to use that memory or device would generate an exception. It seems like
a logical extension to current hardware if security continues to be such
a problem (script kiddies, and the need for security, do not seem to be
fading issues that purely software means can solve).

> 
> If you prevent it from root, you block off any way of accessing
> hardware the kernel doesn't specifically know about.  It *is* possible
> to have userspace hardware drivers.

If you mark the memory related to the hardware as private, you'd better
have all the access functions you need before you assert the "go
private" pin. But if you have a complete interface for full hardware
access (even with sanity or security checking added), then this would
not be a loss. It is analogous to hardware support for public/private
class member data and methods.

> 
> > Yes, it would be just about impossible to do :(
> 
> If the design of such a beast wouldn't prevent the system from
> functioning in the first place, yeah.

The trick is to make complete and functional public interfaces to any
private memory before you assert the "privacy" pin. Once "privacy" is
asserted, *nothing* would have the ability to reach in from outside that
memory region and modify it, short of passing parameters to declared
public functions. What those functions do might itself modify the
memory, but the assumption is that bounds checking and other forms of
sanity checks (in this case, pgp signatures) would also be run,
invisible to prying eyes.
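
A toy sketch of that kind of public interface (a trivial checksum
stands in for a real pgp signature check, and every name here is
invented for illustration):

// The caller can request a load and see success or failure, but the
// verification key and the checks themselves stay inside the box.
#include <iostream>
#include <numeric>
#include <vector>

class ModuleGate {
public:
    explicit ModuleGate(unsigned secret) : secret_(secret) {}

    // Public interface: request a module load.
    bool loadModule(const std::vector<unsigned char>& image, unsigned tag) {
        if (!verify(image, tag)) return false;   // hidden sanity/signature check
        loaded_.push_back(image);
        return true;
    }

private:
    // Private: how verification works is never visible to the caller.
    bool verify(const std::vector<unsigned char>& image, unsigned tag) const {
        unsigned sum = std::accumulate(image.begin(), image.end(), secret_);
        return sum == tag;
    }

    unsigned secret_;                                 // never leaves the box
    std::vector<std::vector<unsigned char>> loaded_;
};

int main() {
    ModuleGate gate(0x5eed);
    std::vector<unsigned char> mod = {1, 2, 3};
    std::cout << gate.loadModule(mod, 0x5eed + 6) << "\n";  // 1: accepted
    std::cout << gate.loadModule(mod, 0) << "\n";           // 0: rejected
}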

> 
> > > Again, the same old problem: it has to get decrypted somewhere. When
> > > it is, you're back to the same question I brought up before... where's
> > > the software that does the decryption, and where's the key stored to
> > > do it - both of which can be determined with a debugger.
> >
> > That is another reason why there needs to be a truly "private" memory,
> > that can provide function interfaces, but cannot be snooped or examined
> > by anything once it passes beyond single user mode.
> 
> I don't think private memory is the answer.  The only real security
> seems to be first in preventing someone from getting into the system,
> and then preventing a user from increasing their privileges on the
> system.

This is the efficient way, yes. But it requires complex systems to be
overly clever, and it requires programmers to make very, very few
mistakes. It would be nice if the next generation of cpus had better
hardware support for security; it could go a long way in aiding the
fight against human error and "black box" attitudes (like "I installed
the o/s, shouldn't it be secure without me having to set up a
firewall?").

> 
> > > > it would be interesting if cpu's came with hardware support of
> > > > mandatory memory encryption for specified blocks of ram.
> > >
> > > That seems a rather intuitively difficult proposition.  I can't really
> > > explain why just now, since I'd like to sit back and dig through a
> > > book I have on the kernel and my old notes on assembly language, but
> > > I'm quite sure that this would be much more complicated than doing it
> > > in software.
> >
> > Not really, even ethernet cards can be purchased with hardware for real
> > time encryption. The eepro100 has an "S" version, as one example. Plus
> > several of the wireless modem or network cards now have some form of
> > encryption that is hardware generated.
> 
> That's slightly different. The encryption is self-contained in the
> processing on the ethernet card.  Data comes in from the network, is
> cached in a little memory on the card, is decrypted by the processing
> work on the card, and then is put in the buffer for sending to the
> system.  Card throws an interrupt, and the CPU grabs the *already
> decrypted* chunk of data and puts it in main memory, on a network
> queue.

Why not add that little ability to the cpu? I'm thinking of a general
mechanism to do just this, in hardware. Keeping it private from
software.

> 
> In this case, the encryption is on the ethernet card, not in the main
> CPU.  Ethernet cards aren't as complicated as CPUs, and can afford to
> do a little more logic.  Adding encryption instructions to a CPU's
> instruction set is downright excessive - it would be the combination
> of a lot of other instructions anyway, couldn't be updated, and don't
> make sense for general-purpose computing.

Sure it would work, and if you want to update it, just plug in a
different rom chip. It is no more unreasonable than adding MMX or SIMD.
With the hardware random number generation of the PIII, part of the job
is already done.

> 
> Off-topic, though, if you're really interested in hardware-level
> encryption, though, a system like Deep Crack
> (http://www.eff.org/descracker) has a rather different design.

It's always an interesting topic (at least for me, I'm weird, I played
3D chess as a child).

> 
> > It's already basic tech, just not
> > added to a cpu that I know of. I'd like to see it available hard wired
> > to x86. It could be provided its algo via flash memory, or even a
> > separate rom plugin chip. Physical access could defeat it, but I am only
> > talking about protection against people who have remote access.
> 
> Again, it doesn't *matter* where the code came from, when it's being
> run, and there's no way for the CPU to know.  If it's the proper
> instructions, the CPU can run it.  It's not like the instructions are
> any different by the time they get to the hardware.  You'd have the
> same luck trying to ask the CPU to figure out what programming
> language the instructions were produced from, *after* it's been
> compiled.

I'm talking about new hardware that *is* different...something like a
separate AGP bus, but for isolation purposes, and *not* using the core
cpu facilities...it would be pseudo-independent of the normal
functionality, not touching the regular facilities in any way that lets
the basic cpu see inside, once a pin is asserted. After that pin is
asserted, the bus for writing and reading of private areas would be cut
off. Only a query/result bus would remain connected, and that particular
bus would behave something like the Crusoe chip, downloadable during
boot. Hmm. Makes me think how much fun it could be to play with one of
those chips...I wonder if the "plastic" nature of the Crusoe is flexible
enough to do some sort of proof-of-concept?

> 
> > > The problems might be:
> > >
> > > Where is the key and how does the CPU determine whether a given
> > > piece of code should be able to access it?
> >
> > In ROM. Direct physical connection, similar to an encrypted ethernet
> > card (existing tech rearranged).
> 
> See above for description of ethernet cards implementing encryption.
> 
> I should clarify: How should the CPU distinguish between legitimate
> kernel code and other code?  The CPU doesn't *care*, and can't tell
> the difference by the time it gets in the instruction stream.
> Non-root users can't write to areas of memory owned by root or the
> kernel, and that's the basis for permissions.  You can't expect the
> kernel to prevent *itself* from reading or writing anywhere in memory.

This is a separate, simultaneous thread, in isolated hardware. It
doesn't matter what the CPU thinks or distinguishes, because it isn't
the main cpu thread that matters. What matters is a simultaneous, second
thread, in physically isolated hardware, that has cut its own bus.

> 
> > > Can the encrypted area of memory be read or written to without the use
> > > of the encryption access functions (is it in main memory where
> > > programs or hardware devices could access and examine it)?
> >
> > With current cpus, I doubt it could be forced to protect the area.
> 
> It can't.
> 
> > Integration of an encryption unit (think of EU like FPU) could be hard
> > wired such that once enabled, the area *cannot* be taken out of
> > encryption mode, nor could the algo it runs under be viewed by any means
> > (without reboot).
> 
> Hard wired.  You mean, the CPU can't access that memory at all,
> without going through the EU?  That means the EU has to be the
> intermediary of all memory access.  There'll certainly be some
> overhead on that, and it would throw a huge wrench into caching
> mechanisms.

Correct. At least not once the security pin has been asserted. It would
retract its own bus, other than for published function interfaces. What
goes on inside would truly be a black box. But not for all memory, only
for memory that is designated, similar to memory mapping or even malloc.
Any process that must go through this would be bottlenecked to the
extent that it has to wait for the black box...but this is something
that could be refined and made faster, just as MMX, SIMD, and caching
have changed over the years. Creating it in hardware would do wonders
for lowering the overhead (something like the NVidia graphics chips;
specialization can lead to huge speed gains...in the areas these video
chips are designed for, there isn't a cpu in the world that can come
close to their performance, at any price...but they don't do everything
under the sun).

As far as caching goes, it wouldn't have any effect; it would be an
independent piece of hardware, despite being on the cpu. The main cpu
thread would still be using the same caching mechanism it uses now. The
extended hardware is not "symmetric" multiprocessing, and it does not
share with the main cpu (how could it...it is a black box). One always
has to consider whether an operation causes a cache hit or miss, and
under SMP, whether one of the other cpus has a more up-to-date and
accurate cache value, but that is only an issue if one cpu might take
over work that another cpu has modified (or when memory is read directly
in a way that requires flushing cache). If the private areas are read,
and in cache, they are in cache no differently than if they had been
read from main memory.

> 
> Okay, so the instructions that *used* to be in FPUs are now pretty
> standard in the instruction set, and have gone even further with MMX
> and various other such things.  But the EU, since it would mediate
> access to main memory, would mess with caching performance.  It would
> also need to have memory of its own in order to keep track of the
> memory locations that are supposed to be protected.  How do you keep
> that from being modified?

It wouldn't be messing with main memory. Portions of main memory would
become isolated, with this as the only i/o path to the hidden memory.
Cache is on the other side of it. It would not touch any part of main
memory except what is allocated, via something similar to memory
mapping, prior to assertion of the security pin. *ONLY* that portion of
memory would be delegated to this hardware. If you allocate 16 MB, run
your disk drivers and pgp code within it, and have 256 MB total memory,
you'd have 240 MB that never sees or touches the black box. The black
box would essentially require chipset and cpu support to carve a chunk
out of the address space of memory. The main cpu hardware wouldn't care
about the isolated chunk; as far as it is concerned, that chunk does not
exist, but the public interface to the hardware does (which reminds me
again of the Crusoe chip, since it can mutate itself...the black box
code would almost be like a custom instruction set, very limited in
size, for special tasks like validating a pgp signature or reading from
a hard drive). The black box would be something of an entire, but
simple, independent thread, with chipset cooperation. Once the assert is
done, the cpu and chipset would not allow the secure area geometry to be
modified again until reboot.
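
To make the carving concrete, a toy sketch of the 16 MB out of 256 MB
example (the numbers and the check are only illustrative; real
enforcement would live in the cpu and chipset, not in a class):

// The public side simply cannot address the carved-out chunk.
#include <cstdint>
#include <iostream>

struct AddressSpace {
    std::uint64_t totalBytes;
    std::uint64_t privateStart;
    std::uint64_t privateBytes;

    // From the main cpu's point of view, the private chunk does not exist.
    bool publicAccessAllowed(std::uint64_t addr) const {
        return addr < totalBytes &&
               (addr < privateStart || addr >= privateStart + privateBytes);
    }
};

int main() {
    const std::uint64_t MB = 1024 * 1024;
    AddressSpace ram{256 * MB, 240 * MB, 16 * MB};  // 240 MB public, 16 MB black box

    std::cout << ram.publicAccessAllowed(100 * MB) << "\n";  // 1: ordinary memory
    std::cout << ram.publicAccessAllowed(248 * MB) << "\n";  // 0: inside the black box
}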

> 
> > > How would you handle sharing keys between processors on a
> > > multi-processor system?
> >
> > I would suggest that each cpu would have to have the same keys stored in
> > each. Drop in the right ROM chip, voila.
> 
> And if you wanted to change the keys because someone got into the
> system and managed to run the instruction that reads the ROM (since
> they've got privileges to do that, if it's kernel-level code), you'd
> have to reboot and find some different ROMs.

There is no such privilege, because the chipset and cpu retract access
(hardware enforced) to black box areas, except through the black box
interface. Use tristate logic, literally cutting the connection if you
want. The type of ROM you are imagining is connected to the public bus,
but I am suggesting a private, separate bus. As far as the main memory
bus is concerned, the code and data in the black box area might as well
be on a separate machine somewhere else in the world...absolutely the
only link between the main cpu and the black box memory, once the
security pin is asserted, would be through black box functions
(reminiscent of Crusoe for adaptability). This is a hardware solution
that extends things in a "non-traditional" design. But it really
wouldn't be that hard for someone like Intel to do (it would be
expensive, just like any new cpu or chipset, but the concept is simple
enough).

> 
> > > Unencrypted data would need to be put somewhere once it's encrypted or
> > > decrypted in order to be used... how would you prevent access to it
> > > before encryption is performed, or after decryption is performed?
> >
> > Regions of memory to be protected would be mapped in single user mode.
> > Any code or data inside that area would be initialized and started in
> > single user mode.
> 
> Which means you couldn't load or unload modules, or change whatever is
> protected at any other time.  Why not just disable module support
> entirely?

No, it does not mean that. You are assuming modules would be on the
"outside" trying to get "in". I'm saying you could ring the doorbell at
the black box, shout into the intercom "load the scsi module", but only
the hidden code inside the black box would do the actual loading. The
privacy is not necessarily a double-edged sword. Only the black box can
alter its private parts, but like all other kernel code, it *would* have
the ability to work with main memory and the rest of the system. If you
have ever used C++, think of a private class method altering a global
variable: the private method can do this, but a global function cannot
do the reverse and alter the private variables.
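
Spelled out in code (just the analogy, not a real kernel interface):

// The private method may reach out and modify the global, but nothing
// outside the class can reach in and modify the private member.
#include <iostream>

int globalCounter = 0;   // stands in for main memory / the rest of the system

class BlackBox {
public:
    void loadScsiModule() { doLoad(); }   // the "intercom": a public entry point

private:
    void doLoad() {
        ++privateState_;   // only the box touches its own private parts
        ++globalCounter;   // yet it can still work with the outside world
    }
    int privateState_ = 0; // inaccessible to any outside function
};

int main() {
    BlackBox box;
    box.loadScsiModule();
    std::cout << globalCounter << "\n";   // 1: the box changed the global
    // The reverse is impossible: box.privateState_ will not compile out here.
}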

> 
> By the way, "single user mode" refers to runlevel 1, an init thing.
> Your rogue root user simply changes what happens in your system
> scripts when runlevel 1 is reached on boot, and your protection could
> be conveniently commented out.

Well, this is true. But you need a point during setup when you trust
the system to at least some degree. Imagine that some of the black box
code is the hard disk driver, and that it is aware of the files used for
init. And assume single user mode is not network accessible. The black
box could very well be designed to protect anything, even the init
files...in multiuser mode, *nobody* could modify init. If a crafty
villain came up with a way to run a new program that automatically
kicked in only during single user mode, then there would be a way around
it. What makes this particularly nice is that single user mode is just a
suggestion of when the code kicks in, which makes maintenance easy. You
could just as easily make the code kick in at the earliest possible
moment and make security maintenance require running from a rescue disk.
Or connect it to a lock (literally, a key on the front panel of the
case) that it engages early on unless the key is turned.

> 
> > Then a hardware pin would be asserted, and short of reboot,
> > *nothing* could do anything but use the existing functions and
> > interfaces of that memory block. What is outside of the block,
> > unencrypted, would not be protected. Anything that is generated from
> > inside the block would *never* be visible to the outside world,
> > without the block allowing it. In the case of a disk driver, for an
> > encrypted filesystem, this would do wonders.
> 
> In which case, you've encrypted something on bootup and never give
> yourself the ability to decrypt it.  What's the point of encrypting it
> if you can't decrypt it to be used?

It is like a C++ class: you have public interfaces...you can pass
arguments and get return values...what you can't do is examine the code
inside of it, or its data...you absolutely MUST rely on whatever
functionality was present at the time of the assert. It would be a bad
idea to have an interface for changing passwords in the black box...you
could add such an interface...but it would be a big risk. There is
absolutely no handicap in having a public interface to decrypt, while
denying anyone access to the code running during the decryption.
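
For example (XOR standing in for real encryption, purely to keep the
sketch short and runnable):

// A public decrypt interface that never exposes the key or its internals.
#include <iostream>
#include <string>

class CryptoBox {
public:
    explicit CryptoBox(unsigned char key) : key_(key) {}

    // Public: ciphertext in, plaintext out. Nothing else is visible.
    std::string decrypt(const std::string& cipher) const {
        std::string plain = cipher;
        for (char& c : plain) c = static_cast<char>(c ^ key_);
        return plain;
    }

private:
    unsigned char key_;   // loaded before the assert; never readable afterwards
};

int main() {
    CryptoBox box(0x2a);
    std::string cipher = box.decrypt("hello");   // XOR is its own inverse
    std::cout << box.decrypt(cipher) << "\n";    // prints "hello"
}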

> 
> This evades the original question, though.  You have something that
> needs to be encrypted - it's plaintext.  If, before it's encrypted,
> someone manages to get the plaintext, the encryption hasn't done a bit
> of good.  Then, in order to be used later, the ciphertext has to be
> decrypted.  It's decrypted, and used by the kernel, but before the
> plaintext is wiped from memory... a rogue interrupt is thrown, and the
> plaintext is copied and sent over the network by the interrupt
> handler.

Quite true. But if you set the system up to initialize the security
environment in a mode that is not yet connected to the outside world,
and after that there is absolutely no way to snoop it, it is rather hard
for a remote script kiddie, even with root snooping, to get around it.
The plain text of the private key would *never* be visible once the
assert pin is activated. The only possible time it could be snooped is
while it is being loaded into the black box, or after load but prior to
asserting the feature. Assuming this is done in single user mode, before
much of anything is running, it would be a very, very difficult thing to
get around. And it would work for the average person who knows nothing
about firewalls or security. The average person would still have to keep
their private key secret, assuming a key system is used.

This is really a double issue: one part is a way to enforce security
software, and the other is what the security software itself does. The
black box is the hardware that makes possible the true hiding of secure
algorithms and data in a small, limited segment of ram. The pgp-signed
modules are just one form of security that could make use of a black
box. Disk drivers and ethernet drivers could also make use of it (by
mapping the drivers and marking the public interfaces to the black box
before asserting the security pin).

> 
> This really isn't that simple a problem.

While I believe it would be expensive to add such a black box, it's
certainly within the abilities of most cpu manufacturers even today. The
only question is how long it will take for security to become such an
issue that hardware support is something buyers will pay extra for. The
idea of black box protected space means you could do
encryption/decryption in software, but it seems to me that if you are
going to add the black box support anyway, it wouldn't be that bad to
offer hardware encryption support with it (they are mutually
beneficial).

> 
> > > How much more circuitry would be added to a CPU in order to facilitate
> > > this?  How would the demands of encryption affect pipelining and
> > > performance?
> >
> > Don't know. But since the tech exists now for network cards, it should
> > be reasonable (as long as the economy wants it). Network cards and
> > wireless adapters with encryption slow down, I think that is
> > unavoidable. How much it slows would depend on how it is used and how
> > the hardware is designed.
> 
> I think it'd be rather prohibitive, myself...  You'd slow down *every*
> memory access.  Maybe you'd be okay with your PC133 memory acting like
> it's 66, but it seems a diminishing return for the cost.

If I access my modem, it won't slow down my 100 Mbit/sec ethernet card.
The only slowdown might be when you access the black box, and only the
items specifically allocated to it would run there. And if you had
something like hardware encryption (which you would map to the black
box), it might be faster than current encryption anyway (being hardware
assisted). I remember recently on BLUG there was mention of how much web
traffic a site can handle without SSL, compared to purely SSL traffic,
and that hardware accelerator cards are available for SSL. If a hardware
encryption module were part of a cpu, it could be a huge bonus to the
performance of an SSL web server, even if it must go through the black
box. The black box is a simultaneous thread, so it adds cpu power rather
than slowing things down. It is possible that the software or other
operations could run faster outside of the black box, but the black box
doesn't suck anything out of the main hardware. If you wanted to *only*
run one process, it would be faster to map your hardware encryption
support to the main cpu thread, but if you expect to do anything else
(like hard drive access), it pays to distribute the load.

> 
> > > How would you prevent access of encrypted data that's been swapped to
> > > disk?
> >
> > The drivers themselves could be in an encrypted/protected segment of
> > memory.
> 
> And then couldn't be used.  Do you want to decrypt the module powering
> your ethernet card *every* time a packet comes in and the card throws
> an interrupt, and then re-encrypt it after you're done queuing the
> packet?

It wouldn't be a bad idea if dedicated hardware existed, as far as
encryption goes. On the other hand, simply isolating a driver in a black
box that strangers cannot poke around in does not drop performance in
any way. The reason I mention it is that every time encryption comes up,
someone points out that root can read it, evaluate it, and manipulate
it. Those faults would be gone if the encryption ran inside a black box
unit. You still have the performance questions for all that encryption
activity, but those questions exist regardless of whether prying eyes
watch the process. Once you have a black box, though, it becomes more
attractive to add hardware encryption support as well (the probability
of the hardware losing value due to snooping goes way down when nobody
can view its internal code).

> 
> > The reads/writes could enforce that everything going to the disk
> > is encrypted.
> 
> This is really a whole different problem. You've just copied your
> protected segment of memory to the hard disk. How do you plan on
> making sure the hacker can't read the swap space?  Start marking
> sectors on the hard drive as "protected" and require the hard drive to
> check every access to see if it's allowable or not?

It's just another example. Encrypted partitions are nice, but as other
people have pointed out, you then have to worry about the ram being
snooped, or the swap space being snooped. Once you can map drivers and
pci space to the black box area, all of those arguments go away. Having
a means to enforce security with hardware would be an enormous
advantage, and this is one case that would benefit greatly with very
little modification to the actual encrypted partition code. It is
another reason why hardware support for security might pay off.

> 
> > The process of encrypting and decrypting would be private, and
> > inaccessible, as enforced by hardware. The trick is to (a) create
> > the hardware support for mapping areas of ram as enforced for access
> > blocking, (b) create the hardware to do encryption on-the-fly (or
> > else use software within the protected area), and (c) map your
> > relevant driver or other libraries into this area before asserting
> > the pin that says "you are now independent, nobody has authority
> > over you but you".
> 
> I'm having a hard time following just what you mean by this.  I'm also
> losing the motivation to explain why this general plan is impractical
> and infeasible... perhaps you should finish your design and then BLUG
> folks can examine the idea.

I'm not a manufacturer. I just enjoy the subject, and trying to predict
what the next generation of tech will bring. But people have been
arguing that hardware-assisted security is without merit, and I'm saying
it has a lot of merit. The big (and very long) reply above should
explain the idea, for the patient. I suggest that the ability to do the
equivalent of C++ hiding of private data, while still showing a public
interface, but designed to match a simultaneous/independent cpu thread
(thus requiring cpu and chipset modification), could go a long way
toward aiding security, even on complicated systems with users who don't
know what they are doing. The motivation for the suggestion is a
reaction to the weaknesses of module signing that were pointed out.

> 
> > > I'm sure there's more problems, but these are things I'd want to know
> > > about any plan for including encryption on the CPU level.
> >
> > I never said it would be easy. But it sounds like it is the next step in
> > the evolution of more "transparent" security that does not cripple
> > normal use. "Normal use" is critical, anyone can cut off the legs to
> > keep something from walking away.
> 
> Anything truly effective would damage normal use, particularly if you
> include the valid administrator's needs to access any part of the
> system possible.  Security isn't transparent.

I basically agree with this. But I also suggest that when an
administrator says "ok, the security is now set up, don't allow anyone
to alter it for the rest of this bootup period", this too should be
possible. Current hardware seems to make it impossible. In the past,
this wouldn't have been an issue. But as security becomes a bigger
problem, one that will not go away, and especially as more untrained or
poorly trained administrators become active, it seems likely that
hardware security support could have value.

D. Stimits, stimits at idcomm.com

> 
> --
> Chris Riddoch         |  epistemological  | Send spam to: uce at ftc.gov
> socket at peakpeak.com   |  humility         |
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug


