   When you turn on your computer, several events occur automatically:

   1. The CPU "wakes up" (has power) and reads the x86 code in the BIOS chip.

   2. The code in the BIOS chip runs a series of tests, called POST
   for Power On Self Test ...[skipping details of POST]...

   3. During POST, the BIOS compares the system configuration data
   obtained from POST with the system information stored on a CMOS
   memory chip located on the motherboard. (This CMOS chip, which is
   updated whenever new system components are added, contains the
   latest information about system components)

If you're asking for the CPU to disallow modification of the BIOS,
that's a change in the CPU, not a change in the BIOS.  This FAQ makes
it sound like the BIOS makes use of the CPU to execute its code,
rather than being a self-contained co-processor like I assumed it was.
I guess I need to re-think my understanding of BIOS interrupt
routines as well, then...

If somebody has resources on this (which would also enlighten us
about certain aspects of the kernel), please provide links.  Google
isn't helping me much, because I don't know exactly what to look for.

> If that is the case, then the bios could not be altered without
> either flashing it, or giving the pass.

This suggests that the BIOS would be treated as though some function
inside it should be called before the instruction to overwrite any
area of the BIOS is executed, and the return value of the function
would specify whether or not it is allowed.  Problem is, the return
value would either be left in a CPU register, as calling conventions
usually do (where it could be changed by a simple "MOV AX, 1"), or in
a particular location of the BIOS memory (a chicken-and-egg problem),
or someplace else - and where else could be protected?
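
To put the register problem in code - a toy sketch, where
bios_unlock() and everything else are invented names, not any real
interface:

    #include <stdio.h>

    /* bios_unlock() is an invented stand-in for the hypothetical
     * password check: once it returns, its verdict is just a value
     * sitting in a register (or here, a local variable). */
    static int bios_unlock(const char *password)
    {
        (void)password;
        return 0;                            /* wrong password: denied */
    }

    int main(void)
    {
        int allowed = bios_unlock("guess");  /* verdict: denied      */
        allowed = 1;                         /* the "MOV AX, 1" step */
        if (allowed)
            printf("flash write would proceed anyway\n");
        return 0;
    }

Whoever controls the instruction stream controls the register, so
the check is advisory at best.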

Perhaps there could be a register in the CPU that could only be
modified by the instruction that verifies the password, but that
brings up a whole mess of other problems.

How *would* it authenticate, anyway?

> It sounds like the password is only mandatory from the "hit DEL
> key during boot" sort of physical access...that linux, win, or other
> o/s's, completely bypass the bios protections during normal
> operations (e.g., without a floppy during boot, instead during
> regular system run once bootup is complete).

I'm not convinced that BIOS protections really exist once the OS is
loaded - I remember a snippet of a conversation once at BLUG in which
someone said, "Linux ignores the BIOS for that, the BIOS is usually
wrong."  Which suggests that the BIOS doesn't actually have much
control over operating systems, in practice.

> From what has been said, it should be easy for root to completely
> wipe out the bios, even if password protection is set, since the
> kernel can bypass and alter bios memory at will (previously I had
> assumed that the bios could block updates via password, other than
> when power is cycled during physical access).

Yes, that's my understanding.  I suspect it isn't limited to
kernel-level code, even.

> > > Some filesystems do support ACL's (access control lists) for extended
> > > mandatory access, but that too is part of the kernel...if the kernel
> > > does not want to honor it, it won't help.
> > 
> > And the distribution. I don't know if the current distributions
> > include the userspace software for setting such configurations, or
> > have capabilities built into their kernels.
> 
> ACL's are great for fine-grain control of security on a non-compromised
> system. They also provide a way to further refine just who is given
> access, which is a kind of security in itself, but I consider it totally
> inadequate once the kernel itself has been replaced or altered.

Naturally.  Again, if they manage to get root, they can do anything
they want, and nothing can really be trusted.

> It seems I inadequately described "absolute shield". I did not mean
> to say the whole kernel, nor all of memory. What I did imply is that
> it would be nice if, during single user stage, various portions of
> memory could be altered to be readable and modifiable ONLY from the
> code that runs and resides within that memory...

If the *memory* is set to be unreadable, the CPU can't get to it to
run it.  Linux permissions don't really line up with the permissions
of the hardware - in order to execute code, the CPU has to be able to
get to it.  If your program is set executable but not readable, that
just prevents other programs from examining its contents on a
filesystem level, it doesn't prevent the kernel from loading it.
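
You can watch this from userspace, even.  A minimal sketch - run it
as a non-root user, since root's permission override would let the
read succeed anyway:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>
    #include <sys/wait.h>

    int main(void)
    {
        /* Make an execute-only copy of a harmless binary. */
        system("cp /bin/true ./noread");
        chmod("./noread", 0111);

        /* Reading it is denied... */
        if (open("./noread", O_RDONLY) < 0)
            printf("open for read: %s\n", strerror(errno));

        /* ...but executing it works: the *kernel* reads the image
         * on our behalf during execve(). */
        pid_t pid = fork();
        if (pid == 0) {
            execl("./noread", "noread", (char *)NULL);
            perror("execl");
            _exit(1);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }

The open() fails with EACCES, but the execve() goes through fine,
because it's the kernel doing the reading.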

Besides, if your list of installed modules is what you want to protect
(to keep modules from modifying kernel memory and inserting
themselves into the kernel, basically doing the work of insmod itself),
you'd have to allow the memory to be modifiable when a new module is
being *legitimately* loaded.  At which point, the rogue
module-installer could take that moment to install itself in the
legitimate module's place, by interrupting the execution of the
module-loading code.
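
A toy model of that race, if it helps - everything here is an
invented stand-in, not any real kernel interface.  Compile with
-pthread:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <pthread.h>

    static volatile int writable = 0;   /* stand-in "write-enable" */
    static volatile int closed = 0;
    static char kernel_text[64] = "(empty)";

    static void *rogue(void *arg)
    {
        (void)arg;
        while (!writable && !closed)
            ;                               /* wait for the window  */
        if (writable)
            strcpy(kernel_text, "rogue module");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, rogue, NULL);

        writable = 1;                    /* legitimate load begins */
        strcpy(kernel_text, "legitimate module");
        usleep(10000);                   /* "interrupted" mid-load */
        writable = 0;                    /* window closes          */
        closed = 1;

        pthread_join(t, NULL);
        printf("kernel text now holds: %s\n", kernel_text);
        return 0;
    }

It prints "rogue module" nearly every time: the window that lets the
legitimate load happen is the same window the rogue writer uses.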

And if you don't want the memory to be modifiable *ever* after the
system leaves single-user mode, you might as well not enable loadable
modules, and modify the existing pieces of the kernel that currently
can only be installed as modules to allow them to be built in.

> In a way I guess you could say it is inspired by Alpha cpu's and
> other cpu's that can run simultaneous threads directly in the cpu,
> almost like a single chip SMP system.

This seems rather off-topic from the subject of securing one's system,
but pipelining and branch prediction do this to some degree in a lot
of architectures - including x86.

> Now if an area like this was the hard disk driver, and requests
> could be sent to it, or if the area monitored an outside request
> pool, you could still feed it garbage, but it would have the right
> to refuse. Further, any attempt to write to it directly should be
> 100% denied.


> This is what I mean by an absolute shield on some small subset of
> ram. Modules and all else would work just fine. Module loader code
> could be placed in such an area as well, so it could hide what it
> does...no strace or other means could view its internals, only its
> talking to the outside world could be snooped.


Now this is where you'll run into some other problems. The CPU can
always send to devices directly - it's up to what's running on the CPU
to decide whether it should or not.  And on the operating system
level, keep in mind that the OS doesn't *prevent* programs from
accessing hardware directly.  It just prevents programs run by
non-root users: have a look at the definition and comments for
sys_iopl() in arch/i386/kernel/ioport.c, for an example.

If you prevent it from root, you block off any way of accessing
hardware the kernel doesn't specifically know about.  It *is* possible
to have userspace hardware drivers.
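
A minimal sketch of the userspace side, on x86 Linux with glibc -
iopl(2) is the call that sys_iopl() implements:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/io.h>    /* iopl() - x86 Linux, glibc */

    int main(void)
    {
        /* Ask for I/O privilege level 3: unrestricted port access. */
        if (iopl(3) < 0) {
            /* Run as non-root, the kernel refuses with EPERM. */
            printf("iopl(3): %s\n", strerror(errno));
            return 1;
        }
        /* Run as root, this succeeds, and the process can now use
         * inb()/outb() on any port, with or without a kernel driver. */
        printf("iopl(3) granted: direct port I/O is possible\n");
        return 0;
    }

Run it as yourself and then as root: EPERM one time, success the
other.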

> Yes, it would be just about impossible to do :(

If the design of such a beast wouldn't prevent the system from
functioning in the first place, yeah.

> > Again, the same old problem: it has to get decrypted somewhere. When
> > it is, you're back to the same question I brought up before... where's
> > the software that does the decryption, and where's the key stored to
> > do it - both of which can be determined with a debugger.
> 
> That is another reason why there needs to be a truly "private" memory,
> that can provide function interfaces, but cannot be snooped or examined
> by anything once it passes beyond single user mode.

I don't think private memory is the answer.  The only real security
seems to be first in preventing someone from getting into the system,
and then preventing a user from increasing their privileges on the
system.

> > > it would be interesting if cpu's came with hardware support of
> > > mandatory memory encryption for specified blocks of ram.
> > 
> > That seems a rather intuitively difficult proposition.  I can't really
> > explain why just now, since I'd like to sit back and dig through a
> > book I have on the kernel and my old notes on assembly language, but
> > I'm quite sure that this would be much more complicated than doing it
> > in software.
> 
> Not really, even ethernet cards can be purchased with hardware for real
> time encryption. The eepro100 has an "S" version, as one example. Plus
> several of the wireless modem or network cards now have some form of
> encryption that is hardware generated.

That's slightly different.  The encryption is self-contained in the
processing on the ethernet card.  Data comes in from the network, is
cached in a little memory on the card, is decrypted by the processor
on the card, and then is put in the buffer for sending to the
system.  The card throws an interrupt, and the CPU grabs the *already
decrypted* chunk of data and puts it in main memory, on a network
queue.

In this case, the encryption is on the ethernet card, not in the main
CPU.  Ethernet cards aren't as complicated as CPUs, and can afford to
do a little more logic.  Adding encryption instructions to a CPU's
instruction set is downright excessive - it would amount to a
combination of a lot of other instructions anyway, couldn't be
updated, and wouldn't make sense for general-purpose computing.

Off-topic, but if you're really interested in hardware-level
encryption, a system like Deep Crack
(http://www.eff.org/descracker) has a rather different design.

> It's already basic tech, just not
> added to a cpu that I know of. I'd like to see it available hard wired
> to x86. It could be provided its algo via flash memory, or even a
> separate rom plugin chip. Physical access could defeat it, but I am only
> talking about protection against people who have remote access.

Again, it doesn't *matter* where the code came from, when it's being
run, and there's no way for the CPU to know.  If it's the proper
instructions, the CPU can run it.  It's not like the instructions are
any different by the time they get to the hardware.  You'd have the
same luck trying to ask the CPU to figure out what programming
language the instructions were produced from, *after* the program
has been compiled.

> > The problems might be:
> > 
> > Where is the key and how does the CPU determine whether a given
> > piece of code should be able to access it?
> 
> In ROM. Direct physical connection, similar to an encrypted ethernet
> card (existing tech rearranged).

See above for description of ethernet cards implementing encryption.

I should clarify: how should the CPU distinguish between legitimate
kernel code and other code?  The CPU doesn't *care*, and can't tell
the difference by the time the code is in the instruction stream.
Non-root users can't write to areas of memory owned by root or the
kernel, and that's the basis for permissions.  You can't expect the
kernel to prevent *itself* from reading or writing anywhere in memory.
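
One concrete illustration: physical memory shows up as /dev/mem, and
it's plain file permissions that stop ordinary users there - nothing
the CPU itself checks.  A small sketch:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY);
        if (fd < 0)
            /* As non-root: EACCES, ordinary file permissions at work. */
            printf("open(/dev/mem): %s\n", strerror(errno));
        else
            printf("open(/dev/mem) succeeded\n");
        return 0;
    }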

> > Can the encrypted area of memory be read or written to without the use
> > of the encryption access functions (is it in main memory where
> > programs or hardware devices could access and examine it)?
> 
> With current cpus, I doubt it could be forced to protect the area.

It can't.

> Integration of an encryption unit (think of EU like FPU) could be hard
> wired such that once enabled, the area *cannot* be taken out of
> encryption mode, nor could the algo it runs under be viewed by any means
> (without reboot).

Hard wired.  You mean, the CPU can't access that memory at all,
without going through the EU?  That means the EU has to be the
intermediary of all memory access.  There'll certainly be some
overhead on that, and it would throw a huge wrench into caching
mechanisms.

Okay, so the instructions that *used* to be in FPUs are now pretty
standard in the instruction set, and have gone even further with MMX
and various other such things.  But the EU, since it would mediate
access to main memory, would mess with caching performance.  It would
also need to have memory of its own in order to keep track of the
memory locations that are supposed to be protected.  How do you keep
that from being modified?

> > How would you handle sharing keys between processors on a
> > multi-processor system?
> 
> I would suggest that each cpu would have to have the same keys stored in
> each. Drop in the right ROM chip, voila.

And if you wanted to change the keys because someone got into the
system and managed to run the instruction that reads the ROM (since
they've got privileges to do that, if it's kernel-level code), you'd
have to reboot and find some different ROMs.

> > Unencrypted data would need to be put somewhere once it's encrypted or
> > decrypted in order to be used... how would you prevent access to it
> > before encryption is performed, or after decryption is performed?
> 
> Regions of memory to be protected would be mapped in single user mode.
> Any code or data inside that area would be initialized and started in
> single user mode.

Which means you couldn't load or unload modules, or change whatever is
protected at any other time.  Why not just disable module support
entirely?

By the way, "single user mode" refers to runlevel 1, an init thing.
Your rogue root user simply changes what happens in your system
scripts when runlevel 1 is reached on boot, and your protection could
be conveniently commented out. 

> Then a hardware pin would be asserted, and short of reboot,
> *nothing* could do anything but use the existing functions and
> interfaces of that memory block. What is outside of the block,
> unencrypted, would not be protected. Anything that is generated from
> inside the block would *never* be visible to the outside world,
> without the block allowing it. In the case of a disk driver, for an
> encrypted filesystem, this would do wonders.

In which case, you've encrypted something on bootup and never given
yourself the ability to decrypt it.  What's the point of encrypting
something if you can't decrypt it in order to use it?

This evades the original question, though.  You have something that
needs to be encrypted - it's plaintext.  If, before it's encrypted,
someone manages to get the plaintext, the encryption hasn't done a bit
of good.  Then, in order to be used later, the ciphertext has to be
decrypted.  It's decrypted, and used by the kernel, but before the
plaintext is wiped from memory... a rogue interrupt is thrown, and the
plaintext is copied and sent over the network by the interrupt
handler.
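
Schematically, with decrypt() and do_work() as invented stand-ins:

    #include <string.h>

    /* decrypt() and do_work() are invented stand-ins for whatever
     * does the real work. */
    static void decrypt(const unsigned char *ct, unsigned char *pt,
                        unsigned int n) { memcpy(pt, ct, n); }
    static void do_work(unsigned char *pt, unsigned int n)
    { (void)pt; (void)n; }

    static void use_secret(const unsigned char *ciphertext,
                           unsigned int n)
    {
        unsigned char plaintext[4096];

        decrypt(ciphertext, plaintext, n); /* plaintext now in RAM  */
        do_work(plaintext, n);             /* <-- an interrupt here */
                                           /*     can copy it out   */
        memset(plaintext, 0, sizeof plaintext); /* wipe is too late */
    }

    int main(void)
    {
        const unsigned char ct[8] = "secret";
        use_secret(ct, sizeof ct);
        return 0;
    }

The wipe at the end is the right habit, but it can't close the
window while the plaintext is actually in use.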

This really isn't that simple a problem.

> > How much more circuitry would be added to a CPU in order to facilitate
> > this?  How would the demands of encryption affect pipelining and
> > performance?
> 
> Don't know. But since the tech exists now for network cards, it should
> be reasonable (as long as the economy wants it). Network cards and
> wireless adapters with encryption slow down, I think that is
> unavoidable. How much it slows would depend on how it is used and how
> the hardware is designed.

I think it'd be rather prohibitive, myself...  You'd slow down *every*
memory access.  Maybe you'd be okay with your PC133 memory acting like
it's 66, but it seems a diminishing return for the cost.

> > How would you prevent access of encrypted data that's been swapped to
> > disk?
> 
> The drivers themselves could be in an encrypted/protected segment of
> memory.

And then couldn't be used.  Do you want to decrypt the module powering
your ethernet card *every* time a packet comes in and the card throws
an interrupt, and then re-encrypt it after you're done queuing the
packet?

> The reads/writes could enforce that everything going to the disk
> is encrypted.

This is really a whole different problem. You've just copied your
protected segment of memory to the hard disk. How do you plan on
making sure the hacker can't read the swap space?  Start marking
sectors on the hard drive as "protected" and require the hard drive to
check every access to see if it's allowable or not?
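
For what it's worth, the software-side tool for keeping secrets out
of swap is mlock(2), which pins pages in RAM.  A minimal sketch:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        static unsigned char key[32];

        /* Pin the page holding the key: the kernel won't write it
         * out to swap while the lock is held. */
        if (mlock(key, sizeof key) != 0) {
            perror("mlock");   /* can fail against RLIMIT_MEMLOCK */
            return 1;
        }

        /* ... use the key ... */

        memset(key, 0, sizeof key);   /* wipe before unpinning */
        munlock(key, sizeof key);
        return 0;
    }

That keeps the pages off the disk, but it does nothing about the
rest of the problems above.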

> The process of encrypting and decrypting would be private, and
> inaccessible, as enforced by hardware. The trick is to (a) create
> the hardware support for mapping areas of ram as enforced for access
> blocking, (b) create the hardware to do encryption on-the-fly (or
> else use software within the protected area), and (c) map your
> relevant driver or other libraries into this area before asserting
> the pin that says "you are now independent, nobody has authority
> over you but you".

I'm having a hard time following just what you mean by this.  I'm also
losing the motivation to explain why this general plan is impractical
and infeasible... perhaps you should finish your design and then BLUG
folks can examine the idea.

> > I'm sure there's more problems, but these are things I'd want to know
> > about any plan for including encryption on the CPU level.
> 
> I never said it would be easy. But it sounds like it is the next step in
> the evolution of more "transparent" security that does not cripple
> normal use. "Normal use" is critical, anyone can cut off the legs to
> keep something from walking away.

Anything truly effective would damage normal use, particularly if you
include the legitimate administrator's need to access every possible
part of the system.  Security isn't transparent.

--
Chris Riddoch         |  epistemological  | Send spam to: uce at ftc.gov
socket at peakpeak.com   |  humility         | 


