[lug] Steganography (was: ne

rm at mamma.varadinet.de
Mon Jul 16 04:10:01 MDT 2001


On Sun, Jul 15, 2001 at 06:30:25PM -0600, Chris Riddoch wrote:
[...]
> If you're asking for the CPU to disallow modification of the BIOS,
> that's a change in the CPU, not a change in the BIOS.  This FAQ makes
> it sound like the BIOS makes use of the CPU to execute its code,
> rather than being a self-contained co-processor like I assumed it was.
> I guess I need to re-think my understanding of BIOS interrupt
> routines as well, then...

On inteloid hardware this is the case -- real hardware is different:
on my Alpha I can shut down the CPU from the BIOS and still chat with
the BIOS itself (it has some builtin shell); same on a PPC G4 (that's
a cool one: it's basically a builtin Forth interpreter, so it makes
for the most snobbish, cool desk calculator ;-)

> If somebody has resources on this, (which would also enlighten us to
> certain aspects of the kernel) please provide links.  Google isn't
> helping me much, because I don't know what exactly to look for.

"Understanding the Linux Kernel" gives a good intro into what's going
on during the bootup. There's also the OpenBIOS project (webpage at
http://www.freiburg.linux.de/OpenBIOS/, but it seems the server is down
right now). 

[...]
> I'm not convinced that bios protections really exist once the OS is
> loaded - I remember a snippet of a conversation once at BLUG in which
> someone said, "Linux ignores the BIOS for that, the BIOS is usually
> wrong."  Which suggests that the BIOS doesn't actually have much
> control over operating systems, in practice.

That comment probably referred to hardware detection. But yes, the
BIOS doesn't have any control over the OS. It acts as a mini-OS
until the real one is up and running.

> Yes, that's my understanding.  I suspect it isn't limited to
> kernel-level code, even.

Any direct hardware access needs to be done by the kernel (on a
real OS, that is ;)

> Naturally.  Again, if they manage to get root, they can do anything
> they want, and nothing can really be trusted.

Yes, but with capabilities even root can't do certain things (like
altering the kernel). This seems to be the only way to harden a system.
Have a look at the NSA's Security-Enhanced Linux (SELinux).
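
To make that concrete, here's a minimal sketch, assuming a kernel that
exposes the system-wide capability bounding set as
/proc/sys/kernel/cap-bound (the lcap utility pokes the same knob).
Run it at the end of your boot scripts and not even root can load
modules until the next reboot:

/* Sketch, assuming /proc/sys/kernel/cap-bound exists (the 2.2/2.4
 * capability bounding set): clear CAP_SYS_MODULE so module loading
 * is refused for everyone, root included, until reboot. */
#include <stdio.h>
#include <linux/capability.h>   /* CAP_SYS_MODULE */

int main(void)
{
    long bound;
    FILE *f = fopen("/proc/sys/kernel/cap-bound", "r");

    if (!f || fscanf(f, "%ld", &bound) != 1) {
        perror("reading cap-bound");
        return 1;
    }
    fclose(f);

    bound &= ~(1L << CAP_SYS_MODULE);   /* drop module loading */

    f = fopen("/proc/sys/kernel/cap-bound", "w");
    if (!f) {
        perror("writing cap-bound");
        return 1;
    }
    fprintf(f, "%ld\n", bound);
    fclose(f);
    return 0;
}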

> If the *memory* is set to be unreadable, the CPU can't get to it to
> run it.  Linux permissions don't really line up with the permissions
> of the hardware - in order to execute code, the CPU has to be able to
> get to it. 

Hmm, not necessarily. There's a layer of abstraction between physical
hardware addresses and the address space a process sees. One could
actually mark certain pages of memory as read-only or unexecutable
after a certain point (runlevel etc.) and hence disable modification
of the kernel itself. Not an easy task, but doable I guess.
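
User space can already play with exactly those page permissions via
mprotect(2). A toy demo of the mechanism (the hard part would be the
kernel doing the analogous thing to its own page tables):

/* Toy demo of page permissions: after mprotect(), writes to the
 * page fault with SIGSEGV instead of silently succeeding. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    char *page = mmap(0, pagesz, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (page == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    strcpy(page, "pretend this is kernel text");

    if (mprotect(page, pagesz, PROT_READ) == -1) {  /* revoke write */
        perror("mprotect");
        return 1;
    }
    printf("read still works: %s\n", page);
    /* page[0] = 'X';   <-- would now die with SIGSEGV */
    return 0;
}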

> If your program is set executable but not readable, that
> just prevents other programs from examining its contents on a
> filesystem level, it doesn't prevent the kernel from loading it.
> 
> Besides, if your list of installed modules is what you want to protect,
> (to prevent against modules modifying kernel memory and placing
> themselves in the kernel, basically doing the work of insmod itself)
> you'd have to allow the memory to be modifiable when a new module is
> being *legitimately* loaded.  At which point, the rogue
> module-installer could take that moment to install itself in the
> legitimate module's place, by interrupting the execution of the
> module-loading code.
> 
> And if you don't want the memory to be modifiable *ever* after the
> system leaves single-user mode, you might as well not enable loadable
> modules, and modify the existing pieces of the kernel that currently
> can only be installed as modules to allow them to be built in.
 
Unless you _have_ to use modules :-( Then the above is the way to go.
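
If you do have to run with modules, about the best a userland watchdog
can do is cross-check /proc/modules against a baseline taken right
after boot. A rough sketch (the baseline path is made up, and a module
that's already in the kernel can of course lie to /proc, which is
exactly the weakness discussed above):

/* Rough sketch: complain about any module in /proc/modules that
 * wasn't in a baseline list written just after boot.  The baseline
 * path is hypothetical; treat this as a tripwire, not a guarantee. */
#include <stdio.h>
#include <string.h>

#define BASELINE "/var/lib/modules.baseline"   /* made-up path */

static int in_baseline(const char *name)
{
    char line[256], have[64];
    FILE *f = fopen(BASELINE, "r");

    if (!f)
        return 0;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "%63s", have) == 1 && strcmp(have, name) == 0) {
            fclose(f);
            return 1;
        }
    }
    fclose(f);
    return 0;
}

int main(void)
{
    char line[256], name[64];
    FILE *f = fopen("/proc/modules", "r");

    if (!f) {
        perror("/proc/modules");
        return 1;
    }
    while (fgets(line, sizeof line, f))
        if (sscanf(line, "%63s", name) == 1 && !in_baseline(name))
            printf("unexpected module: %s\n", name);
    fclose(f);
    return 0;
}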

> > In a way I guess you could say it is inspired by Alpha CPUs and
> > other CPUs that can run simultaneous threads directly in the CPU,
> > almost like a single chip SMP system.
> 
> This seems rather off-topic from the subject of securing one's system,
> but pipelining and branch prediction do this to some degree in a lot
> of architectures - including x86.
> 
> > Now if an area like this was the hard disk driver, and requests
> > could be sent to it, or if the area monitored an outside request
> > pool, you could still feed it garbage, but it would have the right
> > to refuse. Further, any attempt to write to it directly should be
> > 100% denied.
> 
> 
> > This is what I mean by an absolute shield on some small subset of
> > ram. Modules and all else would work just fine. Module loader code
> > could be placed in such an area as well, so it could hide what it
> > does...no strace or other means could view its internals, only its
> > talking to the outside world could be snooped.
> 
> 
> Now this is where you'll run into some other problems. The CPU can
> always send to devices directly - it's up to what's running on the CPU
> to decide whether it should or not.  And on the operating system
> level, keep in mind that the OS doesn't *prevent* programs from
> accessing hardware directly.  It just prevents programs run by
> non-root users from doing so: have a look at the definition and
> comments for sys_iopl() in arch/i386/kernel/ioport.c, for an example.
> 
> If you prevent it from root, you block off any way of accessing
> hardware the kernel doesn't specifically know about.  It *is* possible
> to have userspace hardware drivers.
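
Right, and such a userspace "driver" can be tiny. A sketch, using the
classic first parallel port at 0x378 as a stand-in; the ioperm(2) call
is the one the kernel refuses with EPERM unless you're root:

/* Sketch of a user-space driver on x86: raw port I/O, gated only by
 * the root check in sys_ioperm()/sys_iopl() (see
 * arch/i386/kernel/ioport.c).  0x378/0x379 = first parallel port.
 * Compile with optimization (gcc -O2) for the sys/io.h inlines. */
#include <stdio.h>
#include <sys/io.h>

int main(void)
{
    if (ioperm(0x378, 3, 1) == -1) {    /* EPERM unless root */
        perror("ioperm");
        return 1;
    }
    outb(0xff, 0x378);                  /* write data register directly */
    printf("status register: 0x%02x\n", inb(0x379));
    return 0;
}
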
> 
> > Yes, it would be just about impossible to do :(
> 
> If the design of such a beast wouldn't prevent the system from
> functioning in the first place, yeah.
> 
> > > Again, the same old problem: it has to get decrypted somewhere. When
> > > it is, you're back to the same question I brought up before... where's
> > > the software that does the decryption, and where's the key stored to
> > > do it - both of which can be determined with a debugger.
> > 
> > That is another reason why there needs to be a truly "private" memory,
> > that can provide function interfaces, but cannot be snooped or examined
> > by anything once it passes beyond single user mode.
> 
> I don't think private memory is the answer.  The only real security
> seems to be first in preventing someone from getting into the system,
> and then preventing a user from increasing their privileges on the
> system.
> 
> > > > it would be interesting if CPUs came with hardware support for
> > > > mandatory memory encryption for specified blocks of RAM.
> > > 
> > > That seems a rather intuitively difficult proposition.  I can't really
> > > explain why just now, since I'd like to sit back and dig through a
> > > book I have on the kernel and my old notes on assembly language, but
> > > I'm quite sure that this would be much more complicated than doing it
> > > in software.
> > 
> > Not really; even ethernet cards can be purchased with hardware for
> > real-time encryption. The eepro100 has an "S" version, as one example. Plus
> > several of the wireless modem or network cards now have some form of
> > encryption that is hardware generated.
> 
> That's slightly different. The encryption is self-contained in the
> processing on the ethernet card.  Data comes in from the network, is
> cached in a little memory on the card, is decrypted by the processor
> on the card, and then is put in the buffer for sending to the
> system.  Card throws an interrupt, and the CPU grabs the *already
> decrypted* chunk of data and puts it in main memory, on a network
> queue.
> 
> In this case, the encryption is on the ethernet card, not in the main
> CPU.  Ethernet cards aren't as complicated as CPUs, and can afford to
> do a little more logic.  Adding encryption instructions to a CPU's
> instruction set is downright excessive - it would be the combination
> of a lot of other instructions anyway, couldn't be updated, and wouldn't
> make sense for general-purpose computing.
> 
> Off-topic, but if you're really interested in hardware-level
> encryption, a system like Deep Crack
> (http://www.eff.org/descracker) has a rather different design.
> 
> > It's already basic tech, just not
> > added to a CPU that I know of. I'd like to see it available hard-wired
> > to x86. It could be provided its algo via flash memory, or even a
> > separate rom plugin chip. Physical access could defeat it, but I am only
> > talking about protection against people who have remote access.
> 
> Again, it doesn't *matter* where the code came from, when it's being
> run, and there's no way for the CPU to know.  If it's the proper
> instructions, the CPU can run it.  It's not like the instructions are
> any different by the time they get to the hardware.  You'd have the
> same luck trying to ask the CPU to figure out what programming
> language the instructions were produced from, *after* it's been
> compiled.
> 
> > > The problems might be:
> > > 
> > > Where is the key and how does the CPU determine whether a given
> > > piece of code should be able to access it?
> > 
> > In ROM. Direct physical connection, similar to an encrypted ethernet
> > card (existing tech rearranged).
> 
> See above for description of ethernet cards implementing encryption.
> 
> I should clarify: How should the CPU distinguish between legitimate
> kernel code and other code?  The CPU doesn't *care*, and can't tell
> the difference by the time it gets into the instruction stream.
> Non-root users can't write to areas of memory owned by root or the
> kernel, and that's the basis for permissions.  You can't expect the
> kernel to prevent *itself* from reading or writing anywhere in memory.
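
For a concrete feel of where that basis lives: /dev/mem, the
traditional window onto physical memory, is root-owned, so plain old
file permissions do the work. A trivial demo:

/* Trivial demo: /dev/mem (physical memory) is owned by root, so
 * the permission check is simply whether open() succeeds. */
#include <stdio.h>
#include <fcntl.h>

int main(void)
{
    int fd = open("/dev/mem", O_RDWR);

    if (fd == -1) {
        perror("open /dev/mem");    /* EACCES for ordinary users */
        return 1;
    }
    printf("opened /dev/mem read-write -- running as root\n");
    return 0;
}
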
> 
> > > Can the encrypted area of memory be read or written to without the use
> > > of the encryption access functions (is it in main memory where
> > > programs or hardware devices could access and examine it)?
> > 
> > With current CPUs, I doubt it could be forced to protect the area.
> 
> It can't.
> 
> > Integration of an encryption unit (think of an EU, like an FPU) could be
> > hard-wired such that once enabled, the area *cannot* be taken out of
> > encryption mode, nor could the algo it runs under be viewed by any means
> > (without reboot).
> 
> Hard wired.  You mean, the CPU can't access that memory at all,
> without going through the EU?  That means the EU has to be the
> intermediary of all memory access.  There'll certainly be some
> overhead on that, and it would throw a huge wrench into caching
> mechanisms.
> 
> Okay, so the instructions that *used* to be in FPUs are now pretty
> standard in the instruction set, and have gone even further with MMX
> and various other such things.  But the EU, since it would mediate
> access to main memory, would mess with caching performance.  It would
> also need to have memory of its own in order to keep track of the
> memory locations that are supposed to be protected.  How do you keep
> that from being modified?
> 
> > > How would you handle sharing keys between processors on a
> > > multi-processor system?
> > 
> > I would suggest that each CPU would have to have the same keys stored in
> > each. Drop in the right ROM chip, voila.
> 
> And if you wanted to change the keys because someone got into the
> system and managed to run the instruction that reads the ROM (since
> they've got privileges to do that, if it's kernel-level code), you'd
> have to reboot and find some different ROMs.
> 
> > > Unencrypted data would need to be put somewhere once it's encrypted or
> > > decrypted in order to be used... how would you prevent access to it
> > > before encryption is performed, or after decryption is performed?
> > 
> > Regions of memory to be protected would be mapped in single user mode.
> > Any code or data inside that area would be initialized and started in
> > single user mode.
> 
> Which means you couldn't load or unload modules, or change whatever is
> protected at any other time.  Why not just disable module support
> entirely?
> 
> By the way, "single user mode" refers to runlevel 1, an init thing.
> Your rogue root user simply changes what happens in your system
> scripts when runlevel 1 is reached on boot, and your protection could
> be conveniently commented out. 
> 
> > Then a hardware pin would be asserted, and short of reboot,
> > *nothing* could do anything but use the existing functions and
> > interfaces of that memory block. What is outside of the block,
> > unencrypted, would not be protected. Anything that is generated from
> > inside the block would *never* be visible to the outside world,
> > without the block allowing it. In the case of a disk driver, for an
> > encrypted filesystem, this would do wonders.
> 
> In which case, you've encrypted something on bootup and never give
> yourself the ability to decrypt it.  What's the point of encrypting it
> if you can't decrypt it to be used?
> 
> This evades the original question, though.  You have something that
> needs to be encrypted - it's plaintext.  If, before it's encrypted,
> someone manages to get the plaintext, the encryption hasn't done a bit
> of good.  Then, in order to be used later, the ciphertext has to be
> decrypted.  It's decrypted, and used by the kernel, but before the
> plaintext is wiped from memory... a rogue interrupt is thrown, and the
> plaintext is copied and sent over the network by the interrupt
> handler.
> 
> This really isn't that simple a problem.
> 
> > > How much more circuitry would be added to a CPU in order to facilitate
> > > this?  How would the demands of encryption affect pipelining and
> > > performance?
> > 
> > Don't know. But since the tech exists now for network cards, it should
> > be reasonable (as long as the economy wants it). Network cards and
> > wireless adapters with encryption slow down, I think that is
> > unavoidable. How much it slows would depend on how it is used and how
> > the hardware is designed.
> 
> I think it'd be rather prohibitive, myself...  You'd slow down *every*
> memory access.  Maybe you'd be okay with your PC133 memory acting like
> it's 66, but it seems a diminishing return for the cost.
> 
> > > How would you prevent access of encrypted data that's been swapped to
> > > disk?
> > 
> > The drivers themselves could be in an encrypted/protected segment of
> > memory.
> 
> And then couldn't be used.  Do you want to decrypt the module powering
> your ethernet card *every* time a packet comes in and the card throws
> an interrupt, and then re-encrypt it after you're done queuing the
> packet?
> 
> > The reads/writes could enforce that everything going to the disk
> > is encrypted.
> 
> This is really a whole different problem. You've just copied your
> protected segment of memory to the hard disk. How do you plan on
> making sure the hacker can't read the swap space?  Start marking
> sectors on the hard drive as "protected" and require the hard drive to
> check every access to see if it's allowable or not?
> 
> > The process of encrypting and decrypting would be private, and
> > inaccessible, as enforced by hardware. The trick is to (a) create
> > the hardware support for mapping areas of ram as enforced for access
> > blocking, (b) create the hardware to do encryption on-the-fly (or
> > else use software within the protected area), and (c) map your
> > relevant driver or other libraries into this area before asserting
> > the pin that says "you are now independent, nobody has authority
> > over you but you".
> 
> I'm having a hard time following just what you mean by this.  I'm also
> losing the motivation to explain why this general plan is impractical
> and infeasible... perhaps you should finish your design and then BLUG
> folks can examine the idea.
> 
> > > I'm sure there's more problems, but these are things I'd want to know
> > > about any plan for including encryption on the CPU level.
> > 
> > I never said it would be easy. But it sounds like it is the next step in
> > the evolution of more "transparent" security that does not cripple
> > normal use. "Normal use" is critical, anyone can cut off the legs to
> > keep something from walking away.
> 
> Anything truly effective would damage normal use, particularly if you
> include the valid administrator's needs to access any part of the
> system possible.  Security isn't transparent.
> 
> --
> Chris Riddoch         |  epistemological  | Send spam to: uce at ftc.gov
> socket at peakpeak.com   |  humility         | 
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug


