[lug] socket programming/kernel question

Michael J. Hammel mjhammel at graphics-muse.org
Wed Mar 5 12:03:06 MST 2003


On Wed, 2003-03-05 at 12:17, Bear Giles wrote:
> There's a key server daemon, which knows the pid of callers from 
> the SCM_CREDENTIALS information attached to the Unix socket 
> message.  The clients know nothing about the daemon - they 
> just open the socket connection and send stuff through it.
> 
> There's no logical connection between the processes, so having the 
> server act as a factory isn't a practical solution.  (The server 
> will probably be in C, the clients Java applets triggered by 
> asynchronous messages.  That also makes it difficult for the 
> client to use a gatekeeper.)

There is a logical connection if the server needs to know about the
client's death.

I implemented something like this at my last job (the bastards who laid
me off a little over a week ago, grrrrr....).  I built a test harness
with clients running on blade servers; the clients launched tests that
reported status up through the client to a remote server.  The server
would then forward status, when requested, to remote UIs, either X or
Web based.  This allowed me to watch factory tests being run in Taiwan
using a GTK+ based UI running in Houston.

The status reporting went upstream from diags to clients to server to
UIs.  Command processing went in both directions, though primarily
downward.  Upstream commands were sent as status and interpreted by the
client/server for handling.

In summary, the UDP version of this worked horribly.  The TCP version
solved the problem of lost clients on the blades (blades being rebooted,
dying a hardware death or the software itself dying).  Software deaths
were easy to capture since the clients (and diags) could catch signals
and send off one last status message about the death before actually
dying.  Status messages were not ACKed.
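
For what it's worth, the last-gasp message can be as simple as a signal
handler that writes one final line down the already-open socket.  A
minimal sketch in C, assuming a connected socket sitting in a global
status_fd and a made-up one-line status format:

    #include <signal.h>
    #include <unistd.h>

    static int status_fd = -1;   /* connected TCP socket to the server */

    /* Send one last status line, then die the way we were going to
     * anyway.  Only async-signal-safe calls are used in here. */
    static void last_gasp(int sig)
    {
        static const char msg[] = "STATUS dying\n";

        if (status_fd >= 0)
            (void)write(status_fd, msg, sizeof(msg) - 1);
        signal(sig, SIG_DFL);   /* restore the default action...   */
        raise(sig);             /* ...and re-raise so we still die */
    }

    void install_last_gasp(void)
    {
        signal(SIGTERM, last_gasp);
        signal(SIGINT,  last_gasp);
        signal(SIGSEGV, last_gasp);
    }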

> The more I think about it, the problem isn't detecting when the 
> connection drops, it's detecting that the other process has died.

A dead process *has* to close the connection if it's TCP-based.  That was
one of the problems I had to solve in my project, and one of the reasons
I dropped UDP for TCP.
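
On a blocking TCP socket the server sees that close as end-of-file:
recv() returns 0, or -1 with ECONNRESET if the peer went down hard.
Something like this sketch of a server-side read:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Returns 0 on normal traffic, -1 once the client is gone. */
    static int read_status(int client_fd, char *buf, size_t len)
    {
        ssize_t n = recv(client_fd, buf, len, 0);

        if (n > 0)
            return 0;           /* ordinary status message          */
        if (n == 0)
            fprintf(stderr, "client closed cleanly (EOF)\n");
        else if (errno == ECONNRESET)
            fprintf(stderr, "client died hard (connection reset)\n");
        return -1;              /* forget our state for this client */
    }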
 
>   Information needs to go down the rabbit hole.  Stream protocols 
> would tell me this, but require me to keep the stream open for the 
> lifetime of the process even if days pass without use.

Depends on your runtime environment, but why would keeping these open
for so long be a problem?  Granted, on the surface UDP might look
better given the long gaps between messages, but the need to maintain
state information about clients says "TCP".
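
An idle TCP connection costs almost nothing, and if you're worried
about silent deaths during those long quiet stretches you can have the
kernel probe the peer for you.  A sketch using SO_KEEPALIVE plus the
Linux-specific tuning knobs (the values are illustrative, not
recommendations):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Have the kernel probe an otherwise idle peer: start after 10
     * idle minutes, probe once a minute, and declare the peer dead
     * after 5 unanswered probes. */
    static int enable_keepalive(int fd)
    {
        int on = 1, idle = 600, intvl = 60, probes = 5;

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,   sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl,  sizeof(intvl));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &probes, sizeof(probes));
        return 0;
    }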

> It's also useful for cases where the requests are idempotent and 
> have no logical connection to other requests, and that's the case 
> here.

I'd disagree based on your description.  The server's need to know about
the death of the client is a logical connection.

> I haven't worked through all of the details (since I'm also 
> considering other approaches), but the idea is that you hit the 
> server with an object and it echoes a modified object, or it 
> remains silent and the client times out.  If the client misses the 
> response (unlikely on Unix sockets :-), they can resubmit the 
> request without causing any problems to the server.

TCP retransmits lost packets for you, so the only loss you'll actually
see is a dropped connection, and that's pretty rare.  If the server
doesn't care about each request sent to it - it just responds with the
modified request - and the client can handle receiving the same
response more than once, then the retry design works fine.

Keep in mind that this retry stuff is much easier to handle with TCP.
With UDP, you'd need multiple levels of retries: first at the packet
level (since UDP doesn't retransmit lost packets) and then at the
service-request level.
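
A rough sketch of the single retry level you'd need with TCP, assuming
a made-up one-line request/response format and an idempotent server:

    #include <string.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Send an idempotent request and wait up to 'secs' for the echoed
     * reply; resubmit on silence.  Only safe because the server treats
     * a duplicate request exactly like the first one. */
    static int request_with_retry(int fd, const char *req,
                                  char *resp, size_t len,
                                  int secs, int tries)
    {
        int i;

        for (i = 0; i < tries; i++) {
            fd_set rfds;
            struct timeval tv;

            tv.tv_sec  = secs;
            tv.tv_usec = 0;

            if (write(fd, req, strlen(req)) < 0)
                return -1;      /* the connection itself is gone */

            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0)
                return recv(fd, resp, len, 0) > 0 ? 0 : -1;
            /* timed out: the server stayed silent, so resubmit */
        }
        return -1;
    }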

> The twist is that authentication is necessary, but nothing the 
> client provides can be trusted.  Hence Unix sockets and the 
> SCM_CREDENTIALS - it's not much, but it's a start.

I'm not very good with authentication stuff.  I'd build it into the
protocol itself, or push the data through ssh tunnels.
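
On Linux, the lazy version of the server-side check is SO_PEERCRED,
which hands you the kernel-verified credentials the peer had at
connect() time (a coarser cousin of per-message SCM_CREDENTIALS).  A
sketch:

    #define _GNU_SOURCE          /* struct ucred is Linux-specific */
    #include <stdio.h>
    #include <sys/socket.h>

    /* Ask the kernel for the pid/uid/gid the peer had at connect()
     * time.  The kernel fills these in; the client can't forge them. */
    static int check_peer(int fd)
    {
        struct ucred uc;
        socklen_t len = sizeof(uc);

        if (getsockopt(fd, SOL_SOCKET, SO_PEERCRED, &uc, &len) < 0)
            return -1;
        printf("peer pid=%d uid=%d gid=%d\n",
               (int)uc.pid, (int)uc.uid, (int)uc.gid);
        return 0;
    }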

-- 
Michael J. Hammel           |
The Graphics Muse           |  If quitters never win, and winners never quit, 
mjhammel at graphics-muse.org  |  what fool came up with "Quit while you're 
http://www.graphics-muse.com |  ahead"?


