[lug] Tom Cargill's Go talk

Tom Christiansen tchrist at perl.com
Sun Jan 17 20:54:04 MST 2010


Here's my write-up of Tom Cargill's excellent talk on the Go programming
language this past Thursday night.

It's not intended to retread simple matters of syntax that one can easily
pick up on one's own via web sources.  Rather, it's my take on some of 
the issues that arose during his talk, mostly matters of by-design semantics.

Errors are entirely my own, not Tom's: apologies if I've misunderstood
what Tom said, or misconstrued what may follow from the same.

Corrections welcome.

--tom

-=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=-

SYNOPSIS

    My take is that Go addresses problems in C/C++ like bounds checking,
    memory management, and the dread #include file recompilation
    bottlenecks.  But it doesn't cover things other languages have come to
    do, such as exceptions and thread cancellation.  For probably obvious
    reasons, Go's union style of method aggregation doesn't bother me so
    much as it does some people, but I'm unconvinced its concurrency
    protections are all they could or should be.

EXPOSITION

It was nice to see Hoare's '78 CSP paper referenced in the synchronization
model Go uses; unfortunately (to my mind), Go doesn't enforce this.  Go's
channels do provide a safe and clean interface for communication, just as
Thread::Queue does for us in Perl.  Sure, nothing stops you from enqueuing a
pointer across to another thread, but at least there you *see* that you're
doing something risky.  That you *can* do that seems insufficient grounds for
giving every thread access to the same shared stack and heap areas.
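
To make that concrete, here's a minimal sketch of channel-style
communication in the spirit of what Tom showed; the names and the toy data
are mine, not his:

    package main

    import "fmt"

    // worker receives work over one channel and reports over another,
    // communicating by sending values rather than by sharing memory.
    func worker(jobs <-chan string, results chan<- int) {
        for j := range jobs {
            results <- len(j)
        }
    }

    func main() {
        jobs := make(chan string, 3)
        results := make(chan int, 3)
        go worker(jobs, results)
        for _, s := range []string{"a", "bb", "ccc"} {
            jobs <- s
        }
        close(jobs)
        for i := 0; i < 3; i++ {
            fmt.Println(<-results)
        }
    }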

Another dubious-to-me decision was that (apparently) for performance
reasons, maps, which are Go's %hashes, are not automatically mutex-
protected.  What's worse is that access conflicts in maps don't just
give you bogus data; they can crash the entire system.  Oops.
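
If you want a map that's safe to touch from several goroutines, you lock it
yourself.  Here's a hedged sketch of the sort of wrapper that means in
practice (entirely my own, not from the talk):

    package main

    import (
        "fmt"
        "sync"
    )

    // counts pairs a map with the mutex the language won't supply for you.
    type counts struct {
        mu sync.Mutex
        m  map[string]int
    }

    func (c *counts) inc(key string) {
        c.mu.Lock()
        c.m[key]++ // without the lock, concurrent access is a data race
        c.mu.Unlock()
    }

    func main() {
        c := &counts{m: make(map[string]int)}
        done := make(chan bool)
        for i := 0; i < 4; i++ {
            go func() { c.inc("hits"); done <- true }()
        }
        for i := 0; i < 4; i++ {
            <-done
        }
        fmt.Println(c.m["hits"]) // 4
    }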

Not having thread cancellation is a problem for several reasons.  One is
that when a thread blows up with, say, an array out-of-bounds error or some
other sort of panic, the failure of that one thread terminates the entire
process.
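
Here's a tiny sketch of that failure mode (mine, not Tom's): one goroutine
indexes off the end of a slice, and the resulting panic takes the whole
process down with it, innocent goroutines and all.

    package main

    import "time"

    func main() {
        go func() {
            a := []int{1, 2, 3}
            i := 5
            _ = a[i] // index out of range: panics and kills the entire process
        }()
        time.Sleep(time.Second) // the panic above ends us long before this returns
        println("unreached")
    }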

Another problem occurs on calculations that need first-one-wins semantics.
The concrete example Tom gave was matching many patterns against many records.
This is different from some mass-rendering problem, because all sub-problems'
computations in a large rendering need to run to completion for a correct
answer, but with pattern matches, you sometimes might just want the first,
or the first few, hits.

Although it's not *too* hard to arrange a shared channel (e.g., with a Perl
Thread::Queue object) to feed results back in an orderly fashion, there's
no way to cancel running goroutines once you've received enough results.
You don't want to waste processors by letting them keep on computing
results you won't use.  But because there's no thread ID exposed to the user
for async intraprocess signalling, you're reduced to lame busy-wait polling
of an I'm-done semaphore of some sort.  Ick.  Check too often and you lose.
Don't check often enough and you still lose.  What would Goldilocks do?
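
For what it's worth, here's roughly what that workaround looks like; the
names and the toy data are mine.  Each worker feeds a shared results channel
and cooperatively polls a quit channel between records, because that's the
only "cancellation" on offer:

    package main

    import (
        "fmt"
        "strings"
    )

    func match(pattern string, records []string, results chan<- string, quit chan bool) {
        for _, r := range records {
            select {
            case <-quit: // noticed only when we get around to polling
                return
            default:
            }
            if strings.Contains(r, pattern) {
                results <- r
            }
        }
    }

    func main() {
        records := []string{"alpha", "beta", "gamma", "delta"}
        results := make(chan string, 16) // roomy buffer so stragglers don't block in this toy
        quit := make(chan bool)
        for _, p := range []string{"a", "e", "m"} {
            go match(p, records, results, quit)
        }
        first := <-results // first-one-wins: take one hit and move on
        close(quit)        // ask the rest to stop; they comply only when they next poll
        fmt.Println("first hit:", first)
    }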

Tom actually implemented his massive pattern matching problem to see how
it would work out.  The only way he came up with to figure out how best
to partition the problem amongst processors was to calculate average
performance with empirical benchmarks first.  This isn't a great solution
for several reasons that I'm sure you can think of readily enough yourself.

The Go designers sometimes seem to be making safety/convenience take a
backseat to performance.  Otherwise, why aren't maps access-protected?
This attitude may have been reasonable in C, which is a symbolic assembler
of sorts, but it's perplexing in a modern programming language with
higher-level programming constructs.

In other places, the Go designers seem to bend the other way.  For
example, even constrained loops iterating through valid indices of an
array generate compiled code that includes full bounds checking.  Tom said
that this is the very sort of optimization Ken's especially sensitive to,
so he's a bit mystified about why it isn't there.

I've thought about this a bit since the talk.  It occurred to me that
perhaps one reason why even code deemed "by definition" safe might still
include bounds checks is that an array could change its size during the
loop because a concurrent goroutine diddled it, or because code analysis
can't tell whether a variable that has "escaped" your static scope (via a
passed pointer, say) might have the same thing happen to it even within a
single thread.  Such questions are often Halting-Problem undecidable.

Something that annoyed Tom about Go was its lack of any sort of exception
mechanism, which Java, Perl, Python, and many others all support.  This
means your code is constantly littered with return-code checks:

    result, ok := doSomething()   // doSomething is a stand-in ("func" is a keyword)
    if !ok {
        panic("bollocks")
    }

Apparently, because the Go designers hadn't [haven't?] come up with a good
model for how exceptions would work, they preferred to omit one altogether
over putting in one they could see problems with.  That *does* sound like
Ken, doesn't it, that it's better to do nothing than to do something badly?
Yeah ok, but it still needs fixing.

It's almost as though their attitude were the old chestnut that runs 
"When *I* was wee lad, Sonny my boy, doncha know we a-called our
functions with

            MOV (SP)+,  $LATER
            MOV PC,     $FUNC
     LATER:

so if that's not good enough for ya, doncha be programming!"  

Ok, so that may be too harsh, but you get the idea.  I guess I'm glad they
didn't choose to do a *bad* thing.  But in a way, they *have*, since I'm
one of those who thinks exceptions can make code both cleaner (easier to
write and read) and more robust (safer failure behavior), and thus more
easily maintained.

I'm amused that Google directs C++ programmers not to use exceptions:

    http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Exceptions

Although a few of their reasons may seem a bit silly, many are
rooted in the idiocy inherent to the C++ exception model.  For
example, their first con is right on the money:

 * When you add a throw statement to an existing function, you must
   examine all of its transitive callers. Either they must make at least
   the basic exception safety guarantee, or they must never catch the
   exception and be happy with the program terminating as a result. For
   instance, if f() calls g() calls h(), and h throws an exception that
   f catches, g has to be careful or it may not clean up properly.

Oh how I love languages that can't be bothered with memory management!

It's refreshing to see parallel and synchronization constructs built right
into a programming language, but we've not made much progress in--well, the
last generation or more.  Java apparently uses as its model the monitors
Hoare charted out in his '74 paper, but remember that Hoare's CSP paper
came only four years later than that.

I read both of Hoare's papers during CS grad school in '85-87.  I wonder:
if one took that same course today, how much would be new theories, models,
and results, and how much would revisit the same research we looked at more
than twenty years ago?

(Damian Conway, who *is* a CS prof, recently answered me that question with,
"Nowadays your graduate CS course would largely consist of endless reviews
of advanced Java libraries for scalable enterprise web apps."  *SIGH*)

A question was asked about whether Go was currently being used inside
Google.  The sense was that it (probably?) isn't yet being used for any
production systems.  That makes sense given that it's still in an
experimental stage.  Then again, wasn't it Google who invented the in-
perpetuity beta release?  I can buy the experimental bit; while much is
there, some (to me) important pieces are still missing.

After the talk broke up, I asked Tom whether he'd looked at parallel Haskell
and, if so, what he thought of it.  He said that yes, he had looked at it.
He thought the purely functional stuff *might* work ok, but enough unsolved
problems remain in cleanly implementing the interprocessor communication
needed to partition a problem across processors that he considers it all
still pretty much snake oil.

That's too bad; I guess I was taken in by Simon Peyton Jones's exciting
talk about parallel Haskell at O'Reilly's OSCON in San Jose last July.

What I took away from Tom's talk is that although Go offers many wins over
C/C++, there's a lot it does *not* do (yet?).  Some of those areas have been
addressed in newer programming languages; others remain unsolved problems.

When Hoare wrote his stuff back in the 70s, there wasn't the strong pressure
that exists today to make good use of multiprocessor cores.  One might hope
our current necessity would mother profitable invention, but progress just
hasn't kept up with the marching calendar.

"Where's my jet pack?", indeed.

--tom

PS: Apparently, the Go debugger is (or shall be) called ogle--whose names
    do catenate rather felicitously for their corporate sponsor. :)



