[lug] Tom Cargill's Go talk

Rob Nagler nagler at bivio.biz
Sun Jan 17 21:49:54 MST 2010


> Otherwise, why aren't maps access-protected?

Because it doesn't solve problems that we have and causes more defects
than it prevents.  Yes, it might protect people who don't really
understand shared-memory multi-processing, but that's irrelevant.
Anybody building systems which actually *require* shared memory
probably knows what they are doing, and will add locking at the
appropriate level.
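
In Go, "the appropriate level" just means wrapping the map yourself.
A minimal sketch (the counters type and its methods are made up for
illustration; sync.Mutex is the real thing):

package main

import (
    "fmt"
    "sync"
)

// counters is illustrative: the caller decides where the lock goes,
// not the map implementation.
type counters struct {
    mu sync.Mutex
    m  map[string]int
}

func (c *counters) inc(key string) {
    c.mu.Lock()
    c.m[key]++
    c.mu.Unlock()
}

func (c *counters) get(key string) int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.m[key]
}

func main() {
    c := &counters{m: make(map[string]int)}
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c.inc("hits")
        }()
    }
    wg.Wait()
    fmt.Println(c.get("hits")) // prints 10
}

Whether you lock per operation like this, or hold the lock across a
whole batch of operations, is a decision only the application can
make, which is the whole point.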

>  (Damian Conway, who *is* a CS prof, recently answered me that question with,
>  "Nowadays your graduate CS course would largely consist of endless reviews
>  of advanced Java libraries for scalable enterprise web apps."  *SIGH*)

I don't think this is true.  Grad school curricula look pretty much
like they did when I went to grad school:

http://www.drexel.com/online-degrees/engineering-degrees/ms-cs/curriculum.aspx
http://www.ecs.baylor.edu/computer_science/graduate/index.php?id=43910
http://www.stanford.edu/dept/registrar/bulletin/current/pdf/compsci.pdf
http://www.eecs.mit.edu/ug/NC-MEng_checklist1-2_09-15-09.pdf

What's sad to me about this (as well as Go) is the lack of imagination
and feedback from the real world of programming.  While it's quite
important to understand the theory of computation, where do you learn
how to write tests and why you might want to?  How do you build APIs
that will scale across multiple and disparate applications?  Google
doesn't know (Go is the perfect example of API-escapism), and it seems
not many CS profs have made headway on this issue.

>  communications for partitioning the problem across processors that he
>  thought it's still pretty much all so much snake oil.

Sad but true.  Then again, I think we know what problems are
partitionable, and which ones aren't.  It's something I studied in
grad school, and the theory was pretty solid way back then.  I don't
think they've disproven any of it, other than the fact that we now have
*much* faster computers that allow us to solve many NP-hard problems
for most (if not all) practical purposes.  Automatic partitioning has
lagged behind the heuristics, and probably will continue to do so for
the foreseeable future.
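
The partitioning that does work today is still the kind you spell out
by hand.  In Go that's roughly a fan-out/fan-in over goroutines and
channels; here's a sketch (sumSquares is just stand-in work):

package main

import (
    "fmt"
    "runtime"
)

// sumSquares is stand-in work; the point is that the programmer,
// not the compiler, decides how the input gets split up.
func sumSquares(nums []int) int {
    total := 0
    for _, n := range nums {
        total += n * n
    }
    return total
}

func main() {
    nums := make([]int, 1000)
    for i := range nums {
        nums[i] = i
    }

    workers := runtime.NumCPU()
    chunk := (len(nums) + workers - 1) / workers
    results := make(chan int)
    launched := 0

    // Hand-partition the slice and fan the pieces out.
    for start := 0; start < len(nums); start += chunk {
        end := start + chunk
        if end > len(nums) {
            end = len(nums)
        }
        part := nums[start:end]
        go func() { results <- sumSquares(part) }()
        launched++
    }

    // Fan the partial results back in.
    total := 0
    for i := 0; i < launched; i++ {
        total += <-results
    }
    fmt.Println(total)
}

Nothing here is automatic: the chunk size and the number of workers
are the programmer's heuristics, which is exactly the part that
hasn't been automated.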

>  When Hoare wrote his stuff back in the 70s, there wasn't the strong pressure
>  that exists today to make good use of multiprocessor cores.

There was even more pressure then, because the computers were a lot
slower.  You actually needed multiple processes and shared memory to
solve most of the problems our cell phones can solve today.  More
importantly, the computers of yesteryear were unreliable, so you had to
think about automatic recovery of long-running computations.

Today, computers are so reliable that when there is a problem (like a
computer spewing garbage on the net), most software seizes up.  This
behavior is particularly annoying to me, having grown up on diskless
workstations which ran fine whether the network was up or down.  Yes,
you'd get an "abort, retry, or ignore", but at least you had the
opportunity to make that choice instead of watching the spinning
pizza without any way of stopping it except by pressing the off
button.  And this relates to my initial comment about why it's
important to not have automatic locking in multi-processing systems.

Rob


