[lug] Re: recycling code [WAS Fwd: NICHOLAS PETRELEY: "The Open Source" ]

D. Stimits stimits at idcomm.com
Thu Mar 22 12:41:57 MST 2001


Matt Clauson wrote:
> 
> On Thursday 22 March 2001 11:49, you wrote:
> > Matt Clauson wrote:
> > > You miss the point I'm trying to make here.  I contend that by re-using
> > > someone else's still-functional code instead of rewriting all of it from
> > > scratch (even after looking at someone else's to understand it -- I'll
> > > cover that below) you can save MASSIVE amounts of time and effort, and
> > > devote time
> >
> > That depends on how the code was designed. Not just quality, but
> > consider some programs are designed for OOP, others procedural. Some
> > code depends on libraries that are desirable to avoid. Sometimes the
> > goal isn't just the functionality, but a more modular means of doing
> > something for later extension. Sometimes reinventing the wheel is just
> > for one's own fun when not doing things for money. Sometimes it is
> > necessary to rewrite something just to understand it better. One of my
> > favorite versions of Murphy's laws is: "Interchangeable parts DON'T".
> > From a programmer's point of view, there are usually goals that the end
> > user doesn't see, that go far beyond the end appearance or functionality
> > that the user will see.
> 
> All of these are valid points.  However, making something more modular, or
> converting it from OOP to procedural, does not necessarily mean that you have
> to rewrite the damn thing from scratch.  Yes, some items will have to be
> rewritten.  That's an acceptable casualty.  But it doesn't mean having to
> rewrite from the ground up now, does it?  Splitting a class up into several

No, it doesn't mean you have to rewrite it from the ground up. But I've
found that, depending on the code style and the complexity of the
subject, it's often much faster to rewrite from the ground up than it
is to understand and then chop up the pre-existing code. Now if this is
something like an in-house library, or something that is expected to be
used again in its original form (at least to some extent), then the
effort to understand even complex code is probably worthwhile
(depending on the skill of the original author). If time is the key
factor, it is often more of a gamble to reuse parts of poorly written
or poorly documented code than it is to start from scratch...
especially if you are aware of weaknesses in the original code (in
which case you would have to not only dig out the parts you like, but
also fix those weaknesses).
Consider, for example, a rather vague criterion: security. Some things
are much easier to enforce through coding standards during the original
design and coding, and nearly impossible to verify after the fact in
someone else's complex code. Of course, if the old code is widely
distributed and well tested, the argument falls the other way: in a
complex situation it is better to use something old and proven. Even
that has its pitfalls, though; there are quite a few security "gotchas"
if you don't reuse a whole app, but instead only use pieces of it.
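
As a contrived illustration (hypothetical names, not taken from any
real project), here is the sort of thing I mean by reusing pieces: a
little C++ helper that is safe only because a length clamp lives
somewhere else in the original program. Pull copy_name() out on its
own and that hidden precondition does not come along for the ride.

#include <cstdio>
#include <cstring>

const std::size_t NAME_MAX_LEN = 16;
static char name_buf[NAME_MAX_LEN];

// The original author "knew" len was already bounded, because the
// app's input-parsing layer clamped it long before this was called.
static void copy_name(const char *src, std::size_t len)
{
    std::memcpy(name_buf, src, len);  // overflows if len >= NAME_MAX_LEN
    name_buf[len] = '\0';
}

int main(int argc, char **argv)
{
    const char *input = (argc > 1) ? argv[1] : "example";
    std::size_t len = std::strlen(input);

    // This clamp is the invisible half of copy_name()'s contract;
    // someone lifting only copy_name() has no obvious way to know
    // the check was required.
    if (len >= NAME_MAX_LEN)
        len = NAME_MAX_LEN - 1;

    copy_name(input, len);
    std::printf("%s\n", name_buf);
    return 0;
}

The audit that finds this kind of thing is exactly the "understand the
old code" cost that sometimes makes a rewrite the cheaper gamble.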

I'm not saying that there aren't a lot of foolish decisions made in
reinventing the wheel, but I am saying that many of the apparently
"foolish" reinventions had a purpose that isn't obvious to the user.
Then there is simply the resistance to changing one's own way of
working, another hump to get over.

> procedures, as well as the data...  Ah, hell, I forget the term now... (kids:
> don't use Perl to the exclusion of C -- it rots your brain)  ah, yes,
> structures.  Anyway, splitting up the class into its components doesn't
> necessarily mean that you have to rewrite supermassive amounts of code from
> the ground up.  Some rewriting is needed, yes....  But do we have to rewrite
> Pine from scratch to make PIMP?
> 
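
For what it's worth, here is a minimal sketch of that kind of split
(hypothetical names, not from any real project): the data members of a
class become a plain struct, each method becomes a free function taking
a pointer to that struct, and the method bodies carry over nearly
verbatim -- which is why it isn't a ground-up rewrite.

#include <cstdio>

// Original OOP form.
class Counter {
public:
    Counter() : count_(0), step_(1) {}
    void bump()        { count_ += step_; }
    int  value() const { return count_; }
private:
    int count_;
    int step_;
};

// Procedural split: same data in a struct, same bodies in functions.
struct counter_state {
    int count;
    int step;
};

static void counter_init(counter_state *c) { c->count = 0; c->step = 1; }
static void counter_bump(counter_state *c) { c->count += c->step; }
static int  counter_value(const counter_state *c) { return c->count; }

int main()
{
    Counter a;
    a.bump();

    counter_state b;
    counter_init(&b);
    counter_bump(&b);

    std::printf("%d %d\n", a.value(), counter_value(&b));  // prints "1 1"
    return 0;
}
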
> Rewriting code to avoid bad libraries (whether in code, licensing, or some
> other issue) is also something that can't be avoided sometimes, and falls
> into the category of 'acceptable casualty'.  Sometimes that happens.  A good
> example of this would be the entire KDE vs. Gnome debacle.  The QT library
> had some issues, and still does.  The competition is healthy, and Gnome is
> making progress.  Nautilus looks pretty damn sweet overall, and I'd probably
> choose it over kfm fairly rapidly.  But my interests and needs have changed.
> 
> Nautilus is (as I understand it -- I may need to recheck my facts) pretty
> much all-GPL software.  I could take a huge chunk of it and port it into kfm,
> should I want to.  I could also submit KDE functionality patches to Eazel for
> inclusion into Nautilus.  Both are acceptable.  Both are easy.  Neither
> requires rewriting something from scratch.  It doesn't mean I have to rewrite
> the entire damn thing from scratch.  And projects done for pure hack value,
> while cool and sometimes VERY worthwhile, are taking programmers and eyes
> away from projects that have much more real usage value AS OF RIGHT NOW.
> 
> Anyway, back to my original argument, which was KDE/Qt becoming the "Gold
> Standard" over Gnome/GTK+...
> 
> Where KDE has features and integration, and is rapidly approaching
> "ready for prime time" status, I see Gnome foundering.  Why?  App bloat.
> Everyone and his brother is putting out apps and designs...  But they aren't
> getting finished.  Scratching an itch, writing a Gnome mailer in Perl because
> of the hack value, this is all good and well...  But the 'user friendliness'
> of Gnome has suffered -- I find apps that are somewhat unstable (I averaged
> about three crashes a day with Balsa.  Not catastrophic, no data loss, just
> forcing me to reload the app), that lack massive chunks of functionality
> (the multiple-personality feature still isn't in the widely released code,
> and the KDE equivalent works better), and that just seem somewhat
> 'unpolished'.
> 
> The Gnome UI still feels 'kludgy', and has some rendering glitches.  It's
> still a major "developer's platform".  This isn't a bad thing, because the
> people who use it will fix bugs that annoy them.  Gnome doesn't feel like
> it's progressing as fast as KDE did at the same stage.  There are lots of
> apps still in the 0.x stage...  And Gnome is trying to push v2.0 out the
> door.  The backend framework is there, but the USABLE apps that
> [(boss)|(parents)|(kids)|(joe sixpack)] can use JUST AREN'T there.
> Admittedly, with KDE, the entry barrier that you encounter with Unix
> (multi-user over single-user, different concepts, etc.) is still there...
> But still, I find KDE much easier to use and friendlier, especially when
> looking at it from a novice's perspective.
> 
> You bring up good points about the need to rewrite code, or even the desire
> to.  However, this is costing Gnome a lot in user share, and the app bloat
> that Gnome is seeing may eventually cost the project dearly.  Multiple apps
> are a good thing given the right circumstances...  Scratching an itch or
> doing something for the 'hack value' is a good thing, given the right
> circumstances...  But doing it to excess may destroy the goals you REALLY
> want to achieve, just for the sake of 'being cool'.

Bloat does suck. Bloat is one of the reasons to reuse old code, but it
is also one of the reasons not to reuse old code... yep, the paradox is
intended. Bloat is a better reason to redesign, though.

> 
> --mec

D. Stimits, stimits at idcomm.com


