[ale] OT: you can't get there with algorithms

Pete Hardie pete.hardie at gmail.com
Sun Jun 11 16:19:28 EDT 2006


On 6/11/06, Jim Philips <briarpatchkid at bellsouth.net> wrote:
> I don't usually post things that are on Digg or Slashdot, because I assume
> people have already been exposed to them. But this article, I think, gets at
> the heart of what is wrong with computer programming as it stands today and
> that's no small event. In a recent article published here:
>
> http://www-inst.eecs.berkeley.edu/~maratb/readings/NoSilverBullet.html
>
> the author asserted that as programs grow in complexity, we will have more
> bugs and that there is no "silver bullet" to eradicate them. The author of
> the later article argues that there is indeed a silver bullet, but we can
> only find it by approaching the whole task of programming in a different way.
> As examples, he points out that as human brains handle increasingly complex
> tasks, their failure rate doesn't rise by anything like the magnitude it does
> when you increase the complexity of a program. He also points out
> that hardware systems have grown in complexity without raising the failure
> rate dramatically. He sees the fault in the approach taken with the universal
> Turing machine (UTM). Here is a short breakdown of how he sees the problem:
>
> "Unfortunately for the world, it did not occur to early computer scientists
> that a program is, at its core, a tightly integrated collection of
> communicating entities interacting with each other and with their
> environment. As a result, the computer industry had no choice but to embrace
> a method of software construction that sees the computer simply as a tool for
> the execution of instruction sequences. The problem with this approach is
> that it forces the programmer to explicitly identify and resolve a number of
> critical communication-related issues that, ideally, should have been
> implicitly and automatically handled at the system level. "
>
> Anyway, I could go on for longer, but here is a link to the article. It makes
> fascinating reading:
>
> http://www.rebelscience.org/Cosas/Reliability.htm

It does make for fun reading, but it sets off my Bogometer when it
claims that one simple thing could eradicate nearly all bugs.

He also makes some fast-and-loose comparisons with neurology that, from
the little I've read on the subject, seem iffy.

And finally, he glosses over the long debugging time the human brain
has had - one "program" that has been selectively refined for
millions, perhaps billions, of years.
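
For what it's worth, the "communicating entities" pitch in that quote
sounds a lot like plain old message passing to me. Here's a toy sketch
of what I take him to mean - my own names and example, not anything
from the article:

from collections import deque

class Bus:
    # Delivers messages between cells; routing is what he'd call
    # "handled at the system level".
    def __init__(self):
        self.handlers = {}
        self.queue = deque()

    def subscribe(self, name, handler):
        self.handlers[name] = handler

    def post(self, target, message):
        self.queue.append((target, message))

    def run(self):
        while self.queue:
            target, message = self.queue.popleft()
            self.handlers[target](message)

class Cell:
    # A reactive component: it only does work when a message arrives.
    def __init__(self, bus, name):
        self.bus = bus
        self.name = name
        bus.subscribe(name, self.receive)

    def send(self, target, message):
        self.bus.post(target, message)

class Doubler(Cell):
    def receive(self, message):
        self.send("printer", message * 2)

class Printer(Cell):
    def receive(self, message):
        print(self.name, "got", message)

bus = Bus()
Doubler(bus, "doubler")
Printer(bus, "printer")
bus.post("doubler", 21)  # inject an event; the cells react, nothing calls them directly
bus.run()                # prints: printer got 42

You can write that in any language today, and it doesn't obviously make
the bugs go away - it just moves the sequencing problem into the bus.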


My personal opinion of why software is buggy?  Nobody wants to pay for
bug-free software, because they think it's easy to write - it's only
typing stuff in, right?  Hardware designs are expected to take a long
time to develop, and they have dedicated "debugging" runs, since it
would be expensive to burn a million chips and then find a bug.  At my
employer, it's a major thing to make changes in the silicon, but we
burn full system releases in 6 weeks.


-- 
Better Living Through Bitmaps
