[ale] Open Fire on Windows Viruses
Michael B. Trausch
mike at trausch.us
Thu Feb 18 21:27:06 EST 2010
So, before you read my reply below, I have just some minor commentary on
your writing. I'd suggest capitalizing "Internet" when referring to
_the_ Internet, since it is a proper noun. I would also suggest not
capitalizing things that are not proper nouns (such as "operating
system"). As far as other non-proper nouns go, if you want to emphasize
them, do, but don't do it with initial caps because it doesn't quite
read the way you'd expect when you're just coming to the topic. If
every Important Concept (hah, see what I did there? ;-)) were
capitalized, it would get to be a bit difficult to read, or it would
give the appearance of obvious bias instead of nudging the reader in
the intended direction.
Those are just my 2¢ there. More follows, though.
On 02/18/2010 05:13 PM, arxaaron wrote:
> I try to
> address Mike's other critiques by clarifying that the
> issues being addressed by Open Source and Free
> and Freedom Friendly Software are practical and self
> evident levels of trust WORTHINESS, and not an
> expectation of absolute or automatic "TRUST". As a
> general rule, I think any exchange of goods or services
> will be more Worthy of Trust the further that ulterior
> motives of greed and secrecy are removed from the
> transaction.
I was able to finally remember what I was trying to say the other night.
The notion of trust and the whole discussion had me thinking that I
had read something on it a while back. Indeed, it was Ken Thompson's
Turing Award lecture, "Reflections on Trusting Trust".
In particular, he says, "You can't trust code that you did not totally
create yourself. (Especially code from companies that employ people
like me.) No amount of source-level verification or scrutiny will
protect you from using untrusted code."
In this passage, he is discussing the C compiler that he rigged so
that, when fed the clean compiler source code, the rigged binary would
generate a new compiler that was also rigged. The rigged compiler
would also bug the system's login program so that it would accept a
backdoor password for any valid system login. The disassembler was
rigged as well, so that looking for these issues would result in the
affected code blocks being hidden.
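To make the shape of that trick concrete, here is a toy sketch of my
own (nothing like Thompson's actual implementation, which operated on
the real C compiler; this is reduced to trivial string matching) of a
"compiler" with the two triggers he describes:

#include <stdio.h>
#include <string.h>

/* Pretend "compilation" is just announcing the code we claim to emit. */
static void emit(const char *code) {
    printf("emitting: %s\n", code);
}

static void compile(const char *source) {
    if (strstr(source, "int login(")) {
        /* Trigger 1: the login program.  Emit it with an extra
         * backdoor password check that is nowhere in the source. */
        emit("login + a hard-coded backdoor password check");
    } else if (strstr(source, "void compile(")) {
        /* Trigger 2: the compiler itself.  Emit a compiler that
         * contains both triggers again, so the backdoor survives
         * every rebuild from perfectly clean source. */
        emit("compiler + both triggers re-inserted");
    } else {
        /* Everything else is compiled faithfully, so reviewing the
         * source of ordinary programs reveals nothing. */
        emit(source);
    }
}

int main(void) {
    compile("int main(void) { return 0; }");
    compile("int login(const char *user, const char *pw) { /* ... */ }");
    compile("void compile(const char *source) { /* ... */ }");
    return 0;
}

The point is that once the second trigger exists in the binary, every
trace of it can be deleted from the source and it will still re-insert
itself each time the compiler is rebuilt.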
He also makes a statement just under the title of the paper: "Perhaps
it is more important to trust the people who wrote the software." This
brings me back to the argument that I think I made somewhat
ineffectively. In order to trust, say, GNOME, you would not only have
to be aware of every line of code that is itself GNOME, but you would
also have to know and trust the people who put their time and effort
into GNOME, a good majority of the GNU stack (including GCC and its
9.5+ million lines of source code, and GNU libc's 1+ million lines of
source code), the Linux kernel (8.2+ million lines of source code), and
the firmware and hardware of the computer system that you are running.
I'm not sure that I trust everything in the stack all that terribly
wonderfully. I cannot say with any level of certainty, for example,
that there is not something in my computer's firmware that logs my
keystrokes and sends them off somewhere when it sees IP traffic going
through my network interfaces. After all, it's entirely within the
realm of possibility, and the ROM BIOS in this system has enough space
to have code that does that, in my estimation.
Then again, I don't know that I can trust any of the software components
I mentioned above, simply because they are so large. I won't go to the
trouble of trying to figure out just what all of GNOME and its
dependencies on a GNU/Linux system are, but I've already enumerated
about 18.7+ million lines of source code. If I could read 1 KLOC per
day, it would take me 18,700 days to audit all of it. And I haven't
even mentioned any of the base source packages that make GNU/Linux
UNIX-like, nor am I including miscellaneous additional dependencies of
the Linux kernel, or any of that.
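For what it's worth, here is the back-of-the-envelope arithmetic behind
that number; the 1 KLOC/day reading rate is, of course, just an
assumption I pulled out of the air:

#include <stdio.h>

int main(void) {
    /* Rough line counts quoted above, in KLOC: GCC, glibc, Linux. */
    double kloc = 9500.0 + 1000.0 + 8200.0;
    double kloc_per_day = 1.0;  /* assumed reading/auditing rate */
    double days = kloc / kloc_per_day;
    printf("%.0f KLOC -> %.0f days (about %.0f years)\n",
           kloc, days, days / 365.25);
    return 0;
}

That works out to roughly fifty-one years of doing nothing but reading
code, before counting GNOME itself or anything else in the stack.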
Do I trust it enough to use it? Sure. Do I make any sort of assumption
that any of my data is truly private? No. If I wanted data to be
*truly* private, I would not store it in a computer system at all, even
encrypted, unless that computer system _never_ had outbound
communication via any medium. To my knowledge, that is the only way
one can be absolutely sure that a system isn't rigged in the way that
Thompson rigged his C compiler and its subsequently compiled output.
After all, having the source is pretty much irrelevant, at least when
it comes to this.
Now, should this whole mail have any influence on what you write?
Probably not. These are pretty in-depth issues that most people---at
least most people that I know---have less than zero interest in thinking
about.
--- Mike
--
Michael B. Trausch ☎ (404) 492-6475