[ale] little math

Jim Popovitch jimpop at gmail.com
Fri Feb 12 15:33:27 EST 2010


On Fri, Feb 12, 2010 at 15:06, Brian Pitts <brian at polibyte.com> wrote:
> On 02/12/2010 01:58 PM, Jim Popovitch wrote:
>> On Fri, Feb 12, 2010 at 11:37, JK <jknapka at kneuro.net> wrote:
>>> It's a lot easier to mount an attack on an encrypted data store if you
>>> can identify which data is important.  The idea is to force the attacker
>>> to analyze the entire 1TB drive, rather than being able to concentrate
>>> on the 2GB of actual encrypted data.  This is also why really secure data
>>> links transmit random data continuously -- an attacker has no idea which
>>> data is real and which is just noise, so they have to waste a lot of
>>> energy analyzing random junk and hope to get lucky.
>>
>>
>> JK, You might be on to something there... how about an ALE
>> presentation on the flaws and errors in present day Linux
>> whole-disk-encryption because the disk is not constantly writing
>> spurious data across the whole spectrum of sectors?
>
> Huh?

You missed the sarcasm.

> Writing random data to the entire disk before using it as an encrypted
> data store should [0] be equivalent to a network link transmitting
> random data continuously. The point is that if an attacker steals your
> disk (or sniffs your network connection) and examines the blocks (or
> packets) they can't tell what is meaningful data and what is noise.

Right.  But the earlier analogy of transmitting random data
continuously over a data link doesn't really apply here: a link keeps
emitting fresh noise, while a disk is only pre-filled with random
data once, before use.
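
For reference, that one-time pre-fill is roughly this (a minimal
Python sketch; the path and sizes are made-up stand-ins, and in
practice people just dd from /dev/urandom):

    # Fill a device (here a scratch image file) with random data
    # before putting an encrypted volume on top of it.
    import os

    DEV = "/tmp/scratch.img"   # stand-in for a real block device
    CHUNK = 1024 * 1024        # write 1 MiB at a time
    SIZE = 100 * CHUNK         # pretend 100 MiB device

    with open(DEV, "wb") as f:
        written = 0
        while written < SIZE:
            n = min(CHUNK, SIZE - written)
            f.write(os.urandom(n))
            written += n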

The big question is whether the whole disk needs to contain entirely
random data, or whether a smaller set of random data can be applied
repeatedly across sectors/tracks/platters/etc.  And if so, what is
the minimum acceptable size of that random set before its re-use
becomes detectable?
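
One rough way to probe that lower bound: any re-used set shows up as
duplicate sectors, which a trivial hash scan picks out.  A sketch,
with the image path and sector size as assumptions:

    # Count sectors that appear more than once.  Unique random fill
    # (and decent ciphertext) should produce essentially no duplicate
    # 512-byte sectors, so repetition points at re-used filler.
    import hashlib
    from collections import Counter

    IMG = "/tmp/scratch.img"
    SECTOR = 512

    seen = Counter()
    with open(IMG, "rb") as f:
        while True:
            sector = f.read(SECTOR)
            if not sector:
                break
            seen[hashlib.sha256(sector).hexdigest()] += 1

    dupes = sum(c for c in seen.values() if c > 1)
    print("sectors appearing more than once:", dupes)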

-Jim P.


