[ale] Dealing with really big log files....

Greg Freemyer greg.freemyer at gmail.com
Sun Mar 22 10:34:47 EDT 2009


One more idea is to use dd with the skip and count parameters.
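A minimal sketch of that approach (sample.log and the block offsets here are stand-ins for the real mysql.log and its byte offsets): dd seeks straight to a byte offset instead of scanning line by line, so it can jump deep into a 114 GB file almost instantly. Note that skip/count work in blocks of bs bytes, so the chunk will start and end mid-line; trim the first partial line afterwards.

```shell
# Build a small sample "log" so the commands are runnable as-is.
seq 1 1000 > sample.log

# Copy from byte offset 2*1024 onward, at most 2 blocks of 1 KiB:
#   skip=N  skips N input blocks before copying
#   count=M copies at most M blocks
dd if=sample.log of=tail_chunk.log bs=1024 skip=2 count=2 2>/dev/null
```

On the real file you would use a much larger bs (e.g. bs=1M) and binary-search the skip value until the chunk starts near the timestamp you want.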



On 3/22/09, Kenneth Ratliff <lists at noctum.net> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> I need to extract data from a mysql log in a 12 hour window. The
> particular problem here is that the log file is 114 gigs, and goes
> back to november 2008 (yes, someone screwed the pooch with the log
> rotation on this one, already fixed *that* particular problem, but
> still have the resulting big log file!)
>
> Now, my normal methods of parsing through a log file take a really
> really long time due to its size.
>
> I know about what line number the data I want begins on. Is there an
> easy way to just chop off all the lines before that and leaving
> everything else intact? Obviously, due to the size of the file, I
> can't load it in vi to do my usual voodoo for this crap.
>
> I'm thinking of running sed -e '1,<really big number>d' mysql.log
>
> against it, but does anyone know of a better method to just chunk out
> a big section of a text file (and by better, I mean faster, it takes
> upwards of 3 hours to process this damn thing)?
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v2.0.9 (Darwin)
>
> iEYEARECAAYFAknGQCAACgkQXzanDlV0VY7CHACgsfmnV4YuXSFbQyBV2gTsa/r5
> 29cAn0ZlZcz7YSnSw6WbNHH4is2GXpHp
> =2JWk
> -----END PGP SIGNATURE-----
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://mail.ale.org/mailman/listinfo/ale
>
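On the line-number approach above: a sketch with a small sample file (the real line numbers are specific to the 114 GB mysql.log). tail -n +N starts output at line N without writing a delete command for every earlier line, and sed can quit as soon as the window is printed instead of reading to end of file, which matters a lot at this size.

```shell
# Small sample standing in for the huge log.
seq 1 1000 > sample.log

# Keep everything from line 500 onward (tail -n +N starts at line N):
tail -n +500 sample.log > after.log

# Or extract just a window, quitting at its last line so the rest
# of the file is never read:
sed -n '500,600p;600q' sample.log > window.log
```

Either way, writing the result to a new file avoids rewriting the original 114 GB in place, which is what makes the '1,Nd' sed pass so slow.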

-- 
Sent from my mobile device

Greg Freemyer
Head of EDD Tape Extraction and Processing team
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com
