[ale] / 70% full
David A. De Graaf
dad at datix.us
Fri Sep 26 12:07:20 EDT 2014
On Thu, Sep 25, 2014 at 11:57:16AM -0400, Paul Cartwright wrote:
> On 09/25/2014 11:10 AM, Lightner, Jeff wrote:
> > Usually I do a "find / -name core" first to make sure there aren't any large core dump files from aborted processes. Doing "file core" on any found will show you what it was that aborted and the signal (usually SIGSEGV) that caused the abort. If it is fairly old I usually just delete it. Anything more recent I might delve into to figure out why it died.
> >
> > Also I do a 'find / -name "*.tar"' to see if there are any large tar bundles. Often running gzip or other compression on them will get back space.
> >
> > After that I usually look in /tmp and /var/tmp first to be sure there aren't old temporary files that can go away.
> >
> > Next I look to see whether any logs have gotten unusually large. (Be sure NOT to delete a log file until you've verified it is not "open" by a process; "lsof <logfile>" will tell you if it is.) In that case you can truncate but not delete it (or you can stop the process, delete the file, then restart the process).
> >
> > Doing the find Leam mentions is a good way to find large files. Just be sure you don't automatically delete anything until you know what it is.
> >
> > On our systems I separate out /tmp, /var, /usr, /opt and any application/database directories so as to avoid filling / itself.
> >
> >
> >
> >
> > -----Original Message-----
> > From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of leam hall
> > Sent: Thursday, September 25, 2014 10:19 AM
> > To: Paul Cartwright; Atlanta Linux Enthusiasts
> > Subject: Re: [ale] / 70% full
> >
> > yum clean all
> >
> > du -k / | sort -n > /tmp/du.root
> > tail -10 /tmp/du.root
> >
> > find / -size +4000 -exec ls -l {} \;
> >
> >
>
> ok, after unmounting my extra partitions (backups, etc.)... the only
> core files I found were actually directories under program folders.
> Found 1 tar file, cleaned up /tmp & /var/tmp, and got it down to 66%
> used... cleaned up about 1GB... still have 12 GB used.
>
> --
> Paul Cartwright
> Registered Linux User #367800 and new counter #561587
>
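To expand on Jeff's point about truncating rather than deleting an open log: unlinking only removes the directory entry, while the writing process keeps the inode (and its disk blocks) alive until it closes the file. A minimal sketch (the path here is hypothetical):

```shell
# Hypothetical log file that has grown too large
LOG=/var/log/example.log

# Check whether any process still holds it open
lsof "$LOG"

# Truncate in place; the writer keeps its open file descriptor
# on the same inode, and the space is actually freed
: > "$LOG"
```

The `: > file` idiom works in any POSIX shell; `truncate -s 0 "$LOG"` does the same thing where coreutils is available.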
The journald system uses a prodigious amount of space for its
binary data files; so much that it can sometimes take 10 minutes (!)
merely to scan them from beginning to end with journalctl, the special
tool that translates these files into something vaguely useful.
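Before doing anything drastic, it's worth measuring how much space the journal files are actually consuming. Assuming a reasonably recent systemd, journalctl can report this directly:

```shell
# Report the total disk space consumed by all journal files
journalctl --disk-usage

# Cross-check with du against the persistent journal directory
du -sh /var/log/journal
```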
There are several ways to limit the damage.
Unfortunately, total removal of journald is not one of them;
total removal makes a Fedora system unbootable.
'man journald.conf' tells of some ways to limit the allowed disk space
by editing /etc/systemd/journald.conf.
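For example, the journal's footprint can be capped with a few lines in that file (the sizes below are illustrative, not recommendations):

```ini
[Journal]
# Cap the total disk space persistent journal files may use
SystemMaxUse=50M
# Always leave at least this much free space on the filesystem
SystemKeepFree=1G
# Alternatively, keep logs only in volatile /run (lost at reboot)
#Storage=volatile
```

After editing, 'systemctl restart systemd-journald' makes the limits take effect.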
The method that works for me is to 'rm -rf /var/log/journal'.
With that directory gone, journald uses a fallback of
/run/log/journal, which is now located in a tmpfs. That is, it wastes
precious RAM instead of disk space - but only for the current run.
Since we seem to be rebooting much more frequently these days, that
doesn't amount to much. ;-(
Useful logging data is still passed to rsyslog, thence to /var/log/* .
List of some recent Linux "improvements":
- pulseaudio
- gnome
- tmpfs
- grub2
- UEFI Secure Boot
- Fedora installer
- systemd
- journald
- ...
We're circling the drain, folks.
--
David A. De Graaf DATIX, Inc. Hendersonville, NC
dad at datix.us www.datix.us