[ale] How can I delete links that can't be seen by stat?

James Sumners james.sumners at gmail.com
Wed Dec 12 15:35:42 EST 2012


I did try unlink; same results.

On Wed, Dec 12, 2012 at 3:10 PM, Scott Plante <splante at insightsys.com> wrote:
> I'm reaching way back to stuff I learned in the '80s, but it looks like
> the actual file is gone while the directory entry is still there. As I recall,
> directory entries were file names that pointed to inodes, and the inode had
> pointers to the blocks of the file, permissions, etc. An inode could have
> multiple directory entries and these were "hard links" usually created with
> ln without the "-s" flag. The file was only deleted when the last
> hard-linked directory entry was removed. The number right after the permissions
> in "ls -l" was the number of hard links. Inodes not deleted but with no
> directory entries are what ended up in lost+found. This could happen if you
> deleted an open file then powered off without closing it.
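>
> A quick illustration of how the link count behaves (scratch names, any
> directory will do):
>
>   $ touch file_a
>   $ ln file_a file_b       # hard link: a second name for the same inode
>   $ ls -li file_a file_b   # same inode number, link count of 2 on both
>   $ rm file_a
>   $ ls -li file_b          # contents still reachable; link count back to 1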
>
> It looks like these are directory entries that somehow ended up remaining
> after the file was deleted. You might try the "unlink" command. I'm not sure
> how this would have happened, though. Is it reproducible?
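>
> For example (the entry name here is hypothetical):
>
>   $ unlink ./05.12.2012/broken_entry   # unlink takes exactly one operand, no flags
>
> If unlink fails with the same lstat() error, deleting by inode number is
> the other classic trick for otherwise-unremovable entries, though it may
> run into the very same wall here:
>
>   $ ls -li .                       # note the inode number N of a bad entry
>   $ find . -xdev -inum N -delete   # N is a placeholder for that number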
>
> Scott
> ________________________________
> From: "James Sumners" <james.sumners at gmail.com>
> To: "Atlanta Linux Enthusiasts - Yes! We run Linux!" <ale at ale.org>
> Sent: Tuesday, December 11, 2012 1:00:25 PM
> Subject: [ale] How can I delete links that can't be seen by stat?
>
>
> Check out https://www.dropbox.com/s/moq4wmeas42blu9/broken_links.png
>
> In the screenshot, you'll see a list of links that have no properties
> whatsoever according to `ls`. These are supposed to be hard links.
>
> Here's the scenario:
>
> I have an NFS mount where I send nightly backups. These nightly
> backups use a common "full backup" and a series of differential
> backups. I'm using rsync to do this. At some point, the nightly
> backups failed due to low disk space and got out of sync, so I'm
> removing old backups and starting anew. However, after deleting the
> first few "old" backups I encountered this problem where `rm` can't
> remove these files since it can't lstat() them.
>
> Anyone know how I can delete these links?
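>
> Could this be the NFS client serving stale directory entries out of its
> cache? If so, would remounting clear them, e.g. (using the mount point
> from the script below):
>
>   umount ./clyde && mount ./clyde
>
> or does it have to be cleaned up on the server side, where lstat() runs
> against the real filesystem?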
>
> For reference, my backup script is:
>
> ##################################################
>
> #!/bin/bash
>
> # Pre-execution check for bsfl
> # Set options afterward
> if [ ! -f /etc/bsfl ]; then
>   echo "Backup script requires bsfl (https://code.google.com/p/bsfl/)."
>   exit 1
> fi
> source /etc/bsfl
>
> ### Options ###
>
> # Set to the desired logfile path and name
> LOG_FILE="$(dirname $0)/logs/runbackup-$(date +'%m-%d-%Y').log"
>
> # Set to the file that contains backup exclusions
> # (format = one path per line)
> EXCLUDES="$(dirname $0)/excludes"
>
> # Set to the NFS mount point
> # Be sure to configure /etc/fstab appropriately
> NFS_DIR="$(dirname $0)/clyde"
>
> # Set to test string for testing NFS mount success
> NFS_MOUNT_TEST="^clyde"
>
> # Set to the remote backup container directory
> # Backups will be stored in subdirectories of this directory
> BACKUP_DIR="${NFS_DIR}"
>
> # Set to the email address that will receive notifications
> # of backup failures
> ERROR_EMAIL_ADDR="your_email_address at mail.clayton.edu"
>
>
> ### Begin actual script ###
>
> function notify {
>   mail -s "Backup failure on $(hostname)" ${ERROR_EMAIL_ADDR} < ${LOG_FILE}
> }
>
> # Turn on bsfl logging support
> LOG_ENABLED="yes"
>
> # We need to be root to 1) read all files and 2) mount the NFS
> USER=$(whoami)
> if [ "${USER}" != "root" ]; then
>   log_error "Backup must be run as root."
>   notify
>   die 2 "Backup must be run as root."
> fi
>
> log "Mounting NFS"
> mount ${NFS_DIR}
>
> if ! grep -q "${NFS_MOUNT_TEST}" /proc/mounts; then
>   log_error "Could not mount NFS."
>   notify
>   umount ${NFS_DIR}
>   die 3 "Could not mount NFS."
> fi
>
> # Let's make sure we have enough room on the remote system
> STAT_INFO=$(stat -f --format='%b %a %S' ${NFS_DIR})
> TOTAL_BLOCKS=$(echo ${STAT_INFO} | awk '{print $1}')
> FREE_BLOCKS=$(echo ${STAT_INFO} | awk '{print $2}')
> BLOCK_SIZE=$(echo ${STAT_INFO} | awk '{print $3}')
> # free blocks / (1048576 / block size) = free space in megabytes,
> # since 1048576 / block size = the number of blocks per megabyte
> REMOTE_FREE_MB=$(echo "${FREE_BLOCKS} / (1048576 / ${BLOCK_SIZE})" | bc -l)
> log "Remote free megabytes = ${REMOTE_FREE_MB}"
>
> STAT_INFO=$(stat -f --format='%b %a %S' /)
> TOTAL_BLOCKS=$(echo ${STAT_INFO} | awk '{print $1}')
> FREE_BLOCKS=$(echo ${STAT_INFO} | awk '{print $2}')
> BLOCK_SIZE=$(echo ${STAT_INFO} | awk '{print $3}')
> LOCAL_USED_MB=$(echo "(${TOTAL_BLOCKS} - ${FREE_BLOCKS}) / (1048576 / ${BLOCK_SIZE})" | bc -l)
> log "Local used megabytes = ${LOCAL_USED_MB}"
>
> REMOTE_HAS_ROOM=$(echo "${REMOTE_FREE_MB} > ${LOCAL_USED_MB}" | bc -l)
> if [ ${REMOTE_HAS_ROOM} -eq 0 ]; then
>   log_error "Remote system does not have enough free space for the backup."
>   notify
>   umount ${NFS_DIR}
>   die 4 "Remote system does not have enough free space for the backup."
> else
>   log "Remote system has enough room. Proceeding with backup."
>   log "===== ===== ===== ====="
>   log ""
> fi
>
> if [ ! -d ${BACKUP_DIR} ]; then
>   mkdir ${BACKUP_DIR}
> fi
>
> DIR_READY=0
>
> today=$(date +'%m.%d.%Y')
> sixthday=$(date -d'-6 days' +'%m.%d.%Y')
> if [ -d "${BACKUP_DIR}/${sixthday}" ]; then
>   # Move the sixth day to today
>   log "Moving the oldest backup to be today's backup."
>   mv "${BACKUP_DIR}/${sixthday}" "${BACKUP_DIR}/${today}" >>${LOG_FILE} 2>&1
>   ln -sf "${BACKUP_DIR}/${today}" "${BACKUP_DIR}/complete_backup" >>${LOG_FILE} 2>&1
>   log ""
>   DIR_READY=1
> fi
>
> if [ -d ${BACKUP_DIR}/${today} ]; then
>   DIR_READY=1
>   log "Today's backup directory already exists. Will update today's backup."
>   log ""
> fi
>
> if [ ${DIR_READY} -eq 0 ]; then
>   yesterday=$(date -d'-1 days' +'%m.%d.%Y')
>   if [ -d "${BACKUP_DIR}/${yesterday}" ]; then
>     log "Copying yeterday's backup (${yesterday}) into place for
> differential backup."
>     cp -al "${BACKUP_DIR}/${yesterday}" "${BACKUP_DIR}/${today}" 2>&1
> 1>>${LOG_FILE}
>     log ""
>   else
>     last_backup_dir=$(ls -1 ${BACKUP_DIR} | sort -nr | head -n 1)
>     log "Copying most recent backup (${last_backup_dir}) into place
> for differential backup."
>     cp -al "${BACKUP_DIR}/${last_backup_dir}" "${BACKUP_DIR}/${today}"
> 2>&1 1>>${LOG_FILE}
>     log ""
>   fi
>
>   DIR_READY=1
> fi
>
> if [ ${DIR_READY} -eq 1 ]; then
>   rsync --archive --one-file-system --hard-links --human-readable \
>     --inplace --numeric-ids --delete --delete-excluded \
>     --exclude-from=${EXCLUDES} --verbose --itemize-changes \
>     / "${BACKUP_DIR}/${today}" >>${LOG_FILE} 2>&1
> else
>   log_error "Couldn't determine destination backup directory?"
>   notify
> fi
>
> log ""
> log "===== ===== ===== ====="
> log "Backup complete."
>
> umount ${NFS_DIR}
>
> ##################################################
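>
> (It's run nightly from root's crontab with a line along these lines;
> the install path here is a placeholder, not the real one:
>
>   0 1 * * * /path/to/runbackup.sh
> )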
>



-- 
James Sumners
http://james.roomfullofmirrors.com/

"All governments suffer a recurring problem: Power attracts
pathological personalities. It is not that power corrupts but that it
is magnetic to the corruptible. Such people have a tendency to become
drunk on violence, a condition to which they are quickly addicted."

Missionaria Protectiva, Text QIV (decto)
CH:D 59

