[ale] How can I delete links that can't be seen by stat?

James Sumners james.sumners at gmail.com
Tue Dec 11 13:00:25 EST 2012


Check out https://www.dropbox.com/s/moq4wmeas42blu9/broken_links.png

In the screenshot, you'll see a list of links that have no properties
whatsoever according to `ls`. These are supposed to be hard links.

Here's the scenario:

I have an NFS mount where I send nightly backups. These nightly
backups use a common "full backup" and a series of differential
backups. I'm using rsync to do this. At some point, the nightly
backups failed due to low disk space and got out of sync. So I'm
removing old backups and starting anew. However, after deleting the
first few "old" backups I encountered this problem where `rm` can't
remove these files since it can't lstat() them.

Anyone know how I can delete these links?
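One avenue worth sketching (not a confirmed fix for the NFS case): if `ls -i` still shows inode numbers for the broken entries, they can sometimes be removed by inode with `find -inum` instead of by name. The path and inode number below are hypothetical stand-ins; whether this works on a stale NFS entry depends on the server, since find must still be able to stat the entry.

```shell
# Sketch: remove a directory entry by inode number instead of by name.
# The inode number 123456 is a hypothetical value read off `ls -i`.
cd /path/to/backup/dir            # hypothetical path
ls -i                             # note the inode number of the broken entry
find . -maxdepth 1 -inum 123456 -delete
```

If find itself fails to stat the entries, the remaining options are usually server-side: remove them on the NFS server directly, or fsck the exported filesystem.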

For reference, my backup script is:

##################################################

#!/bin/bash

# Pre-execution check for bsfl
# Set options afterward
if [ ! -f /etc/bsfl ]; then
  echo "Backup script requires bsfl (https://code.google.com/p/bsfl/)."
  exit 1
fi
source /etc/bsfl

### Options ###

# Set to the desired logfile path and name
LOG_FILE="$(dirname $0)/logs/runbackup-$(date +'%m-%d-%Y').log"

# Set to the file that contains backup exclusions
# (format: newline-separated paths)
EXCLUDES="$(dirname $0)/excludes"

# Set to the NFS mount point
# Be sure to configure /etc/fstab appropriately
NFS_DIR="$(dirname $0)/clyde"

# Set to test string for testing NFS mount success
NFS_MOUNT_TEST="^clyde"

# Set to the remote backup container directory
# Backups will be stored in subdirectories of this directory
BACKUP_DIR="${NFS_DIR}"

# Set to the email address that will receive notifications
# of backup failures
ERROR_EMAIL_ADDR="your_email_address@mail.clayton.edu"


### Begin actual script ###

function notify {
  mail -s "Backup failure on $(hostname)" ${ERROR_EMAIL_ADDR} < ${LOG_FILE}
}

# Turn on bsfl logging support
LOG_ENABLED="yes"

# We need to be root to 1) read all files and 2) mount the NFS
USER=$(whoami)
if [ "${USER}" != "root" ]; then
  log_error "Backup must be run as root."
  notify
  die 2 "Backup must be run as root."
fi

log "Mounting NFS"
mount ${NFS_DIR}

if ! grep -q "${NFS_MOUNT_TEST}" /proc/mounts; then
  log_error "Could not mount NFS."
  notify
  umount ${NFS_DIR}
  die 3 "Could not mount NFS."
fi

# Let's make sure we have enough room on the remote system
STAT_INFO=$(stat -f --format='%b %a %S' ${NFS_DIR})
TOTAL_BLOCKS=$(echo ${STAT_INFO} | awk '{print $1}')
FREE_BLOCKS=$(echo ${STAT_INFO} | awk '{print $2}')
BLOCK_SIZE=$(echo ${STAT_INFO} | awk '{print $3}')
# stat -f reports sizes in filesystem blocks:
# blocks / (1048576 bytes per MiB / bytes per block) = size in MiB
REMOTE_FREE_MB=$(echo "${FREE_BLOCKS} / (1048576 / ${BLOCK_SIZE})" | bc -l)
log "Remote free MB = ${REMOTE_FREE_MB}"

STAT_INFO=$(stat -f --format='%b %a %S' /)
TOTAL_BLOCKS=$(echo ${STAT_INFO} | awk '{print $1}')
FREE_BLOCKS=$(echo ${STAT_INFO} | awk '{print $2}')
BLOCK_SIZE=$(echo ${STAT_INFO} | awk '{print $3}')
LOCAL_USED_MB=$(echo "(${TOTAL_BLOCKS} - ${FREE_BLOCKS}) / (1048576 / ${BLOCK_SIZE})" | bc -l)
log "Local used MB = ${LOCAL_USED_MB}"

REMOTE_HAS_ROOM=$(echo "${REMOTE_FREE_MB} > ${LOCAL_USED_MB}" | bc -l)
if [ ${REMOTE_HAS_ROOM} -eq 0 ]; then
  log_error "Remote system does not have enough free space for the backup."
  notify
  umount ${NFS_DIR}
  die 4 "Remote system does not have enough free space for the backup."
else
  log "Remote system has enough room. Proceeding with backup."
  log "===== ===== ===== ====="
  log ""
fi

if [ ! -d ${BACKUP_DIR} ]; then
  mkdir ${BACKUP_DIR}
fi

DIR_READY=0

today=$(date +'%m.%d.%Y')
sixthday=$(date -d'-6 days' +'%m.%d.%Y')
if [ -d "${BACKUP_DIR}/${sixthday}" ]; then
  # Move the sixth day to today
  log "Moving the oldest backup to be today's backup."
  mv "${BACKUP_DIR}/${sixthday}" "${BACKUP_DIR}/${today}" >> ${LOG_FILE} 2>&1
  ln -sf "${BACKUP_DIR}/${today}" "${BACKUP_DIR}/complete_backup" >> ${LOG_FILE} 2>&1
  log ""
  DIR_READY=1
fi

if [ -d ${BACKUP_DIR}/${today} ]; then
  DIR_READY=1
  log "Today's backup directory already exists. Will update today's backup."
  log ""
fi

if [ ${DIR_READY} -eq 0 ]; then
  yesterday=$(date -d'-1 days' +'%m.%d.%Y')
  if [ -d "${BACKUP_DIR}/${yesterday}" ]; then
    log "Copying yesterday's backup (${yesterday}) into place for differential backup."
    cp -al "${BACKUP_DIR}/${yesterday}" "${BACKUP_DIR}/${today}" >> ${LOG_FILE} 2>&1
    log ""
  else
    last_backup_dir=$(ls -1 ${BACKUP_DIR} | sort -nr | head -n 1)
    log "Copying most recent backup (${last_backup_dir}) into place for differential backup."
    cp -al "${BACKUP_DIR}/${last_backup_dir}" "${BACKUP_DIR}/${today}" >> ${LOG_FILE} 2>&1
    log ""
  fi

  DIR_READY=1
fi

if [ ${DIR_READY} -eq 1 ]; then
  rsync --archive --one-file-system --hard-links --human-readable --inplace \
  --numeric-ids --delete --delete-excluded --exclude-from=${EXCLUDES} \
  --verbose --itemize-changes / "${BACKUP_DIR}/${today}" >> ${LOG_FILE} 2>&1
else
  log_error "Couldn't determine destination backup directory?"
  notify
fi

log ""
log "===== ===== ===== ====="
log "Backup complete."

umount ${NFS_DIR}

##################################################

-- 
James Sumners
http://james.roomfullofmirrors.com/

"All governments suffer a recurring problem: Power attracts
pathological personalities. It is not that power corrupts but that it
is magnetic to the corruptible. Such people have a tendency to become
drunk on violence, a condition to which they are quickly addicted."

Missionaria Protectiva, Text QIV (decto)
CH:D 59

