I use BoxBackup in lazy mode to back up my systems. This works well for normal files and directories but not so well for databases. I wanted to periodically dump my database to a file and then have BoxBackup come along and do its thing. The only problem was how to name each backup file and how to clean up after a certain number of backups had been taken. Since BoxBackup runs in lazy mode I can't know ahead of time exactly when the backup will happen or which files it will pick up.

I looked at generating files with timestamps and then doing some calculations to clean up old files, but that seemed messy and I was sure someone must have run into this kind of situation before. Then I thought about log files and how they are rotated. It turns out that the standard Linux logrotate command knows how to rename files and get rid of old ones automagically. It even has script functionality that lets you run commands before and after the rotation happens. Sounds perfect. Here's what I did.

First I created a directory in my web apps directory to hold all the database information and backup files. I'm using PostgreSQL, but you could do the same thing for MySQL or even Oracle. Just change the names of the directories and commands to something more suitable for your environment.

$ mkdir /var/www/postgres


Next we have to create a status file for logrotate to use. Since we will be running this as a non-root user we don't want to pollute the system status file with our non-log database backup stuff.

$ touch /var/www/postgres/logrotate.status


We also want to create a "fake" dump file so that logrotate will have something to start working with. If this file doesn't exist logrotate will refuse to rotate it. In this case we are backing up a Postgres database called "mydatabase".

$ touch /var/www/postgres/mydatabase.dump


Now we get to the heart of the matter: the logrotate config itself. This tells logrotate to keep 10 copies of our dump but not to compress or copy it. It will also complain if the original file to rotate is missing, and it will create a blank dump file for next time. The prerotate script regenerates the dump just before logrotate renames it, so the freshly dumped file is what ends up in the numbered copies.

# /var/www/postgres/logrotate.pg_dump
# Dump PostgreSQL databases and prepare them for backup
/var/www/postgres/mydatabase.dump {
    # keep 10 old dumps, uncompressed, and complain if the file is missing
    rotate 10
    nomissingok
    create
    nocompress
    nocopy
    # regenerate the dump just before it is rotated
    prerotate
        test -x /usr/bin/pg_dump || exit 0
        /usr/bin/pg_dump mydatabase -F c > /var/www/postgres/mydatabase.dump
    endscript
}
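
As mentioned earlier, the same trick should carry over to other databases. Here is a rough sketch of what a MySQL version might look like; the /var/www/mysql path is hypothetical, and it assumes mysqldump can authenticate without a password (for example via the invoking user's ~/.my.cnf):

# /var/www/mysql/logrotate.mysqldump
# Dump MySQL databases and prepare them for backup
/var/www/mysql/mydatabase.dump {
    rotate 10
    nomissingok
    create
    nocompress
    nocopy
    prerotate
        test -x /usr/bin/mysqldump || exit 0
        /usr/bin/mysqldump mydatabase > /var/www/mysql/mydatabase.dump
    endscript
}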


One final step and we're done. The pg_dump command connects to the database as the user who runs it, and the postgres user has access to all the databases by default (on Debian, anyway). So running the logrotate command as the postgres user lets us back up any database on the system without a password. To give the postgres user access to our new setup, we have to change the ownership of the files in /var/www/postgres.

$ chown -R postgres:postgres /var/www/postgres
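
If you want to sanity-check things at this point, the postgres user should now be able to write to the new files and dump the database without being asked for a password. One way to test that (assuming sudo is available):

$ sudo -u postgres touch /var/www/postgres/mydatabase.dump
$ sudo -u postgres /usr/bin/pg_dump mydatabase -F c > /dev/null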


Now, whenever you want to generate a new database dump, just run the following command as the postgres user. You can put it in a crontab or run it by hand. Remember to use the -f flag to force the rotation (and therefore the dump) to happen.

$ /usr/sbin/logrotate -f /var/www/postgres/logrotate.pg_dump -s /var/www/postgres/logrotate.status
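

For example, to take a new dump every night at 2 a.m. (the time is arbitrary), you could add something like this to the postgres user's crontab with crontab -u postgres -e as root:

0 2 * * * /usr/sbin/logrotate -f /var/www/postgres/logrotate.pg_dump -s /var/www/postgres/logrotate.status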