I'm hosting three different WordPress blogs for different people (this one included) and recently decided to switch from my home server to a VPS. My home server has a RAID-Z array and important data is backed up remotely on a regular basis, giving me peace of mind that my data is safe, but I can't be so sure about the VPS. Also, I might switch to a different VPS provider in the future and wanted to make deploying the blogs as easy as possible. I came up with the following solution. My requirements were:
- Daily backups usable for small websites with little traffic
- Backups should be complete: all files, MySQL dump, logs
Using git has many inherent advantages over simply copying files to a remote server.
- Daily backups will only store diffs, taking up little space
- Using a VCS allows you to see history, merge fixes, branch, etc. if you need to (e.g. for developing plugins/themes on one computer and easily merging them into your website)
- Push backups to a remote server using http, https, ssh, or git protocols
- Easily exclude files from backups using a .gitignore
- Setting up a new server is just a git clone away
- Know your backup succeeded just by browsing your git repository
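For example, the plugin/theme workflow mentioned above can be sketched like this (branch name, file contents, and commit messages are hypothetical; run it in a throwaway directory):

```shell
# Hypothetical sketch: develop a theme on a branch, then merge it
# back into the live branch. Uses a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo "/* base styles */" > style.css
git add style.css
git commit -qm "initial theme"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git checkout -qb new-theme              # do the theme work on a branch
echo "body { color: #333; }" >> style.css
git commit -qam "tweak colors"
git checkout -q "$main"                 # switch back to the live branch
git merge -q new-theme                  # fast-forward the finished work in
git log --oneline                       # both commits are now on the live branch
```

On a real site you would clone the backup repository to your development machine, branch there, and push the merged result back.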
So, why should we store the database backup and log files in git as well?
- Complete history of your website -- restore the files AND database to any state you want
- The diffs in each commit give you the traffic for the day and which rows were added to your database
- No need for separate backup methods!
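As a sketch of the diff point: because the dump is taken with --skip-extended-insert (one row per INSERT statement), a new database row shows up as a single added line in the commit diff. A contrived, self-contained demonstration (table name and values are made up):

```shell
# Hypothetical sketch: a commit diff exposes the rows added to a dump.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
mkdir sql
printf "INSERT INTO wp_posts VALUES (1,'hello');\n" > sql/example.com.sql
git add sql
git commit -qm "Updating sql dump"
# A day later the dump contains one more row...
printf "INSERT INTO wp_posts VALUES (2,'world');\n" >> sql/example.com.sql
git commit -qam "Updating sql dump"
# ...and the diff of the latest commit shows exactly that row:
git log -1 -p -- sql/example.com.sql
```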
Of course, git isn't the perfect solution for all websites. High-traffic websites may find their git repositories quickly grow in size when storing database backups and log files.
To make things easier, I structured each of the sites in the following fashion:
```
/usr/local/www/
`- example.com/
   |- logs/
   |  |- example.com-access.log
   |  `- example.com-error.log
   |- root/               # WordPress installation
   `- sql/
      `- example.com.sql  # MySQL dump
```
Also, the MySQL username and database name for each site are the same as the domain name, so "example.com" in this case.
Prepare the Remote Repository on the Backup Server
Create an empty repository on the backup server.
```shell
$ cd /path/to/git/repositories
$ mkdir example.com.git
$ cd example.com.git
$ git init --bare --shared
```
Add Data to the Repository
We can clone the empty repository we just made and add the initial data to it. Do this on the server your site is currently hosted on.
```shell
$ git clone http://git.example.com/example.com
$ cd example.com
$ mkdir logs sql
$ touch logs/example.com-access.log logs/example.com-error.log
$ mysqldump -uexample.com -pPASSWORD --skip-extended-insert --skip-comments example.com > sql/example.com.sql
# Copy the WordPress install
$ cp -r /path/to/site/root root
```
I also made a simple Makefile to automate backing up. Edit it to suit your needs (especially the MYSQL_ variables) and place it in the root of your repository.
```make
SITE_NAME=	$(notdir $(CURDIR))
MYSQL_USER=	$(SITE_NAME)
MYSQL_PASS=	PASSWORD
MYSQL_DB=	$(SITE_NAME)

.PHONY: backup
backup: sql_backup git_commit git_push
	@echo Backup of $(SITE_NAME) finished successfully!

.PHONY: sql_backup
sql_backup:
	@echo Backing up the database
	@mysqldump -u$(MYSQL_USER) -p$(MYSQL_PASS) --skip-extended-insert --skip-comments $(MYSQL_DB) > sql/$(SITE_NAME).sql

.PHONY: git_commit
git_commit: git_commit_sql git_commit_logs git_commit_root

.PHONY: git_commit_logs
git_commit_logs:
	@echo Committing log files
	@-git commit logs -m "Updating logs"

.PHONY: git_commit_root
git_commit_root:
	@echo Committing root
	@-git add root
	@-git commit root -m "Updating root"

.PHONY: git_commit_sql
git_commit_sql:
	@echo Committing sql
	@-git commit sql -m "Updating sql dump"

.PHONY: git_push
git_push:
	@echo Pushing commit
	@-git push
```
Now we can commit changes and push to our backup server. Remember that you can add a .gitignore to keep files or directories from being backed up.
```shell
$ git add .
$ git commit -m "initial commit"
$ git push origin master
```
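For example, a .gitignore for this layout might exclude WordPress's cache and upgrade scratch directories (illustrative paths only; adjust for your plugins):

```
# Illustrative exclusions for a WordPress install -- adjust to taste.
root/wp-content/cache/
root/wp-content/upgrade/
*.tmp
.DS_Store
```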
Now when you want to deploy your site, set up the MySQL user and database, then check out the site from git.
```shell
$ cd /usr/local/www
$ git clone --shared http://git.example.com/example.com
$ cd example.com
$ mysql -uexample.com -pPASSWORD example.com < sql/example.com.sql
```
As long as your web server's document root points to /usr/local/www/example.com/root, you should now be able to access your site!
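For example, a minimal nginx server block matching this layout might look like the following. This is only a sketch: the php-fpm socket path and PHP handling are assumptions, and Apache works just as well.

```nginx
# Hypothetical nginx sketch for the layout above.
server {
    listen 80;
    server_name example.com;
    root /usr/local/www/example.com/root;

    # Write logs inside the repository so they get backed up too.
    access_log /usr/local/www/example.com/logs/example.com-access.log;
    error_log  /usr/local/www/example.com/logs/example.com-error.log;

    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;  # assumed socket path
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```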
If you want to test backups, just generate some log activity or do something that would update the database, then run make backup from the root of the repository.
You can automate backups using cron.
Access your crontab:

```shell
$ crontab -e
```
...and add the following line:

```shell
@daily cd /usr/local/www/example.com && make backup > /dev/null 2>&1
```

(Cron runs jobs under /bin/sh, which may not understand bash's &> shorthand, so the portable > /dev/null 2>&1 redirection is safer here.)
If all goes well, you should see daily commits popping up on your backup server! And if you ever lose data, you can follow the deployment steps above to restore your site from the last backup.