There are several ways to back up WordPress, depending on your knowledge and user privileges. In this post I want to show you how we can back up the WordPress instance which we previously deployed through our docker-compose setup. For that we need to know what we want to back up and how we can actually do it. I included my backup script, which runs automatically on my Pi. After reading this post I hope you have an overview of how I handled the backup in that script.
0. Backup options
In general you have many options to back up your current WordPress data. To save all posts and meta information you could export the WordPress-related data through its built-in functionality under Tools > Export, which generates an XML file. However, when restoring the data I ran into the problem that the size of the file exceeded the maximum upload size. You could work around this by either increasing the maximum upload size (depends on your server) or splitting the backup into smaller parts. This sounds a little unhandy, and since we own our Pi we have access to better options.
It is, for example, possible to just back up (copy) the whole WordPress folder (in our case the folder data) and save it to a separate dedicated hard drive plugged into the Pi. Restoring it would be fairly simple, since you only have to copy the backup back to the original location.
However, I wanted to back up my files to a cloud storage (Dropbox), and it is inconvenient to upload 400+ MB every day. The solution at hand is to only upload the necessary files, right? In the following sections I want to introduce you to my approach.
1. WordPress components
First we need to know what we actually need to back up. In #0 Introduction to Docker and WordPress we already learned about the components of WordPress. These stay the same regardless of whether we use Docker or not. We want to back up:
- our database
- plugins
- uploaded files (such as images or videos)
- configuration files (such as the Caddyfile, wp-config.php)
2. Backup database
We can back up our database by initiating a MySQL dump. This outputs an SQL file which, when executed, will recreate all the needed database structures as well as the data we had at the point of the dump. If you run MySQL/MariaDB without Docker, this functionality is (I believe) located at /usr/bin/mysqldump. Since we use Docker, we need to call it inside our container.
First we need to find the name of our MariaDB container. The sed call below collapses the column separators of the docker ps table (runs of two or more spaces) into a single delimiter so that cut can extract the NAMES column.
$ mysql_container=$(docker ps | sed -E 's/ {2,}/|/g' | cut -d '|' -f7 | grep webservice_mariadb)
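If your Docker version supports the --format flag, a less fragile alternative (my suggestion, not part of the original script) is to let Docker print the container names directly:
$ mysql_container=$(docker ps --format '{{.Names}}' | grep webservice_mariadb)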
Then we want to call mysqldump and write the result to the host system. In the following snippet the credentials are read from the same config file which we used to instantiate our MariaDB. The name of our backup file includes a date-based prefix to distinguish different backup versions.
$ docker exec $mysql_container /usr/bin/mysqldump --user="${mysql_user}" --password="${mysql_pwd}" --all-databases > $sql_backupfilename
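The snippet does not show where these variables come from. A minimal sketch, assuming the credentials live in a sourceable file and the dump goes to a local backup folder (the file locations and variable names are my assumptions, not necessarily those of the actual script), could look like this:
# Sketch: read the database credentials and build a date-prefixed dump name.
# The credentials file is assumed to define mysql_user and mysql_pwd.
source /home/pi/docker/config/mysql_credentials
backup_dir="/home/pi/backup"
sql_backupfilename="$backup_dir/$(date +%Y-%m-%d)_wordpress_dump.sql"
docker exec "$mysql_container" /usr/bin/mysqldump --user="${mysql_user}" --password="${mysql_pwd}" --all-databases > "$sql_backupfilename"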
To restore our database we can copy the dump to /data/restore, which gets automatically mounted into the container. We then hop inside the container, navigate to the dump file and execute the SQL. Be careful: this should only be executed on a fresh database, since the dump DROPs existing tables!
$ docker exec -it <container-ID> sh
$ cd restore
$ mysql -u root -p < backup_dump.sql
3. Backup plugins
Plugins save their metadata inside the database, but there is an off chance that different plugin versions use a different data model and won't work together. To prevent this from happening we simply save the matching plugin version alongside the dump file. The plugins are generally stored inside /data/wp/www/wp-content/plugins, but be aware that depending on the plugin, it might generate other files which are saved elsewhere (e.g. in uploads).
To simplify the backup, I just assume that we only need to copy everything inside the plugins folder, which can be done with the following command (a concrete example with full paths follows the flag explanations below):
$ rsync -a --delete $source_plugin_dir backup/
- The -a (archive) flag tells rsync to act recursively and preserve metadata such as permissions and timestamps (hard links excluded)
- The --delete flag tells rsync to delete files in the target folder that no longer exist in the source
(source: see the rsync man page, Stack Overflow or howtogeek.com)
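For illustration, with concrete paths (these are assumptions based on the folder layout above, adjust them to your setup) the call could look like this. Note that the trailing slash on the source makes rsync copy the folder's contents rather than nesting another plugins directory inside the target:
# Assumed paths -- the trailing slash on the source copies the contents
# of the plugins folder directly into the backup target.
source_plugin_dir="/home/pi/docker/data/wp/www/wp-content/plugins/"
rsync -a --delete "$source_plugin_dir" /home/pi/backup/plugins/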
4. Backup Images / Uploads
In my case, once uploaded, images rarely change or get deleted. So it makes sense to only upload a delta of new images instead of all images every night.
The key was to have my uploads structured in a year/month folder fashion, which can be enabled in WordPress under Settings > Media > Organize my uploads into month- and year-based folders. The rough algorithm looks like this (a shell sketch follows the list):
- For every month in every year do:
- if a SUCCESS flag exists, skip this folder, else continue
- back up the content with rsync and zip it
- create a hash over the content, save it, and compare it with the previous hash to detect changes. If changes occurred, upload the zip
- if the month (and year) lies before the current year/month, close the folder with a SUCCESS flag
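A minimal shell sketch of this loop could look like the following. The paths, the SUCCESS and hash file names and the upload_to_dropbox function are assumptions for illustration only; the actual script in the repository differs in detail.
# Sketch of the incremental uploads backup. Paths and the
# upload_to_dropbox function are placeholders -- adapt to your setup.
uploads_dir="/home/pi/docker/data/wp/www/wp-content/uploads"
backup_dir="/home/pi/backup/uploads"
current="$(date +%Y/%m)"

for year_dir in "$uploads_dir"/20*/; do            # year folders, e.g. .../2021/
  for month_dir in "$year_dir"*/; do               # month folders, e.g. .../2021/05/
    rel="${month_dir#"$uploads_dir"/}"             # -> 2021/05/
    rel="${rel%/}"                                 # -> 2021/05
    target="$backup_dir/$rel"
    mkdir -p "$target"

    # A SUCCESS flag marks a month that was already closed -- skip it.
    [ -f "$target/SUCCESS" ] && continue

    # Mirror the month folder and pack it into a zip archive.
    rsync -a --delete "$month_dir" "$target/files/"
    (cd "$target" && zip -qr "uploads-${rel//\//-}.zip" files)

    # Hash the content and only upload when something actually changed.
    new_hash=$(find "$target/files" -type f -exec sha256sum {} + | sort | sha256sum | cut -d ' ' -f1)
    old_hash=$(cat "$target/hash" 2>/dev/null)
    if [ "$new_hash" != "$old_hash" ]; then
      echo "$new_hash" > "$target/hash"
      upload_to_dropbox "$target/uploads-${rel//\//-}.zip"   # hypothetical uploader function
    fi

    # Months that lie in the past will not change anymore -- close them.
    [ "$rel" != "$current" ] && touch "$target/SUCCESS"
  done
done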
5. Final Notes
Finally, there are some configuration files which we did not cover yet and which still need to be backed up, such as:
- the Caddyfile (of your web server)
- the mysql_credentials file and wp-config.php (to retain the same database connection)
Ironically, I back up my files to Dropbox with the help of a Dropbox-Uploader container (https://github.com/andreafabrizi/Dropbox-Uploader). If you want to use the same container, you will need to initialize it first and then save the configuration so that you can call the uploader automatically without manually entering credentials.
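Once the uploader is initialized and its configuration is saved, a call could look roughly like the following. The image name, mount points and remote folder are placeholders and depend on how you built or pulled the container; only the dropbox_uploader.sh upload <local> <remote> part reflects the script's actual syntax.
# Rough shape of an upload call through a containerized Dropbox-Uploader.
# Image name, mounted paths and the remote folder are placeholders.
docker run --rm \
  -v /home/pi/.dropbox_uploader:/root/.dropbox_uploader \
  -v /home/pi/backup:/backup \
  <your-dropbox-uploader-image> \
  dropbox_uploader.sh upload "/backup/$(date +%Y-%m-%d)_wordpress_dump.sql" /wordpress-backup/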
Since you will create a new MySQL dump on every backup iteration, you should also not forget to clean up old dumps.
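One simple way to do that (the path and naming scheme here are assumptions matching the sketch above) is to let find delete dumps older than a week:
# Keep roughly one week of dumps, delete anything older.
find /home/pi/backup -name '*_wordpress_dump.sql' -mtime +7 -delete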
Additionally, it would be nice to back up multiple WordPress instances. Depending on how they are connected, the details might differ in your case. I have a subdomain running a separate WordPress installation which shares the same database. This means I only need one database dump, but I need to back up images and plugins for each instance separately.
I included the backup script in the repository under backup, as well as an example configuration file which is preconfigured under the assumption that you cloned the repo into /home/pi/docker/. Furthermore, if you don't want to upload your files to Dropbox, you can change the uploader function accordingly. Keep in mind that you then need to change the cleanup_mysqldump function as well.
To be honest, I don't think the demand for a self-written backup shell script is high at this point, which is why I have not optimized it yet (never change a working system). However, if you want to implement this method and still struggle at any point, feel free to leave a comment and I will try my best to help you out 🙂