One-click server data backup script backup.sh (now with COS / Aliyun Drive support)

This backup script is adapted from Teddysun (秋水逸冰)'s backup.sh.
It adds uploading to Tencent Cloud COS and Aliyun Drive,
and strengthens the encryption: PBKDF2 key derivation with 200,000 iterations, and the message digest switched to SHA-256.

1. Features of the one-click backup script backup.sh

  1. Full or selective backup of MySQL/MariaDB/Percona databases;
  2. Backup of specified directories or files;
  3. Optional encryption of backup files (requires the openssl command);
  4. Optional upload to Google Drive (requires rclone, installed and configured first);
  5. Optional upload to Tencent Cloud COS (requires coscmd, installed and configured first);
  6. Optional upload to Aliyun Drive (requires aliyunpan, installed and configured first);
  7. Optional upload to an FTP server;
  8. Optionally, when local backups older than a given number of days are deleted, the files with the same names on Google Drive/COS/Aliyun Drive are deleted as well.

2. Modify and configure the script

(1) Notes on the variable names:

  • ENCRYPTFLG (encryption flag; true to encrypt, false not to; the default is to encrypt)
  • BACKUPPASS (encryption password; important, be sure to change it)
  • LOCALDIR (directory where backups are stored; set it as you like)
  • TEMPDIR (temporary directory used while creating backups; set it as you like)
  • LOGFILE (path of the log file the script writes)
  • MYSQL_ROOT_PASSWORD (root password for MySQL/MariaDB/Percona)
  • MYSQL_DATABASE_NAME (name of the MySQL/MariaDB/Percona database to back up; leave blank to back up all databases)

※ MYSQL_DATABASE_NAME is an array variable and can hold several names. For example:

MYSQL_DATABASE_NAME[0]="phpmyadmin"
MYSQL_DATABASE_NAME[1]="test"
  • BACKUP (list of directories or files to back up; leave blank to back up no directories or files)

※ BACKUP is an array variable and can hold several entries. For example:

BACKUP[0]="/data/www/default/test.tgz"
BACKUP[1]="/data/www/default/test/"
BACKUP[2]="/data/www/default/test2/"
  • LOCALAGEDAILIES (how many days to keep local backups before deleting old ones; default 7 days)
  • DELETE_REMOTE_FILE_FLG (flag for deleting backup files on Google Drive/COS/AliyunDrive/FTP; true to delete, false to keep)
  • RCLONE_NAME (the remote name you set during rclone config; must be set)
  • RCLONE_FOLDER (remote directory to back up into; created automatically if it does not exist on Google Drive. Default empty, i.e. the root directory)
  • RCLONE_FLG (flag for uploading local backups to Google Drive; true to upload, false not to)
  • COS_FOLDER (remote directory to back up into; created automatically if it does not exist on COS. Default empty, i.e. the root directory)
  • COS_FLG (flag for uploading local backups to COS; true to upload, false not to)
  • ALI_FLG (flag for uploading local backups to AliyunDrive; true to upload, false not to)
  • ALI_FOLDER (remote directory to back up into; __this FAILS if the directory does not exist on AliyunDrive — create it manually first!__)
  • ALI_PY_FILE (path to aliyunpan's main.py)
  • ALI_REFRESH_TOKEN (the Aliyun Drive REFRESH_TOKEN)
  • FTP_FLG (flag for uploading files to an FTP server; true to upload, false not to)
  • FTP_HOST (hostname or IP address of the FTP server to connect to)
  • FTP_USER (FTP username)
  • FTP_PASS (password of the FTP user)
  • FTP_DIR (remote FTP directory, e.g. public_html)
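Putting the variables above together, a minimal example of the config section might look like this. All values below are illustrative placeholders, not defaults shipped with the script:

```shell
# Illustrative sample config (placeholder values — adjust to your server).
ENCRYPTFLG=true
BACKUPPASS="CHANGE_ME_TO_A_STRONG_PASSWORD"
# Note: the script concatenates these paths with filenames without adding
# a slash, so LOCALDIR and TEMPDIR should end with "/".
LOCALDIR="/root/backups/"
TEMPDIR="/root/backups/temp/"
LOGFILE="/root/backups/backup.log"
MYSQL_ROOT_PASSWORD=""            # blank = skip the MySQL backup
BACKUP[0]="/data/www/default/"
LOCALAGEDAILIES="7"
DELETE_REMOTE_FILE_FLG=false
RCLONE_FLG=false
COS_FLG=false
ALI_FLG=false
FTP_FLG=false
```

Note the trailing slashes on LOCALDIR and TEMPDIR: the script builds file names like `${LOCALDIR}$(hostname)_....tgz`, so omitting the slash would mangle the paths.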

(2) A few caveats:

  1. The script must be run as root;
  2. The script uses openssl for encryption, so install it beforehand;
  3. By default the script backs up all databases (a full backup);
  4. To decrypt a backup file, run:
openssl enc -aes256 -salt -pbkdf2 -iter 200000 -in [ENCRYPTED BACKUP] -out decrypted_backup.tgz -pass pass:[BACKUPPASS] -d -md sha256
  5. After decrypting the backup file, extract it with:
tar -zxPf [DECRYPTION BACKUP FILE]

A note on the -P option:
tar archives normally store relative paths. Adding -P lets tar archive files with their absolute paths, so you must also pass -P when extracting.
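A quick self-contained demonstration of the -P behavior, using a throwaway temp directory:

```shell
# Demonstrate -P: archive a file by its absolute path, delete it,
# then restore it to the same absolute location.
workdir=$(mktemp -d)
mkdir -p "${workdir}/data"
echo "hello" > "${workdir}/data/file.txt"

tar -zcPf "${workdir}/backup.tgz" "${workdir}/data"   # -P keeps the leading "/"
rm -rf "${workdir}/data"
tar -zxPf "${workdir}/backup.tgz"                     # -P restores to the absolute path

cat "${workdir}/data/file.txt"   # prints: hello
```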

3. Configure the rclone command (optional)

rclone is a command-line tool for uploading to, downloading from, and otherwise managing Google Drive (among many other storage backends). Official site: https://rclone.org/

You can install rclone with the command below (RedHat-family example; install unzip first):

yum -y install unzip && wget -qO- https://rclone.org/install.sh | bash

Then run the following command to start configuration:

rclone config

Refer to this article: when you reach the "Use auto config?" prompt, answer n (do not auto-configure), open the URL rclone prints in a browser, click Accept, and paste the string the browser shows back into the terminal to complete authorization, then quit the config. The referenced article also covers mounting; note that you do not need to mount Google Drive here.
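To confirm the remote works before wiring it into the script, you can exercise it directly. "mydrive" below is a placeholder for whatever remote name you chose during rclone config:

```shell
# Sanity-check the rclone remote ("mydrive" is a placeholder remote name).
rclone lsd mydrive:                         # list top-level directories
echo "rclone test" > /tmp/rclone_test.txt
rclone copy /tmp/rclone_test.txt mydrive:   # confirm write access
rclone delete mydrive:rclone_test.txt       # clean up the test file
```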

4. Configure the coscmd command (optional)

(1) Install via pip

Install with pip:

pip install coscmd

Once installed, you can check the current version with -v or --version.

(2) Update via pip

After installation, run the following to update:

pip install coscmd -U

Note: with pip versions >= 10.0.0, installing or upgrading dependencies may fail; pip 9.x is recommended (pip install pip==9.0.0). If you installed a recent Python release (e.g. 3.9.0), pip is already bundled and you do not need to install it again.

(3) Quick configuration

For simple use cases, the quick configuration example below is usually enough.

Note: before configuring, you need to create a bucket in the COS console to hold the configuration (e.g. configure-bucket-1250000000) and create the key credentials.

coscmd config -a AChT4ThiXAbpBDEFGhT4ThiXAbp**** -s WE54wreefvds3462refgwewe**** -b configure-bucket-1250000000 -r ap-chengdu
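After configuring, a quick upload-and-list round trip verifies that the credentials and bucket work. The file names below are illustrative:

```shell
# Verify coscmd works: upload a small file, then list the bucket root.
echo "cos test" > /tmp/cos_test.txt
coscmd upload /tmp/cos_test.txt /cos_test.txt
coscmd list /
coscmd delete /cos_test.txt   # clean up the test object
```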

5. Configure the aliyunpan command (optional)

git clone https://github.com/wxy1343/aliyunpan.git
cd aliyunpan
pip install -r requirements.txt
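The backup script drives this tool via python3 (see ali_upload in the appendix): main.py path, then the refresh token, then the subcommand and its arguments. A sketch of the upload call, with the path, token, file, and folder all placeholders — verify the subcommand syntax against the project's own README, as the CLI may change:

```shell
# How backup.sh invokes aliyunpan (all values are placeholders):
#   python3 <ALI_PY_FILE> <ALI_REFRESH_TOKEN> u <local file> /<remote folder>/
python3 /opt/aliyunpan/main.py "YOUR_REFRESH_TOKEN" u /root/backups/test.tgz.enc /backup/
```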

6. Run the script to start a backup

./backup.sh

By default the script prints backup progress to the console and reports the total elapsed time at the end.
If you run it from cron, there is no need for console output; logging to the file is enough.
In that case, make a small change to the script's log function.

log() {
    echo "$(date "+%Y-%m-%d %H:%M:%S")" "$1"
    echo -e "$(date "+%Y-%m-%d %H:%M:%S")" "$1" >> ${LOGFILE}
}

Change it to:

log() {
    echo -e "$(date "+%Y-%m-%d %H:%M:%S")" "$1" >> ${LOGFILE}
}

There are plenty of tutorials on automating backups with cron, so that is not covered again here.
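For reference, a minimal crontab entry (added with crontab -e as root) that runs the backup every day at 03:30 might look like this; the script path is illustrative:

```shell
# m  h  dom mon dow  command
30 3 * * * /root/backup.sh > /dev/null 2>&1
```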

7. Appendix: the full script

#!/usr/bin/env bash
# Copyright (C) 2013 - 2020 Teddysun <i@teddysun.com>
# 
# This file is part of the LAMP script.
#
# LAMP is a powerful bash script for the installation of 
# Apache + PHP + MySQL/MariaDB and so on.
# You can install Apache + PHP + MySQL/MariaDB in a very easy way.
# Just need to input numbers to choose what you want to install before installation.
# And all things will be done in a few minutes.
#
# Description:      Auto backup shell script
# Description URL:  https://teddysun.com/469.html
#
# Website:  https://lamp.sh
# Github:   https://github.com/teddysun/lamp
#
# You must modify the config before running it!!!
# Backup MySQL/MariaDB databases, files and directories
# Backup file is encrypted with AES-256 using PBKDF2 (200,000 iterations) and a SHA-256 digest (option)
# Auto transfer backup file to Google Drive (needs the rclone command) (option)
# Auto transfer backup file to Tencent Cloud COS (needs the coscmd command) (option)
# Auto transfer backup file to AliyunDrive (needs aliyunpan) (option)
# Auto transfer backup file to FTP server (option)
# Auto delete remote files on Google Drive/COS/AliyunDrive/FTP server (option)
[[ $EUID -ne 0 ]] && echo "Error: This script must be run as root!" && exit 1
########## START OF CONFIG ##########
# Encrypt flag (true: encrypt, false: not encrypt)
ENCRYPTFLG=true
# WARNING: KEEP THE PASSWORD SAFE!!!
# The password used to encrypt the backup
# To decrypt backups made by this script, run the following command:
# openssl enc -aes256 -salt -pbkdf2 -iter 200000 -in [encrypted backup] -out decrypted_backup.tgz -pass pass:[backup password] -d -md sha256
BACKUPPASS=""
# Directory to store backups
LOCALDIR=""
# Temporary directory used during backup creation
TEMPDIR=""
# File to log the outcome of backups
LOGFILE=""
# OPTIONAL:
# If you want to backup the MySQL database, enter the MySQL root password below, otherwise leave it blank
MYSQL_ROOT_PASSWORD=""
# Below is a list of MySQL database name that will be backed up
# If you want backup ALL databases, leave it blank.
MYSQL_DATABASE_NAME[0]=""
# Below is a list of files and directories that will be backed up in the tar backup
# For example:
# File: /data/www/default/test.tgz
# Directory: /data/www/default/test
BACKUP[0]="/home/wwwroot"
# Number of days to store daily local backups (default 7 days)
LOCALAGEDAILIES="7"
# Delete remote file from Google Drive/COS/AliyunDrive/FTP server flag (true: delete, false: not delete)
DELETE_REMOTE_FILE_FLG=true
# Rclone remote name
RCLONE_NAME=""
# Rclone remote folder name (default "")
RCLONE_FOLDER=""
# Cos remote folder name (default "")
COS_FOLDER=""
# AliyunDrive remote folder name (default "")
ALI_FOLDER=""
# Upload local file to FTP server flag (true: upload, false: not upload)
FTP_FLG=false
# Upload local file to Google Drive flag (true: upload, false: not upload)
RCLONE_FLG=false
# Upload local file to Cos flag (true: upload, false: not upload)
COS_FLG=false
# Upload local file to AliyunDrive flag (true: upload, false: not upload)
ALI_FLG=false
# AliyunDrive main.py
ALI_PY_FILE="main.py"
ALI_REFRESH_TOKEN=""
# FTP server
# OPTIONAL: If you want to upload to FTP server, enter the Hostname or IP address below
FTP_HOST=""
# FTP username
# OPTIONAL: If you want to upload to FTP server, enter the FTP username below
FTP_USER=""
# FTP password
# OPTIONAL: If you want to upload to FTP server, enter the username's password below
FTP_PASS=""
# FTP server remote folder
# OPTIONAL: If you want to upload to FTP server, enter the FTP remote folder below
# For example: public_html
FTP_DIR=""
########## END OF CONFIG ##########
# Date & Time
DAY=$(date +%d)
MONTH=$(date +%m)
YEAR=$(date +%C%y)
BACKUPDATE=$(date +%Y%m%d%H%M%S)
# Backup file name
TARFILE="${LOCALDIR}""$(hostname)"_"${BACKUPDATE}".tgz
# Encrypted backup file name
ENC_TARFILE="${TARFILE}.enc"
# Backup MySQL dump file name
SQLFILE="${TEMPDIR}mysql_${BACKUPDATE}.sql"
log() {
echo "$(date "+%Y-%m-%d %H:%M:%S")" "$1"
echo -e "$(date "+%Y-%m-%d %H:%M:%S")" "$1" >> ${LOGFILE}
}
# Check for list of mandatory binaries
check_commands() {
# This section checks for all of the binaries used in the backup
# Do not check mysql command if you do not want to backup the MySQL database
if [ -z "${MYSQL_ROOT_PASSWORD}" ]; then
BINARIES=( cat cd du date dirname echo openssl pwd rm tar )
else
BINARIES=( cat cd du date dirname echo openssl mysql mysqldump pwd rm tar )
fi
# Iterate over the list of binaries, and if one isn't found, abort
for BINARY in "${BINARIES[@]}"; do
if [ ! "$(command -v "$BINARY")" ]; then
log "$BINARY is not installed. Install it and try again"
exit 1
fi
done
# check rclone command
RCLONE_COMMAND=false
if [ "$(command -v "rclone")" ]; then
RCLONE_COMMAND=true
fi
# check COS command
COS_COMMAND=false
if [ "$(command -v "coscmd")" ]; then
COS_COMMAND=true
fi
# check AliyunDrive command
ALI_COMMAND=false
if [ -f "${ALI_PY_FILE}" ]; then
ALI_COMMAND=true
fi
# check ftp command
if ${FTP_FLG}; then
if [ ! "$(command -v "ftp")" ]; then
log "ftp is not installed. Install it and try again"
exit 1
fi
fi
}
calculate_size() {
local file_name=$1
local file_size=$(du -h $file_name 2>/dev/null | awk '{print $1}')
if [ "x${file_size}" = "x" ]; then
echo "unknown"
else
echo "${file_size}"
fi
}
# Backup MySQL databases
mysql_backup() {
if [ -z "${MYSQL_ROOT_PASSWORD}" ]; then
log "MySQL root password not set, MySQL backup skipped"
else
log "MySQL dump start"
mysql -u root -p"${MYSQL_ROOT_PASSWORD}" 2>/dev/null <<EOF
exit
EOF
if [ $? -ne 0 ]; then
log "MySQL root password is incorrect. Please check it and try again"
exit 1
fi
if [ -z "${MYSQL_DATABASE_NAME[*]}" ]; then
mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" --all-databases > "${SQLFILE}" 2>/dev/null
if [ $? -ne 0 ]; then
log "MySQL all databases backup failed"
exit 1
fi
log "MySQL all databases dump file name: ${SQLFILE}"
#Add MySQL backup dump file to BACKUP list
BACKUP=(${BACKUP[@]} ${SQLFILE})
else
for db in ${MYSQL_DATABASE_NAME[@]}; do
unset DBFILE
DBFILE="${TEMPDIR}${db}_${BACKUPDATE}.sql"
mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" ${db} > "${DBFILE}" 2>/dev/null
if [ $? -ne 0 ]; then
log "MySQL database name [${db}] backup failed, please check database name is correct and try again"
exit 1
fi
log "MySQL database name [${db}] dump file name: ${DBFILE}"
#Add MySQL backup dump file to BACKUP list
BACKUP=(${BACKUP[@]} ${DBFILE})
done
fi
log "MySQL dump completed"
fi
}
start_backup() {
[ "${#BACKUP[@]}" -eq 0 ] && echo "Error: You must modify the [$(basename $0)] config before running it!" && exit 1
log "Tar backup file start"
tar -zcPf ${TARFILE} ${BACKUP[@]}
if [ $? -gt 1 ]; then
log "Tar backup file failed"
exit 1
fi
log "Tar backup file completed"
# Encrypt tar file
if ${ENCRYPTFLG}; then
log "Encrypt backup file start"
openssl enc -aes256 -salt -pbkdf2 -iter 200000 -in "${TARFILE}" -out "${ENC_TARFILE}" -pass pass:"${BACKUPPASS}" -md sha256
log "Encrypt backup file completed"
# Delete unencrypted tar
log "Delete unencrypted tar file: ${TARFILE}"
rm -f ${TARFILE}
fi
# Delete MySQL temporary dump file
for sql in $(ls ${TEMPDIR}*.sql); do
log "Delete MySQL temporary dump file: ${sql}"
rm -f ${sql}
done
if ${ENCRYPTFLG}; then
OUT_FILE="${ENC_TARFILE}"
else
OUT_FILE="${TARFILE}"
fi
log "File name: ${OUT_FILE}, File size: $(calculate_size ${OUT_FILE})"
}
# Transfer backup file to Google Drive
# If you want to install rclone command, please visit website:
# https://rclone.org/downloads/
rclone_upload() {
if ${RCLONE_FLG} && ${RCLONE_COMMAND}; then
[ -z "${RCLONE_NAME}" ] && log "Error: RCLONE_NAME can not be empty!" && return 1
if [ -n "${RCLONE_FOLDER}" ]; then
rclone ls ${RCLONE_NAME}:${RCLONE_FOLDER} > /dev/null 2>&1
if [ $? -ne 0 ]; then
log "Create the path ${RCLONE_NAME}:${RCLONE_FOLDER}"
rclone mkdir ${RCLONE_NAME}:${RCLONE_FOLDER}
fi
fi
log "Transferring backup file: ${OUT_FILE} to Google Drive"
rclone copy ${OUT_FILE} ${RCLONE_NAME}:${RCLONE_FOLDER} >> ${LOGFILE}
if [ $? -ne 0 ]; then
log "Error: Transferring backup file: ${OUT_FILE} to Google Drive failed"
return 1
fi
log "Transferring backup file: ${OUT_FILE} to Google Drive completed"
fi
}
# Transferring backup file to COS
cos_upload() {
if ${COS_FLG} && ${COS_COMMAND}; then
[ -z "${COS_FOLDER}" ] && log "Error: COS_FOLDER can not be empty!" && return 1
log "Transferring backup file: ${OUT_FILE} to COS"
coscmd upload ${OUT_FILE} ${COS_FOLDER}/ >> ${LOGFILE}
if [ $? -ne 0 ]; then
log "Error: Transferring backup file: ${OUT_FILE} to COS failed"
return 1
fi
log "Transferring backup file: ${OUT_FILE} to COS completed"
fi
}
# Transferring backup file to AliyunDrive
ali_upload() {
if ${ALI_FLG} && ${ALI_COMMAND}; then
[ -z "${ALI_FOLDER}" ] && log "Error: ALI_FOLDER can not be empty!" && return 1
log "Transferring backup file: ${OUT_FILE} to AliyunDrive"
python3 ${ALI_PY_FILE} ${ALI_REFRESH_TOKEN} u ${OUT_FILE} /${ALI_FOLDER}/ >> ${LOGFILE}
if [ $? -ne 0 ]; then
log "Error: Transferring backup file: ${OUT_FILE} to AliyunDrive failed"
return 1
fi
log "Transferring backup file: ${OUT_FILE} to AliyunDrive completed"
fi
}
# Transferring backup file to FTP server
ftp_upload() {
if ${FTP_FLG}; then
[ -z "${FTP_HOST}" ] && log "Error: FTP_HOST can not be empty!" && return 1
[ -z "${FTP_USER}" ] && log "Error: FTP_USER can not be empty!" && return 1
[ -z "${FTP_PASS}" ] && log "Error: FTP_PASS can not be empty!" && return 1
[ -z "${FTP_DIR}" ] && log "Error: FTP_DIR can not be empty!" && return 1
local FTP_OUT_FILE=$(basename ${OUT_FILE})
log "Transferring backup file: ${FTP_OUT_FILE} to FTP server"
ftp -in ${FTP_HOST} >> ${LOGFILE} 2>&1 <<EOF
user $FTP_USER $FTP_PASS
binary
lcd $LOCALDIR
cd $FTP_DIR
put $FTP_OUT_FILE
quit
EOF
if [ $? -ne 0 ]; then
log "Error: Transferring backup file: ${FTP_OUT_FILE} to FTP server failed"
return 1
fi
log "Transferring backup file: ${FTP_OUT_FILE} to FTP server completed"
fi
}
# Get file date
get_file_date() {
#Approximate a 30-day month and 365-day year
DAYS=$(( $((10#${YEAR}*365)) + $((10#${MONTH}*30)) + $((10#${DAY})) ))
unset FILEYEAR FILEMONTH FILEDAY FILEDAYS FILEAGE
FILEYEAR=$(echo "$1" | cut -d_ -f2 | cut -c 1-4)
FILEMONTH=$(echo "$1" | cut -d_ -f2 | cut -c 5-6)
FILEDAY=$(echo "$1" | cut -d_ -f2 | cut -c 7-8)
if [[ "${FILEYEAR}" && "${FILEMONTH}" && "${FILEDAY}" ]]; then
#Approximate a 30-day month and 365-day year
FILEDAYS=$(( $((10#${FILEYEAR}*365)) + $((10#${FILEMONTH}*30)) + $((10#${FILEDAY})) ))
FILEAGE=$(( 10#${DAYS} - 10#${FILEDAYS} ))
return 0
fi
return 1
}
# Delete Google Drive's old backup file
delete_gdrive_file() {
local FILENAME=$1
if ${DELETE_REMOTE_FILE_FLG} && ${RCLONE_COMMAND}; then
rclone ls ${RCLONE_NAME}:${RCLONE_FOLDER}/${FILENAME} > /dev/null 2>&1
if [ $? -eq 0 ]; then
rclone delete ${RCLONE_NAME}:${RCLONE_FOLDER}/${FILENAME} >> ${LOGFILE}
if [ $? -eq 0 ]; then
log "Google Drive's old backup file: ${FILENAME} has been deleted"
else
log "Failed to delete Google Drive's old backup file: ${FILENAME}"
fi
else
log "Google Drive's old backup file: ${FILENAME} does not exist"
fi
fi
}
# Delete COS's old backup file
delete_cos_file() {
local FILENAME=$1
if ${DELETE_REMOTE_FILE_FLG} && ${COS_COMMAND}; then
coscmd delete ${COS_FOLDER}/${FILENAME} >> ${LOGFILE}
if [ $? -eq 0 ]; then
log "COS's old backup file: ${FILENAME} has been deleted"
else
log "Failed to delete COS's old backup file: ${FILENAME}"
fi
fi
}
# Delete AliyunDrive's old backup file
delete_ali_file() {
local FILENAME=$1
if ${DELETE_REMOTE_FILE_FLG} && ${ALI_COMMAND}; then
python3 ${ALI_PY_FILE} ${ALI_REFRESH_TOKEN} del /${ALI_FOLDER}/${FILENAME} >> ${LOGFILE}
if [ $? -eq 0 ]; then
log "AliyunDrive's old backup file: ${FILENAME} has been deleted"
else
log "Failed to delete AliyunDrive's old backup file: ${FILENAME}"
fi
fi
}
# Delete FTP server's old backup file
delete_ftp_file() {
local FILENAME=$1
if ${DELETE_REMOTE_FILE_FLG} && ${FTP_FLG}; then
ftp -in ${FTP_HOST} >> ${LOGFILE} 2>&1 <<EOF
user $FTP_USER $FTP_PASS
cd $FTP_DIR
del $FILENAME
quit
EOF
if [ $? -eq 0 ]; then
log "FTP server's old backup file: ${FILENAME} has been deleted"
else
log "Failed to delete FTP server's old backup file: ${FILENAME}"
fi
fi
}
# Clean up old file
clean_up_files() {
cd ${LOCALDIR} || exit
if ${ENCRYPTFLG}; then
LS=($(ls *.enc))
else
LS=($(ls *.tgz))
fi
for f in ${LS[@]}; do
get_file_date ${f}
if [ $? -eq 0 ]; then
if [[ ${FILEAGE} -gt ${LOCALAGEDAILIES} ]]; then
rm -f ${f}
log "Old backup file name: ${f} has been deleted"
delete_gdrive_file ${f}
delete_ftp_file ${f}
delete_cos_file ${f}
delete_ali_file ${f}
fi
fi
done
}
# Main progress
STARTTIME=$(date +%s)
# Check if the backup folders exist and are writeable
[ ! -d "${LOCALDIR}" ] && mkdir -p ${LOCALDIR}
[ ! -d "${TEMPDIR}" ] && mkdir -p ${TEMPDIR}
log "Backup progress start"
check_commands
mysql_backup
start_backup
log "Backup progress complete"
log "Upload progress start"
rclone_upload
ftp_upload
cos_upload
ali_upload
log "Upload progress complete"
log "Cleaning up"
clean_up_files
ENDTIME=$(date +%s)
DURATION=$((ENDTIME - STARTTIME))
log "All done"
log "Backup and transfer completed in ${DURATION} seconds"
