How to create backups to AWS S3?

From PheonixSolutions

Login to Amazon S3 with the root login:

Go to S3 🡪 Buckets 🡪 Click on Create Bucket 🡪 enter a bucket name and create it.

Now log in to the server:

Step 1: Install AWSCli

 sudo apt install awscli
 aws --version
 aws configure (enter the Access Key ID and Secret Access Key)
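If an interactive prompt is not convenient (for example on a freshly provisioned server), the same settings can be written to the AWS CLI's config files directly. A minimal sketch, using a throwaway directory and placeholder keys (AKIAEXAMPLEKEYID and the secret below are not real credentials; the CLI normally reads these files from ~/.aws/):

```shell
# Write the AWS CLI config files by hand instead of running `aws configure`.
# AWS_DIR points at a temp directory here; in practice this is ~/.aws.
AWS_DIR=$(mktemp -d)/.aws
mkdir -p "$AWS_DIR"
cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey
EOF
cat > "$AWS_DIR/config" <<'EOF'
[default]
region = us-east-1
output = json
EOF
ls "$AWS_DIR"
```

This writes the same two files that `aws configure` produces interactively.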

Step 2:

 netstat -tulpn (to check which services are running on the server)
 cd /etc/apache2/sites-available/
 ls -latr

Open a conf file to find the document root (here /var/www/taxary).
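The document root can also be pulled out of a vhost file non-interactively. A small sketch (the sample conf below is created on the fly for illustration; the real files live in /etc/apache2/sites-available/):

```shell
# Extract DocumentRoot from an Apache vhost conf.
# The conf file here is a made-up sample written to a temp directory.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/taxary.conf" <<'EOF'
<VirtualHost *:80>
    ServerName taxary.example.com
    DocumentRoot /var/www/taxary
</VirtualHost>
EOF
DOCROOT=$(awk '/DocumentRoot/ {print $2}' "$CONF_DIR/taxary.conf")
echo "$DOCROOT"
```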

Step 3:

Create a backup.sh file (vi backup.sh) and paste the following script into it (change the bucket name accordingly):

#!/bin/sh
#THEDBUSER="myDatabaseUsername"
#THEDBPW="myDatabasePassword"
THEDATE=`date +%d%m%y%H%M`
# Export all the databases
#mysqldump -u $THEDBUSER -p${THEDBPW} --all-databases > /var/www/_backups/dbbackup_${THEDATE}.sql
# Remove backups older than 31 days
find /bin/_backups/site* -mtime +31 -exec rm {} \;
find /bin/_backups/apache* -mtime +31 -exec rm {} \;
#find /var/www/_backups/db* -mtime +31 -exec rm {} \;
# Export files
#tar czf /bin/_backups/sitebackup_${THEDATE}.tar -C / home
tar czf /bin/_backups/sitebackup1_${THEDATE}.tar -C / var/www
# Export the Apache vhosts configuration
tar czf /bin/_backups/apachebackup_${THEDATE}.tar -C / etc/apache2/sites-available
# Sync to amazon. With the 'delete' option, the files removed from
# /var/www/_backups will be removed from the bucket as well
aws s3 sync /bin/_backups s3://taxary
rm -rf /bin/_backups/sitebackup_${THEDATE}.tar
rm -rf /bin/_backups/sitebackup1_${THEDATE}.tar
rm -rf /bin/_backups/apachebackup_${THEDATE}.tar
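The find ... -mtime +31 lines above implement the 31-day retention. Their behaviour can be checked safely in a throwaway directory (the paths below are temp paths, not the real /bin/_backups):

```shell
# Demonstrate `find -mtime +31` retention in a temp directory.
BK=$(mktemp -d)
touch "$BK/sitebackup_new.tar"
touch -d '40 days ago' "$BK/sitebackup_old.tar"   # backdate one file (GNU touch)
# Files modified more than 31 days ago are deleted; recent ones survive.
find "$BK" -name 'sitebackup*' -mtime +31 -exec rm {} \;
ls "$BK"
```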

Step 4: Create a directory:

 mkdir /bin/_backups/

Step 5: Set execute permission, verify it with ls -latrh, and run the script:

 chmod +x backup.sh
 ls -latrh
 ./backup.sh

Step 6:

Create another script for the MongoDB database:

 vi dbbackups.sh

Enter the following script in the editor:

#!/bin/sh
backup_name=~/db_backups-`date +%Y-%m-%d-%H%M`
mongodump --out $backup_name
tar czf $backup_name.tar.gz $backup_name
aws s3 cp $backup_name.tar.gz s3://taxary/db_backups/
rm -rf $backup_name
rm $backup_name.tar.gz
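The dump → tar → upload → clean-up pattern in this script can be exercised without a running MongoDB; the sketch below substitutes a hand-made directory for the mongodump output (all paths are throwaway temp paths, and no upload is performed):

```shell
# Stand-in for the mongodump output: a directory with one dummy file.
work=$(mktemp -d)
backup_name=$work/db_backups-$(date +%Y-%m-%d-%H%M)
mkdir -p "$backup_name"
echo 'stand-in for dumped data' > "$backup_name/collection.bson"
# -C archives the directory by name rather than by absolute path
tar czf "$backup_name.tar.gz" -C "$work" "$(basename "$backup_name")"
rm -rf "$backup_name"   # same clean-up step as the script above
tar tzf "$backup_name.tar.gz"
```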

Step 7: Create another script for the PostgreSQL database:

#!/bin/sh
# Database credentials
THEDBUSER="postgres"
THEDBPW="xfTghWF45Fg"
THEDBNAME="openproject"
# Backup directory
BACKUP_DIR="/root/postgres_bk"
# Date for backup filename
THEDATE=$(date +%Y-%m-%d-%H%M)
# Export PostgreSQL database
PGPASSWORD=$THEDBPW pg_dump -U $THEDBUSER -h localhost -p 45432 $THEDBNAME > $BACKUP_DIR/openproject_backup_${THEDATE}.sql
# Create a compressed tarball from the backup
#tar czf $BACKUP_DIR/openproject_backup_${THEDATE}.tar.gz $BACKUP_DIR/openproject_backup_${THEDATE}.sql
# Copy the tarball to an Amazon S3 bucket
aws s3 cp /root/postgres_bk/openproject_backup_${THEDATE}.sql s3://jira-new/postgres_bk/
# Remove local backup files
rm -f $BACKUP_DIR/openproject_backup_${THEDATE}.sql
rm -f $BACKUP_DIR/openproject_backup_${THEDATE}.tar.gz
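The PGPASSWORD=$THEDBPW pg_dump ... form passes the password to that single command only; it does not linger in the shell's environment afterwards. A minimal demonstration of this one-shot assignment (DEMO_PW is a made-up variable standing in for PGPASSWORD):

```shell
# A `VAR=value cmd` assignment is scoped to that one command's environment.
MSG=$(DEMO_PW=secret sh -c 'echo "pw=$DEMO_PW"')
echo "$MSG"
# Back in the calling shell the variable is not set.
echo "after: DEMO_PW='${DEMO_PW:-unset}'"
```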

Step 8:

Create the backup directories (the PostgreSQL script writes to /root/postgres_bk), set execute permission, and run the script:

 mkdir ~/db_backups
 mkdir /root/postgres_bk
 chmod +x dbbackups.sh
 ./dbbackups.sh

Step 9:

Open the crontab (if it does not open in vi, run export VISUAL=vi and then crontab -e again):

 crontab -e

Add the following entries in the editor:

0 0 * * * /bin/bash /root/dbbackups.sh >> /logs/db_backups.log 2>&1

0 0 * * * /bin/bash /root/backup.sh >> /logs/backups.log 2>&1
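Each crontab entry starts with five time fields (minute, hour, day of month, month, day of week) before the command, so 0 0 * * * means midnight every day. Splitting one entry shows the fields:

```shell
# Pull the minute and hour fields out of a cron entry.
ENTRY='0 0 * * * /bin/bash /root/dbbackups.sh >> /logs/db_backups.log 2>&1'
MIN=$(echo "$ENTRY" | awk '{print $1}')
HOUR=$(echo "$ENTRY" | awk '{print $2}')
echo "runs at minute $MIN of hour $HOUR, every day"
```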

Step 10: Create the log directory used by the cron entries:

 mkdir /logs

Step 11:

Restart the Cron service

 systemctl restart cron