Backup on AWS with Jenkins

There will be situations where you have to take a backup whenever you run a Jenkins job. The job has to trigger the required execution steps, and after the execution it should store a backup of the generated files on remote machines or FTP servers.

So here we are going to see how to store or back up these files in an AWS S3 bucket.

Having a little familiarity with AWS will help you in this lecture, but it is not necessary, because we are going to walk through simple steps only.

We will not be performing complex operations; we will be performing a simple copy operation, so a basic understanding of copying files is enough for this lecture.

Once we back up files to AWS, we will see how to create a backup of a MySQL database and store it in the AWS S3 bucket as well.

Before all that, we will learn a little about volumes in Docker.

Volumes in Docker

Consider that you are creating a remote machine as a Docker service, and that you perform some operations that create files inside that Docker service, i.e. inside the remote machine created by Docker.

Whenever the Docker service is destroyed, its complete setup is destroyed with it, which means those files will no longer be present in the Docker service.

But we might still need these files for some purpose, so to keep them we will use the volumes option in Docker.

Note that every time the Docker service is created, it is created from scratch, which means it will not hold any files that you copied or created while working on it in earlier days.

Also, it is painful to copy all the files from our machine to the Docker service every single time; to avoid this, we can again use the volumes option in Docker.

Persistence in Docker:

Persistence in Docker means nothing but making sure that the files that exist on our machine match the files that exist in the Docker service. We can say it the other way around as well: making sure that the files in the Docker service match the files on our machine.

In other words, it keeps our machine and the Docker service in sync.
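
For example, the same kind of sync can be set up with a plain docker run command by bind-mounting a host folder into the container (a minimal sketch; the centos image and the folder names are only placeholders):

# mount the host folder ./persistance-test onto /tmp inside the container;
# files written on either side are visible on the other
docker run -it -v $PWD/persistance-test:/tmp/ centos bash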

Let's edit our docker-compose file to add volumes:

  • Add the volumes option to docker-compose.yml for the remote_vm service
    volumes:
          - $PWD/persistance-test:/tmp/
  • The docker-compose.yml should be
    version: '3.7'
    services:
      jenkins:
        container_name: jenkins
        image: jenkins/jenkins
        ports:
          - "8080:8080"
        volumes:
          - $PWD/jenkins_home:/var/jenkins_home
        networks:
          - net
    
      remote_vm:
        container_name: remote_vm
        image: remote_vm
        build:
          context: centos-vm
        volumes:
          - $PWD/persistance-test:/tmp/
        networks:
          - net
    networks:
      net:
  • Create a folder called persistance-test, create a file called xyz.txt inside it, and add some text to the file
    [root@host jenkins-data]# mkdir persistance-test
    [root@host jenkins-data]# ls
    centos-vm  demo.sh  docker-compose.yml  jenkins_home  persistance-test
    [root@host jenkins-data]# cd persistance-test/
    [root@host persistance-test]# vi xyz.txt
    [root@host persistance-test]# cat xyz.txt
    This content is created in the base machine
  • Start docker-compose for the remote service (docker-compose up -d).
  • Switch to the remote machine and go to the /tmp folder
    [root@host jenkins-data]# docker exec -it remote_vm bash
    [root@remote_vm /]# cd /tmp/
    [root@remote_vm tmp]#
  • Verify that the xyz.txt file is present and also verify its content. If the file and its content match, then we can say the file created on our machine is persisted into the Docker service
    [root@remote_vm tmp]# ls
    xyz.txt
    [root@remote_vm tmp]# cat xyz.txt
    This content is created in the base machine
  • Create a folder called starwars, and inside it create a file called abc.txt with some text in it
    [root@remote_vm tmp]# mkdir starwars
    [root@remote_vm tmp]# ls
    starwars  xyz.txt
    [root@remote_vm tmp]# cd starwars/
    [root@remote_vm starwars]# vi abc.txt
    [root@remote_vm starwars]# cat abc.txt
    welcome to the remote order
  • Now exit the remote service
    [root@remote_vm starwars]# exit
    exit
    [root@host jenkins-data]#
  • Navigate to the persistance-test folder and check whether the starwars folder is present and whether the abc.txt file is present inside it. If the folder, the file, and its content are all there, then we can say the files present in the Docker service are persisted into our machine.
    [root@host jenkins-data]# ls
    centos-vm  demo.sh  docker-compose.yml  jenkins_home  persistance-test
    [root@host jenkins-data]# cd persistance-test
    [root@host persistance-test]# ls
    starwars  xyz.txt
    [root@host persistance-test]# cd starwars/
    [root@host starwars]# ls
    abc.txt
    [root@host starwars]# cat abc.txt
    welcome to the remote order
  • Kill the Docker service for the remote machine and check whether the above file is still present; it will be, and this process of keeping both sides in sync is called persistence (a quick way to check is sketched below).
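
One way to perform that last check (a sketch; run it from the jenkins-data folder on the base machine):

# stop and remove only the remote_vm service
docker-compose rm -sf remote_vm
# the folder and file still exist on the host even though the container is gone
ls persistance-test/starwars
cat persistance-test/starwars/abc.txt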

Copy files to AWS

In the above topic, we have seen how to persist data from the Docker service to the local machine and from the local machine to the Docker service. But in some cases, you might need to store these files in AWS, Azure, or on some FTP server.

So let's learn how to store these files in an AWS S3 bucket; a command-line sketch follows the console steps below.

  • Create an AWS console account by navigating to https://aws.amazon.com/console/

  • Create an AWS account; once you have an account, follow the steps below
  • Click on Services, then search for S3 and click on "S3 - Scalable storage in the cloud"
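
Once the bucket exists, the copy itself can be done from the base machine with the AWS CLI (a sketch, assuming the CLI is installed and configured with aws configure; the bucket name below is only a placeholder and must be globally unique):

# create the bucket, upload a file from the persisted folder, then verify
aws s3 mb s3://my-jenkins-backup-bucket
aws s3 cp persistance-test/starwars/abc.txt s3://my-jenkins-backup-bucket/
aws s3 ls s3://my-jenkins-backup-bucket/

The same aws s3 cp command can later be called from a Jenkins job to push any generated files to the bucket.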

MySQL backup on Docker with Jenkins

This part of the article assumes that you already have some knowledge of the MySQL database. We do not need complete knowledge, just the basic idea that something called a database exists, that it stores data in the form of records, and that MySQL is one such database.

Previously we created a remote machine as a Docker service; now we are going to create a MySQL database as a Docker service as well.

Once we create the database in MySQL, we will dump it to create a backup. We will then upload this backup to the required target; here our target is the S3 bucket.

First, we have to create the MySQL service inside the docker-compose.yml file.

  mysql_db:
    container_name: mysql_db
    image: mysql:8.0
    environment:
      - "MYSQL_ROOT_PASSWORD=12345"
    volumes:
      - $PWD/mysql-data:/var/lib/mysql
    networks:
      - net

Here the environment variable is important; when Docker creates the MySQL database, it will create a user called root and set a password for it. We have to supply that password using MYSQL_ROOT_PASSWORD. Learn more about the environment variables of MySQL on the official MySQL Docker page: https://hub.docker.com/_/mysql

The volume is also set to /var/lib/mysql; this is where the MySQL image keeps all the files related to the database inside the Docker service, so the volume has to point to this location and not to some other path.

The complete docker-compose.yml file will look like:

version: '3.7'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net

  remote_vm:
    container_name: remote_vm
    image: remote_vm
    build:
      context: centos-vm
    volumes:
      - $PWD/persistance-test:/tmp/
    networks:
      - net
  mysql_db:
    container_name: mysql_db
    image: mysql:8.0
    environment:
      - "MYSQL_ROOT_PASSWORD=12345"
    volumes:
      - $PWD/mysql-data:/var/lib/mysql
    networks:
      - net
networks:
  net:

To download the images and start the services, let us use the docker-compose up -d command. Since we are using the MySQL image for the first time, Docker will download the MySQL Docker image.

You can check whether the MySQL image is present or not using the docker images command, as shown below.
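
For reference, both commands are run from the jenkins-data folder on the base machine:

# starts jenkins, remote_vm and mysql_db; pulls mysql:8.0 on the first run
docker-compose up -d
# confirm the MySQL image is now present locally
docker images | grep mysql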

It may take a few seconds for the MySQL service to finish starting; you can then switch into the MySQL service using the bash command.

[root@host jenkins-data]# docker exec -it mysql_db bash
root@mysql_db:/#

We are now in the MySQL Docker service as the root user, but we have not yet logged in to MySQL.

To log in to MySQL we have to use:

  • -u for username
  • -p for password (usually supplied at the prompt after the command)
#mysql -u username -p
mysql -u root -p


Provide the password that you set in the docker-compose.yml file. You can also provide the password along with the login command.

#mysql -u USERNAME -pPASSWORD 
mysql -u root -p12345


View all the databases with the show databases command.
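
At the mysql> prompt:

mysql> show databases;

On a fresh container this typically lists only the built-in databases (information_schema, mysql, performance_schema, and sys).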

Create a new database and table

mysql> create database testdb;
Query OK, 1 row affected (0.01 sec)

mysql> use testdb;
Database changed
mysql> create table author (name varchar(20), age int(2));
Query OK, 0 rows affected, 1 warning (0.07 sec)

Check the table details with the desc author command.
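
Again at the mysql> prompt:

mysql> desc author;

This prints one row per column of the table (name and age here) along with its data type.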

Insert a value into the table

mysql> INSERT INTO author VALUES('karthiq', 29);
Query OK, 1 row affected (0.04 sec)

Select all the records from the table:

mysql> SELECT * FROM author;
+---------+------+
| name    | age  |
+---------+------+
| karthiq |   29 |
+---------+------+
1 row in set (0.00 sec)

Exit from MySQL; now you will be back inside the MySQL Docker container.

Now take the backup of the database

root@mysql_db:/# mysqldump -u root -p testdb >/tmp/db.sql
Enter password:
root@mysql_db:/#

You can also take the backup by providing the password on the same line.

root@mysql_db:/# mysqldump -u root -p12345 testdb >/tmp/db2.sql
mysqldump: [Warning] Using a password on the command line interface can be insecure.
root@mysql_db:/#

Check the tmp folder for the backup files.

root@mysql_db:/# cd /tmp/
root@mysql_db:/tmp# ls
db.sql  db2.sql
root@mysql_db:/tmp#

Now that we have the database as files, we can back them up to FTP, AWS, or Azure.

Persist to local system:

If you want to get these files onto the local system, you need to remember where we created the volumes.

When we use MySQL, the path for the volume is /var/lib/mysql. If you put the backup file into this location, it will persist to the local system.

Change the mysqldump command like this.

root@mysql_db:/tmp# mysqldump -u root -p12345 testdb >/var/lib/mysql/db-backup.sql
mysqldump: [Warning] Using a password on the command line interface can be insecure.
root@mysql_db:/tmp#

Exit the mysql_db docker container

root@mysql_db:/tmp# exit
exit
[root@host jenkins-data]#

Navigate to the mysql-data folder which we configured in docker-compose.yml, then list the directory; you should see db-backup.sql there.
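
To tie this back to Jenkins, the same steps can be scripted in a job's "Execute shell" build step (a sketch, assuming the job runs the commands on the base machine from the jenkins-data folder, the AWS CLI is configured there, and the bucket name is only a placeholder):

# dump the database inside the mysql_db container into the mounted volume
docker exec mysql_db sh -c 'mysqldump -u root -p12345 testdb > /var/lib/mysql/db-backup.sql'
# the dump is now persisted on the host under mysql-data/, so upload it to S3
aws s3 cp mysql-data/db-backup.sql s3://my-jenkins-backup-bucket/db-backup-$(date +%F).sql

Triggering this job on a schedule then gives you the automated backup described at the start of the article.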

About the Author:

I am Pavankumar, having 8.5 years of experience, currently working on a Video/Live Analytics project.
