AWS + SIOS SPS + MySQL Test (Linux)

AWS CLI installation URL

https://docs.aws.amazon.com/zh_cn/cli/latest/userguide/awscli-install-linux.html#awscli-install-linux-path


Original document URL

http://www.linuxclustering.net/2016/03/21/step-by-step-how-to-configure-a-linux-failover-cluster-in-amazon-ec2-without-shared-storage-amazon-aws-sanless-cluster/#VPC


1,Edit /etc/hosts

Unless you already have a DNS server set up, you’ll want to create host file entries on all 3 servers so that they can properly resolve each other by name.

Add the following lines to the end of your /etc/hosts file:

10.0.0.4 node1

10.0.1.4 node2

10.0.2.4 witness

10.1.0.10 mysql-vip


2,Disable SELinux

Edit /etc/sysconfig/selinux and set “SELINUX=disabled”:

# vi /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

# enforcing - SELinux security policy is enforced.

# permissive - SELinux prints warnings instead of enforcing.

# disabled - No SELinux policy is loaded.

SELINUX=disabled

# SELINUXTYPE= can take one of these two values:

# targeted - Targeted processes are protected,

# mls - Multi Level Security protection.

SELINUXTYPE=targeted
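
To stop SELinux from enforcing immediately, without waiting for the reboot in step 4, you can also run:

# setenforce 0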


3,Set Hostnames
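
On RHEL/CentOS 7 the hostnames can be set with hostnamectl (a minimal sketch; run the matching command on each node):

# hostnamectl set-hostname node1   (on node1)

# hostnamectl set-hostname node2   (on node2)

# hostnamectl set-hostname witness (on witness)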


4,Reboot Cluster Nodes



5,Install and Configure VNC (and related packages)

https://www.cnblogs.com/chenjianxiang/p/5042977.html
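
A rough sketch for CentOS/RHEL 7 with TigerVNC (the linked article walks through the full configuration):

# yum -y install tigervnc-server

# vncserver :1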


6,Partition and Format the “data” disk
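
A minimal sketch, assuming the data disk is /dev/xvdb (consistent with the /dev/xvdb1 target used in step 18) and an ext4 filesystem:

# fdisk /dev/xvdb   (create a single primary partition, /dev/xvdb1)

# mkfs.ext4 /dev/xvdb1

# mkdir -p /var/lib/mysql

# mount /dev/xvdb1 /var/lib/mysql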


7,Install EC2 API Tools

# wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip

# unzip ec2-api-tools.zip

# mv ec2-api-tools-1.7.5.1/ /opt/aws/

# export EC2_HOME="/opt/aws"

https://blog.csdn.net/appleyk/article/details/77992873

https://www.cnblogs.com/wangmo/p/7880521.html


mkdir /usr/local/java

cp jdk-8u171-linux-x64.tar.gz /usr/local/java/

cd /usr/local/java

tar -zxvf jdk-8u171-linux-x64.tar.gz


vim /etc/profile

#JAVA

JAVA_HOME=/usr/local/java/jdk1.8.0_171

JRE_HOME=/usr/local/java/jdk1.8.0_171/jre

CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib

PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

export PATH JAVA_HOME CLASSPATH JRE_HOME



#AWS

export EC2_HOME="/opt/aws"

export EC2_URL=https://ec2.cn-north-1.amazonaws.com.cn

export AWS_ACCESS_KEY=<your_access_key>

export AWS_SECRET_KEY=<your_secret_key>


source /etc/profile
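
With the profile sourced, a quick sanity check that the JDK is on the PATH before testing the EC2 tools:

# java -version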

/opt/aws/bin/ec2-describe-regions


8,Install and Configure MySQL

https://segmentfault.com/a/1190000007667534

https://www.cnblogs.com/a3470194/p/5480911.html

https://www.cnblogs.com/renjidong/p/7047396.html

https://blog.csdn.net/qq_23689053/article/details/79138462
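
The links above cover installation in detail; as a minimal sketch for CentOS/RHEL 7 (which ships MariaDB, matching the MariaDB prompts seen in step 24), with the replicated /var/lib/mysql partition from step 6 already mounted:

# yum -y install mariadb-server mariadb

# systemctl start mariadb

# mysql_secure_installation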


Workarounds for “Access denied” errors

https://blog.csdn.net/tys1986blueboy/article/details/7056835

https://www.2cto.com/database/201501/367951.html
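
The usual technique from the links above is to restart mysqld with grant checks disabled and reset the root password (a sketch for MariaDB 5.5-era servers; 'NewPass123' is a placeholder):

# mysqld_safe --skip-grant-tables &

# mysql -u root

MariaDB [(none)]> UPDATE mysql.user SET Password=PASSWORD('NewPass123') WHERE User='root';

MariaDB [(none)]> FLUSH PRIVILEGES;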


9,Install SIOS Protection Suite for Linux

# mkdir /tmp/install

# mount -o loop sps.img /tmp/install

# cd /tmp/install

# ./setup

  1. The ARKs are ONLY required on “node1” and “node2”. You do not need to install them on “witness”.
  2. Navigate the list with the up/down arrows, and press SPACEBAR to select the following:
     - lkDR – DataKeeper for Linux
     - lkSQL – LifeKeeper MySQL RDBMS Recovery Kit
  3. This will result in the following additional RPMs being installed on “node1” and “node2”:
     - steeleye-lkDR-9.0.2-6513.noarch.rpm
     - steeleye-lkSQL-9.0.2-6513.noarch.rpm

10,Install Witness/Quorum package

Install the Witness/Quorum rpm on all 3 nodes (node1, node2, witness):

# cd /tmp/install/quorum

# rpm -Uvh steeleye-lkQWK-9.2.1-6653.noarch.rpm

On ALL 3 nodes (node1, node2, witness), edit /etc/default/LifeKeeper, set

NOBCASTPING=1

On ONLY the Witness server (“witness”), edit /etc/default/LifeKeeper, set

WITNESS_MODE=off/none
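
A quick check that both settings took effect on each node:

# grep -E 'NOBCASTPING|WITNESS_MODE' /etc/default/LifeKeeper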



11,Install the EC2 Recovery Kit Package

Install the EC2 rpm (node1, node2):

cd /tmp/install/kits

rpm -Uvh steeleye-lkECC-9.2.1-6653.noarch.rpm

12,Install a License key

On all 3 nodes, use the “lkkeyins” command to install the license file that you obtained from SIOS:

# /opt/LifeKeeper/bin/lkkeyins <path_to_file>/<filename>.lic


13,Start LifeKeeper

On all 3 nodes, use the “lkstart” command to start the cluster software:

# /opt/LifeKeeper/bin/lkstart


14,Set User Permissions for LifeKeeper GUI

On all 3 nodes, create a new Linux user account (“tony” in this example). Edit /etc/group and add the “tony” user to the “lkadmin” group to grant access to the LifeKeeper GUI. By default only “root” is a member of the group, and we don’t have the root password here:

# useradd tony

# passwd tony

# vi /etc/group

lkadmin:x:1001:root,tony
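
Equivalently, the group membership can be added without hand-editing /etc/group:

# usermod -a -G lkadmin tony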


15,Open the LifeKeeper GUI

# /opt/LifeKeeper/bin/lkGUIapp &


16,Create Communication Paths

Right-click on “node1” and select Create Comm Path

Select BOTH “node2” and “witness” and then follow the wizard. This will create comm paths between:

  1. node1 & node2
  2. node1 & witness

A comm path still needs to be created between node2 & witness. Right click on “node2” and select Create Comm Path. Follow the wizard and select “witness” as the remote server:

At this point the following comm paths have been created:

  1. node1 <—> node2
  2. node1 <—> witness
  3. node2 <—> witness

The icons in front of the servers have changed from a green “checkmark” to a yellow “hazard sign”. This is because we only have a single communication path between nodes.

If the VMs had multiple NICs (creating EC2 instances with multiple NICs is covered in the AWS documentation, but won’t be covered in this article), you would create redundant comm paths between each server.


17,Verify Communication Paths

/opt/LifeKeeper/bin/lcdstatus -q -d node1


18,Create a Data Replication cluster resource (i.e. Mirror)

Next, create a Data Replication resource to replicate the /var/lib/mysql partition from node1 (source) to node2 (target). Click the “green plus” icon to create a new resource:

Follow the wizard with these selections:

Please Select Recovery Kit: Data Replication

Switchback Type: intelligent

Server: node1

Hierarchy Type: Replicate Existing Filesystem

Existing Mount Point: /var/lib/mysql

Data Replication Resource Tag: datarep-mysql

File System Resource Tag: /var/lib/mysql

Bitmap File: (default value)

Enable Asynchronous Replication: No

After the resource has been created, the “Extend” (i.e. define backup server) wizard will appear. Use the following selections:

Target Server: node2

Switchback Type: Intelligent

Template Priority: 1

Target Priority: 10

Target Disk: /dev/xvdb1

Data Replication Resource Tag: datarep-mysql

Bitmap File: (default value)

Replication Path: 10.0.0.4/10.0.1.4

Mount Point: /var/lib/mysql

Root Tag: /var/lib/mysql

19,Create Virtual IP

Next, create a Virtual IP cluster resource. Click the “green plus” icon to create a new resource:

Follow the wizard to create the IP resource with these selections:

Select Recovery Kit: IP

Switchback Type: Intelligent

IP Resource: 10.1.0.10

Netmask: 255.255.255.0

Network Interface: eth0

IP Resource Tag: ip-10.1.0.10

Extend the IP resource with these selections:

Switchback Type: Intelligent

Template Priority: 1

Target Priority: 10

IP Resource: 10.1.0.10

Netmask: 255.255.255.0

Network Interface: eth0

IP Resource Tag: ip-10.1.0.10

20,Configure a Ping List for the IP resource

By default, SPS-Linux monitors the health of IP resources by performing a broadcast ping. In many virtual and cloud environments, broadcast pings don’t work. In a previous step, we set “NOBCASTPING=1” in /etc/default/LifeKeeper to turn off broadcast ping checks. Instead, we will define a ping list. This is a list of IP addresses to be pinged during IP health checks for this IP resource. In this guide, we will add the witness server (10.0.2.4) to our ping list.

Right click on the IP resource (ip-10.1.0.10) and select Properties:

You will see that initially, no ping list is configured for our 10.1.0.0 subnet. Click “Modify Ping List”:

Enter “10.0.2.4” (the IP address of our witness server), click “Add address” and finally click “Save List”:

You will be returned to the IP properties panel, and can verify that 10.0.2.4 has been added to the ping list. Click OK to close the window:
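
Before relying on the ping list, it is worth confirming from node1 and node2 that the witness IP actually answers pings:

# ping -c 3 10.0.2.4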

21,Create the MySQL resource hierarchy

Next, create a MySQL cluster resource. The MySQL resource is responsible for stopping/starting/monitoring of your MySQL database.

Before creating MySQL resource, make sure the database is running. Run “ps -ef | grep sql” to check.

If it’s running, great – nothing to do. If not, start the database back up:

# mysqld_safe --user=root --socket=/var/lib/mysql/mysql.sock --port=3306 --datadir=/var/lib/mysql --log &

To create, click the “green plus” icon to create a new resource:

Follow the wizard to create the MySQL resource with these selections:

Select Recovery Kit: MySQL Database

Switchback Type: Intelligent

Server: node1

Location of my.cnf: /var/lib/mysql

Location of MySQL executables: /usr/bin

Database Tag: mysql

Extend the MySQL resource with the following selections:

Target Server: node2

Switchback Type: intelligent

Template Priority: 1

Target Priority: 10

As a result, your cluster will look as follows. Notice that the Data Replication resource was automatically moved underneath the database (dependency automatically created) to ensure it’s always brought online before the database:


22,Create an EC2 resource to manage the route tables upon failover

SPS-Linux provides specific features that allow resources to failover between nodes in different availability zones and regions. Here, the EC2 Recovery Kit (i.e. cluster agent) is used to manipulate Route Tables so that connections to the Virtual IP are routed to the active cluster node.
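
Conceptually, what the EC2 Recovery Kit does on failover is re-point the VPC route for the virtual IP at the newly active instance. With the AWS CLI installed earlier, the equivalent manual operation would look roughly like this (the route table and instance IDs are hypothetical placeholders):

# aws ec2 replace-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 10.1.0.10/32 --instance-id i-xxxxxxxx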

To create, click the “green plus” icon to create a new resource:

Follow the wizard to create the EC2 resource with these selections:

Select Recovery Kit: Amazon EC2

Switchback Type: Intelligent

Server: node1

EC2 Home: /opt/aws

EC2 URL: ec2.us-west-2.amazonaws.com (use your own region’s endpoint; for this test environment, ec2.cn-north-1.amazonaws.com.cn)

AWS Access Key: (enter Access Key obtained earlier)

AWS Secret Key: (enter Secret Key obtained earlier)

EC2 Resource Type: RouteTable (Backend cluster)

IP Resource: ip-10.1.0.10

EC2 Resource Tag: ec2-10.1.0.10

Extend the EC2 resource with the following selections:

Target Server: node2

Switchback Type: intelligent

Template Priority: 1

Target Priority: 10

EC2 Resource Tag: ec2-10.1.0.10

The cluster will look like this. Notice how the EC2 resource is underneath the IP resource:

23,Create a Dependency between the IP resource and the MySQL Database resource

Create a dependency between the IP resource and the MySQL Database resource so that they failover together as a group. Right click on the “mysql” resource and select “Create Dependency”:

On the following screen, select the “ip-10.1.0.10” resource as the dependency. Click Next and continue through the wizard:

24,Test Cluster Connectivity

Cluster resources are currently active on node1.

Test connectivity to the cluster from the witness server (or another Linux instance if you have one). SSH into the witness server and run “sudo su -” to gain root access. Install the mysql client if needed:

[root@witness ~]# yum -y install mysql

Test MySQL connectivity to the cluster:

[root@witness ~]# mysql --host=10.1.0.10 mysql -u root -p

Execute the following MySQL query to display the hostname of the active cluster node:

MariaDB [mysql]> select @@hostname;

+------------+

| @@hostname |

+------------+

| node1      |

+------------+

1 row in set (0.00 sec)

MariaDB [mysql]>

Using the LifeKeeper GUI, fail over from node1 to node2. Right click on the mysql resource underneath node2, and select “In Service…”:

After failover has completed, re-run the MySQL query. You’ll notice that the MySQL client has detected that the session was lost (during failover) and automatically reconnects:

Execute the following MySQL query to display the hostname of the active cluster node, verifying that now “node2” is active:

MariaDB [mysql]> select @@hostname;

ERROR 2006 (HY000): MySQL server has gone away

No connection. Trying to reconnect...

Connection id: 12

Current database: mysql

+------------+

| @@hostname |

+------------+

| node2      |

+------------+

1 row in set (0.53 sec)

MariaDB [mysql]>