I: Basics

1. Basic setup

Do the following on all nodes. Configure a yum repository (in a file under /etc/yum.repos.d/):

[1]
name=1
baseurl=ftp://172.25.254.100/pub/media
enabled=1
gpgcheck=0

[2]
name=2
baseurl=ftp://172.25.254.100/pub/media/addons/HighAvailability
enabled=1
gpgcheck=0

 

hostnamectl set-hostname node1     change the host name

hostnamectl set-hostname node2

 

cat /etc/hosts

172.25.254.100 node1

172.25.254.101 node2

Synchronize time on all nodes with NTP; not set up again here.

Passwordless SSH keys need to be set up between the nodes (in every direction), as sketched below.
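A minimal sketch of the key exchange, assuming the root account is used and the host names above resolve; run ssh-keygen once per node, then repeat ssh-copy-id from every node to every other node:

ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
ssh-copy-id root@node1
ssh-copy-id root@node2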

2. Firewall configuration

 

systemctl stop firewalld.service

systemctl disable firewalld.service

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

setenforce 0     (turn this off on the physical host as well)

###firewall-cmd --permanent  --add-service=high-availability

###firewall-cmd  --reload

II: Cluster service configuration

1. Install packages and start the services

yum -y install pcs pacemaker corosync fence-agents-all lvm2-cluster     (lvm2-cluster is for clustered storage)

 

So the nodes can communicate with each other, set the same account and password on every node:

echo 123456 | passwd --stdin hacluster           the user name cannot be changed for now, unless you know which file to edit

Start the services:

cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf     so corosync has a config file and does not fail to start

 

systemctl start pcsd

systemctl enable pcsd

 

 

systemctl start pacemaker

systemctl enable pacemaker

systemctl start corosync

systemctl enable corosync
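Optionally confirm on every node that the daemons are up before continuing:

systemctl is-active pcsd corosync pacemaker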

2. Authenticate the nodes to each other

Authenticate between all the nodes:

pcs cluster auth node1 node2 ...

Account: hacluster

Password: 123456

(pcs cluster auth node103 node102 node101 -u hacluster -p 123456)

3. Create and start the cluster (run on a single node)

pcs cluster setup --start --name xiang node1 node2    --force

Enable the cluster to start at boot on all nodes:

pcs cluster enable --all

Check the status:

pcs status

pcs cluster status

pcs property set stonith-enabled=false     fencing is not used for now
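To confirm the property took effect (optional check):

pcs property show stonith-enabled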

III: Service configuration

pcs resource list | grep <keyword>     search for a resource agent

1. Configure storage

Refer to the iSCSI configuration documentation to attach the shared disk; a sketch follows below.
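A sketch of logging in to the shared iSCSI LUN, assuming the target is exported by 172.25.254.100 (adjust the portal and target name to your environment); run on every node, after which the disk is assumed to appear as /dev/sdb:

yum -y install iscsi-initiator-utils
iscsiadm -m discovery -t sendtargets -p 172.25.254.100
iscsiadm -m node -l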

 mkfs.xfs /dev/sdb1 -f

Reboot the other node so that it sees the new partition.

 

pcs resource create  FS ocf:heartbeat:Filesystem device="/dev/sdb1"  directory="/mnt"  fstype="xfs"
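Optionally verify the new resource, mirroring the check done for VIP below:

pcs resource show FS
pcs status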

 

2. Configure the floating IP

pcs resource create VIP ocf:heartbeat:IPaddr2 ip=172.25.254.170 cidr_netmask=24

pcs resource  update VIP op  monitor interval=15s

pcs resource show VIP

 Resource: VIP (class=ocf provider=heartbeat type=IPaddr2)

  Attributes: ip=172.25.254.170 cidr_netmask=24

  Operations: start interval=0s timeout=20s (VIP-start-timeout-20s)

              stop interval=0s timeout=20s (VIP-stop-timeout-20s)

              monitor interval=15s (VIP-monitor-interval-15s)

pcs status

ping    172.25.254.170
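To see which node currently holds the floating IP (optional):

ip addr show | grep 172.25.254.170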

3. Configure the apache service

On all nodes: yum -y install httpd

Make sure httpd is disabled at boot (systemctl disable httpd) so the cluster, not systemd, starts it.

pcs resource create WEB ocf:heartbeat:apache

pcs resource show WEB
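The ocf:heartbeat:apache agent monitors httpd through a status URL (set further below with statusurl). If the default /server-status URL is used instead, mod_status needs to be enabled; a minimal sketch assuming the stock RHEL 7 httpd layout:

cat > /etc/httpd/conf.d/server-status.conf <<'EOF'
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
EOF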

 

4. Create a group and bind the necessary resources

Bundle the VIP and WEB resources into this group so that they fail over between cluster nodes as one unit.

pcs resource show

 VIP   (ocf::heartbeat:IPaddr2):   Started

 WEB   (ocf::heartbeat:apache):    Stopped

pcs resource group add MyGroup VIP

pcs resource group add MyGroup WEB

pcs resource update WEB statusurl="http://172.25.254.170"

 

Rebuild the group so that FS is included:

pcs resource disable WEB

pcs resource disable VIP

pcs resource group remove MyGroup WEB

pcs resource group remove MyGroup VIP

pcs resource group add MyGroup FS VIP WEB

pcs resource enable FS

pcs resource enable VIP

pcs resource enable WEB

pcs status

 

5. Start order

Configure the resource start order to avoid resource conflicts. Syntax below. (Resources added with pcs resource group add also start in the order they were added, so this step is optional.)

pcs constraint order [action] <resource> then [action] <resource>

pcs constraint order start VIP then start WEB
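The same syntax can also mount the filesystem before the IP comes up (optional, since resources in MyGroup already start in the order FS, VIP, WEB):

pcs constraint order start FS then start VIP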

pcs constraint show --full

 

 

 

Install the database on all nodes:

yum -y install mariadb  mariadb-*

systemctl disable mariadb

systemctl status mariadb     keep it stopped

On a single node:

pcs resource create DB ocf:heartbeat:mysql     (named DB here so that the location constraints below can reference it)
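If the defaults do not match the installation, ocf:heartbeat:mysql also accepts explicit paths; a sketch assuming standard RHEL 7 MariaDB locations (adjust to your setup):

pcs resource create DB ocf:heartbeat:mysql config="/etc/my.cnf" datadir="/var/lib/mysql" socket="/var/lib/mysql/mysql.sock" op monitor interval=20s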

pcs status

 

Configure a two-node setup in which each node backs up the other: one node runs the web service, the other runs the DB, each with its own floating IP; when one fails, its resources switch to the other node.

# pcs resource create VIP1 ocf:heartbeat:IPaddr2 ip=192.168.122.171 cidr_netmask=24 nic=eth0

# pcs resource group add MyGroup1 VIP1 DB    (because DB was already added to the MyGroup group earlier, first run pcs resource disable WEB; pcs resource disable DB; pcs resource group remove MyGroup DB; pcs resource enable WEB; only then can DB be added to MyGroup1)
# pcs constraint location add server1-DB DB node1 1
# pcs constraint location add server1-VIP1 VIP1 node1 1
# pcs constraint location add server2-FS FS node2 1
# pcs constraint location add server2-VIP VIP node2 1
# pcs constraint location add server2-WEB WEB node2 1
After running the commands above, DB and VIP1 start on node1, while the remaining FS, VIP and WEB start on node2 (each location constraint needs its own unique id, hence server1-DB, server2-FS, and so on).
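Verify the resulting constraints and placement (optional):

pcs constraint show --full
pcs status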

 

 

Check that time is in sync on each node:

timedatectl status

 

 

6. Check cluster status

corosync-cfgtool -s

Printing ring status.

Local node ID 1

RING ID 0

    id    = 192.168.17.132

    status    = ring 0 active with no faults

 

 

corosync-cmapctl | grep members

runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.17.132)

runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1

runtime.totem.pg.mrp.srp.members.1.status (str) = joined

runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0

 

pcs status corosync

Membership information

----------------------

    Nodeid      Votes Name

         1          1 controller1 (local)

         3          1 controller3

         2          1 controller2