RHEL 6: Installing Oracle 12c R2 RAC (Part 2)

admin · 2018-09-16


Continuing from the previous part, this article records how to build the RAC shared storage in virtual machines using iSCSI.

 

 

1. Shared storage configuration

  Add one more server to act as the storage server, with one LAN address and two private addresses; the private addresses carry the multipath connections to the RAC clients. Then carve up and configure the disks.

  Goal: present shared LUNs that both hosts can see at the same time, six in total: three 1 GB disks for OCR and the voting disks, one 50 GB disk for the GIMR, and the rest planned as data disks and the FRA (Fast Recovery Area).

  Add a 93 GB disk to the storage server:


 
  // LV layout
  asmdisk1 1G
  asmdisk2 1G
  asmdisk3 1G
  asmdisk4 50G
  asmdisk5 20G
  asmdisk6 20G

1.1 Check the storage network

  The RAC nodes act as storage clients. In VMware, create vlan10 and vlan20, and put one NIC of each RAC node and of the storage server into vlan10 and the other into vlan20, so the clients can reach the storage over multiple paths.

  Storage (server): 10.0.0.111, 10.0.0.222
  rac-jydb1 (client): 10.0.0.5, 10.0.0.11
  rac-jydb2 (client): 10.0.0.6, 10.0.0.22

  Finally, verify that the network is fully reachable before moving on; a quick check is sketched below.
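A minimal connectivity check, run from each RAC node against both storage portals, assuming the addresses listed above:

  # run on rac-jydb1 and rac-jydb2; both storage portals should answer
  ping -c 3 10.0.0.111
  ping -c 3 10.0.0.222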

1.2 Install the iSCSI packages

--Server side
yum install scsi-target-utils

--Client side (both RAC nodes)
yum install iscsi-initiator-utils
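To confirm the packages are in place before continuing, a quick check:

  rpm -qa | grep scsi-target-utils        # on the storage server
  rpm -qa | grep iscsi-initiator-utils    # on the RAC nodes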

1.3 Simulate adding storage disks (server side)

Add a 93 GB disk; this simulates adding an actual disk to the storage array.
Here the new disk shows up as /dev/sdb, and I turn it into LVM:


 
  # pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created

  # vgcreate vg_storage /dev/sdb
  Volume group "vg_storage" successfully created

  # lvcreate -L 10g -n lv_lun1 vg_storage   // size each LV according to the layout planned earlier
  Logical volume "lv_lun1" created
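The remaining volumes follow the same pattern. A sketch matching the 1/1/1/50/20/20 GB layout from section 1 (the outputs later in this article were captured in a test run with smaller sizes, so adjust -L to your own plan):

  lvcreate -L 1g -n lv_lun2 vg_storage
  lvcreate -L 1g -n lv_lun3 vg_storage
  lvcreate -L 50g -n lv_lun4 vg_storage
  lvcreate -L 20g -n lv_lun5 vg_storage
  lvcreate -L 20g -n lv_lun6 vg_storage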

1.4 Configure the iSCSI server

  The main iSCSI server configuration file is /etc/tgt/targets.conf.

  Using a name that follows the IQN convention, add the following configuration:


 
  <target iqn.2018-03.com.cnblogs.test:alfreddisk>
      backing-store /dev/vg_storage/lv_lun1  # Becomes LUN 1
      backing-store /dev/vg_storage/lv_lun2  # Becomes LUN 2
      backing-store /dev/vg_storage/lv_lun3  # Becomes LUN 3
      backing-store /dev/vg_storage/lv_lun4  # Becomes LUN 4
      backing-store /dev/vg_storage/lv_lun5  # Becomes LUN 5
      backing-store /dev/vg_storage/lv_lun6  # Becomes LUN 6
  </target>
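As written, tgtd accepts any initiator (the tgt-admin output further below shows "ACL information: ALL"). If you want to limit access to the RAC nodes, targets.conf also accepts initiator-address lines inside the target block; a sketch using the client addresses from section 1.1:

  # optional, inside the <target> ... </target> block:
  initiator-address 10.0.0.5
  initiator-address 10.0.0.11
  initiator-address 10.0.0.6
  initiator-address 10.0.0.22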

After configuring, start the service and enable it at boot:


 
  [root@Storage ~]# service tgtd start
  Starting SCSI target daemon: [ OK ]
  [root@Storage ~]# chkconfig tgtd on
  [root@Storage ~]# chkconfig --list | grep tgtd
  tgtd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
  [root@Storage ~]# service tgtd status
  tgtd (pid 1763 1760) is running...

Then query the related information, such as the listening port and the LUN details (Type: disk):


 
  [root@Storage ~]# netstat -tlunp | grep tgt
  tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN 1760/tgtd
  tcp 0 0 :::3260 :::* LISTEN 1760/tgtd

  [root@Storage ~]# tgt-admin --show
  Target 1: iqn.2018-03.com.cnblogs.test:alfreddisk
      System information:
          Driver: iscsi
          State: ready
      I_T nexus information:
      LUN information:
          LUN: 0
              Type: controller
              SCSI ID: IET 00010000
              SCSI SN: beaf10
              Size: 0 MB, Block size: 1
              Online: Yes
              Removable media: No
              Prevent removal: No
              Readonly: No
              Backing store type: null
              Backing store path: None
              Backing store flags:
          LUN: 1
              Type: disk
              SCSI ID: IET 00010001
              SCSI SN: beaf11
              Size: 10737 MB, Block size: 512
              Online: Yes
              Removable media: No
              Prevent removal: No
              Readonly: No
              Backing store type: rdwr
              Backing store path: /dev/vg_storage/lv_lun1
              Backing store flags:
      Account information:
      ACL information:
          ALL

1.5 Configure the iSCSI clients

Confirm the boot-time services are enabled:


 
  # chkconfig --list | grep scsi
  iscsi 0:off 1:off 2:off 3:on 4:on 5:on 6:off
  iscsid 0:off 1:off 2:off 3:on 4:on 5:on 6:off
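If either service shows off for runlevels 3-5, enable it:

  chkconfig iscsi on
  chkconfig iscsid on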

Use the iscsiadm command to scan the server's LUNs (probe the iSCSI target):


 
  [root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.111
  10.0.0.111:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
  [root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.222
  10.0.0.222:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk

List the discovered node records with iscsiadm -m node:


 
  [root@jydb1 ~]# iscsiadm -m node
  10.0.0.111:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
  10.0.0.222:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk

Check the files under /var/lib/iscsi/nodes/:


 
  [root@jydb1 ~]# ll -R /var/lib/iscsi/nodes/
  /var/lib/iscsi/nodes/:
  total 4
  drw------- 4 root root 4096 Mar 29 00:59 iqn.2018-03.com.cnblogs.test:alfreddisk

  /var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk:
  total 8
  drw------- 2 root root 4096 Mar 29 00:59 10.0.0.111,3260,1
  drw------- 2 root root 4096 Mar 29 00:59 10.0.0.222,3260,1

  /var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.0.111,3260,1:
  total 4
  -rw------- 1 root root 2049 Mar 29 00:59 default

  /var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.0.222,3260,1:
  total 4
  -rw------- 1 root root 2049 Mar 29 00:59 default

Attach the iSCSI disks

  Based on the discovery results above, run the following command to log in to the shared disks:

iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login


 
  [root@jydb1 ~]# iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login
  Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.0.111,3260] (multiple)
  Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.0.222,3260] (multiple)
  Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.0.111,3260] successful.
  Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.0.222,3260] successful.

Both logins succeeded.
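Sessions created this way are normally re-established at boot by the iscsi service; if you need to force that behavior, the node.startup setting can be updated per portal (a sketch; node.startup is a standard iscsiadm node parameter):

  iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk -p 10.0.0.111 --op update -n node.startup -v automatic
  iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk -p 10.0.0.222 --op update -n node.startup -v automatic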

Check the attached iSCSI disks with fdisk -l or lsblk:


 
  [root@jydb1 ~]# lsblk
  NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
  sda 8:0 0 35G 0 disk
  ├─sda1 8:1 0 200M 0 part /boot
  ├─sda2 8:2 0 7.8G 0 part [SWAP]
  └─sda3 8:3 0 27G 0 part /
  sr0 11:0 1 3.5G 0 rom /mnt
  sdb 8:16 0 1G 0 disk
  sdc 8:32 0 1G 0 disk
  sdd 8:48 0 1G 0 disk
  sde 8:64 0 1G 0 disk
  sdf 8:80 0 1G 0 disk
  sdg 8:96 0 1G 0 disk
  sdi 8:128 0 40G 0 disk
  sdk 8:160 0 10G 0 disk
  sdm 8:192 0 10G 0 disk
  sdj 8:144 0 10G 0 disk
  sdh 8:112 0 40G 0 disk
  sdl 8:176 0 10G 0 disk
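Twelve new sd devices appear because each of the six LUNs is visible through both portals. To see which sd device belongs to which session and path, a quick check:

  # -P 3 prints each session down to the attached SCSI devices
  iscsiadm -m session -P 3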

1.6 Configure multipath (client side)

Install the multipath package:


 
  rpm -qa | grep device-mapper-multipath
  # if nothing is installed, install with yum:
  yum install -y device-mapper-multipath
  # or download and install these two rpms:
  device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
  device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

Enable it at boot:

chkconfig multipathd on

Generate the multipath configuration file:


 
  -- generate the multipath configuration file
  /sbin/mpathconf --enable

  -- display the multipath topology
  multipath -ll

  -- rescan the paths
  multipath -v2   (use -v3 for more verbose output)

  -- flush all multipath maps (needed before regenerating)
  multipath -F

A reference run:


 
  [root@jydb1 ~]# multipath -v2
  [root@jydb1 ~]# multipath -ll
  Mar 29 03:40:10 | multipath.conf line 109, invalid keyword: multipaths
  Mar 29 03:40:10 | multipath.conf line 115, invalid keyword: multipaths
  Mar 29 03:40:10 | multipath.conf line 121, invalid keyword: multipaths
  Mar 29 03:40:10 | multipath.conf line 127, invalid keyword: multipaths
  Mar 29 03:40:10 | multipath.conf line 133, invalid keyword: multipaths
  Mar 29 03:40:10 | multipath.conf line 139, invalid keyword: multipaths
  asmdisk6 (1IET 00010006) dm-5 IET,VIRTUAL-DISK   // the value in parentheses is the WWID
  size=10.0G features='0' hwhandler='0' wp=rw
  |-+- policy='round-robin 0' prio=1 status=active
  | `- 33:0:0:6 sdj 8:144 active ready running
  `-+- policy='round-robin 0' prio=1 status=enabled
    `- 34:0:0:6 sdm 8:192 active ready running
  asmdisk5 (1IET 00010005) dm-2 IET,VIRTUAL-DISK
  size=10G features='0' hwhandler='0' wp=rw
  |-+- policy='round-robin 0' prio=1 status=active
  | `- 33:0:0:5 sdh 8:112 active ready running
  `-+- policy='round-robin 0' prio=1 status=enabled
    `- 34:0:0:5 sdl 8:176 active ready running
  asmdisk4 (1IET 00010004) dm-4 IET,VIRTUAL-DISK
  size=40G features='0' hwhandler='0' wp=rw
  |-+- policy='round-robin 0' prio=1 status=active
  | `- 33:0:0:4 sdf 8:80 active ready running
  `-+- policy='round-robin 0' prio=1 status=enabled
    `- 34:0:0:4 sdk 8:160 active ready running
  asmdisk3 (1IET 00010003) dm-3 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  |-+- policy='round-robin 0' prio=1 status=active
  | `- 33:0:0:3 sdd 8:48 active ready running
  `-+- policy='round-robin 0' prio=1 status=enabled
    `- 34:0:0:3 sdi 8:128 active ready running
  asmdisk2 (1IET 00010002) dm-1 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  |-+- policy='round-robin 0' prio=1 status=active
  | `- 33:0:0:2 sdc 8:32 active ready running
  `-+- policy='round-robin 0' prio=1 status=enabled
    `- 34:0:0:2 sdg 8:96 active ready running
  asmdisk1 (1IET 00010001) dm-0 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  |-+- policy='round-robin 0' prio=1 status=active
  | `- 33:0:0:1 sdb 8:16 active ready running
  `-+- policy='round-robin 0' prio=1 status=enabled
    `- 34:0:0:1 sde 8:64 active ready running

(The "invalid keyword: multipaths" warnings come from a mis-nested multipaths section in multipath.conf; the corrected layout is shown in the configuration step below.)

Start the multipathd service:

# service multipathd start

Configure multipath

  First change: user_friendly_names is best set to no. With no, the system uses the WWID as the basis for the multipath name; with yes, it assigns mpathN names recorded under /etc/multipath/.

  When user_friendly_names is yes, a multipath device name is unique on a node, but it is not guaranteed to be the same on every node that uses the device: mpath1 on node one and mpath1 on node two may be different LUNs. The WWID of a given LUN, by contrast, is identical on every server, so set user_friendly_names to no and use WWID-based aliases instead.

  defaults {
      user_friendly_names no
      path_grouping_policy failover   // active/standby mode; path_grouping_policy multibus is active/active
  }

  Second change: bind the WWIDs to aliases. The WWIDs are the values shown by multipath -ll above. Note that this must be a single multipaths section containing one multipath subsection per LUN; repeating "multipaths {" for every entry is what produced the "invalid keyword: multipaths" warnings seen earlier.

  multipaths {
      multipath {
          wwid "1IET 00010001"
          alias asmdisk1
      }
      multipath {
          wwid "1IET 00010002"
          alias asmdisk2
      }
      multipath {
          wwid "1IET 00010003"
          alias asmdisk3
      }
      multipath {
          wwid "1IET 00010004"
          alias asmdisk4
      }
      multipath {
          wwid "1IET 00010005"
          alias asmdisk5
      }
      multipath {
          wwid "1IET 00010006"
          alias asmdisk6
      }
  }
For the configuration to take effect, restart multipathd:

service multipathd restart

After binding, check the multipath aliases:


 
  [root@jydb1 ~]# cd /dev/mapper/
  [root@jydb1 mapper]# ls
  asmdisk1 asmdisk2 asmdisk3 asmdisk4 asmdisk5 asmdisk6 control
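If an alias fails to appear, verify the WWID straight from one of the underlying paths; a sketch, assuming the RHEL 6 location of scsi_id:

  # prints the WWID multipath uses for this path device
  /lib/udev/scsi_id --whitelisted --device=/dev/sdb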

Bind raw devices with udev

First bind the permissions with udev; otherwise, with the wrong permissions, the installer will not be able to see the shared disks.

  Before the change:


 
  [root@jydb1 ~]# ls -lh /dev/dm*
  brw-rw---- 1 root disk 253, 0 Apr 2 16:18 /dev/dm-0
  brw-rw---- 1 root disk 253, 1 Apr 2 16:18 /dev/dm-1
  brw-rw---- 1 root disk 253, 2 Apr 2 16:18 /dev/dm-2
  brw-rw---- 1 root disk 253, 3 Apr 2 16:18 /dev/dm-3
  brw-rw---- 1 root disk 253, 4 Apr 2 16:18 /dev/dm-4
  brw-rw---- 1 root disk 253, 5 Apr 2 16:18 /dev/dm-5
  crw-rw---- 1 root audio 14, 9 Apr 2 16:18 /dev/dmmidi

This system is RHEL 6.6; if you change the permissions of the multipath devices by hand, they revert to root within a few seconds, so udev has to be used to enforce them.

  Search for the matching rules template:


 
  [root@jyrac1 ~]# find / -name 12-*
  /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules

Based on the template, create 12-dm-permissions.rules under /etc/udev/rules.d/ and modify it:


 
  vi /etc/udev/rules.d/12-dm-permissions.rules

  # MULTIPATH DEVICES
  #
  # Set permissions for all multipath devices
  # modify the line below:
  ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"

  # Set permissions for first two partitions created on a multipath device (and detected by kpartx)
  # ENV{DM_UUID}=="part[1-2]-mpath-?*", OWNER:="root", GROUP:="root", MODE:="660"

After finishing, run start_udev; if the permissions look right, you are done:


 
  [root@jydb1 ~]# start_udev
  Starting udev: [ OK ]
  [root@jydb1 ~]# ls -lh /dev/dm*
  brw-rw---- 1 grid asmadmin 253, 0 Apr 2 16:25 /dev/dm-0
  brw-rw---- 1 grid asmadmin 253, 1 Apr 2 16:25 /dev/dm-1
  brw-rw---- 1 grid asmadmin 253, 2 Apr 2 16:25 /dev/dm-2
  brw-rw---- 1 grid asmadmin 253, 3 Apr 2 16:25 /dev/dm-3
  brw-rw---- 1 grid asmadmin 253, 4 Apr 2 16:25 /dev/dm-4
  brw-rw---- 1 grid asmadmin 253, 5 Apr 2 16:25 /dev/dm-5
  crw-rw---- 1 root audio 14, 9 Apr 2 16:24 /dev/dmmidi

Bind the disk devices

  Query the major and minor device numbers of the multipath devices:
 
  [root@jydb1 ~]# ls -lt /dev/dm-*
  brw-rw---- 1 grid asmadmin 253, 5 Mar 29 04:00 /dev/dm-5
  brw-rw---- 1 grid asmadmin 253, 3 Mar 29 04:00 /dev/dm-3
  brw-rw---- 1 grid asmadmin 253, 2 Mar 29 04:00 /dev/dm-2
  brw-rw---- 1 grid asmadmin 253, 4 Mar 29 04:00 /dev/dm-4
  brw-rw---- 1 grid asmadmin 253, 1 Mar 29 04:00 /dev/dm-1
  brw-rw---- 1 grid asmadmin 253, 0 Mar 29 04:00 /dev/dm-0

  [root@jydb1 ~]# dmsetup ls | sort
  asmdisk1 (253:0)
  asmdisk2 (253:1)
  asmdisk3 (253:3)
  asmdisk4 (253:4)
  asmdisk5 (253:2)
  asmdisk6 (253:5)

  ### Note: if the system partitions were laid out with LVM, the root LVs also occupy device-mapper numbers; when binding the raw devices below, make sure the minor numbers match. ###


 
Bind the raw devices according to the mapping above:

  vi /etc/udev/rules.d/60-raw.rules
  # Enter raw device bindings here.
  #
  # An example would be:
  # ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
  # to bind /dev/raw/raw1 to /dev/sda, or
  # ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
  # to bind /dev/raw/raw2 to the device with major 8, minor 1.
  ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="0", RUN+="/bin/raw /dev/raw/raw1 %M %m"
  ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
  ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="2", RUN+="/bin/raw /dev/raw/raw3 %M %m"
  ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="3", RUN+="/bin/raw /dev/raw/raw4 %M %m"
  ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="4", RUN+="/bin/raw /dev/raw/raw5 %M %m"
  ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="5", RUN+="/bin/raw /dev/raw/raw6 %M %m"

  ACTION=="add", KERNEL=="raw1", OWNER="grid", GROUP="asmadmin", MODE="660"
  ACTION=="add", KERNEL=="raw2", OWNER="grid", GROUP="asmadmin", MODE="660"
  ACTION=="add", KERNEL=="raw3", OWNER="grid", GROUP="asmadmin", MODE="660"
  ACTION=="add", KERNEL=="raw4", OWNER="grid", GROUP="asmadmin", MODE="660"
  ACTION=="add", KERNEL=="raw5", OWNER="grid", GROUP="asmadmin", MODE="660"
  ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="asmadmin", MODE="660"

When done, check:


 
  [root@jydb1 ~]# start_udev
  Starting udev: [ OK ]
  [root@jydb1 ~]# ll /dev/raw/raw*
  crw-rw---- 1 grid asmadmin 162, 1 May 25 05:03 /dev/raw/raw1
  crw-rw---- 1 grid asmadmin 162, 2 May 25 05:03 /dev/raw/raw2
  crw-rw---- 1 grid asmadmin 162, 3 May 25 05:03 /dev/raw/raw3
  crw-rw---- 1 grid asmadmin 162, 4 May 25 05:03 /dev/raw/raw4
  crw-rw---- 1 grid asmadmin 162, 5 May 25 05:03 /dev/raw/raw5
  crw-rw---- 1 grid asmadmin 162, 6 May 25 05:03 /dev/raw/raw6
  crw-rw---- 1 root disk 162, 0 May 25 05:03 /dev/raw/rawctl
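You can also query the bindings themselves to confirm each raw device points at the right major/minor pair (a sketch; raw -qa queries all bound raw devices):

  raw -qa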

That concludes this part; the next one will cover installing Grid Infrastructure.

https://blog.csdn.net/weixin_40283570/article/details/80927901

 
