1. System Environment
1.1 Hardware Environment
Server: VMware Workstation 10. Create two virtual machines, with hostnames rac1 and rac2, each with 40 GB of disk space and two network adapters. Set the second adapter's network connection to Custom, and use the same custom network on both nodes.
Windows host IP: 192.168.6.1
1.2 Software Environment
Database: Oracle 11.2.0.4 database x86-64
Grid: Oracle 11.2.0.4 grid x86-64
Operating system: rhel-server-6.3-x86_64, minimal installation
1.3 Network Environment
The IP address plan:

Name           Netmask          IP Address
rac1-public    255.255.255.0    192.168.6.11
rac2-public    255.255.255.0    192.168.6.12
rac1-private   255.255.255.0    2.2.2.2
rac2-private   255.255.255.0    2.2.2.3
rac1-vip       255.255.255.0    192.168.6.13
rac2-vip       255.255.255.0    192.168.6.14
SCAN           255.255.255.0    192.168.6.15
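Oracle requires the public, VIP, and SCAN addresses to live on the same subnet as the public interface. A minimal sketch of that sanity check over the plan above (pure string matching against the gateway's /24, so it runs anywhere; the variable names are illustrative):

```shell
# Sketch: every public-side address in the plan must sit in the
# gateway's /24 (192.168.6.0/24 here). No network access needed.
net=192.168.6
bad=""
for ip in 192.168.6.11 192.168.6.12 192.168.6.13 192.168.6.14 192.168.6.15; do
  case "$ip" in
    "$net".*) ;;              # in the subnet, OK
    *) bad="$bad $ip" ;;      # collect any offenders
  esac
done
if [ -z "$bad" ]; then
  echo "all public-side IPs are in $net.0/24"
else
  echo "outside subnet:$bad"
fi
```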
1.4 Shared Disk Partitions
Four shared disks will be created (sdb, sdc, sdd, sde), and each disk will be split into three partitions.
2. Pre-installation Preparation
2.1 Configure Static IP Addresses
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Set the IP address. On each VM, eth0 is the public interface (the HWADDR and UUID values below are examples; use each adapter's actual values):
DEVICE="eth0"
BOOTPROTO="static"
HWADDR="00:0C:29:D1:4E:A6"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="e59cb6a0-deb0-4164-a2b0-8b4dcc0cb027"
IPADDR=192.168.6.11
NETMASK=255.255.255.0
GATEWAY=192.168.6.1
vi /etc/sysconfig/network-scripts/ifcfg-eth1
Set the IP address. On each VM, eth1 is the private interface (again, substitute eth1's own HWADDR and UUID; they differ from eth0's):
DEVICE="eth1"
BOOTPROTO="static"
HWADDR="00:0C:29:D1:4E:A6"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="e59cb6a0-deb0-4164-a2b0-8b4dcc0cb027"
IPADDR=2.2.2.2
NETMASK=255.255.255.0
After editing, restart the network service:
service network restart
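Since the two files differ only in a few fields, they can be generated from one small helper. A sketch (the `write_ifcfg` function is hypothetical; HWADDR and UUID lines are deliberately left out because each adapter needs its own values):

```shell
# Sketch: emit an ifcfg-style config from parameters. HWADDR/UUID are
# omitted on purpose -- fill in each adapter's real values afterwards.
write_ifcfg() {  # args: device ip netmask [gateway]
  printf 'DEVICE="%s"\nBOOTPROTO="static"\nNM_CONTROLLED="yes"\nONBOOT="yes"\nTYPE="Ethernet"\nIPADDR=%s\nNETMASK=%s\n' "$1" "$2" "$3"
  if [ -n "$4" ]; then
    printf 'GATEWAY=%s\n' "$4"
  fi
}
# rac1's two interfaces (the public one has the gateway, the private one does not):
cfg_pub="$(write_ifcfg eth0 192.168.6.11 255.255.255.0 192.168.6.1)"
cfg_priv="$(write_ifcfg eth1 2.2.2.2 255.255.255.0)"
printf '%s\n' "$cfg_pub"
```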
2.2 Disable the Firewall on rac1 and rac2
service iptables stop -- stop the firewall
chkconfig iptables off -- disable it at boot
2.3 Change the Hostname on rac1 and rac2
vi /etc/sysconfig/network -- takes effect after a reboot; use rac1 on one node and rac2 on the other
HOSTNAME=rac1
2.4 Edit /etc/hosts on rac1 and rac2
vi /etc/hosts
Add the IP entries below (they must match the plan in section 1.3):
#public
192.168.6.11 rac1
192.168.6.12 rac2
#private
2.2.2.2 rac1-priv
2.2.2.3 rac2-priv
#virtual
192.168.6.13 rac1-vip
192.168.6.14 rac2-vip
#scan
192.168.6.15 cluster-scan
2.5 Configure Kernel Parameters on rac1 and rac2
vi /etc/sysctl.conf
Append the following:
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2147483648
kernel.shmmax = 68719476736
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Apply the changed parameters immediately:
sysctl -p
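The shmmax/shmall values above are the generic ones from the Oracle install guide; Oracle's sizing guideline is shmmax at roughly half of physical RAM and shmall as total RAM in pages. The arithmetic can be sketched for a hypothetical 4 GiB node (on a real node, read MemTotal from /proc/meminfo and the page size from `getconf PAGE_SIZE`):

```shell
# Sketch: derive node-specific shmmax/shmall per Oracle's guideline.
# The 4 GiB figure is a stand-in; on a real node use:
#   mem_kb=$(awk '/MemTotal/{print $2}' /proc/meminfo)
#   page=$(getconf PAGE_SIZE)
mem_bytes=$((4 * 1024 * 1024 * 1024))   # hypothetical 4 GiB of RAM
page=4096                               # typical x86_64 page size
shmmax=$((mem_bytes / 2))               # half of RAM, in bytes
shmall=$((mem_bytes / page))            # whole RAM, counted in pages
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```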
2.6 Modify limits on rac1 and rac2
vi /etc/security/limits.conf
Append the following (the nofile values are the standard ones from the Oracle install guide):
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
2.7 Modify /etc/pam.d/login on rac1 and rac2
vi /etc/pam.d/login
Append the following line:
session required pam_limits.so
2.8 Modify /etc/profile on rac1 and rac2
vi /etc/profile
Append the following:
if [ "$USER" = "oracle" ] || [ "$USER" = "grid" ]; then
    if [ "$SHELL" = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
2.9 Disable SELinux on rac1 and rac2
vi /etc/selinux/config
Change the SELINUX value:
SELINUX=disabled
2.10 Stop the ntp Service on rac1 and rac2
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.bak
2.11 Handle Insufficient /dev/shm Shared Memory on rac1 and rac2
Run df -h and check whether the tmpfs mount is larger than 1 GB; if it is too small, enlarge it:
vi /etc/fstab
Default line: tmpfs /dev/shm tmpfs defaults 0 0
Change it to:
tmpfs /dev/shm tmpfs defaults,size=1024m 0 0
mount -o remount /dev/shm
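The size test itself can be scripted. A sketch that runs on a canned value (the 524288 figure is a made-up sample; the commented line shows how the real number would be read):

```shell
# Sketch: flag /dev/shm if it is under 1 GiB. The sample value is
# hypothetical; on a real node obtain it with:
#   shm_kb=$(df -k /dev/shm | awk 'NR==2 {print $2}')
shm_kb=524288                 # pretend df reported 512 MiB
threshold_kb=$((1024 * 1024)) # 1 GiB expressed in KiB
if [ "$shm_kb" -lt "$threshold_kb" ]; then
  verdict="too small: add size=1024m in /etc/fstab and remount"
else
  verdict="ok"
fi
echo "/dev/shm: $verdict"
```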
2.12 Verify Required Packages on rac1 and rac2
rpm -qa | grep binutils-
rpm -qa | grep compat-libstdc++-
rpm -qa | grep elfutils-libelf-
rpm -qa | grep elfutils-libelf-devel-
rpm -qa | grep glibc-
rpm -qa | grep glibc-common-
rpm -qa | grep glibc-devel-
rpm -qa | grep gcc-
rpm -qa | grep gcc-c++-
rpm -qa | grep libaio-
rpm -qa | grep libaio-devel-
rpm -qa | grep libgcc-
rpm -qa | grep libstdc++-
rpm -qa | grep libstdc++-devel-
rpm -qa | grep make-
rpm -qa | grep sysstat-
rpm -qa | grep unixODBC-
rpm -qa | grep unixODBC-devel-
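The whole list can be checked with one loop instead of 18 separate commands. A sketch that only builds and prints the rpm commands, so it runs anywhere (pipe the output to sh on a real node; the exact versioned names such as compat-libstdc++-33 are the usual RHEL 6 x86_64 ones and may need adjusting):

```shell
# Sketch: generate one "rpm -q" check per required package. The
# compat-libstdc++-33 name is an assumption for RHEL 6 x86_64.
pkgs="binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
glibc glibc-common glibc-devel gcc gcc-c++ libaio libaio-devel libgcc \
libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel"
cmds=""
for p in $pkgs; do
  cmds="${cmds}rpm -q $p
"
done
printf '%s' "$cmds"
```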
2.13 Install Missing Packages via yum on rac1 and rac2
mkdir /yum
mount /dev/cdrom /yum
vi /etc/yum.repos.d/chenbin.repo
Add the following (the baseurl and gpgkey point at the DVD mounted on /yum):
[rhel-chenbin]
name=Red Hat Enterprise Linux $releasever - $basearch - Debug
baseurl=file:///yum
enabled=1
gpgcheck=1
gpgkey=file:///yum/RPM-GPG-KEY-redhat-release
yum list -- list the available packages
yum -y install binutils* compat-* elfutils-libelf* gcc-* gcc-* kernel-* ksh-* libaio-* libgcc-* libgomp-* libstdc++-* make-* numactl-devel-* sysstat-* unixODBC-* pdksh*
2.14 Disable Unneeded Services on rac1 and rac2
chkconfig autofs off
chkconfig acpid off
chkconfig sendmail off
chkconfig cups-config-daemon off
chkconfig cups off
chkconfig xfs off
chkconfig lm_sensors off
chkconfig gpm off
chkconfig openibd off
chkconfig pcmcia off
chkconfig cpuspeed off
chkconfig nfslock off
chkconfig ip6tables off
chkconfig rpcidmapd off
chkconfig apmd off
chkconfig arptables_jf off
chkconfig microcode_ctl off
chkconfig rpcgssd off
chkconfig ntpd off
3. Create Groups and Users
3.1 Create the oracle and grid Users and Groups on rac1 and rac2
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 505 asmoper
groupadd -g 506 asmdba
useradd -g oinstall -G dba,asmdba,oper oracle
useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
3.2 Set Passwords for the oracle and grid Users on rac1 and rac2
passwd oracle
passwd grid
3.3 Create the Directories for the grid and oracle Users on rac1 and rac2
mkdir -p /u01/app/oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory
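The layout above can be rehearsed safely before touching /u01. A sketch that builds the same tree under a scratch directory (the mktemp base is illustrative; the chown/chmod steps are left out here because they need root and the users from section 3.1):

```shell
# Sketch: rehearse the /u01 directory layout under a throwaway root,
# demonstrating that mkdir -p creates each nested path in one call.
base="$(mktemp -d)"            # stand-in for /u01 on a real node
mkdir -p "$base/app/oracle" \
         "$base/app/grid" \
         "$base/app/11.2.0/grid" \
         "$base/app/oraInventory"
ls "$base/app"
```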
3.4 Edit the oracle User's .bash_profile on rac1 and rac2
vi /home/oracle/.bash_profile
export ORACLE_SID=rac1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export TMP=/tmp
export TMPDIR=$TMP
export PATH=$PATH:$ORACLE_HOME/bin
3.5 Edit the grid User's .bash_profile on rac1 and rac2
vi /home/grid/.bash_profile
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export PATH=$ORACLE_HOME/bin:$PATH
Note: the instance names must be changed accordingly on the other node:
oracle: export ORACLE_SID=rac2
grid: export ORACLE_SID=+ASM2
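Instead of editing the SID line on each node, the suffix can be derived from the hostname. A sketch assuming the rac1/rac2 hostnames set in section 2.3 (the `host` variable stands in for `$(hostname -s)` so the example is self-contained):

```shell
# Sketch: derive both SIDs from the node's hostname (rac1 -> rac1/+ASM1,
# rac2 -> rac2/+ASM2). On a real node: host=$(hostname -s)
host=rac1
num="${host#rac}"              # strip the "rac" prefix, leaving 1 or 2
oracle_sid="rac${num}"         # value for the oracle user's profile
asm_sid="+ASM${num}"           # value for the grid user's profile
echo "$oracle_sid $asm_sid"
```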
3.6 Set Up SSH Equivalence (optional)
Root user equivalence is optional and only makes administering the servers more convenient; SSH equivalence for the grid and oracle users can be configured by the installer itself.
mkdir ~/.ssh
chmod 755 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Run on rac1:
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Run on rac2:
ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Verify from both servers that each can reach the other; if the commands run without prompting for a password, SSH equivalence is configured correctly:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
4. Add Shared Disks (VMware)
4.1 Create the Shared Disks on the Host
A production environment uses real shared storage, so this step is skipped there.
In a Windows cmd window, change to the VMware Workstation installation directory and create the disks. Mind the path and the file sizes; in this example VMware is installed on drive C and the disks are created under F:\shdisk.
cd c:\Program Files (x86)\VMware\VMware Workstation
vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 "F:\shdisk\vot.vmdk"
vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 "F:\shdisk\fra.vmdk"
vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 "F:\shdisk\data.vmdk"
vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 "F:\shdisk\data1.vmdk"
4.2 Attach the Shared Disks to Both VMs
Shut down both VMs and exit VMware, then open each VM's .vmx file (named after the VM, in its folder) in a text editor and add the following:
disk.EnableUUID="TRUE"
disk.locking = "FALSE"
scsi1.shared = "TRUE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize= "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"
scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "F:\shdisk\vot.vmdk"
scsi1:0.deviceType = "disk"
scsi1:0.redo = ""
scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.fileName = "F:\shdisk\fra.vmdk"
scsi1:1.deviceType = "disk"
scsi1:1.redo = ""
scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.fileName = "F:\shdisk\data.vmdk"
scsi1:2.deviceType = "disk"
scsi1:2.redo = ""
scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.fileName = "F:\shdisk\data1.vmdk"
scsi1:3.deviceType = "disk"
scsi1:3.redo = ""
4.3 Power On Both VMs, Partition the Disks, and Confirm Sharing Works
Run on rac1:
fdisk -l -- confirm that sdb, sdc, sdd and sde are all visible
Create the partitions according to the earlier plan; afterwards run fdisk -l on rac2 to check that the partitions show up there as well.
4.4 Bind Raw Devices to the Partition Devices on rac1 and rac2
vi /etc/udev/rules.d/60-raw.rules
Adjust the entries below to your disk layout and add them to the file:
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdc2", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdc3", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdd2", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="sdd3", RUN+="/bin/raw /dev/raw/raw9 %N"
KERNEL=="raw[0-9]", OWNER="grid", GROUP="asmadmin", MODE="0660"
Notes:
1. KERNEL=="raw[0-9]" only matches raw1 through raw9; raw10 and above must be listed separately or the rule will not apply to them.
2. Using MODE="0666" and GROUP="oinstall" here leads to warnings in the later installation checks; use MODE="0660" and GROUP="asmadmin" instead.
Run start_udev; rebooting the virtual machines is recommended at this point.
Run raw -qa to check the binding status and make sure both nodes show the same output.
Run ls -l /dev/raw/ to check the raw devices' ownership and permissions.
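The nine binding lines follow one pattern, so they can be generated instead of typed. A sketch for the sdb-sdd layout listed above (extend the disk list if sde's partitions are bound as well, and remember the raw[0-9] caveat from note 1):

```shell
# Sketch: generate the 60-raw.rules bindings for three disks x three
# partitions; raw numbers are assigned in order, matching the list above.
rules=""
n=1
for disk in sdb sdc sdd; do
  for part in 1 2 3; do
    rules="${rules}ACTION==\"add\", KERNEL==\"$disk$part\", RUN+=\"/bin/raw /dev/raw/raw$n %N\"
"
    n=$((n + 1))
  done
done
rules="${rules}KERNEL==\"raw[0-9]\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"
"
printf '%s' "$rules"
```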
5. Install Grid
5.1 Install the cvuqdisk Package on rac1 and rac2
Install the cvuqdisk operating system package on both Oracle RAC nodes. The cvuqdisk RPM is in the rpm directory of the Oracle Grid Infrastructure installation media.
rpm -ivh cvuqdisk-1.0.7-1.rpm
5.2 Check the CRS Installation Environment
Run this as the grid user on one node only. It requires SSH equivalence for the grid and oracle users to be set up first; the check is skipped in this walkthrough.
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
Items that did not pass are shown as failed; fix them as appropriate. A DNS problem about an inconsistent resolv.conf can be ignored.
5.3 Install the Grid Infrastructure Software
5.3.1 Start the Installer
export DISPLAY=192.168.6.18:0.0
./runInstaller
5.3.2 Add Chinese Language Support
5.3.3 Make the SCAN Name Match the hosts File
The SCAN_NAME entered here must match the scan address in the /etc/hosts file.
5.3.4 Configure Grid User Equivalence
Click Setup.
When the success message appears, click Next.
5.3.5 Create the OCR Disk Group
Set the ASM passwords.
5.3.6 Choose the Grid Installation Location
For the package prerequisite checks, it is enough that ksh is installed. The "Task resolv.conf Integrity" item fails because DNS is not used to resolve the SCAN and can be ignored.
5.3.7 Start the Installation
5.3.8 Run the Scripts
Run the two scripts shown, as root, on each server; click OK only after both nodes have finished.
/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh
The following message indicates a successful installation on that node:
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
5.4 Verify the Grid Installation
5.4.1 CRS Status
crs_stat -t -v
Everything except GSD should be ONLINE.
crsctl stat res -t
5.4.2 Voting Disk Status
crsctl query css votedisk
##  STATE    File Universal Id                File Name     Disk group
--  -----    -----------------                ---------     ----------
 1. ONLINE   7b8903f49cc84fa8bf06d199bdf5dfe3 (ORCL:DISK01) [CRSDG]
5.4.3 Check the Oracle Cluster Registry (OCR)
ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2264
         Available space (kbytes) :     259856
         ID                       : 1510360228
         Device/File Name         :     +CRSDG
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
5.4.4 Check CRS Status
crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
6. Configure the ASM Disk Groups
6.1 Check the Listener Status
[grid@rac1 ~]$ lsnrctl status
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.18.3.211)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.18.3.213)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
The command completed successfully
6.2 Create the DATA Disk Group
Run the asmca command as the grid user and create the DATADG and FRADG disk groups.
6.3 Create the FRA Disk Group
Once FRADG has been created successfully, exit ASMCA.
Verify with crs_stat -t -v or crsctl stat res -t.
7. Install the Database Software
7.1 Switch to the oracle User
export DISPLAY=192.168.6.18:0.0
./runInstaller
7.2 Install on Both Nodes
7.3 oracle User Equivalence
Enter the oracle user's password and click Setup.
7.4 Choose the Installation Paths
7.5 Run the root Scripts
Run the scripts above as root on both nodes, then click OK.
The database software installation is complete; click Close to exit.
7.6 Verify the Database Software Installation
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 1 22:42:56 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to an idle instance.
SQL>
8. Create the Database with DBCA
8.1 Log In as the oracle User and Run dbca
8.2 Enable EM
8.3 Choose the ASM Disk Group
8.4 Choose the FRA Disk Group
Decide whether to enable the fast recovery area and archived logging.
Set the maximum number of connections.
8.5 Choose an Appropriate Character Set
Click Exit to finish the installation.
9. Cluster Database Maintenance
9.1 Status of All Oracle Instances
srvctl status database -d oradb
Instance rac1 is running on node node1
Instance rac2 is running on node node2
9.2 Status of a Single Oracle Instance
srvctl status instance -d oradb -i rac1
Instance rac1 is running on node node1
9.3 Node Application Status
srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2
9.4 Node Application Configuration
srvctl config nodeapps
Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static
VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1
VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
9.5 Database Configuration
srvctl config database -d oradb -a
Database unique name: oradb
Database name: oradb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATADG/oradb/sp
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: oradb
Database instances: rac1,rac2
Disk Groups: DATADG,FRADG
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
9.6 ASM Status
srvctl status asm
ASM is running on node2,node1
9.7 ASM Configuration
srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
9.8 TNS Listener Status
srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): node2,node1
9.9 TNS Listener Configuration
srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/11.2.0/grid on node(s) node2,node1
End points: TCP:1521
9.10 SCAN Status
srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
9.11 SCAN Configuration
srvctl config scan
SCAN name: cluster-scan.localdomain, Network: 1/192.168.0.0/255.255.0.0/eth0
SCAN VIP name: scan1, IP: /cluster-scan.localdomain/192.168.1.