Oracle 19c RAC Installation and RU Upgrade

Revision V1.0

| No. | Date       | Author/Modifier | Comments      |
|-----|------------|-----------------|---------------|
| 1.0 | 2020-02-05 | 谈权            | Initial draft |

1. System Planning

1.1 Network Planning

  • Hostnames may contain lowercase letters, digits, and hyphens (-), and must begin with a lowercase letter.
  • For a two-node RAC, configure 3 SCAN IPs when a DNS server is available; otherwise configure 1 SCAN IP.
  • The private network must use a dedicated switch, not a direct cable between the nodes.
  • When multiple RAC clusters share one private-network switch, separate them into different VLANs or use different subnets.

(Figure: virtual-rac)

1.2 Storage Planning

Storage is managed with Oracle ASM; at the OS layer, disks are bound with udev. If you use ASMFD (afd) instead, refer to the following documents:

  • ASMFD (ASM Filter Driver) Support on OS Platforms (Certification Matrix).

    (Doc ID 2034681.1)

  • How to configure and Create a Disk group using ASMFD

    (Doc ID 2053045.1)

1.3 Operating System Specification

Operating system: CentOS Linux 7.7

Disk partitioning (4 GB RAM):

| Partition | Size |
|-----------|------|
| SWAP      | 8G   |
| /         | 100G |

1.4 Database Media

| Media | File name |
|-------|-----------|
| Oracle grid | LINUX.X64_193000_grid_home.zip |
| Oracle database | LINUX.X64_193000_db_home.zip |
| Patch 30501910: GI RELEASE UPDATE 19.6.0.0.0 (the GI RU includes the DB RU, so this patch is also used when patching the DB in a RAC environment) | p30501910_190000_Linux-x86-64.zip |
| Patch 30557433: DATABASE RELEASE UPDATE 19.6.0.0.0 (used when applying the DB RU to a single-instance database) | p30557433_190000_Linux-x86-64.zip |
| OPatch | p6880880_190000_Linux-x86-64.zip |

1.5 Summary: Overall Plan for the Two-Node RAC

|  | rac1 | rac2 |
|---|---|---|
| OS | CentOS 7.7 | CentOS 7.7 |
| Hostname | tqdb21 | tqdb22 |
| Public IP (enp0s8) | 192.168.6.21 | 192.168.6.22 |
| Private IP (enp0s9) | 172.16.8.21 | 172.16.8.22 |
| Virtual IP (enp0s8) | 192.168.6.23 | 192.168.6.24 |
| SCAN IP (enp0s8) | 192.168.6.20 | 192.168.6.20 |
| GRID user environment | export ORACLE_SID=+ASM1 | export ORACLE_SID=+ASM2 |
| | export ORACLE_BASE=/u01/app/grid | export ORACLE_BASE=/u01/app/grid |
| | export ORACLE_HOME=/u01/app/19c/grid | export ORACLE_HOME=/u01/app/19c/grid |
| | export TNS_ADMIN=$ORACLE_HOME/network/admin | export TNS_ADMIN=$ORACLE_HOME/network/admin |
| ORACLE user environment | export ORACLE_SID=tqdb1 | export ORACLE_SID=tqdb2 |
| | export DB_UNIQUE_NAME=tqdb | export DB_UNIQUE_NAME=tqdb |
| | export ORACLE_UNQNAME=tqdb | export ORACLE_UNQNAME=tqdb |
| | export ORACLE_BASE=/u01/app/oracle | export ORACLE_BASE=/u01/app/oracle |
| | export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome | export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome |
| | export TNS_ADMIN=$ORACLE_HOME/network/admin | export TNS_ADMIN=$ORACLE_HOME/network/admin |
| GRID Version | 19.6.0.0.0 | 19.6.0.0.0 |
| DB Version | 19.6.0.0.0 | 19.6.0.0.0 |
| Shared storage: OCR & voting disk | 2G * 3 | 2G * 3 |
| Shared storage: ASM DATA DiskGroup | 50G * 2 | 50G * 2 |

2. Environment Configuration

2.1 Network Configuration

rac1/rac2 addresses:

|  | rac1 | rac2 |
|---|---|---|
| Hostname | tqdb21 | tqdb22 |
| Public IP (enp0s8) | 192.168.6.21 | 192.168.6.22 |
| Private IP (enp0s9) | 172.16.8.21 | 172.16.8.22 |
| Virtual IP (enp0s8) | 192.168.6.23 | 192.168.6.24 |
| SCAN IP (enp0s8) | 192.168.6.20 | 192.168.6.20 |

Edit the HOSTS file on both nodes, for example:

# vim /etc/hosts
```
127.0.0.1 localhost
# Public (enp0s8)
192.168.6.21 tqdb21
192.168.6.22 tqdb22
# Private (enp0s9)
172.16.8.21 tqdb21-priv
172.16.8.22 tqdb22-priv
# Virtual (enp0s8)
192.168.6.23 tqdb21-vip
192.168.6.24 tqdb22-vip
# SCAN
192.168.6.20 tqdb-cluster tqdb-cluster-scan
```

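The entries above can be sanity-checked with a small script. This is a sketch: the host list mirrors this plan, and the file path is a parameter so the check can be rehearsed against a copy before touching /etc/hosts.

```shell
#!/bin/sh
# Check that every hostname required by the RAC plan appears in a hosts file.
check_hosts() {
    hosts_file=$1
    missing=0
    for name in tqdb21 tqdb22 tqdb21-priv tqdb22-priv \
                tqdb21-vip tqdb22-vip tqdb-cluster-scan; do
        # Match the name as a whole word on non-comment lines
        # (hyphens count as word boundaries, so this is a loose check).
        if ! grep -v '^#' "$hosts_file" | grep -qw "$name"; then
            echo "MISSING: $name"
            missing=1
        fi
    done
    return $missing
}
```

Run `check_hosts /etc/hosts && echo OK` on each node; any `MISSING:` line points at an entry that still has to be added.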
2.2 Set the Default Boot Target to "multi-user.target"

-- Check the current default target:
```
systemctl get-default
```

-- Set the default target to "multi-user.target":
```
systemctl set-default multi-user.target
```

2.3 Disable NUMA at the OS Level

```
1. Edit `/etc/default/grub` and append `numa=off` to the `GRUB_CMDLINE_LINUX=` line.
2. Regenerate the /etc/grub2.cfg configuration file:
    `grub2-mkconfig -o /etc/grub2.cfg`
3. Reboot the operating system:
    `reboot`
4. Verify after the reboot:
    `dmesg | grep -i numa`
   Confirm again with: `cat /proc/cmdline`
```

2.4 Disable the Firewall

CentOS 7 ships firewalld instead of the iptables service, so the old `chkconfig iptables off` now fails with "error reading information on service iptables: No such file or directory".

Use instead:

# systemctl stop firewalld.service
# systemctl disable firewalld.service
# systemctl status firewalld.service

2.5 Disable IPv6 on CentOS 7

Edit /etc/default/grub and add `ipv6.disable=1`:
```
[root@tqdb21: ~]#  vim /etc/default/grub   
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="ipv6.disable=1 spectre_v2=retpoline rhgb quiet numa=off"
GRUB_DISABLE_RECOVERY="true"
```
# grub2-mkconfig -o /boot/grub2/grub.cfg

# reboot

# lsmod | grep ipv6

2.6 Disable SELinux

# vim /etc/selinux/config
```
SELINUX=disabled
```

2.7 Disable Unneeded Services (some of these, such as iptables/ip6tables, do not exist on CentOS 7; disable whichever are present)

# systemctl disable firewalld
# systemctl disable avahi-daemon
# systemctl disable bluetooth
# systemctl disable cpuspeed
# systemctl disable cups
# systemctl disable firstboot
# systemctl disable ip6tables
# systemctl disable iptables
# systemctl disable pcmcia

2.8 Disable THP

Disable the Transparent HugePages feature (RHEL7/OL7):

# vim /etc/rc.d/rc.local

# Append the following:
```
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
```

Make the file executable, apply it, and verify:

# chmod +x /etc/rc.d/rc.local
# source /etc/rc.d/rc.local
# cat /sys/kernel/mm/transparent_hugepage/enabled 
```
always madvise [never]  <<--- THP Disabled
```
# cat /sys/kernel/mm/transparent_hugepage/defrag
```
always defer defer+madvise madvise [never]
```

2.9 NOZEROCONF

Recommended by the MOS note "CSSD Fails to Join the Cluster After Private Network Recovered if avahi Daemon is up and Running" (Doc ID 1501093.1), originally written for 12c RAC but equally applicable here:

# echo "NOZEROCONF=yes" >> /etc/sysconfig/network

2.10 Package Installation

Configure the extended (EPEL) YUM repository:

For RHEL/CentOS 6:
# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
# rpm -Uvh epel-release-latest-6.noarch.rpm

For RHEL/CentOS 7:
# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -Uvh epel-release-latest-7.noarch.rpm

Install the required system packages (the pipeline below feeds only the ones reported as "not installed" to yum):

# rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
compat-libcap1 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libXext \
libXtst \
kde-l10n-Chinese.noarch \
libstdc++-devel \
make \
xclock \
sysstat \
man \
nfs-utils \
lsof \
expect \
unzip \
redhat-lsb \
openssh-clients \
smartmontools \
unixODBC \
perl \
telnet \
vsftpd \
ntsysv \
lsscsi \
libX11 \
libxcb \
libXau \
libXi \
strace \
sg3_utils \
kexec-tools \
net-tools \
unixODBC-devel |grep "not installed" |awk '{print $2}' |xargs yum install -y

2.11 Install the cvuqdisk Package

The package is located inside the unpacked grid installation media.

(After unzipping the grid archive in step "3.1 GRID Installation", install the cvuqdisk-1.0.10-1.rpm package.)

# cd $ORACLE_HOME/cv/rpm
# rpm -ivh cvuqdisk*.rpm
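One detail worth making explicit: the cvuqdisk RPM consults the CVUQDISK_GRP environment variable at install time to pick the group that owns the utility, and defaults to oinstall when it is unset. Setting it before running `rpm -ivh` keeps the intent visible:

```shell
# cvuqdisk reads CVUQDISK_GRP during installation; oinstall matches the
# inventory group created in section 2.13 (it is also the default).
export CVUQDISK_GRP=oinstall
echo "cvuqdisk will be owned by group: $CVUQDISK_GRP"
```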

2.12 Time Service

Use the Chrony service:

# vim /etc/chrony.conf

server <ntp-server-address> iburst

Restart and enable the time synchronization service:

# systemctl restart chronyd.service
# systemctl enable chronyd.service

Check the synchronization sources:

# chronyc sources -v

Check the synchronization statistics:

# chronyc sourcestats

2.13 Create Users

Create the groups:

# groupadd -g 600 oinstall 
# groupadd -g 601 dba 
# groupadd -g 602 oper 
# groupadd -g 603 asmadmin 
# groupadd -g 604 asmoper 
# groupadd -g 605 asmdba  
# groupadd -g 606 backupdba
# groupadd -g 607 dgdba
# groupadd -g 608 kmdba
# groupadd -g 609 racdba

Create the oracle and grid users:

# useradd -u 600 -g oinstall -G asmadmin,dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle 
# useradd -u 601 -g oinstall -G oper,asmadmin,asmdba,asmoper,dba,racdba grid 

Set the passwords for the oracle and grid users:

-- oracle password
# passwd oracle

-- grid password
# passwd grid
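The groupadd sequence above can also be expressed as a small idempotent loop. This is a sketch: the GID/name pairs are exactly those used in this document, and `getent` skips groups that already exist, so the script is safe to re-run on both nodes.

```shell
#!/bin/sh
# Emit or run the groupadd commands for the Oracle/Grid role groups.
# Pass "echo" as $1 for a dry run; pass nothing to execute for real (as root).
make_groups() {
    runner=$1
    for pair in 600:oinstall 601:dba 602:oper 603:asmadmin \
                604:asmoper 605:asmdba 606:backupdba \
                607:dgdba 608:kmdba 609:racdba; do
        gid=${pair%%:*}
        name=${pair#*:}
        # Skip groups that already exist to keep the script re-runnable.
        if ! getent group "$name" >/dev/null 2>&1; then
            $runner groupadd -g "$gid" "$name"
        fi
    done
}
```

`make_groups echo` previews the pending commands; running it as root without the argument applies them.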

2.14 Kernel Parameter Tuning

Adjust the kernel parameters (adapt the values to your environment):

# vim /etc/sysctl.conf
```
# oracle-database-preinstall-19c 
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

# HugePages_Total = SGA (GB) * 1024 / 2 + 20  (for 2 MB hugepages)
vm.nr_hugepages = 532
vm.swappiness=5
```

Parameter notes (name, meaning, recommended value):

  • kernel.shmmax: maximum size, in bytes, of a single shared memory segment; it should be at least sga_max_target. Rule of thumb: physical memory * 0.7 * 1024 * 1024 * 1024 (for memory given in GB).
  • kernel.shmall: total number of shared memory pages (one page = 4 KB); typically kernel.shmmax / 4096 with shmmax in bytes. Rule of thumb: physical memory * 0.7 * 1024 * 1024 KB / 4 KB.
  • kernel.shmmni: maximum number of shared memory segments; 4096.
  • kernel.sem: semaphores used for inter-process communication.

Sizing HugePages:

vm.nr_hugepages = sga_max_size / Hugepagesize = 12 GB / 2048 KB = 6144 (can be set slightly larger than this figure)
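The sizing rules above reduce to simple integer arithmetic. The sketch below assumes 2 MB (2048 KB) hugepages, the usual value reported by `grep Hugepagesize /proc/meminfo` on x86-64; the document's extra "+20" pages of headroom are left to the caller.

```shell
#!/bin/sh
# vm.nr_hugepages: hugepages needed to cover the SGA (rounded up).
#   $1 = SGA size in MB, $2 = hugepage size in KB (default 2048).
nr_hugepages() {
    sga_mb=$1
    hp_kb=${2:-2048}
    echo $(( (sga_mb * 1024 + hp_kb - 1) / hp_kb ))
}

# kernel.shmall: total shared memory pages, with shmmax given in bytes
# and a 4096-byte page size.
shmall_for() {
    echo $(( $1 / 4096 ))
}
```

`nr_hugepages 12288` reproduces the 12 GB / 2048 KB = 6144 example, and `shmall_for 4398046511104` yields 1073741824, the shmall/shmmax pair used in the sysctl.conf above.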

Apply the settings:

# sysctl -p

2.15 LIMITS Configuration

# vim /etc/security/limits.conf
```
# oracle-database-preinstall-19c
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
oracle   hard   memlock    134217728
oracle   soft   memlock    134217728

grid   soft   nofile    1024
grid   hard   nofile    65536
grid   soft   nproc    16384
grid   hard   nproc    16384
grid   soft   stack    10240
grid   hard   stack    32768
grid   hard   memlock    134217728
grid   soft   memlock    134217728
```

Note: memlock applies in HugePages environments; the unit is KB.

Note also that on Linux 7 the effective limits configuration is no longer only /etc/security/limits.conf; files under the /etc/security/limits.d directory also apply.

[root@tqdb21: /etc/security/limits.d]# cat oracle-database-19c.conf 
```
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
oracle   hard   memlock    134217728
oracle   soft   memlock    134217728

grid   soft   nofile    1024
grid   hard   nofile    65536
grid   soft   nproc    16384
grid   hard   nproc    16384
grid   soft   stack    10240
grid   hard   stack    32768
grid   hard   memlock    134217728
grid   soft   memlock    134217728
```
[root@tqdb21: /etc/security/limits.d]# 
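As a cross-check on the memlock values: 134217728 KB is exactly 128 GB, a generous ceiling for this 4 GB test box. A commonly cited guideline for HugePages environments is to set memlock to at least 90% of physical RAM (in KB); a hedged sketch:

```shell
#!/bin/sh
# memlock limit in KB, sized as 90% of physical RAM; $1 = RAM in GB.
memlock_kb() {
    echo $(( $1 * 1024 * 1024 * 90 / 100 ))
}
```

For example, `memlock_kb 4` gives 3774873 KB as a floor for a 4 GB machine; any value between that and total RAM (or a large ceiling like the 134217728 above) is acceptable.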

2.16 Create Directories

# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oraInventory
# mkdir -p /u01/app/19c/grid
# mkdir -p /u01/app/oracle
# mkdir -p /u01/app/oracle/product/19c/dbhome
# chown -R grid:oinstall /u01
# chown -R grid:oinstall /u01/app/oraInventory
# chown -R oracle:oinstall /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle/product
# chown -R oracle:oinstall /u01/app/oracle/product/19c/dbhome
# chmod -R 775 /u01

2.17 Configure profile

# vim /etc/profile

# Append the following:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
 if [ $SHELL = "/bin/ksh" ]; then 
     ulimit -p 16384
     ulimit -n 65536
   else
         ulimit -u 16384 -n 65536
 fi
 umask 022 
fi

2.18 GRID User Environment Variables

rac1 (node 1):

-- rac1 (node 1): GRID user environment variables
# su - grid
$ vim .bash_profile 

```
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19c/grid
export TNS_ADMIN=$ORACLE_HOME/network/admin
export NLS333=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG=american_america.AL32UTF8
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LD_LIBRARY_PATH=$ORACLE_HOME/jdk/jre/lib:$ORACLE_HOME/network/lib:$ORACLE_HOME/rdbms/lib:$LD_LIBRARY_PATH 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export CLASS_PATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib 
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:.
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$HOME/bin:$PATH
export PATH=/usr/bin/xdpyinfo:$PATH

umask 022
if [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
     ulimit -p 16384
     ulimit -n 65536 
 else
     ulimit -u 16384 -n 65536 
 fi
fi

##
export NLS_LANG="american_america.AL32UTF8"
export LANG="en_US.UTF-8"
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_TIMESTAMP_FORMAT="YYYY-MM-DD HH24:MI:SS.FF9"

alias impdp='rlwrap impdp'
alias sqlplus='rlwrap sqlplus'
alias asmcmd='rlwrap asmcmd'
```

rac2 (node 2):

-- rac2 (node 2): GRID user environment variables
# su - grid
$ vim .bash_profile 

```
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19c/grid
export TNS_ADMIN=$ORACLE_HOME/network/admin
export NLS333=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG=american_america.AL32UTF8
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LD_LIBRARY_PATH=$ORACLE_HOME/jdk/jre/lib:$ORACLE_HOME/network/lib:$ORACLE_HOME/rdbms/lib:$LD_LIBRARY_PATH 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export CLASS_PATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib 
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:.
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$HOME/bin:$PATH
export PATH=/usr/bin/xdpyinfo:$PATH

umask 022
if [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
  ulimit -p 16384
     ulimit -n 65536 
 else
     ulimit -u 16384 -n 65536 
 fi
fi

##
export NLS_LANG="american_america.AL32UTF8"
export LANG="en_US.UTF-8"
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_TIMESTAMP_FORMAT="YYYY-MM-DD HH24:MI:SS.FF9"

alias impdp='rlwrap impdp'
alias sqlplus='rlwrap sqlplus'
alias asmcmd='rlwrap asmcmd'
```

2.19 ORACLE User Environment Variables

rac1 (node 1):

-- rac1 (node 1): ORACLE user environment variables
# su - oracle
$ vim .bash_profile
```
export LANG=en_US
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=tqdb1
export DB_UNIQUE_NAME=tqdb
export ORACLE_UNQNAME=tqdb
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome
export TNS_ADMIN=$ORACLE_HOME/network/admin
export NLS333=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG=american_america.AL32UTF8
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LD_LIBRARY_PATH=$ORACLE_HOME/jdk/jre/lib:$ORACLE_HOME/network/lib:$ORACLE_HOME/rdbms/lib:$LD_LIBRARY_PATH 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export CLASS_PATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:.
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$HOME/bin:$PATH
export PATH=/usr/bin/xdpyinfo:$PATH

umask 022
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then 
     ulimit -p 16384
     ulimit -n 65536 
 else
     ulimit -u 16384 -n 65536 
 fi
fi

##
export NLS_LANG="american_america.AL32UTF8"
export LANG="en_US.UTF-8"
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_TIMESTAMP_FORMAT="YYYY-MM-DD HH24:MI:SS.FF9"

alias impdp='rlwrap impdp'
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
```

rac2 (node 2):

-- rac2 (node 2): ORACLE user environment variables
# su - oracle
$ vim .bash_profile
```
export LANG=en_US
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=tqdb2
export DB_UNIQUE_NAME=tqdb
export ORACLE_UNQNAME=tqdb
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome
export TNS_ADMIN=$ORACLE_HOME/network/admin
export NLS333=$ORACLE_HOME/ocommon/nls/admin/data
export NLS_LANG=american_america.AL32UTF8
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
export LD_LIBRARY_PATH=$ORACLE_HOME/jdk/jre/lib:$ORACLE_HOME/network/lib:$ORACLE_HOME/rdbms/lib:$LD_LIBRARY_PATH 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export CLASS_PATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:.
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$HOME/bin:$PATH
export PATH=/usr/bin/xdpyinfo:$PATH

umask 022
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then 
  ulimit -p 16384
     ulimit -n 65536 
 else
     ulimit -u 16384 -n 65536 
 fi
fi

##
export NLS_LANG="american_america.AL32UTF8"
export LANG="en_US.UTF-8"
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export NLS_TIMESTAMP_FORMAT="YYYY-MM-DD HH24:MI:SS.FF9"

alias impdp='rlwrap impdp'
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
```

2.20 Add the crsctl Command for the ROOT User

-- Add the crsctl command for the root user

# vim /etc/profile
```
# Add the `crsctl` command for the root user
export PATH=/u01/app/19c/grid/bin:$PATH
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19c/grid

# GUI (X display) related
export PATH=/usr/bin/xdpyinfo:$PATH

# history: record timestamps in command history
export HISTSIZE=4096
export HISTTIMEFORMAT="%F %T `whoami` "
```

2.21 Manually Configure SSH Equivalence (alternatively, click "SSH connectivity" during the graphical GRID/DB installation)

2.21.1 SSH Equivalence for the oracle User

Commands (establish SSH equivalence for the oracle user; run on both nodes as oracle):

-- 1. Run on node 1 (tqdb21):
root# su - oracle
oracle$ mkdir ~/.ssh
oracle$ chmod 700 ~/.ssh/
oracle$ ssh-keygen -t rsa
oracle$ ssh-keygen -t dsa

-- 2. Run on node 2 (tqdb22):
root# su - oracle
oracle$ mkdir ~/.ssh
oracle$ chmod 700 ~/.ssh/
oracle$ ssh-keygen -t rsa
oracle$ ssh-keygen -t dsa

-- 3. Run on node 1 (tqdb21):
oracle$ cd ~/.ssh
oracle$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
oracle$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle$ ssh oracle@tqdb22 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
oracle$ ssh oracle@tqdb22 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys 
oracle$ scp /home/oracle/.ssh/authorized_keys oracle@tqdb22:~/.ssh/authorized_keys

-- 4. Run on node 2 (tqdb22): (inspect the `authorized_keys` file copied over via scp from tqdb21)
oracle$ cd ~/.ssh
oracle$ ll
oracle$ cat authorized_keys

Test the connections from every node; verify that you are no longer prompted for a password when you run the commands a second time.
-- 5. Run on node 1 (tqdb21):
-- First run: answer `yes` to the host-key prompt
oracle$ ssh tqdb21 date
oracle$ ssh tqdb22 date
oracle$ ssh tqdb21-priv date
oracle$ ssh tqdb22-priv date
-- Second run: no more `yes` prompt; the date comes back directly, confirming SSH equivalence is in place.
oracle$ ssh tqdb21 date
oracle$ ssh tqdb22 date
oracle$ ssh tqdb21-priv date
oracle$ ssh tqdb22-priv date
oracle$ date; ssh tqdb22 date
oracle$ date; ssh tqdb22-priv date

-- 6. Run on node 2 (tqdb22):
-- First run: answer `yes` to the host-key prompt
oracle$ ssh tqdb21 date
oracle$ ssh tqdb22 date
oracle$ ssh tqdb21-priv date
oracle$ ssh tqdb22-priv date
-- Second run: no more `yes` prompt; the date comes back directly, confirming SSH equivalence is in place.
oracle$ ssh tqdb21 date
oracle$ ssh tqdb22 date
oracle$ ssh tqdb21-priv date
oracle$ ssh tqdb22-priv date
oracle$ date; ssh tqdb22 date
oracle$ date; ssh tqdb22-priv date
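The per-host `date` checks above can be collapsed into one loop. `BatchMode=yes` makes SSH fail immediately instead of prompting, so any `FAIL` line means equivalence (or host-key acceptance) is still incomplete for that address; run it on both nodes as the oracle user:

```shell
#!/bin/sh
# Test passwordless SSH to every cluster address from the current node.
check_ssh_equiv() {
    rc=0
    for host in tqdb21 tqdb22 tqdb21-priv tqdb22-priv; do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" date >/dev/null 2>&1; then
            echo "OK   $host"
        else
            echo "FAIL $host"
            rc=1
        fi
    done
    return $rc
}
```

A clean run prints four `OK` lines and exits 0.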

Execution record:

Establish SSH equivalence (run on both nodes as the oracle user).
Run on node 1 (tqdb21):
```
[root@tqdb21: ~]# su - oracle
Last login: Tue Feb 11 23:23:44 CST 2020 on pts/0
[oracle@tqdb21: ~]$ la
bash: la: command not found...
[oracle@tqdb21: ~]$ l.
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .cache  .config  .kshrc  .mozilla  .viminfo
[oracle@tqdb21: ~]$ mkdir ~/.ssh
[oracle@tqdb21: ~]$ chmod 700 ~/.ssh/
[oracle@tqdb21: ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:sKgc0lXiMqxIoB1c++ANDWQbG7um0Tbz+nM8PE1LpBc oracle@tqdb21
The key's randomart image is:
+---[RSA 2048]----+
|...oO .          |
|oo.+ %           |
|..= X o          |
|oo * B o  E      |
|+ + X + So .     |
| o B +  . +      |
|  +   .o = .     |
|     .. * o      |
|    ...o o       |
+----[SHA256]-----+
[oracle@tqdb21: ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:5TXoyCtm2RREUNtssej/nesLCv3DCgYr572hXJ48vzQ oracle@tqdb21
The key's randomart image is:
+---[DSA 1024]----+
|      .++ .      |
|       . = +     |
|        + B o    |
|       o B . .   |
|      . S o      |
|       * +       |
|    . B O E..    |
|     B O.* =oo . |
|      + *+=ooo*o |
+----[SHA256]-----+
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ cd .ssh/
[oracle@tqdb21: ~/.ssh]$ ls
id_dsa  id_dsa.pub  id_rsa  id_rsa.pub
[oracle@tqdb21: ~/.ssh]$ ll
total 16
-rw-------. 1 oracle oinstall  668 Feb 11 23:31 id_dsa
-rw-r--r--. 1 oracle oinstall  603 Feb 11 23:31 id_dsa.pub
-rw-------. 1 oracle oinstall 1679 Feb 11 23:30 id_rsa
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:30 id_rsa.pub
[oracle@tqdb21: ~/.ssh]$ 
```

Run on node 2 (tqdb22):
```
[root@tqdb22: ~]# su - oracle
Last login: Tue Feb 11 23:25:32 CST 2020 on pts/0
[oracle@tqdb22: ~]$ l.
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .cache  .config  .kshrc  .mozilla  .viminfo
[oracle@tqdb22: ~]$ mkdir ~/.ssh
[oracle@tqdb22: ~]$ chmod 700 ~/.ssh/
[oracle@tqdb22: ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TwfmNqXPaYMjwC4dvIy4Qf+R5m4l57iee8qlOvxmeJA oracle@tqdb22
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|          o .    |
|     o   o +     |
|  .  .= S * .    |
| . oE=.=o+ * .   |
|  o.+oO*o + *    |
|   o+**=o. o .   |
|  . .@&=         |
+----[SHA256]-----+
[oracle@tqdb22: ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:JS6Z/5ztai4ZGn6jeprpJ7LO0GJMwzAPDduW92t7tY4 oracle@tqdb22
The key's randomart image is:
+---[DSA 1024]----+
|.                |
| = .             |
|= = .   . .      |
|o= . . + o       |
| +.   = S        |
|o..   .+. .      |
|oo.  .oo.+ .     |
|.+. .+=.*+oo     |
| .++**o+E*Boo    |
+----[SHA256]-----+
[oracle@tqdb22: ~]$ ll .ssh/
total 16
-rw-------. 1 oracle oinstall  668 Feb 11 23:33 id_dsa
-rw-r--r--. 1 oracle oinstall  603 Feb 11 23:33 id_dsa.pub
-rw-------. 1 oracle oinstall 1679 Feb 11 23:33 id_rsa
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:33 id_rsa.pub
[oracle@tqdb22: ~]$ 
```

Run on node 1 (tqdb21):
```
[oracle@tqdb21: ~/.ssh]$ cat id_dsa.pub 
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
[oracle@tqdb21: ~/.ssh]$ cat id_rsa.pub    
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ cd
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ cd -
/home/oracle/.ssh
[oracle@tqdb21: ~/.ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@tqdb21: ~/.ssh]$ ll
total 20
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:38 authorized_keys
-rw-------. 1 oracle oinstall  668 Feb 11 23:31 id_dsa
-rw-r--r--. 1 oracle oinstall  603 Feb 11 23:31 id_dsa.pub
-rw-------. 1 oracle oinstall 1679 Feb 11 23:30 id_rsa
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:30 id_rsa.pub
[oracle@tqdb21: ~/.ssh]$ less authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ ssh oracle@tqdb22 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
The authenticity of host 'tqdb22 (192.168.6.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22,192.168.6.22' (ECDSA) to the list of known hosts.
oracle@tqdb22's password: 
Permission denied, please try again.
oracle@tqdb22's password: 
[oracle@tqdb21: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNLZOEguCk/87HUUtYnayz8klrehAk7bgK87F6zjdp6roaAXQDFiKKz5se2JmAoTKccZ2WvmYZvyRfhpyNJWV7ZdgsPwrk4iW/SpLDrH/m/5TaD3406ghp+rpziMdwpiHXeA6td00ZLA+ZL3HcIzG975K1PVurdZFBMj0uNPL3dJNwTKcdzEiXULgCLNSzbSvgmD8WZEarb9UfqS4uzq0jGct52uOELxHHwvlhAqCUDMma0wOcTLd/4eqCQUcqDCIjGpgiN7c2clLSJqLPGmiGx8S6rvg02AHxvxPm+2D3MvNpOwOkuJPbB9SQoyPyGroilslu+if7awbJYd6INpXP oracle@tqdb22
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ ssh oracle@tqdb22 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys 
oracle@tqdb22's password: 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNLZOEguCk/87HUUtYnayz8klrehAk7bgK87F6zjdp6roaAXQDFiKKz5se2JmAoTKccZ2WvmYZvyRfhpyNJWV7ZdgsPwrk4iW/SpLDrH/m/5TaD3406ghp+rpziMdwpiHXeA6td00ZLA+ZL3HcIzG975K1PVurdZFBMj0uNPL3dJNwTKcdzEiXULgCLNSzbSvgmD8WZEarb9UfqS4uzq0jGct52uOELxHHwvlhAqCUDMma0wOcTLd/4eqCQUcqDCIjGpgiN7c2clLSJqLPGmiGx8S6rvg02AHxvxPm+2D3MvNpOwOkuJPbB9SQoyPyGroilslu+if7awbJYd6INpXP oracle@tqdb22
ssh-dss AAAAB3NzaC1kc3MAAACBAJ5Uuh7XD2mJhJUeIP3r46augiN0wc3Qksou9DA+v2QXaoUzBFrdeIdQ7wSYuLkp/1rZm6imS8PlFBd8uVPudybmh+jNwVtk3d18eYgJ8lunY115/7yhsDvS7yt+cYSIVFqoiGQUWPBfXM/oGUnT+RzPqMdrEz0K7mrWpMJffFh5AAAAFQCd/xbvKCr5cYNWwqF/WUQ0mQ0U6QAAAIBwBDpTCszu1WFeYzX1o2WVSEtnnaIX+BkeHELXa90Co1F2EPTNqoA1KDoCalw0dPKyyQYeG4SDXQ7AhSSAuvIc+xherUciFDjtNYW+uVGNot+++1zMVwKaj5T0EWmoNsw60ALKeLbWniBKKahwwbRKsUL7A49D0iaqRDX6d2X2IwAAAIBztUTFji8KD/0j2N9D4pa+opeKjz571i88Iy/R9JpN8XRz1XBxP/dkfPIOTXebaY7vFSeHb0HSP2Fd70yFhqIm14Kn0A2Uf7XnSjTRvDTub51XLKI2cJKi16EwgcMOnFFJBD+A9HfYlXtVGBl+uag07sEenLW4F2FWK57TkrDRcQ== oracle@tqdb22
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ 
[oracle@tqdb21: ~/.ssh]$ scp /home/oracle/.ssh/authorized_keys oracle@tqdb22:~/.ssh/authorized_keys
oracle@tqdb22's password: 
authorized_keys                                                                                                                                                      100% 1996     2.5MB/s   00:00    
[oracle@tqdb21: ~/.ssh]$ 
```

Run on node 2 (tqdb22): (inspect `authorized_keys`)
```
[oracle@tqdb22: ~/.ssh]$ ll
total 20
-rw-r--r--. 1 oracle oinstall 1996 Feb 11 23:46 authorized_keys
-rw-------. 1 oracle oinstall  668 Feb 11 23:33 id_dsa
-rw-r--r--. 1 oracle oinstall  603 Feb 11 23:33 id_dsa.pub
-rw-------. 1 oracle oinstall 1679 Feb 11 23:33 id_rsa
-rw-r--r--. 1 oracle oinstall  395 Feb 11 23:33 id_rsa.pub
[oracle@tqdb22: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJzgjh6y8oKYHf7ebuWmxjESIfoquJ9r+xdr3SCzS76pIjDIBq0+Awh2GafvlwDFd/FfmTDxcz6q1blp73NEDQq6RZA8nudSvER/qY7wrUW41RnrHzt2X7WrmAZ8KBWqvAjYD925jxfODwVROXzj7kzwWoR1jzsZvJiARDaWNKuiQSnMlkaluE3BaSNnacvlNGVkjNi6rgybTGDcalojiYvBuIIgOP7t5N4vxYbT1oACuGjs+vmoKeFnPJmbvZeWStTOsJMkVqz04WMquoXgULHtTBJocRf4mCLF8wAMU0me6K1ywxx4FZKP57Bqq1N70EF+t+XtXlIf3R4zq5AJsH oracle@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAKUOvUgNh2W91m9nrftiov4cRsP8sdiz2Tnd4+6t0WCBgu+hcppe/RD2zv/Dn3Q3tmaGE7vkCzdMpvCuFr0dOX2bQZtu+e98itdn0s6iM1Wrbri1n6a9yNLbvNVXbW+WRpHMImePDS35C5zzQJFc0DXmxeZ0UQxsqR3ZE9NpFJ9/AAAAFQC1MRowodOePZVcMSunpKDL+SndowAAAIAMBGObmCEZZnCFfQ0NtT/YBNgdyBohULgUa+jUCWPJLXis1wNJjadoWVEW7+KKHPUdx7NfS4kmDKYQL4xkXLUBzRvQVYncskpWtxnZvNiw0g6iVrLc5+DCr2AOqz1rpaGQmsfunFOXAQ0OHgSf6bUzxdHcTK8sEL0dtBi1yNM+AgAAAIAN/3QY7mk2D6/dmpo9Mq75Mv+vDM4ln/9pApqJSgE/UEKre1v6VI73xIawV3eaetAdgbGDDhyEJYb8k0LI6b+Ptox0mtKFi92OmIIiDh07b/CmDucy8K7XM/NRjS4z5C4kuuhNODNK7XLZGUxYi0Pa78zVHaCaWTRskNBUqFBNAQ== oracle@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNLZOEguCk/87HUUtYnayz8klrehAk7bgK87F6zjdp6roaAXQDFiKKz5se2JmAoTKccZ2WvmYZvyRfhpyNJWV7ZdgsPwrk4iW/SpLDrH/m/5TaD3406ghp+rpziMdwpiHXeA6td00ZLA+ZL3HcIzG975K1PVurdZFBMj0uNPL3dJNwTKcdzEiXULgCLNSzbSvgmD8WZEarb9UfqS4uzq0jGct52uOELxHHwvlhAqCUDMma0wOcTLd/4eqCQUcqDCIjGpgiN7c2clLSJqLPGmiGx8S6rvg02AHxvxPm+2D3MvNpOwOkuJPbB9SQoyPyGroilslu+if7awbJYd6INpXP oracle@tqdb22
ssh-dss AAAAB3NzaC1kc3MAAACBAJ5Uuh7XD2mJhJUeIP3r46augiN0wc3Qksou9DA+v2QXaoUzBFrdeIdQ7wSYuLkp/1rZm6imS8PlFBd8uVPudybmh+jNwVtk3d18eYgJ8lunY115/7yhsDvS7yt+cYSIVFqoiGQUWPBfXM/oGUnT+RzPqMdrEz0K7mrWpMJffFh5AAAAFQCd/xbvKCr5cYNWwqF/WUQ0mQ0U6QAAAIBwBDpTCszu1WFeYzX1o2WVSEtnnaIX+BkeHELXa90Co1F2EPTNqoA1KDoCalw0dPKyyQYeG4SDXQ7AhSSAuvIc+xherUciFDjtNYW+uVGNot+++1zMVwKaj5T0EWmoNsw60ALKeLbWniBKKahwwbRKsUL7A49D0iaqRDX6d2X2IwAAAIBztUTFji8KD/0j2N9D4pa+opeKjz571i88Iy/R9JpN8XRz1XBxP/dkfPIOTXebaY7vFSeHb0HSP2Fd70yFhqIm14Kn0A2Uf7XnSjTRvDTub51XLKI2cJKi16EwgcMOnFFJBD+A9HfYlXtVGBl+uag07sEenLW4F2FWK57TkrDRcQ== oracle@tqdb22
[oracle@tqdb22: ~/.ssh]$ 
```

Test the connections from every node; verify that you are not prompted for a password when you run the commands again.

Run on node 1 (tqdb21):
```
[oracle@tqdb21: ~]$ ssh tqdb21 date
The authenticity of host 'tqdb21 (192.168.6.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21,192.168.6.21' (ECDSA) to the list of known hosts.
Tue Feb 11 23:48:12 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb22 date
Tue Feb 11 23:48:22 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb21-priv date
The authenticity of host 'tqdb21-priv (172.16.8.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21-priv,172.16.8.21' (ECDSA) to the list of known hosts.
Tue Feb 11 23:48:42 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb22-priv date 
The authenticity of host 'tqdb22-priv (172.16.8.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22-priv,172.16.8.22' (ECDSA) to the list of known hosts.
Tue Feb 11 23:49:06 CST 2020
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ ssh tqdb21 date
Tue Feb 11 23:49:35 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb22 date
Tue Feb 11 23:49:39 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb21-priv date
Tue Feb 11 23:49:48 CST 2020
[oracle@tqdb21: ~]$ ssh tqdb22-priv date 
Tue Feb 11 23:49:52 CST 2020
[oracle@tqdb21: ~]$ date; ssh tqdb22 date
Tue Feb 11 23:50:15 CST 2020
Tue Feb 11 23:50:15 CST 2020
[oracle@tqdb21: ~]$ date; ssh tqdb22-priv date
Tue Feb 11 23:50:40 CST 2020
Tue Feb 11 23:50:40 CST 2020
[oracle@tqdb21: ~]$ 
​```

Run on node 2 (tqdb22):
​```
[oracle@tqdb22: ~]$ ssh tqdb21 date
The authenticity of host 'tqdb21 (192.168.6.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21,192.168.6.21' (ECDSA) to the list of known hosts.
Tue Feb 11 23:53:12 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb22 date
The authenticity of host 'tqdb22 (192.168.6.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22,192.168.6.22' (ECDSA) to the list of known hosts.
Tue Feb 11 23:53:25 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb21-priv date
The authenticity of host 'tqdb21-priv (172.16.8.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21-priv,172.16.8.21' (ECDSA) to the list of known hosts.
Tue Feb 11 23:53:35 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb22-priv date 
The authenticity of host 'tqdb22-priv (172.16.8.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22-priv,172.16.8.22' (ECDSA) to the list of known hosts.
Tue Feb 11 23:53:42 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb21 date
Tue Feb 11 23:53:59 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb22 date
Tue Feb 11 23:54:03 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb21-priv date
Tue Feb 11 23:54:08 CST 2020
[oracle@tqdb22: ~]$ ssh tqdb22-priv date 
Tue Feb 11 23:54:12 CST 2020
[oracle@tqdb22: ~]$ date; ssh tqdb21 date
Tue Feb 11 23:54:29 CST 2020
Tue Feb 11 23:54:29 CST 2020
[oracle@tqdb22: ~]$ date; ssh tqdb21-priv date 
Tue Feb 11 23:54:41 CST 2020
Tue Feb 11 23:54:41 CST 2020
[oracle@tqdb22: ~]$ 
​```


2.21.2 SSH Equivalence for the grid User

Commands (set up SSH user equivalence for the grid user; run on both nodes as the grid user):

-- 1. Run on node 1 (tqdb21):
root# su - grid
grid$ mkdir ~/.ssh
grid$ chmod 700 ~/.ssh/
grid$ ssh-keygen -t rsa
grid$ ssh-keygen -t dsa

-- 2. Run on node 2 (tqdb22):
root# su - grid
grid$ mkdir ~/.ssh
grid$ chmod 700 ~/.ssh/
grid$ ssh-keygen -t rsa
grid$ ssh-keygen -t dsa

-- 3. Run on node 1 (tqdb21):
grid$ cd ~/.ssh
grid$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
grid$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
grid$ ssh grid@tqdb22 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
grid$ ssh grid@tqdb22 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys 
grid$ scp /home/grid/.ssh/authorized_keys grid@tqdb22:~/.ssh/authorized_keys

-- 4. Run on node 2 (tqdb22): (inspect the `authorized_keys` file copied (scp) over from tqdb21)
grid$ cd ~/.ssh
grid$ ll
grid$ cat authorized_keys

Test the connections from each node. Verify that when you run the following commands a second time, the system no longer prompts you for a password.
-- 5. Run on node 1 (tqdb21):
-- First round: each host-key prompt must be answered with `yes`
grid$ ssh tqdb21 date
grid$ ssh tqdb22 date
grid$ ssh tqdb21-priv date
grid$ ssh tqdb22-priv date
-- Second round: no `yes` is required and the result comes back directly, which confirms that SSH equivalence is configured.
grid$ ssh tqdb21 date
grid$ ssh tqdb22 date
grid$ ssh tqdb21-priv date
grid$ ssh tqdb22-priv date
grid$ date; ssh tqdb22 date
grid$ date; ssh tqdb22-priv date

-- 6. Run on node 2 (tqdb22):
-- First round: each host-key prompt must be answered with `yes`
grid$ ssh tqdb21 date
grid$ ssh tqdb22 date
grid$ ssh tqdb21-priv date
grid$ ssh tqdb22-priv date
-- Second round: no `yes` is required and the result comes back directly, which confirms that SSH equivalence is configured.
grid$ ssh tqdb21 date
grid$ ssh tqdb22 date
grid$ ssh tqdb21-priv date
grid$ ssh tqdb22-priv date
grid$ date; ssh tqdb22 date
grid$ date; ssh tqdb22-priv date
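The two rounds of checks above follow a fixed pattern (every public and private hostname, tested twice), so they can be driven by a small loop. A minimal sketch that only prints the commands (a dry run, so it is safe to paste anywhere; on the real cluster nodes you would execute each printed line instead):

```shell
# Dry run: print every SSH equivalence check instead of executing it.
# On the real nodes, pipe the output to `sh` (or replace `echo` with the
# command itself) and confirm that round 2 never prompts for a password.
hosts="tqdb21 tqdb22 tqdb21-priv tqdb22-priv"
checks=$(for round in 1 2; do
  for h in $hosts; do
    echo "round $round: ssh $h date"
  done
done)
echo "$checks"
```

Round 1 is expected to raise the host-key prompts shown in the logs below; round 2 must return immediately with no prompt.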

Execution log:

Set up SSH equivalence (run on both nodes as the grid user)
-- 1. Run on node 1 (tqdb21):
​```
[root@tqdb21: ~]# su - grid
Last login: Tue Feb 11 23:19:09 CST 2020 on pts/0
[grid@tqdb21: ~]$ l.
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .cache  .config  .kshrc  .mozilla  .viminfo
[grid@tqdb21: ~]$ mkdir ~/.ssh
[grid@tqdb21: ~]$ chmod 700 ~/.ssh/
[grid@tqdb21: ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:+eIXXZgW7lXkgp8YS/AWr4NEqsk5a7brYSc4q8aODWw grid@tqdb21
The key's randomart image is:
+---[RSA 2048]----+
|          o .  ..|
|         o o.+ ..|
|        . ..*+o..|
|     . + o +=*oo |
|      * S .+=oo  |
|.    . o .. o.   |
|.E  o B o ..     |
|.+o  * * ..      |
|.o+...+...       |
+----[SHA256]-----+
[grid@tqdb21: ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:HIEiASc3W7qUMbqeHzLY22EyTtIRRg8ktp0zJ0ISax4 grid@tqdb21
The key's randomart image is:
+---[DSA 1024]----+
|**@..  ..        |
|+O+@o .  .       |
|oEBB.o  .        |
|o+oo=  . .       |
|..o     S        |
|oo..             |
|o=*.o            |
| ++*..           |
|  o..            |
+----[SHA256]-----+
[grid@tqdb21: ~]$ ll .ssh/
total 16
-rw-------. 1 grid oinstall  668 Feb 12 00:37 id_dsa
-rw-r--r--. 1 grid oinstall  601 Feb 12 00:37 id_dsa.pub
-rw-------. 1 grid oinstall 1679 Feb 12 00:36 id_rsa
-rw-r--r--. 1 grid oinstall  393 Feb 12 00:36 id_rsa.pub
[grid@tqdb21: ~]$ 
​```

-- 2. Run on node 2 (tqdb22):
​```
[root@tqdb22: ~]# su - grid  
Last login: Wed Feb 12 00:38:22 CST 2020 on pts/0
[grid@tqdb22: ~]$ l.
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .cache  .config  .kshrc  .mozilla  .viminfo
[grid@tqdb22: ~]$ mkdir ~/.ssh
[grid@tqdb22: ~]$ chmod 700 ~/.ssh/
[grid@tqdb22: ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cUEInunmtLCI/lu1MEHLYMXHw3Jr02INyvrgKocA2NU grid@tqdb22
The key's randomart image is:
+---[RSA 2048]----+
|   oo++. oo      |
|  . =+EX.  .     |
|.. ..+O B .      |
|o .  +.* =       |
|.   oo*.S        |
|.. + *+..        |
|o.o +.o.         |
|+ ....           |
|.+oo.            |
+----[SHA256]-----+
[grid@tqdb22: ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:Pss+L5zH1Gbyr+c8u3i79/R3UK0SJDzb6jzO6aPPplw grid@tqdb22
The key's randomart image is:
+---[DSA 1024]----+
|         .       |
|          + .    |
|           *    .|
|          . o   o|
|        S  o . o |
|       .  + = o  |
|       .o*E= . ..|
|       o=*O..o+.=|
|       .B%Ooo*OB*|
+----[SHA256]-----+
[grid@tqdb22: ~]$ ll .ssh/
total 16
-rw-------. 1 grid oinstall  668 Feb 12 00:39 id_dsa
-rw-r--r--. 1 grid oinstall  601 Feb 12 00:39 id_dsa.pub
-rw-------. 1 grid oinstall 1675 Feb 12 00:39 id_rsa
-rw-r--r--. 1 grid oinstall  393 Feb 12 00:39 id_rsa.pub
[grid@tqdb22: ~]$ 
​```

-- 3. Run on node 1 (tqdb21):
​```
[grid@tqdb21: ~]$ cd ~/.ssh
[grid@tqdb21: ~/.ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@tqdb21: ~/.ssh]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@tqdb21: ~/.ssh]$ ssh grid@tqdb22 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'tqdb22 (192.168.6.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22,192.168.6.22' (ECDSA) to the list of known hosts.
grid@tqdb22's password: 
[grid@tqdb21: ~/.ssh]$ ssh grid@tqdb22 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
grid@tqdb22's password: 
[grid@tqdb21: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyn//M6EKUXPcLk9lcennl3mN4CxFHupvFa6vR2eoGn40SNA1Yqtzhm+EI5I7O5JLT7zCUHu3aV8kIoXZdnivrw93VB8INeUIi9ZN98KDakq+nrORC3c9fZLnwCsNti+4Qy6vl2gy+fH0yR/vyhv8AVmIUe86jl8ql6TX5Xo2aS0YidD7okLumnKbCCK8HF58sPvr5j5fyMrq/w7xNhuq8jdrjOurVjcutu7u/xfSXCHvYnqbQDJbRj03bBOlhk63HYIgZYsB04/aMvngaq0lZ2XrXjBqfOq8OYBKLATgPyhoFDoD4IDymvrAm3Jsc3ZEy2wV3oOhiV895Kkvp4DtH grid@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAMmx0zcjtMOo8cIDWQamFqTKYN/ac0dHmRzpd2XeKTOUe7l2TeRiGilMIeyH+5i9CQ2bZzCPszd+KyJpf2BpKotsRKKlM+P09sDtDyteoTvCMyIGIvT24yQfriSFP3R5yKo0XJv0NEI6VX00wnsG5wzyaEad0FaSYdi/HlYar3MBAAAAFQCfQMu2F1kOm86SraJ2dL2m8ZZ8lwAAAIB9Tu2oCjL8wr2mDhBfaM7vg0bl6XEyYVm0b71uEzkklLKingyP38Jr3BIQ4+DSUv3+anfpJFGz1FouKI6Sow8oJCstJAx5CYM1AiHHVn5wTwBNW475kqWHTYaqwldYGjj2GwnB81QhVz+i4k4RLoMDi3BTOwoHC+hIwX6CrwumEgAAAIAWWDUYYg0b1ppVijCqU7VdXeS9FIdnlaA8puhVccSF5mzHcg4x0cQ1mWEnQjpFv1+/NsGdUoPidHWV2YQY6CRKP0xDN4nae/Cw1tKHOhnICGvrMVG8nZmDTnBUGDEGn22v4Mn5YGMAo+AclxcfYbpRnITCi4lqIHqgfm/YYWL1rA== grid@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyTO5PjRN3LAYgvXN6vVgTuRT6/ZMIi1ZwKiKYAzfDECwfVMZpgO27/67JhsS4QYkf0qpAHrRrNsTLiMJJ2n0xAQ4y1PcGeKLttsUH3w5+bkwJtIyLN2uVdne6Knc/WOwB4RjNO/iNSxIQhwSiuxdp2vZqfHI0R7hAu4x7gSwMUhGTyu5kOlzpb5G0Y8bdr+U41Cc4hgSFeRkTs2fBe4aMZTz3tmX0Qie7xEZm8RtAoq3D0lEjNGq+YziBlrDLhwTldaiR3itsQdqhXVweEq0b6jotou4eFPYwya+DayQikFGXVczwaWEFNRljNzMZ3MWQSbghLFJ3ZGwwrgSBk6yx grid@tqdb22
ssh-dss AAAAB3NzaC1kc3MAAACBALXYa30uXp5U1GHO64E/5hXPBxnfs6yq3Snzz4omSSPbrTkXodWhXsUx5ywCEDC9j35KdjP6yFpzQgL5f+5/8PLsxuBoC/m/r3sMrjsTYXA/302WpalZuAXFUU8EkRdSYlpMhLB3PfWgwn9XYfAJJ3G1bDXmmAFECmlMhruGGI2DAAAAFQC4BY4Fjoh8CHdzZHEZwbC+iYYpgQAAAIBF5XDGw0oBoSAOpNysk+rI4AZSQmTwVhuNVnJIFkARdbW1rHLLALFak+BdOSgwg0JkPqPAA3l/cthT8TdzxDKE5H4WhQVpM8noYYo8V5MuE78vtHObRVwZ8APOr9NAbQ8QvdgG5huhnMx1M6esWFJ8GORtZ1r/pcyfHf1oDubOrwAAAIAjU9QOuuwNyKQaJZM2v+8l6T1Qv8psAtve1nHGOk0repiBvG5B6ucmB7e3Ae6EMj5Gw/M8jhocs+uspB1FcKNhHyT/SW7lMoAfFKtT+PzZmaWTKsNZSGQ/HVCWwUr8o3uIgcnW0SpEDrthfsApEM+d5Mpr7Hxuz2vyccBU9g0WJA== grid@tqdb22
[grid@tqdb21: ~/.ssh]$ scp /home/grid/.ssh/authorized_keys grid@tqdb22:~/.ssh/authorized_keys
grid@tqdb22's password: 
authorized_keys                                                                                                                                                      100% 1988     2.3MB/s   00:00    
[grid@tqdb21: ~/.ssh]$ 
​```

-- 4. Run on node 2 (tqdb22): (inspect the `authorized_keys` file copied (scp) over from tqdb21)
​```
[grid@tqdb22: ~]$ cd ~/.ssh/
[grid@tqdb22: ~/.ssh]$ ll
total 20
-rw-r--r--. 1 grid oinstall 1988 Feb 12 00:43 authorized_keys
-rw-------. 1 grid oinstall  668 Feb 12 00:39 id_dsa
-rw-r--r--. 1 grid oinstall  601 Feb 12 00:39 id_dsa.pub
-rw-------. 1 grid oinstall 1675 Feb 12 00:39 id_rsa
-rw-r--r--. 1 grid oinstall  393 Feb 12 00:39 id_rsa.pub
[grid@tqdb22: ~/.ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyn//M6EKUXPcLk9lcennl3mN4CxFHupvFa6vR2eoGn40SNA1Yqtzhm+EI5I7O5JLT7zCUHu3aV8kIoXZdnivrw93VB8INeUIi9ZN98KDakq+nrORC3c9fZLnwCsNti+4Qy6vl2gy+fH0yR/vyhv8AVmIUe86jl8ql6TX5Xo2aS0YidD7okLumnKbCCK8HF58sPvr5j5fyMrq/w7xNhuq8jdrjOurVjcutu7u/xfSXCHvYnqbQDJbRj03bBOlhk63HYIgZYsB04/aMvngaq0lZ2XrXjBqfOq8OYBKLATgPyhoFDoD4IDymvrAm3Jsc3ZEy2wV3oOhiV895Kkvp4DtH grid@tqdb21
ssh-dss AAAAB3NzaC1kc3MAAACBAMmx0zcjtMOo8cIDWQamFqTKYN/ac0dHmRzpd2XeKTOUe7l2TeRiGilMIeyH+5i9CQ2bZzCPszd+KyJpf2BpKotsRKKlM+P09sDtDyteoTvCMyIGIvT24yQfriSFP3R5yKo0XJv0NEI6VX00wnsG5wzyaEad0FaSYdi/HlYar3MBAAAAFQCfQMu2F1kOm86SraJ2dL2m8ZZ8lwAAAIB9Tu2oCjL8wr2mDhBfaM7vg0bl6XEyYVm0b71uEzkklLKingyP38Jr3BIQ4+DSUv3+anfpJFGz1FouKI6Sow8oJCstJAx5CYM1AiHHVn5wTwBNW475kqWHTYaqwldYGjj2GwnB81QhVz+i4k4RLoMDi3BTOwoHC+hIwX6CrwumEgAAAIAWWDUYYg0b1ppVijCqU7VdXeS9FIdnlaA8puhVccSF5mzHcg4x0cQ1mWEnQjpFv1+/NsGdUoPidHWV2YQY6CRKP0xDN4nae/Cw1tKHOhnICGvrMVG8nZmDTnBUGDEGn22v4Mn5YGMAo+AclxcfYbpRnITCi4lqIHqgfm/YYWL1rA== grid@tqdb21
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyTO5PjRN3LAYgvXN6vVgTuRT6/ZMIi1ZwKiKYAzfDECwfVMZpgO27/67JhsS4QYkf0qpAHrRrNsTLiMJJ2n0xAQ4y1PcGeKLttsUH3w5+bkwJtIyLN2uVdne6Knc/WOwB4RjNO/iNSxIQhwSiuxdp2vZqfHI0R7hAu4x7gSwMUhGTyu5kOlzpb5G0Y8bdr+U41Cc4hgSFeRkTs2fBe4aMZTz3tmX0Qie7xEZm8RtAoq3D0lEjNGq+YziBlrDLhwTldaiR3itsQdqhXVweEq0b6jotou4eFPYwya+DayQikFGXVczwaWEFNRljNzMZ3MWQSbghLFJ3ZGwwrgSBk6yx grid@tqdb22
ssh-dss AAAAB3NzaC1kc3MAAACBALXYa30uXp5U1GHO64E/5hXPBxnfs6yq3Snzz4omSSPbrTkXodWhXsUx5ywCEDC9j35KdjP6yFpzQgL5f+5/8PLsxuBoC/m/r3sMrjsTYXA/302WpalZuAXFUU8EkRdSYlpMhLB3PfWgwn9XYfAJJ3G1bDXmmAFECmlMhruGGI2DAAAAFQC4BY4Fjoh8CHdzZHEZwbC+iYYpgQAAAIBF5XDGw0oBoSAOpNysk+rI4AZSQmTwVhuNVnJIFkARdbW1rHLLALFak+BdOSgwg0JkPqPAA3l/cthT8TdzxDKE5H4WhQVpM8noYYo8V5MuE78vtHObRVwZ8APOr9NAbQ8QvdgG5huhnMx1M6esWFJ8GORtZ1r/pcyfHf1oDubOrwAAAIAjU9QOuuwNyKQaJZM2v+8l6T1Qv8psAtve1nHGOk0repiBvG5B6ucmB7e3Ae6EMj5Gw/M8jhocs+uspB1FcKNhHyT/SW7lMoAfFKtT+PzZmaWTKsNZSGQ/HVCWwUr8o3uIgcnW0SpEDrthfsApEM+d5Mpr7Hxuz2vyccBU9g0WJA== grid@tqdb22
[grid@tqdb22: ~/.ssh]$ 
​```

Test the connections from each node. Verify that when you run the following commands a second time, the system no longer prompts you for a password.
-- 5. Run on node 1 (tqdb21):
-- First round: each host-key prompt must be answered with `yes`
​```
[grid@tqdb21: ~]$ ssh tqdb21 date
The authenticity of host 'tqdb21 (192.168.6.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21,192.168.6.21' (ECDSA) to the list of known hosts.
Wed Feb 12 00:45:51 CST 2020
[grid@tqdb21: ~]$ ssh tqdb22 date
Wed Feb 12 00:45:59 CST 2020
[grid@tqdb21: ~]$ ssh tqdb21-priv date
The authenticity of host 'tqdb21-priv (172.16.8.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21-priv,172.16.8.21' (ECDSA) to the list of known hosts.
Wed Feb 12 00:46:14 CST 2020
[grid@tqdb21: ~]$ ssh tqdb22-priv date
The authenticity of host 'tqdb22-priv (172.16.8.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22-priv,172.16.8.22' (ECDSA) to the list of known hosts.
Wed Feb 12 00:46:25 CST 2020
[grid@tqdb21: ~]$ 
​```
-- Second round: no `yes` is required and the result comes back directly, which confirms that SSH equivalence is configured.
​```
[grid@tqdb21: ~]$ ssh tqdb21 date
Wed Feb 12 00:47:08 CST 2020
[grid@tqdb21: ~]$ ssh tqdb22 date
Wed Feb 12 00:47:13 CST 2020
[grid@tqdb21: ~]$ ssh tqdb21-priv date
Wed Feb 12 00:47:18 CST 2020
[grid@tqdb21: ~]$ ssh tqdb22-priv date 
Wed Feb 12 00:47:24 CST 2020
[grid@tqdb21: ~]$ date; ssh tqdb22 date
Wed Feb 12 00:47:36 CST 2020
Wed Feb 12 00:47:36 CST 2020
[grid@tqdb21: ~]$ date; ssh tqdb22-priv date
Wed Feb 12 00:47:52 CST 2020
Wed Feb 12 00:47:52 CST 2020
[grid@tqdb21: ~]$ 
​```

-- 6. Run on node 2 (tqdb22):
-- First round: each host-key prompt must be answered with `yes`
​```
[grid@tqdb22: ~]$ ssh tqdb21 date
The authenticity of host 'tqdb21 (192.168.6.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21,192.168.6.21' (ECDSA) to the list of known hosts.
Wed Feb 12 00:49:04 CST 2020
[grid@tqdb22: ~]$ ssh tqdb22 date
The authenticity of host 'tqdb22 (192.168.6.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22,192.168.6.22' (ECDSA) to the list of known hosts.
Wed Feb 12 00:49:14 CST 2020
[grid@tqdb22: ~]$ ssh tqdb21-priv date
The authenticity of host 'tqdb21-priv (172.16.8.21)' can't be established.
ECDSA key fingerprint is SHA256:P/+G/d30l5VTPHHL9N6D+RpzvZ63gAIm+g9F6PeX80A.
ECDSA key fingerprint is MD5:20:22:52:f2:51:c2:bf:a3:80:29:0b:e3:3c:c7:07:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb21-priv,172.16.8.21' (ECDSA) to the list of known hosts.
Wed Feb 12 00:49:24 CST 2020
[grid@tqdb22: ~]$ ssh tqdb22-priv date
The authenticity of host 'tqdb22-priv (172.16.8.22)' can't be established.
ECDSA key fingerprint is SHA256:QT8z0WN0dmX3S0jnMcLe/MeraabCFvwlYKTmX/kKJ+o.
ECDSA key fingerprint is MD5:de:f8:90:99:5d:f1:05:5c:65:4b:fb:8b:0f:bc:63:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'tqdb22-priv,172.16.8.22' (ECDSA) to the list of known hosts.
Wed Feb 12 00:49:34 CST 2020
[grid@tqdb22: ~]$ 
​```
-- Second round: no `yes` is required and the result comes back directly, which confirms that SSH equivalence is configured.
​```
[grid@tqdb22: ~]$ ssh tqdb21 date
Wed Feb 12 00:49:58 CST 2020
[grid@tqdb22: ~]$ ssh tqdb22 date
Wed Feb 12 00:50:02 CST 2020
[grid@tqdb22: ~]$ ssh tqdb21-priv date
Wed Feb 12 00:50:08 CST 2020
[grid@tqdb22: ~]$ ssh tqdb22-priv date 
Wed Feb 12 00:50:13 CST 2020
[grid@tqdb22: ~]$ date; ssh tqdb22 date
Wed Feb 12 00:50:27 CST 2020
Wed Feb 12 00:50:27 CST 2020
[grid@tqdb22: ~]$ date; ssh tqdb22-priv date
Wed Feb 12 00:50:39 CST 2020
Wed Feb 12 00:50:39 CST 2020
[grid@tqdb22: ~]$ 
​```

2.22 Configure udev

Note: /dev/sda is the local system disk; /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde and /dev/sdf are shared disks.
Of these:
/dev/sdb, /dev/sdc and /dev/sdd are 2 GB each, used for: OCR & voting disks
/dev/sde and /dev/sdf are 50 GB each, used for: the DATA disk group

2.22.1 Configure multipath

# Get the `scsi id` of the local system disk
[root@tqdb21: /dev]# /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda
1ATA_VBOX_HARDDISK_VB83906d1c-ce109a80
[root@tqdb21: /dev]# 
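As this output shows, on these VirtualBox SATA disks the scsi_id value is simply the wwid that multipath expects with a `1ATA_` transport prefix in front (this prefix is specific to this environment). A small sketch of the relationship, using the value captured above:

```shell
# scsi_id output for /dev/sda, as captured above.
scsi_id_out="1ATA_VBOX_HARDDISK_VB83906d1c-ce109a80"
# The wwid to use in /etc/multipath.conf (blacklist / multipaths sections)
# is that value with the leading "1ATA_" transport prefix stripped.
wwid=${scsi_id_out#1ATA_}
echo "$wwid"   # -> VBOX_HARDDISK_VB83906d1c-ce109a80
```

This is why the wwid entries below have no `1ATA_` prefix while the udev RESULT values later do.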


# Edit the configuration file: add the `scsi id` of the local system disk `/dev/sda` above to the blacklist,
# and configure multipath aliases for sdb through sdf
root# vim /etc/multipath.conf
​```
## In a VirtualBox environment, `--replace-whitespace` must be included in the getuid_callout command, because the scsi_id output above contains `_`.
# The getuid_callout parameter itself is commented out: it is no longer a valid keyword for this multipath version.
# In VirtualBox, the wwid is the `uuid` shown in the `paths list` section of `multipath -v3` output (e.g. `VBOX_HARDDISK_VB043f2aa4-f6c46e2f`),
# while the `/lib/udev/scsi_id` command outputs `1ATA_VBOX_HARDDISK_VB043f2aa4-f6c46e2f` (an extra `1ATA_` prefix), hence the error below:
# ```
#[root@tq1: /etc/multipath]# multipath -ll
#Jan 15 17:20:22 | /etc/multipath.conf line 3, invalid keyword: getuid_callout
# ```

defaults {
  #getuid_callout          "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
  #getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
  user_friendly_names     no
}

# Blacklist the local disk (/dev/sda)
blacklist {
  wwid VBOX_HARDDISK_VB83906d1c-ce109a80
  devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
  devnode "^hd[a-z]"
  devnode "^cciss.*"
}



# Multipath aliases
multipaths {
  multipath {
          wwid                    VBOX_HARDDISK_VB0ca65770-cab71c4b
          alias                   ocr1
  }
  multipath {
          wwid                    VBOX_HARDDISK_VB724aeb83-ea9f4e0d
          alias                   ocr2
  }
  multipath {
          wwid                    VBOX_HARDDISK_VBe8c6318c-edea7981
          alias                   ocr3
  }
  multipath {
          wwid                    VBOX_HARDDISK_VBa1560564-d49dac72
          alias                   data01
  }
  multipath {
          wwid                    VBOX_HARDDISK_VB27f01f95-b61e143b
          alias                   data02
  }
}
​```

-- Start multipathd.service at boot
# systemctl enable multipathd.service
# systemctl restart multipathd.service
# systemctl status multipathd.service

2.22.2 Configure the udev rules (99-oracle-asmdevices.rules)

​```
-- The `99-oracle-asmdevices.rules` file has identical content on both nodes.
[root@tqdb21: /etc/udev/rules.d]# vim 99-oracle-asmdevices.rules 

# /dev/sdb multipath ==> `/dev/mapper/ocr1 -> ../dm-0`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b", SYMLINK+="asm-ocr1", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdc multipath ==> `/dev/mapper/ocr2 -> ../dm-1`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d", SYMLINK+="asm-ocr2", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdd multipath ==> `/dev/mapper/ocr3 -> ../dm-2`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981", SYMLINK+="asm-ocr3", OWNER="grid",GROUP="asmadmin", MODE="0660"

# /dev/sde multipath ==> `/dev/mapper/data01 -> ../dm-3`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBa1560564-d49dac72", SYMLINK+="asm-data01", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdf multipath ==> `/dev/mapper/data02 -> ../dm-4`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b", SYMLINK+="asm-data02", OWNER="grid",GROUP="asmadmin", MODE="0660"
[root@tqdb21: /etc/udev/rules.d]# 
​```

-- Reload and trigger the udev rules
# udevadm control --reload-rules 
# udevadm trigger 
# systemctl status systemd-udevd.service
# systemctl enable systemd-udevd.service
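Since each RESULT value in the rules above is just a multipath wwid with the `1ATA_` prefix added, the rule file can also be generated mechanically from the alias/wwid pairs in /etc/multipath.conf. A minimal sketch using this environment's pairs (adjust the list for any other host):

```shell
# alias:wwid pairs, as configured in /etc/multipath.conf above.
pairs="ocr1:VBOX_HARDDISK_VB0ca65770-cab71c4b
ocr2:VBOX_HARDDISK_VB724aeb83-ea9f4e0d
ocr3:VBOX_HARDDISK_VBe8c6318c-edea7981
data01:VBOX_HARDDISK_VBa1560564-d49dac72
data02:VBOX_HARDDISK_VB27f01f95-b61e143b"

# Emit one udev rule per disk: match the dm-* device whose scsi_id is the
# wwid prefixed with "1ATA_", and create the asm-<alias> symlink.
rules=$(echo "$pairs" | while IFS=: read -r al wwid; do
  printf 'KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_%s", SYMLINK+="asm-%s", OWNER="grid",GROUP="asmadmin", MODE="0660"\n' "$wwid" "$al"
done)
echo "$rules"
```

Redirecting the output to /etc/udev/rules.d/99-oracle-asmdevices.rules on both nodes would reproduce the file shown above.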

-- Check that the `99-oracle-asmdevices.rules` rules have taken effect (both the symlink names and the permissions)
# ll /dev/asm-*
# ll /dev/dm-*

-- Inspect the storage information for the block devices
(echo -e "\n输出结果: \n1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)" && ls -l /dev/asm* /dev/mapper/data* /dev/mapper/ocr* /dev/dm* /dev/sd[b-f]) && 
(echo -e "\n2. 查看块(block)设备: "&& lsblk -f) && 
(echo -e "\n3. 查看多路径配置" && cat /etc/multipath.conf | grep -A3 "multipath {") && 
(echo -e "\n4. '/dev/sdb'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb) && 
(echo -e "\n   '/dev/sdc'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc) && 
(echo -e "\n   '/dev/sdd'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd) && 
(echo -e "\n   '/dev/sde'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sde) && 
(echo -e "\n   '/dev/sdf'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdf)

Operation log on tqdb21:

[root@tqdb21: /etc/udev/rules.d]# cat 99-oracle-asmdevices.rules 
# /dev/sdb multipath ==> `/dev/mapper/ocr1 -> ../dm-0`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b", SYMLINK+="asm-ocr1", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdc multipath ==> `/dev/mapper/ocr2 -> ../dm-1`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d", SYMLINK+="asm-ocr2", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdd multipath ==> `/dev/mapper/ocr3 -> ../dm-2`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981", SYMLINK+="asm-ocr3", OWNER="grid",GROUP="asmadmin", MODE="0660"

# /dev/sde multipath ==> `/dev/mapper/data01 -> ../dm-3`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBa1560564-d49dac72", SYMLINK+="asm-data01", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdf multipath ==> `/dev/mapper/data02 -> ../dm-4`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b", SYMLINK+="asm-data02", OWNER="grid",GROUP="asmadmin", MODE="0660"

[root@tqdb21: /etc/udev/rules.d]# 
[root@tqdb21: /etc/udev/rules.d]# udevadm control --reload-rules 
[root@tqdb21: /etc/udev/rules.d]# udevadm trigger 
[root@tqdb21: /etc/udev/rules.d]# systemctl status systemd-udevd.service
● systemd-udevd.service - udev Kernel Device Manager
Loaded: loaded (/usr/lib/systemd/system/systemd-udevd.service; static; vendor preset: disabled)
Active: active (running) since Tue 2020-02-11 22:06:25 CST; 4h 34min ago
  Docs: man:systemd-udevd.service(8)
        man:udev(7)
Main PID: 484 (systemd-udevd)
Status: "Processing with 10 children at max"
 Tasks: 1
CGroup: /system.slice/systemd-udevd.service
        └─484 /usr/lib/systemd/systemd-udevd

Feb 11 22:06:25 tqdb21 systemd[1]: Starting udev Kernel Device Manager...
Feb 11 22:06:25 tqdb21 systemd-udevd[484]: starting version 219
Feb 11 22:06:25 tqdb21 systemd[1]: Started udev Kernel Device Manager.
Feb 11 22:06:36 tqdb21 kvm[1374]: 1 guest now active
Feb 11 22:06:36 tqdb21 kvm[1375]: 0 guests now active
Feb 11 22:06:36 tqdb21 kvm[1377]: 1 guest now active
Feb 11 22:06:36 tqdb21 kvm[1379]: 0 guests now active
Feb 11 22:06:36 tqdb21 kvm[1381]: 1 guest now active
Feb 11 22:06:36 tqdb21 kvm[1386]: 0 guests now active
[root@tqdb21: /etc/udev/rules.d]# systemctl enable systemd-udevd.service
[root@tqdb21: /etc/udev/rules.d]# 
[root@tqdb21: /etc/udev/rules.d]# ll /dev/asm-*
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-data01 -> dm-3
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-data02 -> dm-4
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-ocr1 -> dm-0
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-ocr2 -> dm-1
lrwxrwxrwx. 1 root root 4 Feb 12 02:40 /dev/asm-ocr3 -> dm-2
[root@tqdb21: /etc/udev/rules.d]# 
[root@tqdb21: /etc/udev/rules.d]# ll /dev/dm-*
brw-rw----. 1 grid asmadmin 253, 0 Feb 12 02:40 /dev/dm-0
brw-rw----. 1 grid asmadmin 253, 1 Feb 12 02:40 /dev/dm-1
brw-rw----. 1 grid asmadmin 253, 2 Feb 12 02:40 /dev/dm-2
brw-rw----. 1 grid asmadmin 253, 3 Feb 12 02:40 /dev/dm-3
brw-rw----. 1 grid asmadmin 253, 4 Feb 12 02:40 /dev/dm-4
[root@tqdb21: /etc/udev/rules.d]# 
[root@tqdb21: ~]# (echo -e "\n输出结果: \n1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)" && ls -l /dev/asm* /dev/mapper/data* /dev/mapper/ocr* /dev/dm* /dev/sd[b-f]) && 
> (echo -e "\n2. 查看块(block)设备: "&& lsblk -f) && 
> (echo -e "\n3. 查看多路径配置" && cat /etc/multipath.conf | grep -A3 "multipath {") && 
> (echo -e "\n4. '/dev/sdb'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb) && 
> (echo -e "\n   '/dev/sdc'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc) && 
> (echo -e "\n   '/dev/sdd'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd) && 
> (echo -e "\n   '/dev/sde'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sde) && 
> (echo -e "\n   '/dev/sdf'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdf)

输出结果: 
1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-data01 -> dm-3
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-data02 -> dm-4
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-ocr1 -> dm-0
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-ocr2 -> dm-1
lrwxrwxrwx. 1 root root           4 Feb 12 03:24 /dev/asm-ocr3 -> dm-2
brw-rw----. 1 grid asmadmin 253,  0 Feb 12 03:24 /dev/dm-0
brw-rw----. 1 grid asmadmin 253,  1 Feb 12 03:24 /dev/dm-1
brw-rw----. 1 grid asmadmin 253,  2 Feb 12 03:24 /dev/dm-2
brw-rw----. 1 grid asmadmin 253,  3 Feb 12 03:24 /dev/dm-3
brw-rw----. 1 grid asmadmin 253,  4 Feb 12 03:24 /dev/dm-4
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/data01 -> ../dm-3
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/data02 -> ../dm-4
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/ocr1 -> ../dm-0
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/ocr2 -> ../dm-1
lrwxrwxrwx. 1 root root           7 Feb 12 03:24 /dev/mapper/ocr3 -> ../dm-2
brw-rw----. 1 root disk       8, 16 Feb 12 02:40 /dev/sdb
brw-rw----. 1 root disk       8, 32 Feb 12 02:40 /dev/sdc
brw-rw----. 1 root disk       8, 48 Feb 12 02:40 /dev/sdd
brw-rw----. 1 root disk       8, 64 Feb 12 02:40 /dev/sde
brw-rw----. 1 root disk       8, 80 Feb 12 02:40 /dev/sdf

2. 查看块(block)设备: 
NAME     FSTYPE       LABEL UUID                                 MOUNTPOINT
sda                                                              
├─sda1   swap               323f3142-ccef-4b0d-a799-04007c4aa0a6 [SWAP]
└─sda2   xfs                2579915f-aead-4a30-977c-8e39f5f4d491 /
sdb      mpath_member                                            
└─ocr1                                                           
sdc      mpath_member                                            
└─ocr2                                                           
sdd      mpath_member                                            
└─ocr3                                                           
sde      mpath_member                                            
└─data01                                                         
sdf      mpath_member                                            
└─data02                                                         
sr0                                                              

3. 查看多路径配置
        multipath {
                wwid                    VBOX_HARDDISK_VB0ca65770-cab71c4b
                alias                   ocr1
        }
        multipath {
                wwid                    VBOX_HARDDISK_VB724aeb83-ea9f4e0d
                alias                   ocr2
        }
        multipath {
                wwid                    VBOX_HARDDISK_VBe8c6318c-edea7981
                alias                   ocr3
        }
        multipath {
                wwid                    VBOX_HARDDISK_VBa1560564-d49dac72
                alias                   data01
        }
        multipath {
                wwid                    VBOX_HARDDISK_VB27f01f95-b61e143b
                alias                   data02
        }

4. '/dev/sdb'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b

   '/dev/sdc'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d

   '/dev/sdd'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981

   '/dev/sde'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VBa1560564-d49dac72

   '/dev/sdf'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b
[root@tqdb21: ~]# 

Operation log on tqdb22:

[root@tqdb22: /etc/udev/rules.d]# cat 99-oracle-asmdevices.rules 

# /dev/sdb multipath ==> `/dev/mapper/ocr1 -> ../dm-0`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b", SYMLINK+="asm-ocr1", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdc multipath ==> `/dev/mapper/ocr2 -> ../dm-1`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d", SYMLINK+="asm-ocr2", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdd multipath ==> `/dev/mapper/ocr3 -> ../dm-2`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981", SYMLINK+="asm-ocr3", OWNER="grid",GROUP="asmadmin", MODE="0660"

# /dev/sde multipath ==> `/dev/mapper/data01 -> ../dm-3`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBa1560564-d49dac72", SYMLINK+="asm-data01", OWNER="grid",GROUP="asmadmin", MODE="0660"
# /dev/sdf multipath ==> `/dev/mapper/data02 -> ../dm-4`
KERNEL=="dm*",SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b", SYMLINK+="asm-data02", OWNER="grid",GROUP="asmadmin", MODE="0660"

[root@tqdb22: /etc/udev/rules.d]# 
[root@tqdb22: /etc/udev/rules.d]# udevadm control --reload-rules 
[root@tqdb22: /etc/udev/rules.d]# udevadm trigger 
[root@tqdb22: /etc/udev/rules.d]# systemctl status systemd-udevd.service
● systemd-udevd.service - udev Kernel Device Manager
   Loaded: loaded (/usr/lib/systemd/system/systemd-udevd.service; static; vendor preset: disabled)
   Active: active (running) since Tue 2020-02-11 22:06:28 CST; 4h 44min ago
     Docs: man:systemd-udevd.service(8)
           man:udev(7)
 Main PID: 486 (systemd-udevd)
   Status: "Processing with 10 children at max"
    Tasks: 1
   CGroup: /system.slice/systemd-udevd.service
           └─486 /usr/lib/systemd/systemd-udevd

Feb 11 22:06:28 tqdb22 systemd[1]: Starting udev Kernel Device Manager...
Feb 11 22:06:28 tqdb22 systemd-udevd[486]: starting version 219
Feb 11 22:06:28 tqdb22 systemd[1]: Started udev Kernel Device Manager.
Feb 11 22:06:39 tqdb22 kvm[1426]: 1 guest now active
Feb 11 22:06:39 tqdb22 kvm[1432]: 0 guests now active
Feb 11 22:06:39 tqdb22 kvm[1436]: 1 guest now active
Feb 11 22:06:39 tqdb22 kvm[1441]: 0 guests now active
Feb 11 22:06:39 tqdb22 kvm[1444]: 1 guest now active
Feb 11 22:06:39 tqdb22 kvm[1448]: 0 guests now active
[root@tqdb22: /etc/udev/rules.d]# systemctl enable systemd-udevd.service
[root@tqdb22: /etc/udev/rules.d]# 
[root@tqdb22: /etc/udev/rules.d]# ll /dev/asm-*
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-data01 -> dm-3
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-data02 -> dm-4
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-ocr1 -> dm-0
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-ocr2 -> dm-1
lrwxrwxrwx. 1 root root 4 Feb 12 02:50 /dev/asm-ocr3 -> dm-2
[root@tqdb22: /etc/udev/rules.d]# ll /dev/dm-*
brw-rw----. 1 grid asmadmin 253, 0 Feb 12 02:50 /dev/dm-0
brw-rw----. 1 grid asmadmin 253, 1 Feb 12 02:50 /dev/dm-1
brw-rw----. 1 grid asmadmin 253, 2 Feb 12 02:50 /dev/dm-2
brw-rw----. 1 grid asmadmin 253, 3 Feb 12 02:50 /dev/dm-3
brw-rw----. 1 grid asmadmin 253, 4 Feb 12 02:50 /dev/dm-4
[root@tqdb22: /etc/udev/rules.d]# 
[root@tqdb22: ~]# (echo -e "\n输出结果: \n1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)" && ls -l /dev/asm* /dev/mapper/data* /dev/mapper/ocr* /dev/dm* /dev/sd[b-f]) && 
> (echo -e "\n2. 查看块(block)设备: "&& lsblk -f) && 
> (echo -e "\n3. 查看多路径配置" && cat /etc/multipath.conf | grep -A3 "multipath {") && 
> (echo -e "\n4. '/dev/sdb'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb) && 
> (echo -e "\n   '/dev/sdc'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc) && 
> (echo -e "\n   '/dev/sdd'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd) && 
> (echo -e "\n   '/dev/sde'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sde) && 
> (echo -e "\n   '/dev/sdf'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')" && /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdf)

输出结果: 
1. 查看'/dev'和'/dev/mapper'目录: (注意查看权限)
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-data01 -> dm-3
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-data02 -> dm-4
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-ocr1 -> dm-0
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-ocr2 -> dm-1
lrwxrwxrwx. 1 root root           4 Feb 12 03:30 /dev/asm-ocr3 -> dm-2
brw-rw----. 1 grid asmadmin 253,  0 Feb 12 03:30 /dev/dm-0
brw-rw----. 1 grid asmadmin 253,  1 Feb 12 03:30 /dev/dm-1
brw-rw----. 1 grid asmadmin 253,  2 Feb 12 03:30 /dev/dm-2
brw-rw----. 1 grid asmadmin 253,  3 Feb 12 03:30 /dev/dm-3
brw-rw----. 1 grid asmadmin 253,  4 Feb 12 03:30 /dev/dm-4
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/data01 -> ../dm-3
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/data02 -> ../dm-4
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/ocr1 -> ../dm-0
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/ocr2 -> ../dm-1
lrwxrwxrwx. 1 root root           7 Feb 12 03:30 /dev/mapper/ocr3 -> ../dm-2
brw-rw----. 1 root disk       8, 16 Feb 12 02:50 /dev/sdb
brw-rw----. 1 root disk       8, 32 Feb 12 02:50 /dev/sdc
brw-rw----. 1 root disk       8, 48 Feb 12 02:50 /dev/sdd
brw-rw----. 1 root disk       8, 64 Feb 12 02:50 /dev/sde
brw-rw----. 1 root disk       8, 80 Feb 12 02:50 /dev/sdf

2. 查看块(block)设备: 
NAME     FSTYPE       LABEL UUID                                 MOUNTPOINT
sda                                                              
├─sda1   swap               3114372b-8427-47ed-b2a6-092d33efcf5a [SWAP]
└─sda2   xfs                4e2d3b8d-2afa-447c-ae56-1cc0e2d39fe2 /
sdb      mpath_member                                            
└─ocr1                                                           
sdc      mpath_member                                            
└─ocr2                                                           
sdd      mpath_member                                            
└─ocr3                                                           
sde      mpath_member                                            
└─data01                                                         
sdf      mpath_member                                            
└─data02                                                         
sr0                                                              

3. 查看多路径配置
        multipath {
                wwid                    VBOX_HARDDISK_VB0ca65770-cab71c4b
                alias                   ocr1
        }
        multipath {
                wwid                    VBOX_HARDDISK_VB724aeb83-ea9f4e0d
                alias                   ocr2
        }
        multipath {
                wwid                    VBOX_HARDDISK_VBe8c6318c-edea7981
                alias                   ocr3
        }
        multipath {
                wwid                    VBOX_HARDDISK_VBa1560564-d49dac72
                alias                   data01
        }
        multipath {
                wwid                    VBOX_HARDDISK_VB27f01f95-b61e143b
                alias                   data02
        }

4. '/dev/sdb'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b

   '/dev/sdc'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB724aeb83-ea9f4e0d

   '/dev/sdd'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VBe8c6318c-edea7981

   '/dev/sde'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VBa1560564-d49dac72

   '/dev/sdf'的设备ID: (注意:'设备ID' 比 'wwid' 多了前缀 '1ATA_')
1ATA_VBOX_HARDDISK_VB27f01f95-b61e143b
[root@tqdb22: ~]# 
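
The checks above rely on the fact that, for these VirtualBox ATA disks, the `scsi_id` device ID is simply the multipath wwid with an extra `1ATA_` prefix. A minimal sketch of that relationship (the helper function names are made up for illustration); it can be handy when building the `RESULT==` strings in the udev rules from the wwids in `/etc/multipath.conf`:

```shell
# Hypothetical helpers: convert between the multipath wwid (as written in
# /etc/multipath.conf) and the scsi_id device ID (as matched by RESULT==
# in the udev rules). For these VirtualBox ATA disks the device ID is the
# wwid with a "1ATA_" prefix.
wwid_to_devid() { printf '1ATA_%s\n' "$1"; }
devid_to_wwid() { printf '%s\n' "${1#1ATA_}"; }

wwid_to_devid VBOX_HARDDISK_VB0ca65770-cab71c4b
# → 1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b
devid_to_wwid 1ATA_VBOX_HARDDISK_VB0ca65770-cab71c4b
# → VBOX_HARDDISK_VB0ca65770-cab71c4b
```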

2.23 Reboot the OS

Once all of the above steps are complete, reboot the operating system.

3. Software Installation and Configuration

3.1 GRID Installation

Unzip the media into the grid user's $ORACLE_HOME. The archive must be extracted directly into $ORACLE_HOME: with the 19c image-based install, whatever directory the files are extracted into becomes $ORACLE_HOME.

[grid@tqdb21: /Software]$ ll
total 5922412
drwxr-xr-x 2 root root             47 Feb 12 18:53 DB RU 19.6.0.0.200114
drwxr-xr-x 2 root root             47 Feb 12 18:53 GI RU 19.6.0.0.200114
-rwx------ 1 root root     3059705302 Feb 12 18:13 LINUX.X64_193000_db_home.zip
-rwx------ 1 grid oinstall 2889184573 Feb 12 18:13 LINUX.X64_193000_grid_home.zip
-rwx------ 1 root root      115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[grid@tqdb21: /Software]$ echo $ORACLE_HOME
/u01/app/19c/grid
[grid@tqdb21: /Software]$ unzip LINUX.X64_193000_grid_home.zip -d $ORACLE_HOME
[grid@tqdb21: /Software]$ 

Install the cvuqdisk-1.0.10-1.rpm package. It is not included on the Linux installation media; it ships with the extracted grid software, under the cv/rpm directory.

(That is, step "2.11 Install the cvuqdisk package".)

[root@tqdb21: ~]# cd $ORACLE_HOME/cv/rpm
[root@tqdb21: /u01/app/19c/grid/cv/rpm]# ll cvuqdisk-1.0.10-1.rpm 
-rw-r--r-- 1 grid oinstall 11412 Mar 13  2019 cvuqdisk-1.0.10-1.rpm
[root@tqdb21: /u01/app/19c/grid/cv/rpm]# yum install cvuqdisk-1.0.10-1.rpm 
Loaded plugins: fastestmirror, langpacks
Examining cvuqdisk-1.0.10-1.rpm: cvuqdisk-1.0.10-1.x86_64
Marking cvuqdisk-1.0.10-1.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package cvuqdisk.x86_64 0:1.0.10-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================================================================================
Package                                       Arch                                        Version                                       Repository                                               Size
=======================================================================================================================================================================================================
Installing:
cvuqdisk                                      x86_64                                      1.0.10-1                                      /cvuqdisk-1.0.10-1                                       22 k

Transaction Summary
=======================================================================================================================================================================================================
Install  1 Package

Total size: 22 k
Installed size: 22 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Using default group oinstall to install package
Installing : cvuqdisk-1.0.10-1.x86_64                                                                                                                                                            1/1 
Verifying  : cvuqdisk-1.0.10-1.x86_64                                                                                                                                                            1/1 

Installed:
cvuqdisk.x86_64 0:1.0.10-1                                                                                                                                                                           

Complete!
[root@tqdb21: /u01/app/19c/grid/cv/rpm]# 

3.1.1 Enable sshd X11 forwarding to launch the GUI

Enable X11 forwarding in sshd so the graphical installer can be displayed:
# vim /etc/ssh/sshd_config 
​```
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
​```

Restart the sshd service:
# systemctl restart sshd.service  
# systemctl status sshd.service  
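
Before restarting sshd it is worth confirming the edit actually took effect. A minimal sketch of such a check; the helper name and the demo file under /tmp are illustrative — on a real host point the function at /etc/ssh/sshd_config:

```shell
# Illustrative check that an sshd_config file enables X11 forwarding.
x11_forwarding_on() {
  grep -qE '^[[:space:]]*X11Forwarding[[:space:]]+yes' "$1"
}

# Demo config inlined here for illustration only.
cat > /tmp/sshd_config.demo <<'EOF'
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
EOF

x11_forwarding_on /tmp/sshd_config.demo && echo "X11 forwarding enabled"
# → X11 forwarding enabled
```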

Operation log:

[root@tqdb21: ~]# vim /etc/ssh/sshd_config 
​```
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
​```
[root@tqdb21: ~]# systemctl restart sshd.service             
[root@tqdb21: ~]# systemctl status sshd.service  
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-02-12 23:18:33 CST; 2s ago
  Docs: man:sshd(8)
        man:sshd_config(5)
Main PID: 5748 (sshd)
 Tasks: 1
CGroup: /system.slice/sshd.service
        └─5748 /usr/sbin/sshd -D

Feb 12 23:18:33 tqdb21 systemd[1]: Starting OpenSSH server daemon...
Feb 12 23:18:33 tqdb21 sshd[5748]: Server listening on 0.0.0.0 port 22.
Feb 12 23:18:33 tqdb21 systemd[1]: Started OpenSSH server daemon.
[root@tqdb21: ~]# 
[root@tqdb21: ~]# xdpyinfo | head
name of display:    :0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# 
[root@tqdb21: ~]# su - grid
Last login: Wed Feb 12 23:24:52 CST 2020 on pts/1
[grid@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[grid@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[grid@tqdb21: ~]$ 
[grid@tqdb21: /u01/app/19c/grid]$ ./gridSetup.sh 
Launching Oracle Grid Infrastructure Setup Wizard...
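
A common failure mode here is launching gridSetup.sh with a malformed DISPLAY. A rough sanity check on the value exported above (the function name is made up; this only validates the `[host]:display[.screen]` shape, not that the X server is reachable):

```shell
# Sketch: shape-check a DISPLAY value such as 192.168.6.21:0 or :0
# before launching the installer.
looks_like_display() {
  case "$1" in
    *:[0-9]*) return 0 ;;   # has ":<digit>" somewhere, e.g. host:0 or :0
    *)        return 1 ;;
  esac
}

looks_like_display 192.168.6.21:0 && echo "DISPLAY looks valid"
# → DISPLAY looks valid
```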


macOS: launching the GUI with XQuartz

--------------------------------------------------------------------------------
-- Enable sshd X11 forwarding to launch the GUI -- Begin -----------------------
--------------------------------------------------------------------------------
Enable X11 forwarding in sshd so the graphical installer can be displayed
vim /etc/ssh/sshd_config 
​```
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
​```

Restart the sshd service
systemctl restart sshd.service  
systemctl status sshd.service  
​```
[root@tq1: ~]# systemctl status sshd.service  
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-01-17 12:47:42 CST; 5s ago
  Docs: man:sshd(8)
        man:sshd_config(5)
Main PID: 18971 (sshd)
 Tasks: 1
CGroup: /system.slice/sshd.service
        └─18971 /usr/sbin/sshd -D

Jan 17 12:47:42 tq1 systemd[1]: Stopped OpenSSH server daemon.
Jan 17 12:47:42 tq1 systemd[1]: Starting OpenSSH server daemon...
Jan 17 12:47:42 tq1 sshd[18971]: Server listening on 0.0.0.0 port 22.
Jan 17 12:47:42 tq1 systemd[1]: Started OpenSSH server daemon.
[root@tq1: ~]# 
​```


​```
[root@tq1: ~]# xdpyinfo | head
name of display:    192.168.6.10:10.0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    11804000
X.Org version: 1.18.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tq1: ~]# su - grid
Last login: Fri Jan 17 11:44:18 CST 2020 on pts/4
[grid@tq1: ~]$ export DISPLAY=192.168.6.10:10.0
[grid@tq1: ~]$ echo $DISPLAY
192.168.6.10:10.0
[grid@tq1: ~]$ 
​```

Actual steps:
1. Log in to the server as root via XQuartz and run `xhost +`:
​```
➜ /Users/tq > ssh -X [email protected]
[email protected]'s password: 
Last login: Fri Jan 17 11:43:40 2020 from 192.168.6.6
[root@tq1: ~]# xhost +
access control disabled, clients can connect from any host
[root@tq1: ~]# 
​```

2. Log in to the server as the grid/oracle user via XQuartz; the GUI can then be launched directly.
​```
➜ /Users/tq > ssh -X [email protected]
[email protected]'s password: 
Last login: Fri Jan 17 12:41:26 2020 from 192.168.6.6
[grid@tq1: ~]$ echo $DISPLAY
192.168.6.10:10.0
[grid@tq1: ~]$ xauth list
tq1:10  MIT-MAGIC-COOKIE-1  62299f9d0c67b1804c36fe7ea6783fda
[grid@tq1: ~]$ 
[grid@tq1: ~]$ xclock 
[grid@tq1: ~]$ xeyes 
[grid@tq1: ~]$
​```
--------------------------------------------------------------------------------
-- Enable sshd X11 forwarding to launch the GUI -- End -------------------------
--------------------------------------------------------------------------------

3.1.2 Log in as the grid user and run the graphical installer:

[root@tqdb21: ~]# xhost +
access control disabled, clients can connect from any host
[root@tqdb21: ~]# 
[root@tqdb21: ~]# xdpyinfo | head
name of display:    :0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# su - grid
Last login: Wed Feb 12 23:24:52 CST 2020 on pts/1
[grid@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[grid@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[grid@tqdb21: ~]$ cd $ORACLE_HOME
[grid@tqdb21: /u01/app/19c/grid]$ ./gridSetup.sh

GRID installation screenshots:

  • 19c RAC GRID installation 01

  • 19c RAC GRID installation 02

  • 19c RAC GRID installation 03

  • 19c RAC GRID installation 04 ==note the SCAN Name==

  • 19c RAC GRID installation 05

  • 19c RAC GRID installation 06

  • 19c RAC GRID installation 07

  • 19c RAC GRID installation 08

  • 19c RAC GRID installation 09

  • 19c RAC GRID installation 10: select the Private Interfaces

  • 19c RAC GRID installation 11

    If Oracle Flex ASM is used, the private interface must be set to ASM & Private.

  • 19c RAC GRID installation 12

  • 19c RAC GRID installation 13: private network set to ASM & Private

  • 19c RAC GRID installation 14

  • 19c RAC GRID installation 15

    Do not install the Grid Infrastructure Management Repository. If it is installed, allocate a dedicated disk group for it. The releases differ here: in 12c, choosing "No" still forced the MGMTDB install, and it could not be placed in a separate disk group, so the OCR disk group could not be smaller than 40 GB; in 18c it could be given its own disk group; in 19c, choosing "No" really skips the install.

  • 19c RAC GRID installation 16

  • 19c RAC GRID installation 17: create the `OCR` disk group

  • 19c RAC GRID installation 18: ASM Password

  • 19c RAC GRID installation 19: ASM Password, Yes

  • 19c RAC GRID installation 20

  • 19c RAC GRID installation 21

  • 19c RAC GRID installation 22

  • 19c RAC GRID installation 23

  • 19c RAC GRID installation 24

  • 19c RAC GRID installation 25

  • 19c RAC GRID installation 26

  • 19c RAC GRID installation 27

  • 19c RAC GRID installation 28

  • 19c RAC GRID installation 29: Yes

  • 19c RAC GRID installation 30

  • 19c RAC GRID installation 31

  • 19c RAC GRID installation 32

  • 19c RAC GRID installation 33

  • 19c RAC GRID installation 34: run the 2 root scripts on each node in turn
    ==click OK only after the scripts have finished==

Run the scripts as root.
First script:
Node 1:
​```
[root@tqdb21: ~]# /u01/app/oraInventory/orainstRoot.sh 
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@tqdb21: ~]# 
​```
Node 2:
​```
[root@tqdb22: ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@tqdb22: ~]# 
​```
Second script:
Node 1:
​```
[root@tqdb21: ~]# /u01/app/19c/grid/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=  /u01/app/19c/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/tqdb21/crsconfig/rootcrs_tqdb21_2020-02-13_00-38-01AM.log
2020/02/13 00:38:09 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/02/13 00:38:10 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/02/13 00:38:10 CLSRSC-363: User ignored prerequisites during installation
2020/02/13 00:38:10 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/02/13 00:38:12 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/02/13 00:38:12 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/02/13 00:38:12 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/02/13 00:38:13 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/02/13 00:38:48 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/02/13 00:38:54 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/02/13 00:39:03 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/02/13 00:39:13 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/02/13 00:39:13 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/02/13 00:39:20 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/02/13 00:39:20 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/02/13 00:39:43 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/02/13 00:39:48 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/02/13 00:39:53 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/02/13 00:39:58 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
ASM has been created and started successfully.
[DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-200213AM124031.log for details.
2020/02/13 00:41:59 CLSRSC-482: Running command: '/u01/app/19c/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk 12492c49df6f4f04bf57d5a668e6adaa.
Successful addition of voting disk 4da1547a561f4f61bf69b1af64e4a486.
Successful addition of voting disk 02061e2a235d4f45bf4c7b306e8c2c48.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   12492c49df6f4f04bf57d5a668e6adaa (/dev/asm-ocr1) [OCR]
2. ONLINE   4da1547a561f4f61bf69b1af64e4a486 (/dev/asm-ocr2) [OCR]
3. ONLINE   02061e2a235d4f45bf4c7b306e8c2c48 (/dev/asm-ocr3) [OCR]
Located 3 voting disk(s).
2020/02/13 00:44:23 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/02/13 00:45:58 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/02/13 00:45:58 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/02/13 00:48:42 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/02/13 00:50:11 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@tqdb21: ~]# 
​```
Node 2:
​```
[root@tqdb22: ~]# /u01/app/19c/grid/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=  /u01/app/19c/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/tqdb22/crsconfig/rootcrs_tqdb22_2020-02-13_00-52-25AM.log
2020/02/13 00:52:29 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/02/13 00:52:29 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/02/13 00:52:29 CLSRSC-363: User ignored prerequisites during installation
2020/02/13 00:52:29 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/02/13 00:52:31 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/02/13 00:52:31 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/02/13 00:52:31 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/02/13 00:52:31 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/02/13 00:52:32 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/02/13 00:52:32 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/02/13 00:52:42 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/02/13 00:52:42 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/02/13 00:52:47 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/02/13 00:52:48 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/02/13 00:53:06 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/02/13 00:53:12 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/02/13 00:53:13 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/02/13 00:53:15 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/02/13 00:53:16 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2020/02/13 00:53:25 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/02/13 00:54:12 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/02/13 00:54:12 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/02/13 00:54:50 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/02/13 00:55:06 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@tqdb22: ~]# 
​```
At this point, the cluster resources can be queried from both nodes.
Node 1:
```
[grid@tqdb21: ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   STABLE
2        ONLINE  ONLINE       tqdb22                   STABLE
3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   STABLE
2        ONLINE  ONLINE       tqdb22                   STABLE
3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   Started,STABLE
2        ONLINE  ONLINE       tqdb22                   Started,STABLE
3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   STABLE
2        ONLINE  ONLINE       tqdb22                   STABLE
3        OFFLINE OFFLINE                               STABLE
ora.cvu
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb21.vip
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[grid@tqdb21: ~]$ 
```
Node 2:
```
[grid@tqdb22: ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   STABLE
2        ONLINE  ONLINE       tqdb22                   STABLE
3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.OCR.dg(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   STABLE
2        ONLINE  ONLINE       tqdb22                   STABLE
3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   Started,STABLE
2        ONLINE  ONLINE       tqdb22                   Started,STABLE
3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   STABLE
2        ONLINE  ONLINE       tqdb22                   STABLE
3        OFFLINE OFFLINE                               STABLE
ora.cvu
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.qosmserver
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.scan1.vip
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb21.vip
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[grid@tqdb22: ~]$ 
[grid@tqdb22: ~]$ asmcmd -p
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304      6144     5228             2048            1590              0             Y  OCR/
ASMCMD [+] > ls -l
State    Type    Rebal  Name
MOUNTED  NORMAL  N      OCR/
ASMCMD [+] > cd OCR
ASMCMD [+OCR] > ls -l
Type      Redund  Striped  Time             Sys  Name
Y    ASM/
PASSWORD  HIGH    COARSE   FEB 13 00:00:00  N    orapwasm => +OCR/ASM/PASSWORD/pwdasm.256.1032223315
PASSWORD  HIGH    COARSE   FEB 13 00:00:00  N    orapwasm_backup => +OCR/ASM/PASSWORD/pwdasm.257.1032223745
Y    tqdb-cluster/
ASMCMD [+OCR] > quit
[grid@tqdb22: ~]$ 
[grid@tqdb22: ~]$ sqlplus / as sysasm 
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Feb 13 01:00:43 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> -- Oracle ASM Attributes Directory                                     
SQL> set pagesize 200                                                       
SQL> set linesize 200                                                       
SQL> col "group" for a30                                                    
SQL> col "attribute" for a50                                                
SQL> col "value" for a50                                                    
SQL> select g.name "group", a.name "attribute", a.value "value"             
2  from v$asm_diskgroup g, v$asm_attribute a                              
3  where g.group_number=a.group_number and a.name not like 'template%';   
group                          attribute                                          value
------------------------------ -------------------------------------------------- --------------------------------------------------
OCR                            idp.type                                           dynamic
OCR                            vam_migration_done                                 true
OCR                            disk_repair_time                                   12.0h
OCR                            phys_meta_replicated                               true
OCR                            failgroup_repair_time                              24.0h
OCR                            thin_provisioned                                   FALSE
OCR                            preferred_read.enabled                             FALSE
OCR                            ate_conversion_done                                true
OCR                            sector_size                                        512
OCR                            logical_sector_size                                512
OCR                            content.type                                       data
OCR                            content.check                                      FALSE
OCR                            au_size                                            4194304
OCR                            appliance._partnering_type                         GENERIC
OCR                            compatible.asm                                     19.0.0.0.0
OCR                            compatible.rdbms                                   10.1.0.0.0
OCR                            cell.smart_scan_capable                            FALSE
OCR                            cell.sparse_dg                                     allnonsparse
OCR                            access_control.enabled                             FALSE
OCR                            access_control.umask                               066
OCR                            content_hardcheck.enabled                          FALSE
OCR                            scrub_async_limit                                  1
OCR                            scrub_metadata.enabled                             TRUE
OCR                            idp.boundary                                       auto
24 rows selected.
SQL> -- ASM disk group information                                          
SQL> set linesize 200;                                                      
SQL> col GROUP_NAME for a20;                                                
SQL> col STATE for a20;                                                     
SQL> SELECT                                                                 
2      name                                     group_name                
3    , sector_size                              sector_size               
4    , block_size                               block_size                
5    , allocation_unit_size                     allocation_unit_size      
6    , state                                    state                     
7    , type                                     type                      
8    , total_mb                                 total_mb                  
9    , (total_mb - free_mb)                     used_mb                   
10    , free_mb                                  free_mb                   
11    , ROUND((1- (free_mb / total_mb))*100, 2)  pct_used                  
12  FROM                                                                   
13      v$asm_diskgroup                                                    
14  ORDER BY                                                               
15      name                                                               
16  ;                                                                      
GROUP_NAME           SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE                TYPE                 TOTAL_MB    USED_MB    FREE_MB   PCT_USED
-------------------- ----------- ---------- -------------------- -------------------- ------------------ ---------- ---------- ---------- ----------
OCR                          512       4096              4194304 MOUNTED              NORMAL                   6144        916       5228      14.91
SQL> -- ASM disk group free space               
SQL> set linesize 200;                          
SQL> col name for a30;                          
SQL> select group_number,                       
2         name,                               
3         state,                              
4         type,                               
5         total_mb,                           
6         free_mb,                            
7         total_mb - free_mb as used_mb       
8    from v$asm_diskgroup;                    
GROUP_NUMBER NAME                           STATE                TYPE                 TOTAL_MB    FREE_MB    USED_MB
------------ ------------------------------ -------------------- ------------------ ---------- ---------- ----------
1 OCR                            MOUNTED              NORMAL                   6144       5228        916
SQL> -- ASM disk usage                                                          
SQL> set linesize 200;                                                           
SQL> set pagesize 200;                                                           
SQL> col disk_group_name for a30;                                                
SQL> col disk_file_path for a30;                                                 
SQL> col disk_file_name for a20;                                                 
SQL> col disk_file_fail_group for a20;                                           
SQL> SELECT                                                                      
2      NVL(a.name, '[CANDIDATE]')                       disk_group_name        
3    , b.path                                           disk_file_path         
4    , b.name                                           disk_file_name         
5    , b.failgroup                                      disk_file_fail_group   
6    , b.total_mb                                       total_mb               
7    , (b.total_mb - b.free_mb)                         used_mb                
8    , ROUND((1- (b.free_mb / b.total_mb))*100, 2)      pct_used               
9  FROM                                                                        
10      v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)    
11  WHERE b.total_mb <> 0                                                       
12  ORDER BY                                                                    
13      a.name, b.name                                                          
14  ;                                                                           
DISK_GROUP_NAME                DISK_FILE_PATH                 DISK_FILE_NAME       DISK_FILE_FAIL_GROUP   TOTAL_MB    USED_MB   PCT_USED
------------------------------ ------------------------------ -------------------- -------------------- ---------- ---------- ----------
OCR                            /dev/asm-ocr1                  OCR_0000             OCR_0000                   2048        300      14.65
OCR                            /dev/asm-ocr2                  OCR_0001             OCR_0001                   2048        312      15.23
OCR                            /dev/asm-ocr3                  OCR_0002             OCR_0002                   2048        304      14.84
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[grid@tqdb22: ~]$ 
```
  • 19c RAC GRID install 35

  • 19c RAC GRID install 36: click OK, then Next. This error is caused by SCAN name resolution and can be ignored.

  • 19c RAC GRID install 37

  • 19c RAC GRID install 38: Yes

  • 19c RAC GRID install 39: Close. The cluster installation is complete.

3.2 Disable AMM for the ASM Instances

On Linux, AMM must be disabled before HugePages can be enabled.

# su - grid
$ sqlplus / as sysasm
> alter system set sga_max_size=1088M scope=spfile sid='*'; 
> alter system set sga_target=1088M scope=spfile sid='*'; 
> alter system set pga_aggregate_target=1024M scope=spfile sid='*';
> alter system set memory_target=0 scope=spfile sid='*';
> alter system set memory_max_target=0 scope=spfile sid='*'; 
> alter system reset memory_max_target scope=spfile sid='*';
> alter system set processes=300 scope=spfile;

Restart HAS (as the root user) for the changes to take effect:

# crsctl stop has 
# crsctl start has
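Once AMM is off and the SGA is fixed, the HugePages pool can be sized from the SGA values set above. A minimal sizing sketch (an illustration, not an Oracle-supplied script; it assumes the default 2048 kB huge page size, which should be verified in /proc/meminfo):

```shell
# Hypothetical sizing helper for vm.nr_hugepages, based on the 1088M ASM SGA.
# Assumes 2048 kB huge pages: grep Hugepagesize /proc/meminfo
sga_mb=1088
hugepagesize_kb=2048
# Round up: pages = ceil(sga_kb / hugepagesize_kb)
nr_hugepages=$(( (sga_mb * 1024 + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "vm.nr_hugepages = $nr_hugepages"
```

The result (544 pages for this ASM SGA alone) would go into /etc/sysctl.conf as vm.nr_hugepages, sized to cover every SGA on the host.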

Session log:

[grid@tqdb21: ~]$ sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Feb 13 02:52:56 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> 
SQL> alter system set sga_max_size=1088M scope=spfile sid='*'; 
System altered.
SQL> alter system set sga_target=1088M scope=spfile sid='*'; 
System altered.
SQL> alter system set pga_aggregate_target=1024M scope=spfile sid='*';
System altered.
SQL> 
SQL> alter system set memory_target=0 scope=spfile sid='*';
System altered.
SQL> alter system set memory_max_target=0 scope=spfile sid='*'; 
System altered.
SQL> alter system reset memory_max_target scope=spfile sid='*';
System altered.
SQL> alter system set processes=300 scope=spfile;
System altered.
SQL> 
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[grid@tqdb21: ~]$ 
[root@tqdb21: ~]# crsctl stop has 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'tqdb21'
CRS-2673: Attempting to stop 'ora.crsd' on 'tqdb21'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'tqdb21'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.cvu' on 'tqdb21'
CRS-33673: Attempting to stop resource group 'ora.asmgroup' on server 'tqdb21'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.chad' on 'tqdb21'
CRS-2677: Stop of 'ora.OCR.dg' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'tqdb21'
CRS-2677: Stop of 'ora.asm' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'tqdb21'
CRS-2677: Stop of 'ora.cvu' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'tqdb21'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.tqdb21.vip' on 'tqdb21'
CRS-2677: Stop of 'ora.scan1.vip' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.tqdb21.vip' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.chad' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on 'tqdb21'
CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'tqdb21' succeeded
CRS-33677: Stop of resource group 'ora.asmgroup' on server 'tqdb21' succeeded.
CRS-2672: Attempting to start 'ora.qosmserver' on 'tqdb22'
CRS-2672: Attempting to start 'ora.scan1.vip' on 'tqdb22'
CRS-2672: Attempting to start 'ora.cvu' on 'tqdb22'
CRS-2672: Attempting to start 'ora.tqdb21.vip' on 'tqdb22'
CRS-2676: Start of 'ora.cvu' on 'tqdb22' succeeded
CRS-2676: Start of 'ora.tqdb21.vip' on 'tqdb22' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'tqdb22' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'tqdb22'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'tqdb22' succeeded
CRS-2676: Start of 'ora.qosmserver' on 'tqdb22' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'tqdb21'
CRS-2677: Stop of 'ora.ons' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'tqdb21'
CRS-2677: Stop of 'ora.net1.network' on 'tqdb21' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'tqdb21' has completed
CRS-2677: Stop of 'ora.crsd' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.crf' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'tqdb21'
CRS-2677: Stop of 'ora.crf' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.asm' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'tqdb21'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.evmd' on 'tqdb21'
CRS-2677: Stop of 'ora.ctssd' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.evmd' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'tqdb21'
CRS-2677: Stop of 'ora.cssd' on 'tqdb21' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'tqdb21'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'tqdb21'
CRS-2677: Stop of 'ora.gipcd' on 'tqdb21' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'tqdb21' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'tqdb21' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@tqdb21: ~]# 
[root@tqdb21: ~]# crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[root@tqdb21: ~]# 
[root@tqdb21: ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
ora.chad
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
ora.net1.network
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
ora.ons
ONLINE  ONLINE       tqdb21                   STABLE
ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   STABLE
2        ONLINE  ONLINE       tqdb22                   STABLE
3        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
1        ONLINE  ONLINE       tqdb22                   STABLE
ora.OCR.dg(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   STABLE
2        ONLINE  ONLINE       tqdb22                   STABLE
3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   Started,STABLE
2        ONLINE  ONLINE       tqdb22                   Started,STABLE
3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1        ONLINE  ONLINE       tqdb21                   STABLE
2        ONLINE  ONLINE       tqdb22                   STABLE
3        OFFLINE OFFLINE                               STABLE
ora.cvu
1        ONLINE  ONLINE       tqdb22                   STABLE
ora.qosmserver
1        ONLINE  ONLINE       tqdb22                   STABLE
ora.scan1.vip
1        ONLINE  ONLINE       tqdb22                   STABLE
ora.tqdb21.vip
1        ONLINE  ONLINE       tqdb21                   STABLE
ora.tqdb22.vip
1        ONLINE  ONLINE       tqdb22                   STABLE
--------------------------------------------------------------------------------
[root@tqdb21: ~]# 

3.3 DB Installation and Configuration

Unzip the software as the oracle user:

[oracle@tqdb21: /Software]$ ll
total 5922412
drwxr-xr-x 2 root   root             47 Feb 12 18:53 DB RU 19.6.0.0.200114
drwxr-xr-x 2 root   root             47 Feb 12 18:53 GI RU 19.6.0.0.200114
-rwx------ 1 oracle oinstall 3059705302 Feb 12 18:13 LINUX.X64_193000_db_home.zip
-rwx------ 1 grid   oinstall 2889184573 Feb 12 18:13 LINUX.X64_193000_grid_home.zip
-rwx------ 1 root   root      115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[oracle@tqdb21: /Software]$ echo $ORACLE_HOME
/u01/app/oracle/product/19c/dbhome
[oracle@tqdb21: /Software]$ unzip LINUX.X64_193000_db_home.zip -d $ORACLE_HOME
[oracle@tqdb21: /Software]$ 

Log in as the oracle user and run the graphical installer (select RAC; install the database software only):

[root@tqdb21: ~]# xhost +
access control disabled, clients can connect from any host
[root@tqdb21: ~]# xdpyinfo | head
name of display:    :0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# su - oracle
Last login: Thu Feb 13 17:56:33 CST 2020 on pts/7
[oracle@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[oracle@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ cd $ORACLE_HOME
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome]$ ./runInstaller 

DB installation screenshots:

  • 19c RAC DB install 01

  • 19c RAC DB install 02

  • 19c RAC DB install 03

  • 19c RAC DB install 04

  • 19c RAC DB install 05: SSH equivalency

  • 19c RAC DB install 06: SSH equivalency setup complete

  • 19c RAC DB install 07: SSH equivalency check passed

  • 19c RAC DB install 08

  • 19c RAC DB install 09

  • 19c RAC DB install 10

  • 19c RAC DB install 11

  • 19c RAC DB install 12

  • 19c RAC DB install 13

  • 19c RAC DB install 14: Ignore All

  • 19c RAC DB install 15: Yes

  • 19c RAC DB install 16

  • 19c RAC DB install 17

  • 19c RAC DB install 18: run the 2 root scripts on each node in turn.
    ==Click OK after the scripts finish.==

Run the root.sh script.

Node 1:
```
[root@tqdb21: ~]# /u01/app/oracle/product/19c/dbhome/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/app/oracle/product/19c/dbhome
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@tqdb21: ~]# 
```
Node 2:
```
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /u01/app/oracle/product/19c/dbhome
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@tqdb22: ~]# 
```
  • 19c RAC DB install 19: DB software installation complete

3.4 Upgrade: Apply RU (Release Update) Patches to GI and DB

Patch link: Assistant: Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases (Doc ID 2118136.2)

| Description          | Database Update | GI Update | Windows Bundle Patch |
|----------------------|-----------------|-----------|----------------------|
| JAN2020 (19.6.0.0.0) | 30557433        | 30501910  | 30445947             |
| OCT2019 (19.5.0.0.0) | 30125133        | 30116789  | 30151705             |
| JUL2019 (19.4.0.0.0) | 29834717        | 29708769  | NA                   |
| APR2019 (19.3.0.0.0) | 29517242        | 29517302  | NA                   |

OPatch download: https://updates.oracle.com/download/6880880.html

  • OPatch Patch6880880_OPatch19.0.0.0.0

See: https://oracleblog.org/study-note/apply-patch-dpbp-170418/
The steps are:
1. Upgrade OPatch to the latest version. Note: before OPatch 12.2.0.1.5, running opatchauto required an -ocmrf [ocm response file] parameter; from that version onward the response-file parameter is no longer needed. Also, the 170418 DPBP requires OPatch 12.2.0.1.7 or later.
2. [GRID_HOME]/OPatch/opatchauto apply [UNZIPPED_PATCH_LOCATION]/25433352. Note: this command must be run on each node in turn (not in parallel). While it runs it brings down CRS and the database, and it patches both the grid home and the oracle home. Patching node by node also reduces downtime.
3. datapatch -verbose. Note: although the rolling approach above reduces downtime, some downtime is still needed, namely the time it takes datapatch to run. This step upgrades the data dictionary of the whole database, so it only needs to be run on one node. Note that in CDB mode you must open all PDBs with `alter pluggable database all open` before running datapatch.
Document: Datapatch: Database 12c or later Post Patch SQL Automation (Doc ID 1585822.1)
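The PDB precheck described in step 3 can be sketched by parsing saved `show pdbs` output; the sample text and PDB name below are illustrative, not from this cluster:

```shell
# Sketch: list PDBs that are not OPEN READ WRITE and so must be opened
# before datapatch runs (PDB$SEED is always READ ONLY and can be ignored).
show_pdbs='         2 PDB$SEED                       READ ONLY  NO
         3 TQPDB1                         MOUNTED'
not_rw=$(printf '%s\n' "$show_pdbs" | awk '$3 != "READ" || $4 != "WRITE" {print $2}')
echo "$not_rw"
```

Here the sketch flags the hypothetical TQPDB1 (MOUNTED), which would need `alter pluggable database all open` before datapatch.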
```
-- Run on one node only.
-- Load the SQL portion into the database from any one node:
oracle$ cd $ORACLE_HOME/OPatch
-- For a PDB database, make sure every PDB is in READ WRITE state first:
show pdbs;
oracle$ ./datapatch -verbose
-- Restart the database:
[root@tqdb21: ~]# srvctl stop database -db tqdb
[root@tqdb21: ~]# srvctl start database -db tqdb
-- Check the applied patches:
19:29:25 sys@TQDB(tqdb21)> set linesize 300;
19:29:34 sys@TQDB(tqdb21)> col TARGET_BUILD_TIMESTAMP for a10;
19:29:34 sys@TQDB(tqdb21)> col SOURCE_BUILD_TIMESTAMP for a20;
19:29:34 sys@TQDB(tqdb21)> col SOURCE_BUILD_DESCRIPTION for a20;
19:29:34 sys@TQDB(tqdb21)> col TARGET_VERSIONT for a20;
19:29:34 sys@TQDB(tqdb21)> col TARGET_BUILD_DESCRIPTION for a20;
19:29:34 sys@TQDB(tqdb21)> 
19:29:34 sys@TQDB(tqdb21)> select install_id, PATCH_ID,PATCH_UID,ACTION,STATUS, DESCRIPTION, SOURCE_VERSION,SOURCE_BUILD_DESCRIPTION,SOURCE_BUILD_TIMESTAMP, TARGET_VERSION, TARGET_BUILD_DESCRIPTION, to_char(TARGET_BUILD_TIMESTAMP, 'yyyy-mm-dd hh24:mi:ss') from dba_registry_sqlpatch;
INSTALL_ID   PATCH_ID  PATCH_UID ACTION          STATUS          DESCRIPTION                                                  SOURCE_VERSION  SOURCE_BUILD_DESCRIP SOURCE_BUILD_TIMESTA TARGET_VERSION  TARGET_BUILD_DESCRIP TO_CHAR(TARGET_BUIL
---------- ---------- ---------- --------------- --------------- ------------------------------------------------------------ --------------- -------------------- -------------------- --------------- -------------------- -------------------
1   30557433   23305305 APPLY           SUCCESS         Database Release Update : 19.6.0.0.200114 (30557433)         19.1.0.0.0      Feature Release                           19.6.0.0.0      Release_Update       2019-12-17 15:50:04
19:29:36 sys@TQDB(tqdb21)> 
```
Note: here only the database software has been installed and no database has been created yet, so there is no data dictionary to upgrade. (No database, no data dictionary.)
4. After patching, it is recommended to run orachk as a check.

3.4.1 Update OPatch

==Note: update the OPatch version for both the grid and database homes, on both nodes.==

-- Update OPatch to the current latest version, 12.2.0.1.19
-- Run on both nodes.
-- p6880880_190000_Linux-x86-64.zip
-- Unzip into the grid user's $ORACLE_HOME (/u01/app/19c/grid)
grid$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
Enter: A (replace all)
grid$ opatch version      ==> check the OPatch version
-- Unzip into the oracle user's $ORACLE_HOME (/u01/app/oracle/product/19c/dbhome)
oracle$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
Enter: A (replace all)
oracle$ opatch version      ==> check the OPatch version
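To confirm the replacement on each home without eyeballing the output, the version string can be extracted from `opatch version`. A small sketch (the sample text is embedded; in practice it would be captured with `$ORACLE_HOME/OPatch/opatch version`):

```shell
# Sketch: pull the version number out of `opatch version` output.
sample='OPatch Version: 12.2.0.1.19
OPatch succeeded.'
version=$(printf '%s\n' "$sample" | awk '/OPatch Version:/ {print $3}')
echo "$version"   # the updated OPatch should report 12.2.0.1.19
```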

Session log:

1. Check the current OPatch version for grid and oracle on both nodes.
```
-- Check: node 1, current OPatch version for grid and oracle
[grid@tqdb21: ~]$ opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded.
[grid@tqdb21: ~]$ 
[grid@tqdb21: ~]$ su - oracle
Password: 
Last login: Thu Feb 13 19:21:21 CST 2020 on pts/7
[oracle@tqdb21: ~]$ opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded.
[oracle@tqdb21: ~]$ 
-- Check: node 2, current OPatch version for grid and oracle
[grid@tqdb22: ~]$ opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded.
[grid@tqdb22: ~]$ su - oracle
Password: 
Last login: Thu Feb 13 18:55:38 CST 2020
[oracle@tqdb22: ~]$ opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded.
[oracle@tqdb22: ~]$ 
```
-- 2. On both nodes: back up the current OPatch directory of the grid and oracle homes
```
-- Node 1
[root@tqdb21: ~]# mv /u01/app/19c/grid/OPatch/ /u01/app/19c/grid/OPatch.bak
[root@tqdb21: ~]# mv /u01/app/oracle/product/19c/dbhome/OPatch/ /u01/app/oracle/product/19c/dbhome/OPatch.bak
[root@tqdb21: ~]# mkdir /u01/app/19c/grid/OPatch/
[root@tqdb21: ~]# chown grid:oinstall /u01/app/19c/grid/OPatch/
[root@tqdb21: ~]# mkdir /u01/app/oracle/product/19c/dbhome/OPatch/
[root@tqdb21: ~]# chown oracle:oinstall /u01/app/oracle/product/19c/dbhome/OPatch/
-- Node 2
[root@tqdb22: ~]# mv /u01/app/19c/grid/OPatch/ /u01/app/19c/grid/OPatch.bak
[root@tqdb22: ~]# mv /u01/app/oracle/product/19c/dbhome/OPatch/ /u01/app/oracle/product/19c/dbhome/OPatch.bak
[root@tqdb22: ~]# mkdir /u01/app/19c/grid/OPatch/
[root@tqdb22: ~]# chown grid:oinstall /u01/app/19c/grid/OPatch/
[root@tqdb22: ~]# mkdir /u01/app/oracle/product/19c/dbhome/OPatch/
[root@tqdb22: ~]# chown oracle:oinstall /u01/app/oracle/product/19c/dbhome/OPatch/
```
-- 3. Replace OPatch in the grid home on both nodes
```
-- Node 1
[root@tqdb21: /Software]# ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 root root 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[root@tqdb21: /Software]# 
[root@tqdb21: /Software]# chown grid:oinstall p6880880_190000_Linux-x86-64.zip 
[root@tqdb21: /Software]# ll p6880880_190000_Linux-x86-64.zip                  
-rwx------ 1 grid oinstall 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[root@tqdb21: /Software]# 
[root@tqdb21: /Software]# su - grid
Last login: Fri Feb 14 00:35:39 CST 2020
[grid@tqdb21: ~]$ cd /Software/
[grid@tqdb21: /Software]$ ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[grid@tqdb21: /Software]$ echo $ORACLE_HOME
/u01/app/19c/grid
[grid@tqdb21: /Software]$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[grid@tqdb21: /Software]$ du -sh $ORACLE_HOME/OPatch
252M    /u01/app/19c/grid/OPatch
[grid@tqdb21: /Software]$ opatch version
OPatch Version: 12.2.0.1.19
OPatch succeeded.
[grid@tqdb21: /Software]$ 
-- Node 2
[root@tqdb22: /Software]# ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 root root 115653541 Feb 14 00:40 p6880880_190000_Linux-x86-64.zip
[root@tqdb22: /Software]# chown grid:oinstall p6880880_190000_Linux-x86-64.zip 
[root@tqdb22: /Software]# ll p6880880_190000_Linux-x86-64.zip                  
-rwx------ 1 grid oinstall 115653541 Feb 14 00:40 p6880880_190000_Linux-x86-64.zip
[root@tqdb22: /Software]# su - grid
Last login: Fri Feb 14 01:01:19 CST 2020
[grid@tqdb22: ~]$ cd /Software/
[grid@tqdb22: /Software]$ echo $ORACLE_HOME
/u01/app/19c/grid
[grid@tqdb22: /Software]$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[grid@tqdb22: /Software]$ du -sh $ORACLE_HOME/OPatch
252M    /u01/app/19c/grid/OPatch
[grid@tqdb22: /Software]$ opatch version
OPatch Version: 12.2.0.1.19
OPatch succeeded.
[grid@tqdb22: /Software]$ 
```
-- 4. Replace OPatch in the oracle home on both nodes
```
-- Node 1
[root@tqdb21: /Software]# ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[root@tqdb21: /Software]# chown oracle:oinstall p6880880_190000_Linux-x86-64.zip 
[root@tqdb21: /Software]# ll p6880880_190000_Linux-x86-64.zip                    
-rwx------ 1 oracle oinstall 115653541 Feb 12 18:53 p6880880_190000_Linux-x86-64.zip
[root@tqdb21: /Software]# su - oracle
Last login: Fri Feb 14 00:28:55 CST 2020 on pts/7
[oracle@tqdb21: ~]$ cd /Software/
[oracle@tqdb21: /Software]$ echo $ORACLE_HOME
/u01/app/oracle/product/19c/dbhome
[oracle@tqdb21: /Software]$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@tqdb21: /Software]$ du -sh $ORACLE_HOME/OPatch
252M    /u01/app/oracle/product/19c/dbhome/OPatch
[oracle@tqdb21: /Software]$ opatch version
OPatch Version: 12.2.0.1.19
OPatch succeeded.
[oracle@tqdb21: /Software]$ 
-- Node 2
[root@tqdb22: /Software]# ll p6880880_190000_Linux-x86-64.zip
total 112944
-rwx------ 1 grid oinstall 115653541 Feb 14 00:40 p6880880_190000_Linux-x86-64.zip
[root@tqdb22: /Software]# chown oracle:oinstall p6880880_190000_Linux-x86-64.zip 
[root@tqdb22: /Software]# ll p6880880_190000_Linux-x86-64.zip 
-rwx------ 1 oracle oinstall 115653541 Feb 14 00:40 p6880880_190000_Linux-x86-64.zip
[root@tqdb22: /Software]# su - oracle
Last login: Fri Feb 14 00:55:36 CST 2020 on pts/1
[oracle@tqdb22: ~]$ cd /Software/
[oracle@tqdb22: /Software]$ echo $ORACLE_HOME
/u01/app/oracle/product/19c/dbhome
[oracle@tqdb22: /Software]$ unzip p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME
[oracle@tqdb22: /Software]$ du -sh $ORACLE_HOME/OPatch
252M    /u01/app/oracle/product/19c/dbhome/OPatch
[oracle@tqdb22: /Software]$ opatch version
OPatch Version: 12.2.0.1.19
OPatch succeeded.
[oracle@tqdb22: /Software]$ 
```

3.4.2 Apply the GI RU (Release Update) Patch

Notes (both nodes must be upgraded):

(1) The upgrade process stops and starts the cluster automatically.

(2) Upgrade grid on node 1 first, then grid on node 2.

-- Run on both nodes
-- 1. Unzip the GI RU patch as the grid user
root# cd /Software/19.6.0.0.0/Patch_30501910_GI_RU
root# chown -R grid:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU
root# su - grid
grid$ cd /Software/19.6.0.0.0/Patch_30501910_GI_RU
grid$ unzip p30501910_190000_Linux-x86-64.zip 
-- 2. As root, run a pre-install analysis of the RU with `-analyze` to test compatibility (must be run as root, otherwise it errors out)
-- (the grid user's $ORACLE_HOME is /u01/app/19c/grid)
root# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid -analyze
-- 3. As root, apply the GI RU
root# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid
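After `opatchauto` completes on a node, the patch inventory can be checked non-interactively. A sketch that greps a patch ID out of `opatch lspatches` output (the sample line is the DB RU entry shown earlier in this document; each component of the GI RU combo could be checked the same way):

```shell
# Sketch: verify a patch ID appears in `opatch lspatches` output.
# In practice: lspatches=$("$ORACLE_HOME"/OPatch/opatch lspatches)
lspatches='30557433;Database Release Update : 19.6.0.0.200114 (30557433)'
patch_id=30557433
if printf '%s\n' "$lspatches" | grep -q "^${patch_id};"; then
  status=applied
else
  status=missing
fi
echo "patch ${patch_id}: ${status}"
```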

Node 1, session log:

-- Node 1: 1. unzip the GI RU patch as the grid user
[root@tqdb21: /Software/19.6.0.0.0]# cd Patch_30501910_GI_RU
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# cd
[root@tqdb21: ~]# cd /Software/19.6.0.0.0/Patch_30501910_GI_RU
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 root root 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# 
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# chown -R grid:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
[root@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# su - grid
Last login: Fri Feb 14 01:38:22 CST 2020 on pts/0
[grid@tqdb21: ~]$ cd /Software/19.6.0.0.0/Patch_30501910_GI_RU
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ ll
total 2110332
-rwx------ 1 grid oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ 
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ unzip p30501910_190000_Linux-x86-64.zip 
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ ll
total 2110640
drwxr-x--- 7 grid oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid oinstall     314753 Jan 15 03:57 PatchSearch.xml
[grid@tqdb21: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ 
-- Node 1: list the current patches
[root@tqdb21: ~]# su - grid
Last login: Fri Feb 14 01:35:53 CST 2020
[grid@tqdb21: ~]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)
OPatch succeeded.
[grid@tqdb21: ~]$ 
-- Node 1: 2. As root, dry-run the RU with `-analyze` to verify applicability (must be run as root; otherwise it errors out)
```
[root@tqdb21: ~]# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid -analyze
OPatchauto session is initiated at Fri Feb 14 02:22:26 2020
System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-14_02-22-31AM.log.
Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2020-02-14_02-22-46AM.log
The id for this session is QAVQ
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:tqdb21
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-23-11AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-23-11AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-23-11AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-23-11AM_1.log
OPatchauto session completed at Fri Feb 14 02:23:28 2020
Time taken to complete the session 1 minute, 2 seconds
[root@tqdb21: ~]# 
```
-- Node 1: 3. As root, apply the GI RU
```
[root@tqdb21: ~]# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid
OPatchauto session is initiated at Fri Feb 14 02:30:04 2020
System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-14_02-30-09AM.log.
Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2020-02-14_02-30-21AM.log
The id for this session is A77D
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid
Bringing down CRS service on home /u01/app/19c/grid
CRS service brought down successfully on home /u01/app/19c/grid
Start applying binary patch on home /u01/app/19c/grid
Binary patch applied successfully on home /u01/app/19c/grid
Starting CRS service on home /u01/app/19c/grid
CRS service started successfully on home /u01/app/19c/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:tqdb21
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0
Summary:
==Following patches were SUCCESSFULLY applied:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-33-49AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-33-49AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-33-49AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_02-33-49AM_1.log
OPatchauto session completed at Fri Feb 14 02:42:10 2020
Time taken to complete the session 12 minutes, 6 seconds
[root@tqdb21: ~]# 
```
-- Node 1: patches after the GI RU; the sqlplus banner now shows `Version 19.6.0.0.0`
[grid@tqdb21: ~]$ opatch lspatches
30655595;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30655595)
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489632;ACFS RELEASE UPDATE 19.6.0.0.0 (30489632)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)
OPatch succeeded.
[grid@tqdb21: ~]$ 
[grid@tqdb21: ~]$ 
[grid@tqdb21: ~]$ sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 02:51:10 2020
Version 19.6.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[grid@tqdb21: ~]$ 

Node 2, session transcript:

-- Node 2: 1. As grid, unzip the GI RU patch archive
[root@tqdb22: /Software/19.6.0.0.0]# cd Patch_30501910_GI_RU/
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 root root 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# chown -R grid:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# su - grid
Last login: Fri Feb 14 03:01:47 CST 2020
[grid@tqdb22: ~]$ cd /Software/19.6.0.0.0/Patch_30501910_GI_RU/
[grid@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ ll p30501910_190000_Linux-x86-64.zip 
-rwx------ 1 grid oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
[grid@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ unzip p30501910_190000_Linux-x86-64.zip 
[grid@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ ll
total 2110640
drwxr-x--- 7 grid oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid oinstall     314753 Jan 15 03:57 PatchSearch.xml
[grid@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]$ 
-- Node 2: current patches; the sqlplus banner still shows `Version 19.3.0.0.0`
[grid@tqdb22: ~]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)
OPatch succeeded.
[grid@tqdb22: ~]$ sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 03:06:11 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[grid@tqdb22: ~]$ 
-- Node 2: 2. As root, dry-run the RU with `-analyze` to verify applicability (must be run as root; otherwise it errors out)
```
[root@tqdb22: ~]# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid -analyze
OPatchauto session is initiated at Fri Feb 14 03:08:44 2020
System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-14_03-08-49AM.log.
Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2020-02-14_03-09-03AM.log
The id for this session is GCM2
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:tqdb22
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-09-19AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-09-19AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-09-19AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-09-19AM_1.log
OPatchauto session completed at Fri Feb 14 03:09:36 2020
Time taken to complete the session 0 minute, 52 seconds
[root@tqdb22: ~]# 
```
-- Node 2: 3. As root, apply the GI RU
```
[root@tqdb22: ~]# /u01/app/19c/grid/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/19c/grid
OPatchauto session is initiated at Fri Feb 14 03:10:49 2020
System initialization log file is /u01/app/19c/grid/cfgtoollogs/opatchautodb/systemconfig2020-02-14_03-10-55AM.log.
Session log file is /u01/app/19c/grid/cfgtoollogs/opatchauto/opatchauto2020-02-14_03-11-07AM.log
The id for this session is S64Q
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19c/grid
Patch applicability verified successfully on home /u01/app/19c/grid
Bringing down CRS service on home /u01/app/19c/grid
CRS service brought down successfully on home /u01/app/19c/grid
Start applying binary patch on home /u01/app/19c/grid
Binary patch applied successfully on home /u01/app/19c/grid
Starting CRS service on home /u01/app/19c/grid
CRS service started successfully on home /u01/app/19c/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:tqdb22
CRS Home:/u01/app/19c/grid
Version:19.0.0.0.0
Summary:
==Following patches were SUCCESSFULLY applied:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-14-53AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-14-53AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-14-53AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_03-14-53AM_1.log
OPatchauto session completed at Fri Feb 14 03:24:56 2020
Time taken to complete the session 14 minutes, 7 seconds
[root@tqdb22: ~]# 
```
-- Node 2: patches after the GI RU; the sqlplus banner now shows `Version 19.6.0.0.0`
[grid@tqdb22: ~]$ opatch lspatches
30655595;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30655595)
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489632;ACFS RELEASE UPDATE 19.6.0.0.0 (30489632)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)
OPatch succeeded.
[grid@tqdb22: ~]$ 
[grid@tqdb22: ~]$ sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 03:43:16 2020
Version 19.6.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
[grid@tqdb22: ~]$ 

3.4.3 Apply the DB RU (RELEASE UPDATE) patch

Patch 30501910: GI RELEASE UPDATE 19.6.0.0.0 (p30501910_190000_Linux-x86-64.zip)

Note: the GI RU bundle includes the DB RU, so the same patch is used to upgrade the database home in a RAC environment.

Notes (perform on both nodes):

(1) opatchauto stops and restarts the affected services automatically.

(2) Patch the database home on node 1 first, then on node 2.

-- Run on both nodes
-- 1. Change ownership of the previously unzipped GI RU directory (`/Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/`) to the oracle user
root# chown -R oracle:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/
-- 2. As root, dry-run the RU with `-analyze` to verify applicability
-- (the oracle user's $ORACLE_HOME is /u01/app/oracle/product/19c/dbhome)
root# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -analyze
-- 3. As root, apply the database RU
root# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome
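Only two of the four bundle sub-patches apply to the database home; opatchauto skips ACFS (30489632) and TOMCAT (30655595) for target type "rac_database", as the analyze summaries below show. A minimal sketch of the end-state check for the DB home, with `sample` standing in for a live `opatch lspatches` run:

```shell
# Sketch: after patching, the database home should report exactly the two
# applicable sub-patches, OCW (30489227) and DB RU (30557433); ACFS and
# TOMCAT are GI-only. $sample stands in for live "opatch lspatches" output.
sample='30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)'

ok=1
for p in 30489227 30557433; do
    if ! printf '%s\n' "$sample" | grep -q "^$p;"; then
        echo "sub-patch $p missing from the DB home"
        ok=0
    fi
done
[ "$ok" -eq 1 ] && echo "DB home is at RU 19.6"   # prints: DB home is at RU 19.6
```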

Node 1, session transcript:

-- Node 1: 1. Change ownership of the unzipped GI RU directory (`/Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/`) to the oracle user
[root@tqdb21: ~]# ll /Software/19.6.0.0.0/Patch_30501910_GI_RU/
total 2110640
drwxr-x--- 7 grid oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb21: ~]# 
[root@tqdb21: ~]# chown -R oracle:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/
[root@tqdb21: ~]# ll /Software/19.6.0.0.0/Patch_30501910_GI_RU/                               
total 2110640
drwxr-x--- 7 oracle oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid   oinstall 2160976478 Feb 13 23:55 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid   oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb21: ~]# 
-- Node 1: current patches; the sqlplus banner still shows `Version 19.3.0.0.0`
[root@tqdb21: ~]# su - oracle
Last login: Fri Feb 14 01:15:34 CST 2020 on pts/0
[oracle@tqdb21: ~]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
OPatch succeeded.
[oracle@tqdb21: ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 04:20:52 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> quit
Disconnected
[oracle@tqdb21: ~]$ 
-- Node 1: 2. As root, dry-run the RU with `-analyze` to verify applicability
-- (the oracle user's $ORACLE_HOME is /u01/app/oracle/product/19c/dbhome)
```
[root@tqdb21: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -analyze
OPatchauto session is initiated at Fri Feb 14 04:22:06 2020
System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_04-22-12AM.log.
Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_04-22-47AM.log
The id for this session is RC8U
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome
Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:tqdb21
RAC Home:/u01/app/oracle/product/19c/dbhome
Version:19.0.0.0.0
==Following patches were SKIPPED:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Reason: This patch is not applicable to this specified target type - "rac_database"
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-23-20AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-23-20AM_1.log
OPatchauto session completed at Fri Feb 14 04:23:37 2020
Time taken to complete the session 1 minute, 31 seconds
[root@tqdb21: ~]# 
```
-- Node 1: 3. As root, apply the database RU
```
[root@tqdb21: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome
OPatchauto session is initiated at Fri Feb 14 04:25:02 2020
System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_04-25-07AM.log.
Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_04-25-26AM.log
The id for this session is Y439
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome
Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
Performing prepatch operation on home /u01/app/oracle/product/19c/dbhome
Perpatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome
Start applying binary patch on home /u01/app/oracle/product/19c/dbhome
Binary patch applied successfully on home /u01/app/oracle/product/19c/dbhome
Performing postpatch operation on home /u01/app/oracle/product/19c/dbhome
Postpatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome
Preparing home /u01/app/oracle/product/19c/dbhome after database service restarted
No step execution required.........
Trying to apply SQL patch on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:tqdb21
RAC Home:/u01/app/oracle/product/19c/dbhome
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Reason: This patch is not applicable to this specified target type - "rac_database"
==Following patches were SUCCESSFULLY applied:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-26-01AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-26-01AM_1.log
OPatchauto session completed at Fri Feb 14 04:32:37 2020
Time taken to complete the session 7 minutes, 35 seconds
[root@tqdb21: ~]# 
```
-- Node 1: patches after applying the RU to the database home; the sqlplus banner now shows `Version 19.6.0.0.0`
[oracle@tqdb21: ~]$ opatch lspatches
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)
OPatch succeeded.
[oracle@tqdb21: ~]$ 
[oracle@tqdb21: ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 04:38:35 2020
Version 19.6.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> quit
Disconnected
[oracle@tqdb21: ~]$ 

Node 2, session transcript:

-- Node 2: 1. Change ownership of the unzipped GI RU directory (`/Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/`) to the oracle user
[root@tqdb22: ~]# ll /Software/19.6.0.0.0/Patch_30501910_GI_RU/
total 2110640
drwxr-x--- 7 grid oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb22: ~]# chown -R oracle:oinstall /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/
[root@tqdb22: ~]# ll /Software/19.6.0.0.0/Patch_30501910_GI_RU/                               
total 2110640
drwxr-x--- 7 oracle oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 grid   oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 grid   oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb22: ~]# 
-- Node 2: current patches; the sqlplus banner still shows `Version 19.3.0.0.0`
[oracle@tqdb22: ~]$ opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
OPatch succeeded.
[oracle@tqdb22: ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 04:49:59 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> quit
Disconnected
[oracle@tqdb22: ~]$ 
-- Node 2: 2. As root, dry-run the RU with `-analyze` to verify applicability
-- (the oracle user's $ORACLE_HOME is /u01/app/oracle/product/19c/dbhome)
```
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -analyze
OPatchauto session is initiated at Fri Feb 14 04:51:25 2020
System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_04-51-30AM.log.
Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_04-51-49AM.log
The id for this session is 63EB
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome
Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
OPatchAuto successful.
--------------------------------Summary--------------------------------
Analysis for applying patches has completed successfully:
Host:tqdb22
RAC Home:/u01/app/oracle/product/19c/dbhome
Version:19.0.0.0.0
==Following patches were SKIPPED:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Reason: This patch is not applicable to this specified target type - "rac_database"
==Following patches were SUCCESSFULLY analyzed to be applied:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-52-04AM_1.log
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-52-04AM_1.log
OPatchauto session completed at Fri Feb 14 04:52:19 2020
Time taken to complete the session 0 minute, 54 seconds
[root@tqdb22: ~]# 
```
-- Node 2: 3. As root, apply the database RU
-- (includes troubleshooting: the apply hits an error on the second RAC node, handled below)

Side note: on the second RAC node the DB patch apply failed; the error and the workaround are recorded step by step below.

``` First attempt: the apply fails with a permissions problem
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome
OPatchauto session is initiated at Fri Feb 14 04:53:37 2020
System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_04-53-42AM.log.
Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_04-53-59AM.log
The id for this session is HS71
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome
Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
Performing prepatch operation on home /u01/app/oracle/product/19c/dbhome
Perpatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome
Start applying binary patch on home /u01/app/oracle/product/19c/dbhome
Failed while applying binary patches on home /u01/app/oracle/product/19c/dbhome
Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : tqdb22->/u01/app/oracle/product/19c/dbhome Type[rac]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/19c/dbhome, host: tqdb22.
Command failed:  /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto  apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -target_type rac_database -binary -invPtrLoc /u01/app/oracle/product/19c/dbhome/oraInst.loc -jre /u01/app/oracle/product/19c/dbhome/OPatch/jre -persistresult /u01/app/oracle/product/19c/dbhome/opatchautocfg/db/sessioninfo/sessionresult_tqdb22_rac.ser -analyzedresult /u01/app/oracle/product/19c/dbhome/opatchautocfg/db/sessioninfo/sessionresult_analyze_tqdb22_rac.ser
Command failure output: 
==Following patches FAILED in apply:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_04-54-53AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)' 
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Fri Feb 14 04:55:41 2020
Time taken to complete the session 2 minutes, 4 seconds
opatchauto failed with error code 42
[root@tqdb22: ~]# 
```
``` Grant group write, then retry
[root@tqdb22: ~]# chmod g+w /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@tqdb22: ~]# ll /u01/app/oraInventory/ContentsXML/oui-patch.xml       
-rw-rw-r-- 1 grid oinstall 174 Feb 14 03:19 /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@tqdb22: ~]# 
```
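The failure comes down to oui-patch.xml in the central inventory lacking the group-write bit for oinstall. As a pre-check before future opatchauto runs, the mode-string test can be sketched like this (pure string logic, shown on the mode seen on node 2; `has_group_write` is a hypothetical helper, not part of any Oracle tooling):

```shell
# Sketch: test an "ls -l" mode string (e.g. "-rw-r--r--") for the group-write
# bit -- the bit whose absence caused the "Permission denied" failure above.
has_group_write() {
    case "$1" in
        ?????w*) return 0 ;;   # 6th character is the group-write bit
        *)       return 1 ;;
    esac
}

mode='-rw-r--r--'              # mode seen on node 2 before the fix
if has_group_write "$mode"; then
    echo "oui-patch.xml already group-writable"
else
    echo "run: chmod g+w /u01/app/oraInventory/ContentsXML/oui-patch.xml"
fi
```

Feeding it the first column of `ls -l /u01/app/oraInventory/ContentsXML/oui-patch.xml` on each node would flag the problem before opatchauto fails mid-apply.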
```rac1
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ll /u01/app/oraInventory/ContentsXML/oui-patch.xml
-rw-rw---- 1 grid oinstall 174 Feb 14 04:30 /u01/app/oraInventory/ContentsXML/oui-patch.xml
```
```rac2
[root@tqdb22: ~]# ll /u01/app/oraInventory/ContentsXML/oui-patch.xml
-rw-r--r-- 1 grid oinstall 174 Feb 14 03:19 /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@tqdb22: ~]# chmod g+w /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@tqdb22: ~]# ll /u01/app/oraInventory/ContentsXML/oui-patch.xml       
-rw-rw-r-- 1 grid oinstall 174 Feb 14 03:19 /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@tqdb22: ~]# 
```
```rac2: retry after granting the permission, but it still fails
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome
OPatchauto session is initiated at Fri Feb 14 05:31:28 2020
System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_05-31-33AM.log.
Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_05-31-51AM.log
The id for this session is WEY7
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome
Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
Performing prepatch operation on home /u01/app/oracle/product/19c/dbhome
Perpatch operation completed successfully on home /u01/app/oracle/product/19c/dbhome
Start applying binary patch on home /u01/app/oracle/product/19c/dbhome
Failed while applying binary patches on home /u01/app/oracle/product/19c/dbhome
Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : tqdb22->/u01/app/oracle/product/19c/dbhome Type[rac]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/oracle/product/19c/dbhome, host: tqdb22.
Command failed:  /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto  apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome -target_type rac_database -binary -invPtrLoc /u01/app/oracle/product/19c/dbhome/oraInst.loc -jre /u01/app/oracle/product/19c/dbhome/OPatch/jre -persistresult /u01/app/oracle/product/19c/dbhome/opatchautocfg/db/sessioninfo/sessionresult_tqdb22_rac.ser -analyzedresult /u01/app/oracle/product/19c/dbhome/opatchautocfg/db/sessioninfo/sessionresult_analyze_tqdb22_rac.ser
Command failure output: 
==Following patches FAILED in apply:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Log: /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/core/opatch/opatch2020-02-14_05-32-23AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)' 
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Fri Feb 14 05:36:07 2020
Time taken to complete the session 4 minutes, 39 seconds
opatchauto failed with error code 42
[root@tqdb22: ~]# 
```
-- The workaround that finally succeeded:
```rac2: copy node 1's patch oneoffs directory to node 2, then continue patching
[root@tqdb22: /Software/19.6.0.0.0]# chmod 777 Patch_30501910_GI_RU/
[root@tqdb22: /Software/19.6.0.0.0]# ll
total 0
drwxrwxrwx 3 oracle oinstall 86 Feb 14 05:20 Patch_30501910_GI_RU
drwxr-xr-x 2 oracle oinstall 47 Feb 14 02:59 Patch_30557433_DATABASE_RU
[root@tqdb22: /Software/19.6.0.0.0]# cd Patch_30501910_GI_RU/
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll
total 2110640
drwxr-x--- 7 oracle oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 oracle oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 oracle oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# chmod 777 30501910/
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# ll
total 2110640
drwxrwxrwx 7 oracle oinstall        143 Jan  7 13:22 30501910
-rwx------ 1 oracle oinstall 2160976478 Feb 14 02:58 p30501910_190000_Linux-x86-64.zip
-rw-rw-r-- 1 oracle oinstall     314753 Jan 15 03:57 PatchSearch.xml
[root@tqdb22: /Software/19.6.0.0.0/Patch_30501910_GI_RU]# 
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory]$ cp -r oneoffs/ oneoffs.bak
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory]$ cd oneoffs.bak/
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs.bak]$ ls
29517242  29585399  30489227
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs.bak]$ cd ..
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory]$ cd oneoffs
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ls
29517242  29585399  30489227
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ rm -rf 29517242  29585399  30489227
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ls
​```
```rac1: copy the oneoffs patch directory from node 1 to node 2
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ scp -r * oracle@tqdb22:/u01/app/oracle/product/19c/dbhome/inventory/oneoffs
actions.xml                                                                                                                                                          100%   98KB  35.8MB/s   00:00    
inventory.xml                                                                                                                                                        100%   64KB  28.4MB/s   00:00    
actions.xml                                                                                                                                                          100%  347   532.2KB/s   00:00    
inventory.xml                                                                                                                                                        100%   18KB  17.4MB/s   00:00    
inventory.xml                                                                                                                                                        100%   45KB  32.8MB/s   00:00    
actions.xml                                                                                                                                                          100%   95KB  36.9MB/s   00:00    
inventory.xml                                                                                                                                                        100%  163KB  46.2MB/s   00:00    
actions.xml                                                                                                                                                          100% 1806KB  66.9MB/s   00:00    
[oracle@tqdb21: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ 
​```
​```rac2
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ls
29517242  29585399  30489227  30557433
[oracle@tqdb22: /u01/app/oracle/product/19c/dbhome/inventory/oneoffs]$ ll -th
total 0
drwxr-xr-x 4 oracle oinstall 29 Feb 14 06:04 30557433
drwxr-xr-x 4 oracle oinstall 29 Feb 14 06:04 30489227
drwxr-x--- 4 oracle oinstall 29 Feb 14 06:04 29585399
drwxr-x--- 4 oracle oinstall 29 Feb 14 06:04 29517242
​```
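Before retrying opatchauto, it is worth confirming that the inventory `oneoffs` directories now match on both nodes. A sketch of the comparison, demonstrated here with scratch directories — on the real hosts you would list `$ORACLE_HOME/inventory/oneoffs` locally and over `ssh` on the peer node; the patch IDs are just the ones from this document:

```shell
# Scratch stand-ins for the oneoffs dirs on tqdb21 and tqdb22
a=$(mktemp -d); b=$(mktemp -d)
mkdir "$a"/29517242 "$a"/30557433 "$b"/29517242 "$b"/30557433
# Real-world equivalent:
#   diff <(ls $ORACLE_HOME/inventory/oneoffs) \
#        <(ssh tqdb22 ls $ORACLE_HOME/inventory/oneoffs)
if diff <(ls "$a") <(ls "$b") >/dev/null; then SYNC=yes; else SYNC=no; fi
echo "oneoffs in sync: $SYNC"
rm -rf "$a" "$b"
```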
​```rac2
[root@tqdb22: ~]# /u01/app/oracle/product/19c/dbhome/OPatch/opatchauto apply /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/ -oh /u01/app/oracle/product/19c/dbhome
OPatchauto session is initiated at Fri Feb 14 06:04:27 2020
System initialization log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchautodb/systemconfig2020-02-14_06-04-33AM.log.
Session log file is /u01/app/oracle/product/19c/dbhome/cfgtoollogs/opatchauto/opatchauto2020-02-14_06-04-51AM.log
The id for this session is UY7B
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19c/dbhome
Patch applicability verified successfully on home /u01/app/oracle/product/19c/dbhome
Verifying SQL patch applicability on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
Preparing to bring down database service on home /u01/app/oracle/product/19c/dbhome
No step execution required.........
Preparing home /u01/app/oracle/product/19c/dbhome after database service restarted
No step execution required.........
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:tqdb22
RAC Home:/u01/app/oracle/product/19c/dbhome
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489632
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30655595
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30489227
Reason: This patch is already been applied, so not going to apply again.
Patch: /Software/19.6.0.0.0/Patch_30501910_GI_RU/30501910/30557433
Reason: This patch is already been applied, so not going to apply again.
OPatchauto session completed at Fri Feb 14 06:05:14 2020
Time taken to complete the session 0 minute, 48 seconds
[root@tqdb22: ~]# 
​```
--------------------------------------------------------------------------------
-- Node 2: verify the patches applied after the GI RU; the sqlplus login banner now reports `Version 19.6.0.0.0`
```
[oracle@tqdb22: ~]$ opatch lspatches
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)
OPatch succeeded.
[oracle@tqdb22: ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Feb 14 06:46:12 2020
Version 19.6.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> quit
Disconnected
[oracle@tqdb22: ~]$ 
​```
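To confirm the cluster ends up consistent, the same check can be looped over both nodes. A dry-run sketch (hostnames from this document): by default it only prints the commands; set `DRYRUN=0` with passwordless ssh in place to run them for real.

```shell
# Print the OPatch inventory for every node; dry-run echoes instead of executing
DRYRUN=${DRYRUN:-1}
for h in tqdb21 tqdb22; do
  cmd="ssh $h /u01/app/oracle/product/19c/dbhome/OPatch/opatch lspatches"
  if [ "$DRYRUN" = 1 ]; then echo "would run: $cmd"; else $cmd; fi
done
```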

oracle $ /u01/app/oracle/product/19c/dbhome/OPatch/datapatch -verbose

==Note==: Patching the nodes one after another, as above, reduces downtime, but some downtime is still required — namely the time it takes to run datapatch here. This step upgrades the data dictionary of the whole database, so it only needs to be run on one node. Also note that in CDB mode you must first open all PDBs with `alter pluggable database all open;` before running datapatch.

==Note==: Here only the database software has been installed and no database has been created yet, so there is no data dictionary to upgrade (no database, no data dictionary).
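The datapatch step described above can be sketched as a short wrapper, shown in dry-run mode so nothing Oracle-specific actually executes (paths are the ones used in this document; the PDB step applies only in CDB mode):

```shell
# Post-RU data-dictionary upgrade: run on ONE node only, as the oracle user.
# DRYRUN=1 just prints the commands; set DRYRUN=0 on a real system.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "would run: $*"; else eval "$*"; fi; }

# CDB mode: open all PDBs first so their dictionaries are patched too
run "echo 'alter pluggable database all open;' | sqlplus / as sysdba"
run "/u01/app/oracle/product/19c/dbhome/OPatch/datapatch -verbose"
```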

3.5 Creating the Database

3.5.1 Creating the DATA Disk Group with asmca

In a graphical session as the grid user, create the disk group with asmca:

[root@tqdb21: ~]# xhost +
access control disabled, clients can connect from any host
[root@tqdb21: ~]# xdpyinfo | head
name of display:    192.168.6.21:0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# su - grid
Last login: Fri Feb 14 07:12:53 CST 2020
[grid@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[grid@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[grid@tqdb21: ~]$ asmca

asmca screenshots:

  • 19c RAC asmca create DATA disk group 01

  • 19c RAC asmca create DATA disk group 02

  • 19c RAC asmca create DATA disk group 03

  • 19c RAC asmca create DATA disk group 04

  • 19c RAC asmca create DATA disk group 05

  • 19c RAC asmca create DATA disk group 06 (Add DATA Disk Group)

  • 19c RAC asmca create DATA disk group 07

  • 19c RAC asmca create DATA disk group 08

3.5.2 Creating the Database with dbca

In a graphical session as the oracle user, create the database with dbca:

[root@tqdb21: ~]# xhost +
access control disabled, clients can connect from any host
[root@tqdb21: ~]# xdpyinfo | head
name of display:    :0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    12004000
X.Org version: 1.20.4
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
[root@tqdb21: ~]# su - oracle
Last login: Fri Feb 14 07:45:36 CST 2020 on pts/1
[oracle@tqdb21: ~]$ export DISPLAY=192.168.6.21:0
[oracle@tqdb21: ~]$ echo $DISPLAY
192.168.6.21:0
[oracle@tqdb21: ~]$ dbca

dbca screenshots:

  • 19c RAC dbca create database 01

  • 19c RAC dbca create database 02

  • 19c RAC dbca create database 03

  • 19c RAC dbca create database 04

  • 19c RAC dbca create database 05

  • 19c RAC dbca create database 06

  • 19c RAC dbca create database 07

  • 19c RAC dbca create database 08

  • 19c RAC dbca create database 09

  • 19c RAC dbca create database 10

  • 19c RAC dbca create database 11

  • 19c RAC dbca create database 12

  • 19c RAC dbca create database 13

  • 19c RAC dbca create database 14

  • 19c RAC dbca create database 15

  • 19c RAC dbca create database 16 (SYS password)

  • 19c RAC dbca create database 17

  • 19c RAC dbca create database 18 (Edit Control Files)

  • 19c RAC dbca create database 19 (Redo 200M)

  • 19c RAC dbca create database 20

  • 19c RAC dbca create database 21

  • 19c RAC dbca create database 22 (Yes)

  • 19c RAC dbca create database 23

  • 19c RAC dbca create database 24

  • 19c RAC dbca create database 25

  • 19c RAC dbca create database 26

  • 19c RAC dbca create database 27

  • 19c RAC dbca create database 28

  • 19c RAC dbca create database 29

  • 19c RAC dbca create database 30

  • 19c RAC dbca create database 31

-- Check the patch information
19:29:25 sys@TQDB(tqdb21)> set linesize 300;
19:29:34 sys@TQDB(tqdb21)> col TARGET_BUILD_TIMESTAMP for a10;
19:29:34 sys@TQDB(tqdb21)> col SOURCE_BUILD_TIMESTAMP for a20;
19:29:34 sys@TQDB(tqdb21)> col SOURCE_BUILD_DESCRIPTION for a20;
19:29:34 sys@TQDB(tqdb21)> col TARGET_VERSIONT for a20;
19:29:34 sys@TQDB(tqdb21)> col TARGET_BUILD_DESCRIPTION for a20;
19:29:34 sys@TQDB(tqdb21)> 
19:29:34 sys@TQDB(tqdb21)> select install_id, PATCH_ID,PATCH_UID,ACTION,STATUS, DESCRIPTION, SOURCE_VERSION,SOURCE_BUILD_DESCRIPTION,SOURCE_BUILD_TIMESTAMP, TARGET_VERSION, TARGET_BUILD_DESCRIPTION, to_char(TARGET_BUILD_TIMESTAMP, 'yyyy-mm-dd hh24:mi:ss') from dba_registry_sqlpatch;
INSTALL_ID   PATCH_ID  PATCH_UID ACTION          STATUS          DESCRIPTION                                                  SOURCE_VERSION  SOURCE_BUILD_DESCRIP SOURCE_BUILD_TIMESTA TARGET_VERSION  TARGET_BUILD_DESCRIP TO_CHAR(TARGET_BUIL
---------- ---------- ---------- --------------- --------------- ------------------------------------------------------------ --------------- -------------------- -------------------- --------------- -------------------- -------------------
1   30557433   23305305 APPLY           SUCCESS         Database Release Update : 19.6.0.0.200114 (30557433)         19.1.0.0.0      Feature Release                           19.6.0.0.0      Release_Update       2019-12-17 15:50:04
19:29:36 sys@TQDB(tqdb21)> 
  • 19c RAC sqlplus query patch information

At this point, the Oracle 19c RAC installation and RU upgrade are complete.

Installing and upgrading the RU on a single-instance database is much simpler than in a RAC environment, so those steps are not repeated here.

In the next article, we will build the Oracle 19c MAA high-availability architecture (Oracle 19c RAC + Active Data Guard).

-- The End --