
RAC install error


while running:

 

/u01/app/oracle/product/10.2.0/db_1/root.sh

 

Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
PROT-1: Failed to initialize ocrconfig
Failed to upgrade Oracle Cluster Registry configuration

 

        dd if=/dev/zero of=/dev/rdsk/V1064_vote_01_20m.dbf bs=8192 count=2560
        dd if=/dev/zero of=/dev/rdsk/ocrV1064_100m.ora bs=8192 count=12800
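
PROT-1 here is typically the clusterware owner being unable to open the OCR device (Solution 1 below shows the permission fix). Before re-running root.sh it can be worth confirming that the software owner can actually read the devices; a minimal pre-check sketch, assuming the device names above and an oracle software owner (adjust both for your environment):

        # Hypothetical pre-check: ownership/permissions, plus a test read as the oracle user
        ls -lL /dev/rdsk/ocrV1064_100m.ora /dev/rdsk/V1064_vote_01_20m.dbf
        su - oracle -c "dd if=/dev/rdsk/V1064_vote_01_20m.dbf of=/dev/null bs=8192 count=1"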

 

Solution 1


Failed to upgrade Oracle Cluster Registry configuration
While installing CRS, running ./root.sh on the second node produced the following output; it completed normally on the first node. Any pointers would be greatly appreciated, thanks!
[root@RACtest2 crs]# ./root.sh
WARNING: directory '/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/app/oracle/product' is not owned by root
WARNING: directory '/app/oracle' is not owned by root
WARNING: directory '/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
PROT-1: Failed to initialize ocrconfig
Failed to upgrade Oracle Cluster Registry configuration
Cause of the error:
The permissions on the devices used for the CRS installation are wrong. For example, I use raw devices to hold the OCR and voting disk, so the permissions on those devices and on the files linked to them must be set correctly. Here is my environment:
[root@rac2 oracrs]#
lrwxrwxrwx 1 root root 13 Jan 27 12:49 ocr.crs -> /dev/raw/raw1
lrwxrwxrwx 1 root root 13 Jan 26 13:31 vote.crs -> /dev/raw/raw2
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
Here /dev/sdb1 holds the OCR and /dev/sdb2 holds the voting disk.
[root@rac2 oracrs]# service rawdevices reload
Assigning devices:
           /dev/raw/raw1 -->   /dev/sdb1
/dev/raw/raw1: bound to major 8, minor 17
           /dev/raw/raw2 -->   /dev/sdb2
/dev/raw/raw2: bound to major 8, minor 18
Done
Then run root.sh again and it completes successfully.
[root@rac2 oracrs]# /oracle/app/oracle/product/crs/root.sh
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 priv1 rac1
node 2: rac2 priv2 rac2
clscfg: Arguments check out successfully.
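
Note that raw bindings created with "service rawdevices reload" do not by themselves survive a reboot. A sketch of how the mapping and the ownership shown above might be made persistent on a RHEL 4/5-style system (file locations are the distribution's defaults; verify them on your own system):

        # /etc/sysconfig/rawdevices -- re-applied by the rawdevices init script at boot
        /dev/raw/raw1 /dev/sdb1
        /dev/raw/raw2 /dev/sdb2

        # /etc/rc.local -- restore the ownership and permissions used above after each boot
        chown root:oinstall /dev/raw/raw1 /dev/raw/raw2
        chmod 660 /dev/raw/raw1 /dev/raw/raw2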

 

 

Oracle gave me a patch that allowed us to format the OCR and voting disk. Now the problem may just be that I need to run root.sh on node1, or wipe both clean and start fresh.

Devices formatted; now we need to get the CRS daemons up and running on both nodes.

[root@dr2db2 ~]# dd if=/dev/zero of=/dev/raw/raw2 bs=1048576 count=1000
1000+0 records in
1000+0 records out
[root@dr2db2 ~]# /orahome/app/oracle/product/10.1.2.0.2/CRS/root.sh
WARNING: directory '/orahome/app/oracle/product/10.1.2.0.2' is not owned by root
WARNING: directory '/orahome/app/oracle/product' is not owned by root
WARNING: directory '/orahome/app/oracle' is not owned by root
WARNING: directory '/orahome/app' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/orahome/app/oracle/product/10.1.2.0.2' is not owned by root
WARNING: directory '/orahome/app/oracle/product' is not owned by root
WARNING: directory '/orahome/app/oracle' is not owned by root
WARNING: directory '/orahome/app' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: dr2db2 dr2db2-eth2 dr2db2
node 2: dr2db1 dr2db1-eth2 dr2db1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw6
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
dr2db2
CSS is inactive on these nodes.
dr2db1
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

[root@dr2db2 ~]# ps -ef | grep ora
root 6572 2526 0 13:39 ? 00:00:00 sshd: oracle [priv]
oracle 6574 6572 0 13:39 ? 00:00:00 sshd: oracle@pts/1
oracle 6575 6574 0 13:39 pts/1 00:00:00 -bash
root 14172 2526 0 17:54 ? 00:00:00 sshd: oracle [priv]
oracle 14176 14172 0 17:55 ? 00:00:00 sshd: oracle@pts/2
oracle 14177 14176 0 17:55 pts/2 00:00:00 -bash
root 14682 1 0 17:57 ? 00:00:00 /bin/su -l oracle -c sh -c 'ulimit -c unlimited; cd /orahome/app/oracle/product/10.1.2.0.2/CRS/log/dr2db2/evmd; exec /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/evmd '
root 14686 1 0 17:57 ? 00:00:00 /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/crsd.bin reboot
oracle 14959 14682 0 17:58 ? 00:00:00 /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/evmd.bin
root 15018 14942 0 17:58 ? 00:00:00 /bin/su -l oracle -c /bin/sh -c 'ulimit -c unlimited; cd /orahome/app/oracle/product/10.1.2.0.2/CRS/log/dr2db2/cssd; /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/ocssd || exit $?'
oracle 15021 15018 0 17:58 ? 00:00:00 /bin/sh -c ulimit -c unlimited; cd /orahome/app/oracle/product/10.1.2.0.2/CRS/log/dr2db2/cssd; /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/ocssd || exit $?
oracle 15057 15021 0 17:58 ? 00:00:00 /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/ocssd.bin
root 15996 14206 0 17:59 pts/2 00:00:00 grep ora


crs_* utilities available under the CRS bin directory:
crs_setperm crs_setperm.bin crs_start crs_start.bin crs_stat crs_stat.bin crs_stop crs_stop.bin
[root@dr2db2 ~]# /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
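
CRS-0184 at this stage usually just means crsd is not up yet on this node. A few hedged checks, reusing the CRS home and the log directory layout visible in the ps output above (crsctl check crs is 10gR2 syntax, and the crsd log path assumes the standard 10gR2 layout; adjust if your release differs):

        CRS_HOME=/orahome/app/oracle/product/10.1.2.0.2/CRS
        $CRS_HOME/bin/crsctl check crs                      # overall CSS/CRS/EVM health
        tail -50 $CRS_HOME/log/dr2db2/crsd/crsd.log         # recent crsd messages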


On the next node:

[root@dr2db1 ~]# /orahome/app/oracle/product/10.1.2.0.2/CRS/root.sh
WARNING: directory '/orahome/app/oracle/product/10.1.2.0.2' is not owned by root
WARNING: directory '/orahome/app/oracle/product' is not owned by root
WARNING: directory '/orahome/app/oracle' is not owned by root
WARNING: directory '/orahome/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/orahome/app/oracle/product/10.1.2.0.2' is not owned by root
WARNING: directory '/orahome/app/oracle/product' is not owned by root
WARNING: directory '/orahome/app/oracle' is not owned by root
WARNING: directory '/orahome/app' is not owned by root
/orahome/app/oracle/product/10.1.2.0.2/CRS/bin/crsctl.bin: error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory
Failure initializing entries in /etc/oracle/scls_scr/dr2db1.


I think what's needed is a clean install.
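
Before rebuilding everything, though, the crsctl failure on dr2db1 is just a missing compatibility library: libstdc++.so.5 is provided by the compat-libstdc++-33 package on RHEL/OEL-style systems. A hedged fix sketch (the package name and yum usage are assumptions about the distribution; on x86_64 both the 32-bit and 64-bit variants may be needed):

        # Check for the library, install the compat package if it is absent, then rerun root.sh
        ls /usr/lib/libstdc++.so.5 /usr/lib64/libstdc++.so.5 2>/dev/null
        yum install -y compat-libstdc++-33        # or rpm -ivh compat-libstdc++-33-*.rpm from the install media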

 

 

http://www.puschitz.com/InstallingOracle10gRAC.shtml#CreatingPartitionsForRawDevices

 

 

 

 

 

On node1, run: /opt/ora10g/product/10.2.0/crs_1/root.sh

On node2, run: /opt/ora10g/product/10.2.0/crs_1/root.sh

Errors are usually hit when root.sh is run on the last node, which in our case is of course node2.

Tip: the three most common errors are the following:

A) If you encounter this error:

/opt/ora10g/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries:  libpthread.so.0: cannot open shared object file: No such file or directory

It can be resolved as follows:

===============================

Edit the vipca file:

[root@node2 opt]# vi /opt/ora10g/product/10.2.0/crs_1/bin/vipca

Find the following section:

       #Remove this workaround when the bug 3937317 is fixed

       arch=`uname -m`

       if [ "$arch" = "i686" -o "$arch" = "ia64" ]

       then

            LD_ASSUME_KERNEL=2.4.19

            export LD_ASSUME_KERNEL

       fi

       #End workaround

Add a new line after the fi:

unset LD_ASSUME_KERNEL

Make the same change to the srvctl file:

[root@node2 opt]# vi /opt/ora10g/product/10.2.0/crs_1/bin/srvctl

Find the following lines:

LD_ASSUME_KERNEL=2.4.19

export LD_ASSUME_KERNEL

Likewise, add a new line after them:

unset LD_ASSUME_KERNEL

Save and exit, then rerun root.sh on node2.

Of course, now that we know about this issue, it is best to edit vipca before running root.sh on node2 in the first place.

You actually need to make the same change to $ORACLE_HOME/bin/srvctl as well, otherwise the srvctl command will throw the same error once the database is installed. srvctl is used so often that an error on every invocation would be quite painful. That said, at this point you have only got as far as installing CRS and are still a long way from creating the database, so you can wait until the database has been created and you actually need to manage it before editing that file.
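
A minimal scripted version of the same edit, assuming GNU sed and the CRS home used in this example (it appends the unset immediately after each export, which has the same effect as adding it after the fi; back up the files first):

        # Append "unset LD_ASSUME_KERNEL" after every export of the variable in vipca and srvctl
        for f in /opt/ora10g/product/10.2.0/crs_1/bin/vipca \
                 /opt/ora10g/product/10.2.0/crs_1/bin/srvctl; do
            cp "$f" "$f.bak"
            sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"
        done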

B) If you encounter this error:

The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.

Resolve it as follows:

==============================

Run $CRS_HOME/bin/vipca from a graphical session and manually reconfigure rac1-vip and rac2-vip.

[root@node2 opt]# xhost +

[root@node2 opt]# /opt/ora10g/product/10.2.0/crs_1/bin/vipca

Click Next as prompted.

Click Finish.

vipca then runs the configuration automatically.

When everything has been configured, click Exit to close the window.
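
Once vipca exits, the VIP, GSD and ONS resources should be registered and online; a hedged way to confirm (10g syntax, node names as used in this example; if srvctl still exports LD_ASSUME_KERNEL, apply the fix from example A first):

        /opt/ora10g/product/10.2.0/crs_1/bin/srvctl status nodeapps -n node1
        /opt/ora10g/product/10.2.0/crs_1/bin/srvctl status nodeapps -n node2
        /opt/ora10g/product/10.2.0/crs_1/bin/crs_stat -t     # VIP/GSD/ONS resources should show ONLINE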

C) If you encounter this error:

Error 0(Native: listNetInterfaces:[3])

 [Error 0(Native: listNetInterfaces:[3])]

Resolve it as follows:

===============================

[root@node2 bin]# ./oifcfg iflist

eth1  10.10.17.0

virbr0  192.168.122.0

eth0  192.168.100.0

[root@node2 bin]# ./oifcfg setif -global eth0/192.168.100.0:public

[root@node2 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect

[root@node2 bin]# ./oifcfg getif

eth0  192.168.100.0  global  public

eth1  10.10.10.0  global  cluster_interconnect

Then rerun vipca from the GUI, as shown in example B above.
