This guide shows how to remove a node from an existing 11gR2 Oracle RAC cluster. It is assumed that the node in question is available and is not part of a GNS/Grid Plug and Play cluster; in other words, the database is "Administrator-Managed". It is also assumed that the database software is installed on local (non-shared) storage. This guide uses a 3-node cluster running Oracle Linux 6.3 (x64). The three nodes are "node1", "node2", and "node3"; we will be removing "node3" from the cluster.
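Before making any changes, it can help to confirm the current cluster membership, node status, and pinned state from a node that will remain. A minimal check, assuming the Grid Infrastructure home used throughout this guide ("/u01/app/11.2.0/grid") and run as the Grid software owner:
- [grid@node1 ~]$ /u01/app/11.2.0/grid/bin/olsnodes -s -t
- [grid@node1 ~]$ /u01/app/11.2.0/grid/bin/crsctl check cluster -all
"olsnodes -s -t" lists each node with its status and whether it is pinned, and "crsctl check cluster -all" confirms the Clusterware stack is healthy on every node before you begin.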
Delete Node from Cluster
"Unpin" node
"Unpin" the node – in our case "node3" – from all nodes that are to remain in the cluster; in this case, "node1" and "node2". Specify the node you plan on deleting in the command and do so on each remaining node in the cluster.
- [root@node1 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n node3
- CRS-4667: Node node3 successfully unpinned.
- [root@node2 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n node3
- CRS-4667: Node node3 successfully unpinned.
Remove RAC Database Instance(s)
The node we are removing houses an instance – "zhongwc3" – which is part of a RAC database – "zhongwc".
Remove the "zhongwc3" instance from the "zhongwc" database using "dbca" in "Silent Mode"
- [oracle@node1 ~]$ dbca -silent -deleteInstance -nodeList node3 -gdbName zhongwc -instanceName zhongwc3 -sysDBAUserName sys -sysDBAPassword oracle
- Deleting instance
- 1% complete
- 2% complete
- 6% complete
- 13% complete
- 20% complete
- 26% complete
- 33% complete
- 40% complete
- 46% complete
- 53% complete
- 60% complete
- 66% complete
- Completing instance management.
- 100% complete
- Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/zhongwc.log" for further details.
In this case, we ran the command as the "oracle" user on "node1". Afterwards, the database should have only two redo threads, and the database configuration should show "zhongwc1" and "zhongwc2" as its only instances (an additional check of the redo threads is shown after the configuration output below).
- [oracle@node1 ~]$ sqlplus / as sysdba
- SQL*Plus: Release 11.2.0.3.0 Production on Sat Jan 5 15:20:12 2013
- Copyright (c) 1982, 2011, Oracle. All rights reserved.
- Connected to:
- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
- With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
- Data Mining and Real Application Testing options
- SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
- Session altered.
- SQL> col host_name format a11
- SQL> set line 300
- SQL> select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;
- INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS ACTIVE_ST INSTANCE_ROLE DATABASE_STATUS
- ---------------- ----------- ----------------- ------------------- ------------ --------- ------------------ -----------------
- zhongwc1 node1 11.2.0.3.0 2013-01-05 09:53:24 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
- zhongwc2 node2 11.2.0.3.0 2013-01-04 17:34:40 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
- [oracle@node1 ~]$ srvctl config database -d zhongwc -v
- Database unique name: zhongwc
- Database name: zhongwc
- Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
- Oracle user: oracle
- Spfile: +DATADG/zhongwc/spfilezhongwc.ora
- Domain:
- Start options: open
- Stop options: immediate
- Database role: PRIMARY
- Management policy: AUTOMATIC
- Server pools: zhongwc
- Database instances: zhongwc1,zhongwc2
- Disk Groups: DATADG,FRADG
- Mount point paths:
- Services:
- Type: RAC
- Database is administrator managed
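As an optional extra check, you can confirm that dbca disabled and removed the redo thread that belonged to "zhongwc3". A quick query against the standard V$THREAD view (a sketch; only threads 1 and 2 should remain):
- SQL> select thread#, status, enabled, instance from v$thread;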
Remove RAC Database Software
In this step, the Oracle RAC database software will be removed from the node being deleted. Additionally, the inventories of the remaining nodes will be updated to reflect the removal of that node's Oracle RAC database home.
Any listener running on "node3" will need to be stopped.
- [oracle@node3 ~]$ srvctl disable listener -n node3
- [oracle@node3 ~]$ srvctl stop listener -n node3
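Optionally, verify that the listener on "node3" is stopped before continuing (a suggested check using srvctl):
- [oracle@node3 ~]$ srvctl status listener -n node3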
Update the inventory on "node3" as the "oracle" user, running from the "$ORACLE_HOME/oui/bin" directory:
- [oracle@node3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node3}" -local
- Starting Oracle Universal Installer...
- Checking swap space: must be greater than 500 MB. Actual 3930 MB Passed
- The inventory pointer is located at /etc/oraInst.loc
- The inventory is located at /u01/app/oraInventory
- 'UpdateNodeList' was successful.
Remove the RAC Database Software from "node3"
- [oracle@node3 ~]$ cd $ORACLE_HOME/deinstall
- [oracle@node3 deinstall]$ ./deinstall -local
- Checking for required files and bootstrapping ...
- Please wait ...
- Location of logs /u01/app/oraInventory/logs/
- ############ ORACLE DEINSTALL & DECONFIG TOOL START ############
- ######################### CHECK OPERATION START #########################
- ## [START] Install check configuration ##
- Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/dbhome_1
- Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
- Oracle Base selected for deinstall is: /u01/app/oracle
- Checking for existence of central inventory location /u01/app/oraInventory
- Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
- The following nodes are part of this cluster: node3
- Checking for sufficient temp space availability on node(s) : 'node3'
- ## [END] Install check configuration ##
- Network Configuration check config START
- Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2013-01-05_03-34-48-PM.log
- Network Configuration check config END
- Database Check Configuration START
- Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2013-01-05_03-34-56-PM.log
- Database Check Configuration END
- Enterprise Manager Configuration Assistant START
- EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2013-01-05_03-35-03-PM.log
- Enterprise Manager Configuration Assistant END
- Oracle Configuration Manager check START
- OCM check log file location : /u01/app/oraInventory/logs//ocm_check3913.log
- Oracle Configuration Manager check END
- ######################### CHECK OPERATION END #########################
- ####################### CHECK OPERATION SUMMARY #######################
- Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
- The cluster node(s) on which the Oracle home deinstallation will be performed are:node3
- Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'node3', and the global configuration will be removed.
- Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/dbhome_1
- Inventory Location where the Oracle home registered is: /u01/app/oraInventory
- The option -local will not modify any database configuration for this Oracle home.
- No Enterprise Manager configuration to be updated for any database(s)
- No Enterprise Manager ASM targets to update
- No Enterprise Manager listener targets to migrate
- Checking the config status for CCR
- Oracle Home exists with CCR directory, but CCR is not configured
- CCR check is finished
- Do you want to continue (y - yes, n - no)? [n]: y
- A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-01-05_03-34-20-PM.out'
- Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-01-05_03-34-20-PM.err'
- ######################## CLEAN OPERATION START ########################
- Enterprise Manager Configuration Assistant START
- EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2013-01-05_03-35-03-PM.log
- Updating Enterprise Manager ASM targets (if any)
- Updating Enterprise Manager listener targets (if any)
- Enterprise Manager Configuration Assistant END
- Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2013-01-05_03-35-33-PM.log
- Network Configuration clean config START
- Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2013-01-05_03-35-33-PM.log
- De-configuring Local Net Service Names configuration file...
- Local Net Service Names configuration file de-configured successfully.
- De-configuring backup files...
- Backup files de-configured successfully.
- The network configuration has been cleaned up successfully.
- Network Configuration clean config END
- Oracle Configuration Manager clean START
- OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean3913.log
- Oracle Configuration Manager clean END
- Setting the force flag to false
- Setting the force flag to cleanup the Oracle Base
- Oracle Universal Installer clean START
- Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node : Done
- Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node : Done
- The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.
- Oracle Universal Installer cleanup was successful.
- Oracle Universal Installer clean END
- ## [START] Oracle install clean ##
- Clean install operation removing temporary directory '/tmp/deinstall2013-01-05_03-32-07PM' on node 'node3'
- ## [END] Oracle install clean ##
- ######################### CLEAN OPERATION END #########################
- ####################### CLEAN OPERATION SUMMARY #######################
- Cleaning the config for CCR
- As CCR is not configured, so skipping the cleaning of CCR configuration
- CCR clean is finished
- Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node.
- Successfully deleted directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node.
- Oracle Universal Installer cleanup was successful.
- Oracle deinstall tool successfully cleaned up temporary directories.
- #######################################################################
- ############# ORACLE DEINSTALL & DECONFIG TOOL END #############
Make sure to specify the "-local" flag so that only the local node's software and inventory entries are removed.
Update inventories/node list on the remaining nodes.
- [oracle@node1 ~]$ cd $ORACLE_HOME/oui/bin
- [oracle@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1,node2}"
- Starting Oracle Universal Installer...
- Checking swap space: must be greater than 500 MB. Actual 3598 MB Passed
- The inventory pointer is located at /etc/oraInst.loc
- The inventory is located at /u01/app/oraInventory
- 'UpdateNodeList' was successful.
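If you want to double-check the result, the node list recorded for the database home in the central inventory can be inspected directly. A hypothetical example, assuming the default central inventory location shown in the output above:
- [oracle@node1 ~]$ grep -A 4 "dbhome_1" /u01/app/oraInventory/ContentsXML/inventory.xml
After the update, the node-list entries for the database home should contain only "node1" and "node2".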
Remove Clusterware
Deconfigure the Clusterware stack on "node3" as "root".
- [root@node3 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
- Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
- Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static
- VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1
- VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2
- VIP exists: /node3-vip/192.168.1.153/192.168.0.0/255.255.0.0/eth0, hosting node node3
- GSD exists
- ONS exists: Local port 6100, remote port 6200, EM port 2016
- CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node3'
- CRS-2673: Attempting to stop 'ora.crsd' on 'node3'
- CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node3'
- CRS-2673: Attempting to stop 'ora.oc4j' on 'node3'
- CRS-2673: Attempting to stop 'ora.CRS.dg' on 'node3'
- CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'node3'
- CRS-2673: Attempting to stop 'ora.FRADG.dg' on 'node3'
- CRS-2677: Stop of 'ora.DATADG.dg' on 'node3' succeeded
- CRS-2677: Stop of 'ora.FRADG.dg' on 'node3' succeeded
- CRS-2677: Stop of 'ora.oc4j' on 'node3' succeeded
- CRS-2672: Attempting to start 'ora.oc4j' on 'node1'
- CRS-2676: Start of 'ora.oc4j' on 'node1' succeeded
- CRS-2677: Stop of 'ora.CRS.dg' on 'node3' succeeded
- CRS-2673: Attempting to stop 'ora.asm' on 'node3'
- CRS-2677: Stop of 'ora.asm' on 'node3' succeeded
- CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node3' has completed
- CRS-2677: Stop of 'ora.crsd' on 'node3' succeeded
- CRS-2673: Attempting to stop 'ora.ctssd' on 'node3'
- CRS-2673: Attempting to stop 'ora.evmd' on 'node3'
- CRS-2673: Attempting to stop 'ora.asm' on 'node3'
- CRS-2673: Attempting to stop 'ora.mdnsd' on 'node3'
- CRS-2677: Stop of 'ora.evmd' on 'node3' succeeded
- CRS-2677: Stop of 'ora.mdnsd' on 'node3' succeeded
- CRS-2677: Stop of 'ora.ctssd' on 'node3' succeeded
- CRS-2677: Stop of 'ora.asm' on 'node3' succeeded
- CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node3'
- CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node3' succeeded
- CRS-2673: Attempting to stop 'ora.cssd' on 'node3'
- CRS-2677: Stop of 'ora.cssd' on 'node3' succeeded
- CRS-2673: Attempting to stop 'ora.crf' on 'node3'
- CRS-2677: Stop of 'ora.crf' on 'node3' succeeded
- CRS-2673: Attempting to stop 'ora.gipcd' on 'node3'
- CRS-2677: Stop of 'ora.gipcd' on 'node3' succeeded
- CRS-2673: Attempting to stop 'ora.gpnpd' on 'node3'
- CRS-2677: Stop of 'ora.gpnpd' on 'node3' succeeded
- CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node3' has completed
- CRS-4133: Oracle High Availability Services has been stopped.
- Successfully deconfigured Oracle clusterware stack on this node
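Before deleting the node from the cluster, you can optionally confirm that the stack is really down on "node3"; at this point "crsctl check crs" should report that it cannot contact the Clusterware daemons:
- [root@node3 ~]# /u01/app/11.2.0/grid/bin/crsctl check crs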
Delete "node3" node from Clusterware configuration (on a remaining node)
- [root@node1 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n node3
- CRS-4661: Node node3 successfully deleted.
- [root@node1 ~]# /u01/app/11.2.0/grid/bin/olsnodes -t -s
- node1 Active Unpinned
- node2 Active Unpinned
Update the inventory on "node3" as the Grid Infrastructure software owner ("grid").
- [grid@node3 ~]$ cd $ORACLE_HOME/oui/bin
- [grid@node3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node3}" CRS=TRUE -local
- Starting Oracle Universal Installer...
- Checking swap space: must be greater than 500 MB. Actual 4075 MB Passed
- The inventory pointer is located at /etc/oraInst.loc
- The inventory is located at /u01/app/oraInventory
- 'UpdateNodeList' was successful.
Remove the Clusterware Software from "node3"
- [grid@node3 bin]$ cd $ORACLE_HOME/deinstall
- [grid@node3 deinstall]$ ./deinstall -local
- Checking for required files and bootstrapping ...
- Please wait ...
- Location of logs /tmp/deinstall2013-01-05_03-58-04PM/logs/
- ############ ORACLE DEINSTALL & DECONFIG TOOL START ############
- ######################### CHECK OPERATION START #########################
- ## [START] Install check configuration ##
- Checking for existence of the Oracle home location /u01/app/11.2.0/grid
- Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
- Oracle Base selected for deinstall is: /u01/app/grid
- Checking for existence of central inventory location /u01/app/oraInventory
- Checking for existence of the Oracle Grid Infrastructure home
- The following nodes are part of this cluster: node3
- Checking for sufficient temp space availability on node(s) : 'node3'
- ## [END] Install check configuration ##
- Traces log file: /tmp/deinstall2013-01-05_03-58-04PM/logs//crsdc.log
- Enter an address or the name of the virtual IP used on node "node3"[node3-vip]
- >
- The following information can be collected by running "/sbin/ifconfig -a" on node "node3"
- Enter the IP netmask of Virtual IP "192.168.1.153" on node "node3"[255.255.255.0]
- >
- Enter the network interface name on which the virtual IP address "192.168.1.153" is active
- >
- Enter an address or the name of the virtual IP[]
- >
- Network Configuration check config START
- Network de-configuration trace file location: /tmp/deinstall2013-01-05_03-58-04PM/logs/netdc_check2013-01-05_04-02-44-PM.log
- Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:
- Network Configuration check config END
- Asm Check Configuration START
- ASM de-configuration trace file location: /tmp/deinstall2013-01-05_03-58-04PM/logs/asmcadc_check2013-01-05_04-02-56-PM.log
- ######################### CHECK OPERATION END #########################
- ####################### CHECK OPERATION SUMMARY #######################
- Oracle Grid Infrastructure Home is:
- The cluster node(s) on which the Oracle home deinstallation will be performed are:node3
- Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'node3', and the global configuration will be removed.
- Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
- Inventory Location where the Oracle home registered is: /u01/app/oraInventory
- Following RAC listener(s) will be de-configured: LISTENER
- Option -local will not modify any ASM configuration.
- Do you want to continue (y - yes, n - no)? [n]: y
- A log of this session will be written to: '/tmp/deinstall2013-01-05_03-58-04PM/logs/deinstall_deconfig2013-01-05_03-58-44-PM.out'
- Any error messages from this session will be written to: '/tmp/deinstall2013-01-05_03-58-04PM/logs/deinstall_deconfig2013-01-05_03-58-44-PM.err'
- ######################## CLEAN OPERATION START ########################
- ASM de-configuration trace file location: /tmp/deinstall2013-01-05_03-58-04PM/logs/asmcadc_clean2013-01-05_04-03-08-PM.log
- ASM Clean Configuration END
- Network Configuration clean config START
- Network de-configuration trace file location: /tmp/deinstall2013-01-05_03-58-04PM/logs/netdc_clean2013-01-05_04-03-08-PM.log
- De-configuring RAC listener(s): LISTENER
- De-configuring listener: LISTENER
- Stopping listener on node "node3": LISTENER
- Warning: Failed to stop listener. Listener may not be running.
- Listener de-configured successfully.
- De-configuring Naming Methods configuration file...
- Naming Methods configuration file de-configured successfully.
- De-configuring backup files...
- Backup files de-configured successfully.
- The network configuration has been cleaned up successfully.
- Network Configuration clean config END
- ---------------------------------------->
- The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
- Run the following command as the root user or the administrator on node "node3".
- /tmp/deinstall2013-01-05_03-58-04PM/perl/bin/perl -I/tmp/deinstall2013-01-05_03-58-04PM/perl/lib -I/tmp/deinstall2013-01-05_03-58-04PM/crs/install /tmp/deinstall2013-01-05_03-58-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-01-05_03-58-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
- Press Enter after you finish running the above commands
- <----------------------------------------
Make sure to specify the "-local" flag so that only the local node's software and inventory entries are removed.
The deinstall tool will pause and display a command that must be run as "root" in a separate terminal window. Do not press "Enter" until that command has completed.
Run the following command as the root user on node "node3":
/tmp/deinstall2013-01-05_03-58-04PM/perl/bin/perl -I/tmp/deinstall2013-01-05_03-58-04PM/perl/lib -I/tmp/deinstall2013-01-05_03-58-04PM/crs/install /tmp/deinstall2013-01-05_03-58-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-01-05_03-58-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
- [root@node3 ~]# /tmp/deinstall2013-01-05_03-58-04PM/perl/bin/perl -I/tmp/deinstall2013-01-05_03-58-04PM/perl/lib -I/tmp/deinstall2013-01-05_03-58-04PM/crs/install /tmp/deinstall2013-01-05_03-58-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-01-05_03-58-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
- Using configuration parameter file: /tmp/deinstall2013-01-05_03-58-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- CRS-4047: No Oracle Clusterware components configured.
- CRS-4000: Command Stop failed, or completed with errors.
- ################################################################
- # You must kill processes or reboot the system to properly #
- # cleanup the processes started by Oracle clusterware #
- ################################################################
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
- error: package cvuqdisk is not installed
- Successfully deconfigured Oracle clusterware stack on this node
- [root@node3 ~]# rm -rf /etc/oraInst.loc
- [root@node3 ~]# rm -rf /opt/ORCLfmap
Press "Enter" to continue.
- Run the following command as the root user or the administrator on node "node3".
- /tmp/deinstall2013-01-05_03-58-04PM/perl/bin/perl -I/tmp/deinstall2013-01-05_03-58-04PM/perl/lib -I/tmp/deinstall2013-01-05_03-58-04PM/crs/install /tmp/deinstall2013-01-05_03-58-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-01-05_03-58-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
- Press Enter after you finish running the above commands
- <----------------------------------------
- Remove the directory: /tmp/deinstall2013-01-05_03-58-04PM on node:
- Setting the force flag to false
- Setting the force flag to cleanup the Oracle Base
- Oracle Universal Installer clean START
- Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
- Delete directory '/u01/app/11.2.0/grid' on the local node : Done
- Delete directory '/u01/app/oraInventory' on the local node : Done
- Delete directory '/u01/app/grid' on the local node : Done
- Oracle Universal Installer cleanup was successful.
- Oracle Universal Installer clean END
- ## [START] Oracle install clean ##
- Clean install operation removing temporary directory '/tmp/deinstall2013-01-05_03-58-04PM' on node 'node3'
- ## [END] Oracle install clean ##
- ######################### CLEAN OPERATION END #########################
- ####################### CLEAN OPERATION SUMMARY #######################
- Following RAC listener(s) were de-configured successfully: LISTENER
- Oracle Clusterware is stopped and successfully de-configured on node "node3"
- Oracle Clusterware is stopped and de-configured successfully.
- Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
- Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
- Successfully deleted directory '/u01/app/oraInventory' on the local node.
- Successfully deleted directory '/u01/app/grid' on the local node.
- Oracle Universal Installer cleanup was successful.
- Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'node3' at the end of the session.
- Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'node3' at the end of the session.
- Oracle deinstall tool successfully cleaned up temporary directories.
- #######################################################################
- ############# ORACLE DEINSTALL & DECONFIG TOOL END #############
Update inventories/node list on the remaining nodes.
- [grid@node1 ~]$ cd $ORACLE_HOME/oui/bin
- [grid@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1,node2}" CRS=TRUE
- Starting Oracle Universal Installer...
- Checking swap space: must be greater than 500 MB. Actual 3493 MB Passed
- The inventory pointer is located at /etc/oraInst.loc
- The inventory is located at /u01/app/oraInventory
- 'UpdateNodeList' was successful.
Confirm that the node in question – "node3" – has been properly removed by running "cluvfy" and by checking the remaining cluster resources and database instances:
- [grid@node1 ~]$ cluvfy stage -post nodedel -n node3
- Performing post-checks for node removal
- Checking CRS integrity...
- Clusterware version consistency passed
- CRS integrity check passed
- Node removal check passed
- Post-check for node removal was successful.
- [grid@node1 ~]$ crsctl stat res -t
- --------------------------------------------------------------------------------
- NAME TARGET STATE SERVER STATE_DETAILS
- --------------------------------------------------------------------------------
- Local Resources
- --------------------------------------------------------------------------------
- ora.CRS.dg
- ONLINE ONLINE node1
- ONLINE ONLINE node2
- ora.DATADG.dg
- ONLINE ONLINE node1
- ONLINE ONLINE node2
- ora.FRADG.dg
- ONLINE ONLINE node1
- ONLINE ONLINE node2
- ora.LISTENER.lsnr
- ONLINE ONLINE node1
- ONLINE ONLINE node2
- ora.asm
- ONLINE ONLINE node1 Started
- ONLINE ONLINE node2 Started
- ora.gsd
- OFFLINE OFFLINE node1
- OFFLINE OFFLINE node2
- ora.net1.network
- ONLINE ONLINE node1
- ONLINE ONLINE node2
- ora.ons
- ONLINE ONLINE node1
- ONLINE ONLINE node2
- --------------------------------------------------------------------------------
- Cluster Resources
- --------------------------------------------------------------------------------
- ora.LISTENER_SCAN1.lsnr
- 1 ONLINE ONLINE node2
- ora.cvu
- 1 ONLINE ONLINE node1
- ora.node1.vip
- 1 ONLINE ONLINE node1
- ora.node2.vip
- 1 ONLINE ONLINE node2
- ora.oc4j
- 1 ONLINE ONLINE node1
- ora.scan1.vip
- 1 ONLINE ONLINE node2
- ora.zhongwc.db
- 1 ONLINE ONLINE node1 Open
- 2 ONLINE ONLINE node2 Open
- [oracle@node1 ~]$ sqlplus / as sysdba
- SQL*Plus: Release 11.2.0.3.0 Production on Sat Jan 5 16:30:58 2013
- Copyright (c) 1982, 2011, Oracle. All rights reserved.
- Connected to:
- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
- With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
- Data Mining and Real Application Testing options
- SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
- Session altered.
- SQL> col host_name format a11
- SQL> set line 300
- SQL> select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;
- INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS ACTIVE_ST INSTANCE_ROLE DATABASE_STATUS
- ---------------- ----------- ----------------- ------------------- ------------ --------- ------------------ -----------------
- zhongwc1 node1 11.2.0.3.0 2013-01-05 09:53:24 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
- zhongwc2 node2 11.2.0.3.0 2013-01-04 17:34:40 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
Reference: http://blog.csdn.net/staricqxyz/article/details/8468774
Corrections are welcome. Email: czmcj@163.com