Thursday, May 7, 2015

Installing Oracle 12c Release 1 RAC on Linux 6

This article explains how to install a 2-node Oracle 12c Release 1 Real Application Cluster (RAC) on Oracle Linux 6. I did this installation on Oracle VirtualBox by creating 2 virtual machines with shared storage. The OS platform I used is Oracle Enterprise Linux 6 (Update 3), and the Oracle GI and RDBMS version is 12.1.0.2. The same installation guide should work for Red Hat Linux 6 as well. The installation steps do not change if there are more than 2 nodes in the RAC setup.

The official document for 12c RAC installation can be found at the following link.
http://docs.oracle.com/database/121/CWLIN/toc.htm

Prerequisites
Prepare all the nodes by installing Oracle Linux 6. Have the private interconnects set up and the shared storage mounted on all the nodes. For this example, I have 3 disks (2G each) for the CRS diskgroup to store the OCR and voting disk, and 1 disk (4G) for the DATA diskgroup to store the database.
Please note that these disks should have external redundancy; otherwise you should specify at least 3 different locations for storing the voting disk, and 2 locations to store the OCR.
Have 2 public IPs, 2 virtual IPs, 2 private IPs and 1 SCAN IP (from the public subnet), which we will use later during the installation. For this article, the node names are as follows
Public
salman1.salman.com
salman2.salman.com

Virtual
salman1-vip.salman.com
salman2-vip.salman.com

Private
salman1-priv.salman.com
salman2-priv.salman.com

SCAN
salman-scan.salman.com

The recommended way to configure the SCAN is to have 3 IP addresses resolved through DNS. Since I don’t have DNS in my environment, I will use an entry in the hosts file for the SCAN, with a single SCAN IP address.

Download and unzip the software
Download the 12c Grid Infrastructure and RDBMS software.
The first release (12.1.0.1) can be downloaded from www.oracle.com, but since we are installing 12.1.0.2 here, we need to download patch set 1769377. The first 2 zip files are for the database and the last 2 are for the Grid Infrastructure binaries.

1)
On each node, edit /etc/selinux/config and set the value of SELINUX to either “permissive” or “disabled”
SELINUX=permissive

2)
On each node, configure the shared memory file system. Add the following line to the /etc/fstab file; modify the “size” value based on the amount of memory you will use for your SGA.
tmpfs                                   /dev/shm                tmpfs   rw,exec,size=8g        0 0
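The new size can be applied without a reboot by remounting /dev/shm; a quick check, assuming the fstab entry above is in place (the output is illustrative):
[root@salman1 ~]# mount -o remount /dev/shm
[root@salman1 ~]# df -h /dev/shm
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 8.0G     0  8.0G   0% /dev/shm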

3)
On each node, disable the firewall.
 
[root@salman1 ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]

[root@salman1 ~]# chkconfig iptables off
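To confirm the firewall is stopped and will stay disabled across reboots, you can check as follows (output is what I would expect on Linux 6):
[root@salman1 ~]# service iptables status
iptables: Firewall is not running.
[root@salman1 ~]# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off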



4)
Reboot all nodes

5)
On each node, add the IP addresses and fully qualified names of every RAC node to the /etc/hosts file, including the public IPs, virtual IPs, private IPs and SCAN IP
#Public
192.231.231.40 salman1.salman.com       salman1
192.231.231.41 salman2.salman.com       salman2

#Virtual
192.231.231.50 salman1-vip.salman.com   salman1-vip
192.231.231.51 salman2-vip.salman.com   salman2-vip

#Private
10.10.10.10 salman1-priv.salman.com     salman1-priv
10.10.10.11 salman2-priv.salman.com     salman2-priv

#SCAN
192.231.231.60 salman-scan.salman.com   salman-scan
Make sure the public and private interfaces have the same names on all the nodes and appear in the same order in the ifconfig output (my public interface name is eth0 and my private interface name is eth1)

6)
Test connectivity between the nodes using the ping command from each node to the other node(s). The SCAN IP and virtual IPs are not required to be tested at this point.
From node salman1
[root@salman1 ~]#  ping salman1
[root@salman1 ~]#  ping salman1-priv
[root@salman1 ~]#  ping salman2
[root@salman1 ~]#  ping salman2-priv

From node salman2
[root@salman2 ~]#  ping salman2
[root@salman2 ~]#  ping salman2-priv
[root@salman2 ~]#  ping salman1
[root@salman2 ~]#  ping salman1-priv
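Instead of pinging each name by hand, a small bash loop can test all of them at once; a sketch, assuming the host names used in this article:
[root@salman1 ~]# for h in salman1 salman1-priv salman2 salman2-priv; do
>   ping -c 2 $h > /dev/null 2>&1 && echo "$h is reachable" || echo "$h is NOT reachable"
> done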


7)
We can perform automatic configuration of the nodes using the “yum” command. If you want to do manual configuration, skip this step and go to the next step.
Automatic configuration performs the following tasks
- Installation of required RPM packages
- Setting kernel parameters in the /etc/sysctl.conf file
- Creation of OS groups (oinstall, dba) and the OS user (oracle)
- Setting resource limits for the installation user “oracle”

For Oracle Linux, follow the steps in the following document to gain access to the online yum repository.
http://public-yum.oracle.com/

On each node, execute the following command to perform all prerequisites automatically.
[root@salman1 ~]# yum install oracle-rdbms-server-12cR1-preinstall -y

As already mentioned, the above command installs all required packages for the Grid Infrastructure and/or RDBMS software installation. If you also plan to install a 32-bit client on this x86_64 server, you will need to install the 32-bit versions of the following RPM packages manually (an example yum command is shown after the list).
compat-libstdc++-33-3.2.3-69.el6.i686
glibc-2.12-1.7.el6 (i686)
glibc-devel-2.12-1.7.el6.i686
libgcc-4.4.4-13.el6 (i686)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6.i686
libXext-1.1 (i686)
libXtst-1.0.99.2 (i686)
libX11-1.3 (i686)
libXau-1.0.5 (i686)
libxcb-1.5 (i686)
libXi-1.3 (i686)
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
nfs-utils-1.2.3-15.0.1
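To install the 32-bit packages listed above in one go, a possible yum one-liner using the name.arch syntax is shown below (a sketch; exact package availability depends on your configured repository):
[root@salman1 ~]# yum install compat-libstdc++-33.i686 glibc.i686 glibc-devel.i686 \
libgcc.i686 libstdc++.i686 libstdc++-devel.i686 libaio.i686 libaio-devel.i686 \
libXext.i686 libXtst.i686 libX11.i686 libXau.i686 libxcb.i686 libXi.i686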

If you want to install from the Oracle Linux installation media, use the “rpm -i” command to install the packages.
Automatic configuration also sets the required kernel parameters in the /etc/sysctl.conf file.
Automatic configuration creates the default OS groups, i.e. oinstall and dba (with group IDs 54321 and 54322 respectively), and the OS user oracle with user ID 54321. If you plan to implement role separation by creating a “grid” user to install and manage the Grid Infrastructure while the “oracle” user owns the database software, you need to create additional OS groups and the “grid” user manually, as follows (on both nodes).
Add groups
[root@salman1 ~]#  groupadd -g 54323 oper
[root@salman1 ~]#  groupadd -g 54325 asmdba
[root@salman1 ~]#  groupadd -g 54328 asmadmin
[root@salman1 ~]#  groupadd -g 54329 asmoper

Add user
[root@salman1 ~]# useradd -u 54322 -g oinstall -G dba,asmdba,asmadmin,asmoper grid

Set passwords for both users (oracle and grid)
[root@salman1 ~]# passwd oracle
[root@salman1 ~]# passwd grid

Make user oracle a member of the oper and asmdba groups
[root@salman1 ~]#  usermod  -g oinstall -G dba,oper,asmdba oracle

Automatic configuration also sets the resource limits for the default user “oracle” in the /etc/security/limits.conf file. If you created the “grid” user for the Grid Infrastructure installation, you also need to set these limits manually for the grid user.
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728
 
8)
Skip this step if you followed Step 7 above; otherwise perform the following tasks on each node.

Install the following RPM packages (or later versions) from either the yum repository or the Linux 6 media
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
libXext-1.1 (x86_64)
libXext-1.1 (i686)
libXtst-1.0.99.2 (x86_64)
libXtst-1.0.99.2 (i686)
libX11-1.3 (x86_64)
libX11-1.3 (i686)
libXau-1.0.5 (x86_64)
libXau-1.0.5 (i686)
libxcb-1.5 (x86_64)
libxcb-1.5 (i686)
libXi-1.3 (x86_64)
libXi-1.3 (i686)
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
nfs-utils-1.2.3-15.0.1
unixODBC

Example (yum)
[root@salman1 ~]# yum install glibc

Example (Linux Media)
[root@salman1 ~]# rpm -i glibc

Example (Check after install)
[root@salman1 ~]# rpm -q glibc
glibc-2.12-1.107.el6.x86_64


On each node, edit /etc/sysctl.conf and add the following entries to set the kernel parameters
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 4294967296
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Execute the following command after adding the above lines
/sbin/sysctl -p
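You can spot-check a few of the values to confirm they were applied:
[root@salman1 ~]# /sbin/sysctl fs.file-max kernel.shmmax net.ipv4.ip_local_port_range
fs.file-max = 6815744
kernel.shmmax = 4398046511104
net.ipv4.ip_local_port_range = 9000 65500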

On each node, add the groups and users for the Grid Infrastructure and database software. The “grid” user will be the owner and administrator of the Grid Infrastructure, and the “oracle” user will be the owner and administrator of the database.
Add groups
[root@salman1 ~]# groupadd -g 54321 oinstall
[root@salman1 ~]# groupadd -g 54322 dba
[root@salman1 ~]# groupadd -g 54323 oper
[root@salman1 ~]# groupadd -g 54325 asmdba
[root@salman1 ~]# groupadd -g 54328 asmadmin
[root@salman1 ~]# groupadd -g 54329 asmoper

Add users
[root@salman1 ~]# useradd -u 54321 -g oinstall -G dba,oper,asmdba oracle
[root@salman1 ~]# useradd -u 54322 -g oinstall -G dba,asmdba,asmadmin,asmoper grid

Set passwords for both users (oracle and grid)
[root@salman1 ~]# passwd oracle
[root@salman1 ~]# passwd grid

If you don’t want role separation between the Grid Infrastructure and the database, you can create a single user, i.e. “oracle”, and perform both the Grid Infrastructure and Oracle RDBMS installations as this one user.
[root@salman1 ~]# useradd -u 54321 -g oinstall -G dba,oper,asmdba,asmadmin,asmoper oracle


On each node, create a .conf file (the file name can be anything) under the /etc/security/limits.d directory to set shell limits for the grid and oracle users. For example, create a file oracleusers.conf with the following entries. Alternatively, you can set the limits in /etc/security/limits.conf, but I prefer to set them under the /etc/security/limits.d directory.
# Grid user
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728

# Oracle user
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728
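To verify the limits are picked up, start a fresh login shell and query ulimit; a quick check for the grid user (the values should match the file above):
[root@salman1 ~]# su - grid -c "ulimit -Sn; ulimit -Hn; ulimit -Su; ulimit -Hu"
1024
65536
16384
16384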

9) 
On each node, install oracleasm-support and oracleasmlib, and then configure oracleasm.
The oracleasm kernel driver is built into the Oracle Linux kernel and does not need to be installed; once the oracleasm-support and oracleasmlib packages are installed, the oracleasm driver starts working.

If you are using some other flavour of Linux, for example Red Hat Linux, you need to install all 3 packages (the oracleasm kernel driver, oracleasm-support and oracleasmlib).
Install oracleasm-support from the yum repository or from the Linux media, then download the oracleasmlib package from the following URL and install it.
http://www.oracle.com/technetwork/server-storage/linux/asmlib/ol6-1709075.html
Install oracleasmlib and oracleasm-support
[root@salman1 ~]#  yum install oracleasm-support
[root@salman1 ~]#  rpm -i oracleasmlib*

Configure oracleasm (the values I entered at each prompt are shown below)
[root@salman1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Scan for Oracle ASM disks on boot (y/n) [y]:y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Check Configuration
[root@salman1 ~]# /usr/sbin/oracleasm configure
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

10)
On each node, add the following to the /etc/pam.d/login file if not already present
session required /lib64/security/pam_limits.so
session required pam_limits.so

11)
On each node, stop and disable NTP (Network Time Protocol) and the avahi-daemon.
During the installation, the Cluster Time Synchronization Service will be installed and used instead of NTP.
NTP
[root@salman1 ~]# /sbin/service ntpd stop
[root@salman1 ~]# chkconfig ntpd off
[root@salman1 ~]# mv /etc/ntp.conf /etc/ntp.conf.org
[root@salman1 ~]# rm /var/run/ntpd.pid

avahi-daemon
[root@salman1 ~]# /etc/init.d/avahi-daemon stop
Shutting down Avahi daemon:                                [  OK  ]

[root@salman1 ~]# chkconfig avahi-daemon off

[root@salman1 ~]# chkconfig --list avahi-daemon
avahi-daemon    0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@salman1 ~]#

12)
On each node, make sure the /etc/resolv.conf file contains entries similar to the following. Replace salman.com with your domain name and 192.231.231.1 with your name server’s IP address
search salman.com
nameserver 192.231.231.1

13)
On each node, install the cvuqdisk RPM.
This is required for cluvfy (Cluster Verification Utility) to properly discover shared disks; otherwise an error is reported while running cluvfy. Log in as root and execute the following steps.
Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, which is the oinstall group
[root@salman1 ~]# export CVUQDISK_GRP=oinstall

Install cvuqdisk, which is found under the unzipped Grid Infrastructure software
[root@salman1 ~]# cd /u02/12.1.0.2/grid/rpm
[root@salman1 ~]# rpm -i cvuqdisk-1.0.9-1.rpm
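You can confirm the package is installed with rpm (output similar to the following):
[root@salman1 ~]# rpm -q cvuqdisk
cvuqdisk-1.0.9-1.x86_64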

14)
On each node, create the directories and change their ownership to the respective users (grid and oracle).
[root@salman1 ~]# mkdir -p /u01/app/12.1.0/grid
[root@salman1 ~]# mkdir -p /u01/app/grid
[root@salman1 ~]# mkdir -p /u01/app/oracle

[root@salman1 ~]# chown -R grid:oinstall /u01
[root@salman1 ~]# chown oracle:oinstall /u01/app/oracle
[root@salman1 ~]# chmod -R 775 /u01/

15)
From any node, partition the shared disks. As stated at the beginning, I have a total of 4 disks - 3 for the OCR and voting disk, and 1 for the DATA diskgroup. The disks /dev/sdc, /dev/sdd, /dev/sde and /dev/sdf in the listing below are the ones I will partition and then turn into ASM disks using oracleasm.
[root@salman1 ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8,  0 Apr  7 11:38 /dev/sda
brw-rw----. 1 root disk 8,  1 Apr  7 11:38 /dev/sda1
brw-rw----. 1 root disk 8,  2 Apr  7 11:38 /dev/sda2
brw-rw----. 1 root disk 8,  3 Apr  7 11:38 /dev/sda3
brw-rw----. 1 root disk 8,  4 Apr  7 11:38 /dev/sda4
brw-rw----. 1 root disk 8,  5 Apr  7 11:38 /dev/sda5
brw-rw----. 1 root disk 8, 16 Apr  7 11:38 /dev/sdb
brw-rw----. 1 root disk 8, 17 Apr  7 11:38 /dev/sdb1
brw-rw----. 1 root disk 8, 32 Apr  7 11:38 /dev/sdc
brw-rw----. 1 root disk 8, 48 Apr  7 11:38 /dev/sdd
brw-rw----. 1 root disk 8, 64 Apr  7 11:38 /dev/sde
brw-rw----. 1 root disk 8, 80 Apr  7 11:38 /dev/sdf

Use the following steps to partition each disk (/dev/sdc, /dev/sdd, /dev/sde, /dev/sdf).
[root@salman1 ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x457c3565.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-25165823, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-25165823, default 25165823):
Using default value 25165823
Partition 1 of type Linux and of size 12 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

For simplicity, after executing the fdisk command for each disk, press the following key sequence to create the new partition:
n, p, 1, <enter>, <enter>, w
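If you prefer to script it, the same key sequence can be fed to fdisk in a loop; a sketch only - double-check the device names first, because fdisk writes the partition table immediately:
[root@salman1 ~]# for d in /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
>   printf "n\np\n1\n\n\nw\n" | fdisk $d
> done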
After the partitioning completes, these are my partitioned disks
[root@salman1 ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8, 32 Apr  7 12:04 /dev/sdc
brw-rw----. 1 root disk 8, 33 Apr  7 12:04 /dev/sdc1
brw-rw----. 1 root disk 8, 48 Apr  7 12:08 /dev/sdd
brw-rw----. 1 root disk 8, 49 Apr  7 12:08 /dev/sdd1
brw-rw----. 1 root disk 8, 64 Apr  7 12:08 /dev/sde
brw-rw----. 1 root disk 8, 65 Apr  7 12:08 /dev/sde1
brw-rw----. 1 root disk 8, 80 Apr  7 12:08 /dev/sdf
brw-rw----. 1 root disk 8, 81 Apr  7 12:08 /dev/sdf1

16)
From any node, use the oracleasm command to create the ASM disks.
[root@salman1 ~]# oracleasm createdisk crs1 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@salman1 ~]# oracleasm createdisk crs2 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@salman1 ~]# oracleasm createdisk crs3 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@salman1 ~]# oracleasm createdisk data /dev/sdf1
Writing disk header: done
Instantiating disk: done
[root@salman1 ~]#
[root@salman1 sys]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@salman1 sys]# oracleasm listdisks
CRS1
CRS2
CRS3
DATA

Execute the “oracleasm scandisks” and “oracleasm listdisks” commands on all other nodes as well; you should see all the ASM disks there too.
From the other node(s), issue the following as root
[root@salman2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@salman2 ~]# oracleasm listdisks
CRS1
CRS2
CRS3
DATA

17)
On each node, set the I/O scheduler for the ASM disks to deadline. Modify the disk names (sdc through sdf) to match your environment; note the persistence caveat after the commands.
[root@salman1 ~]# echo deadline > /sys/block/sdc/queue/scheduler
[root@salman1 ~]# echo deadline > /sys/block/sdd/queue/scheduler
[root@salman1 ~]# echo deadline > /sys/block/sde/queue/scheduler
[root@salman1 ~]# echo deadline > /sys/block/sdf/queue/scheduler
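Values echoed into /sys do not survive a reboot. One way to make the setting persistent is a udev rule; a sketch, assuming your ASM disks are sdc through sdf (the file name 60-oracle-schedulers.rules is arbitrary):
[root@salman1 ~]# cat > /etc/udev/rules.d/60-oracle-schedulers.rules <<'EOF'
# set the deadline elevator for the ASM disks at device add/change time
ACTION=="add|change", KERNEL=="sd[c-f]", ATTR{queue/scheduler}="deadline"
EOF
[root@salman1 ~]# udevadm control --reload-rules
[root@salman1 ~]# udevadm trigger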

18)
On each node, add the following to the /home/<username>/.bash_profile file of both users, “grid” and “oracle”. At this point we do not need to set other environment variables such as ORACLE_BASE or ORACLE_HOME.
umask 022

19)
Run cluvfy (optional)
On the node where you plan to initiate the installation, open a terminal, log in as the “grid” user, and optionally execute the Cluster Verification Utility (cluvfy)
[grid@salman1 ~]$ cd /u02/12.1.0.2/grid/
[grid@salman1 ~]$ ./runcluvfy.sh stage -pre crsinst -n salman1 -fixup -verbose
This will fail if user equivalence (passwordless ssh login) has not been configured from each node to the other node(s). Passwordless ssh login is also optional at this point, because the installer lets us configure user equivalence during the installation wizard, as you will see in a while. If you prefer to configure SSH connectivity prior to the installation, a minimal sketch follows.
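One common way to set up passwordless ssh manually is ssh-keygen plus ssh-copy-id, run as the grid user on each node toward every other node; a minimal sketch:
[grid@salman1 ~]$ ssh-keygen -t rsa        (accept the defaults, empty passphrase)
[grid@salman1 ~]$ ssh-copy-id grid@salman2
[grid@salman1 ~]$ ssh salman2 date         (should return without a password prompt)
Repeat the same from salman2 toward salman1 (and for every node pair if you have more than 2 nodes).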


20)
Choose the node from which you want to initiate the installation and log in as the installation user (“grid” in this example). Log in to a desktop session on the server; if you did not install a desktop environment, use the X Window System (for example, ssh with X forwarding).

21)
Start the installation from the extracted software directory (/u02/12.1.0.2/grid) by executing “./runInstaller”

22)
Click Next

23)
Click Next

24)
Select “Advanced Installation” and click Next

25)
Click Next

26) Change the Cluster Name, SCAN Name and SCAN Port, if needed. Make sure the SCAN Name is the same as specified in the /etc/hosts file. Uncheck “Configure GNS”. Click Next

27)
You will see only the first node (where you initiated the installation) here. Click Add to open a window and provide the details of the second node. You need to add the details of every node in your RAC environment using the Add button.
Click SSH Connectivity. If you have already configured SSH for passwordless login to the remote node(s), you can skip the next step.

28)
After clicking SSH Connectivity, you will see text boxes for the username (grid) and password used to set up SSH connectivity. Provide the password and click the Setup button. It takes a few seconds to configure SSH.

29)
Once SSH configuration completes, click OK, then click Next

30)
Select correct network interfaces for Public and Private interconnects and click Next

31) 
Click Next

32)
Provide a diskgroup name. I am using CRS, as I will store the OCR and voting disk in this diskgroup. I am also selecting “External” redundancy. If your disks are not already mirrored, you should use either Normal (minimum 2 disks required) or High (minimum 3 disks required) redundancy to safeguard against disk failure. Select the ASM disks to be part of this diskgroup

33)
Provide password for user SYS (SYSASM) and click Next

34)
Click Next

35)
Click Next

36)
Click Next

37)
If required, modify the paths based on the directories created in a previous step, or go ahead with the default locations. Click Next

38)
Click Next

39)
Select this option if you want the required scripts to be executed automatically by OUI, rather than being prompted to execute the scripts as “root” during the installation

40) 
If you see a failure or warning for any prerequisite check, fix it before clicking Next. In my case, I am ignoring all of them for the following reasons.
i) I am using VirtualBox with 2G RAM, so I ignored the physical memory warning
ii) I am not using DNS, so I ignored the resolv.conf failure
iii) I am not using DNS, so I ignored the DNS/NIS failure
Click Next

41)
Click Install button to start installation

42)
The installer will prompt for permission to execute the scripts as root. Click Yes

43)
Installation completed. I see one error here, caused by a cluvfy execution failure during the post-installation steps. The reason for the cluvfy failure is the same as explained in step 40, so I just clicked OK to close the box

44) 
Click Yes

45) 
Click Close; the installation has completed successfully

46)
On both nodes, add the following environment variables to the grid user’s .bash_profile file. On node 2, the ORACLE_SID value should be +ASM2, and so on.
##grid user environment variables
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/12.1.0/grid; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib; export CLASSPATH
export TEMP=/tmp
export TMPDIR=/tmp
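After sourcing the profile, you can quickly confirm the environment works by listing the mounted diskgroup with asmcmd:
[grid@salman1 ~]$ . ~/.bash_profile
[grid@salman1 ~]$ echo $ORACLE_SID $ORACLE_HOME
+ASM1 /u01/app/12.1.0/grid
[grid@salman1 ~]$ asmcmd lsdg
The lsdg output should show the CRS diskgroup as MOUNTED.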

47)
Check the health of CRS. All resources look OK and are working properly.
[grid@salman2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.asm
               ONLINE  ONLINE       salman1                  Started,STABLE
               ONLINE  ONLINE       salman2                  Started,STABLE
ora.net1.network
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.ons
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       salman1                  STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       salman1                  169.254.93.149 10.10.10.10,STABLE
ora.cvu
      1        ONLINE  ONLINE       salman1                  STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       salman1                  Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       salman1                  STABLE
ora.salman1.vip
      1        ONLINE  ONLINE       salman1                  STABLE
ora.salman2.vip
      1        ONLINE  ONLINE       salman2                  STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       salman1                  STABLE
--------------------------------------------------------------------------------

[grid@salman2 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node salman1

[grid@salman2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node salman1


48) 
Installing the Database software
Before we start the database software installation, we first create the diskgroup DATA, which will be used to store the database that we will create during the database software installation.
Open a terminal and log in as OS user “grid”. Use SQL*Plus to log in to the ASM instance with SYSASM privileges, and then create the diskgroup DATA

[grid@salman1 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 20 12:11:05 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter user-name: sys as sysasm
Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options


SQL> select name, path from v$asm_disk;

NAME                           PATH
------------------------------ ------------------------------
                               ORCL:DATA
CRS1                           ORCL:CRS1
CRS2                           ORCL:CRS2
CRS3                           ORCL:CRS3

Compatibility attributes for ASM and RDBMS should be specified; otherwise they default to 10.1.0.0.0 and we would not be able to store 12c database files in the diskgroup.

SQL>  create diskgroup DATA external redundancy disk  'ORCL:DATA' ATTRIBUTE 
'compatible.asm'='12.1.0.0.0' , 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

If you created this diskgroup from node 1, use the srvctl command to start the new diskgroup resource on all other nodes; a quick status check is shown after the command
[grid@salman1 ~]$ srvctl start diskgroup -diskgroup DATA -node salman2
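To confirm the diskgroup is mounted cluster-wide, you can check with srvctl (output similar to the following):
[grid@salman1 ~]$ srvctl status diskgroup -diskgroup DATA
Disk Group DATA is running on salman1,salman2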

Since I am using 2 different OS users here, from any node log in as user “oracle” (using either the X Window System or a Linux desktop) and initiate the installation from the directory where the database software has already been extracted: /u02/12.1.0.2/database
[oracle@salman1 database]$ cd /u02/12.1.0.2/database
[oracle@salman1 database]$ ./runInstaller
Click Next

49)
I want a new database to be created during the installation. Click Next


50)
I selected Server class database. Click Next

51)
We are doing a RAC database installation so that the software is installed on all the RAC nodes. Click Next

52)
Click Next

53) 
By default all nodes are selected for the database software installation. Click the “SSH Connectivity” button, as previously done during the Grid Infrastructure installation, and configure SSH connectivity before moving forward; this time SSH is configured for the “oracle” user. Once configured, click Next

54)
Click Next

55)
Click Next

56)
Click Next

57)
Modify paths if required, click Next


58)
Select database type which you want to create, click Next

59)
Provide the database name, and check the checkbox if you want it to be a container database

60)
Specify memory allocation details for the database instances, click Next

61) 
Select “Oracle Automatic Storage Management”, as we already have an ASM diskgroup created to store the database.

62) 
Click Next

63)
Click Next

64)
Select diskgroup which we created to store the database files, click Next

65)
Specify the passwords for administrator users, click Next

66)
Click Next

67)

On the last screen, I see 4 warnings which I can ignore for the following reasons.
i) Since I am not using a name server, warnings 1 and 2 appeared, so I can ignore both
ii) Just as during the Grid Infrastructure installation, OUI had an issue recognizing the actual value of my /dev/shm file system; the same happened here, so I can ignore it.
iii) Limits for the oracle user are already set in /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf, which can be confirmed using the “ulimit -a” command, but OUI checks the /etc/security/limits.conf file for limits, so I can ignore this warning as well. Click Next

68)
Click Install

69)
During the installation, you will be prompted to execute the root.sh script on both nodes as root. When prompted, execute it on both nodes one by one, then click OK to close the box so that the installation can proceed further.

70)
The Database Configuration Assistant (DBCA) will start and create the database

71)
Click OK to close the prompt after database creation completes.

72)
Click Close.

73)
Add the following environment variable settings to the .bash_profile of the “oracle” user on both nodes. Modify the value of ORACLE_SID for each node accordingly
ORACLE_SID=salmndb1; export ORACLE_SID
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib; export CLASSPATH
export TEMP=/tmp
export TMPDIR=/tmp


74)
The RAC installation completed successfully. Check the status of all resources and database instances
[grid@salman1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.DATA.dg
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.asm
               ONLINE  ONLINE       salman1                  Started,STABLE
               ONLINE  ONLINE       salman2                  Started,STABLE
ora.net1.network
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
ora.ons
               ONLINE  ONLINE       salman1                  STABLE
               ONLINE  ONLINE       salman2                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       salman1                  STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       salman1                  169.254.93.149,STABLE
ora.cvu
      1        ONLINE  ONLINE       salman1                  STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       salman1                  Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       salman1                  STABLE
ora.salman1.vip
      1        ONLINE  ONLINE       salman1                  STABLE
ora.salman2.vip
      1        ONLINE  ONLINE       salman2                  STABLE
ora.salmndb.db
      1        ONLINE  ONLINE       salman1                  Open,STABLE
      2        ONLINE  ONLINE       salman2                  Open,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       salman1                  STABLE
--------------------------------------------------------------------------------


[grid@salman1 ~]$ srvctl status database -d salmndb
Instance salmndb1 is running on node salman1
Instance salmndb2 is running on node salman2
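As a final sanity check, you can connect through the SCAN using easy connect; a sketch, assuming the default database service name matches the database name salmndb:
[oracle@salman1 ~]$ sqlplus system@//salman-scan.salman.com:1521/salmndb

SQL> select instance_name, host_name from v$instance;
Repeated connections should land on different instances as the SCAN listener distributes them across the nodes.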

