Saturday, July 15, 2017

Installing Oracle 12c Release 2 RAC on Linux 6 and Linux 7

This article explains how to install a two-node Oracle 12c Release 2 Real Application Cluster (RAC) on Oracle Linux 6 and 7. I did this installation on Oracle VirtualBox by creating 2 virtual machines with shared storage. The OS platform I used is Oracle Enterprise Linux 7, and the Oracle GI and RDBMS version is 12.2. The same installation guide should work for Linux 6 as well, as there are only a couple of differences in the steps, which I will explain during this installation. There is no difference in the installation steps if there are more than 2 nodes in the RAC setup.
The official document for 12c RAC installation can be found at the following link.

Installing RAC on VM

If you plan to install RAC in a virtual environment, you may be interested in knowing how to create virtual machines using VirtualBox, and how to create shared storage for an Oracle RAC installation. You must allocate at least 2 CPUs to each VM, otherwise the installation will be very slow; allocate even more CPUs if possible. Allocate at least 8G RAM to each VM if possible.

Prepare all the nodes by
installing Linux 7 or installing Linux 6. Have the private interconnects set up, and the shared storage mounted on all the nodes. For this example, I have a 40G disk for the CRS diskgroup to store the OCR, the voting disk, and the GIMR (Grid Infrastructure Management Repository). GIMR was introduced in 12.1 as an optional component, but from 12.2 it is mandatory. GIMR is actually a multitenant database that is created for storing cluster health monitor data. The GIMR repository size is around 30G, which is why I selected a 40G size for my OCR diskgroup. At this point I am not creating any other disks for the additional diskgroups that will store the database.
Please note that this assumes external redundancy for the OCR diskgroup; otherwise you should specify at least 3 different locations/disks for storing the voting disk, and 2 locations/disks to store the OCR.
Have 2 public IPs, 2 virtual IPs, 2 private IPs, and 1 SCAN IP (from the public subnet), which we will use later during the installation. For this article, the node names are salman11 and salman12.



The recommended way to configure the SCAN is to have 3 IP addresses that are resolved through DNS. Since I don’t have DNS in my environment, I will be using an entry in the hosts file for the SCAN, and will use a single SCAN IP address.

Download and unzip the software
Download the 12c R2 Grid Infrastructure and RDBMS software from

For this article, I have stored the downloaded zip files under /u02/12.2/software directory on node 1.
Let’s start installation steps for 12c R2 RAC.

On each node,
edit /etc/selinux/config and set the value of SELINUX to either “permissive” or “disabled”
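The edit above can also be scripted; a minimal sketch, assuming the stock /etc/selinux/config layout and the standard getenforce/setenforce tools:

```shell
# Show the current SELinux mode
getenforce

# Switch to permissive immediately (root required; does not survive reboot)
setenforce 0

# Make the change persistent by rewriting the SELINUX= line in the config file
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config
```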

On each node,
configure the shared memory file system. Add the following line to the /etc/fstab file. Modify the value of “size” based on the amount of memory you will be using for your SGA.
Add in /etc/fstab
tmpfs                                   /dev/shm                tmpfs   rw,exec,size=8g        0 0

Execute from Command Prompt to mount it instantly
[root@salman11 ~]#  mount -o remount,rw,exec,size=8G /dev/shm
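To confirm the remount took effect, check the size reported for /dev/shm:

```shell
# The Size column should now show the value given in /etc/fstab (8G here)
df -h /dev/shm
```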

On each node, disable the firewall.
If you are using Linux 6, use following method

[root@salman11 ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]

[root@salman1 ~]# chkconfig iptables off

If you are using Linux 7, use following method
[root@salman11 ~]# systemctl start firewalld.service
[root@salman11 ~]# systemctl stop firewalld.service
[root@salman11 ~]# systemctl disable firewalld.service
rm '/etc/systemd/system/'
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'

On each node,
stop and disable NTP (Network Time Protocol)
During the installation, the Cluster Time Synchronization Service will be installed and used instead of NTP. If you want to use NTP, you can skip this step, and the RAC installation process will automatically detect the NTP configuration.
On Linux 6, execute following
[root@salman11 ~]# /sbin/service ntpd stop
[root@salman11 ~]# chkconfig ntpd off
[root@salman11 ~]# mv /etc/ntp.conf /etc/
[root@salman11 ~]# rm /var/run/

On Linux 7, execute following
[root@salman11 ~]# systemctl stop chronyd
[root@salman11 ~]# systemctl disable chronyd
rm '/etc/systemd/system/'

--Delete or rename /etc/chrony.conf file
[root@salman11 ~]# mv /etc/chrony.conf  /etc/chrony.conf_old

Reboot all nodes (Optional)

On each node
, edit the /etc/hosts file and add the IP addresses and fully qualified names of each node of the RAC, including the public IPs, virtual IPs, private IPs, and the SCAN IP.

#Public
<node1-public-ip>    salman11
<node2-public-ip>    salman12

#Virtual
<node1-vip>          salman11-vip
<node2-vip>          salman12-vip

#Private
<node1-private-ip>   salman11-priv
<node2-private-ip>   salman12-priv

#SCAN
<scan-ip>            salman-scan
Make sure that the public interface and the private interface have the same names and are listed in the same order on all the nodes. For example, when I execute the ifconfig command in my setup, my public interface name is enp0s8 and my private interface name is enp0s9.
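The interface names can be listed on each node so they can be matched up; ifconfig works, or on Linux 7 the ip tool gives a compact view (enp0s8/enp0s9 are the names in my setup, yours may differ):

```shell
# Compact listing of every interface with its state and addresses
# (ip -brief needs a reasonably recent iproute2, i.e. EL7; use ifconfig on EL6)
ip -brief addr show
```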

Use the ping command to test connectivity from each node to the other node(s) and vice versa. The SCAN IP and virtual IPs are not required to be tested at this point.

From node salman11
[root@salman11 ~]#  ping salman11
[root@salman11 ~]#  ping salman11-priv
[root@salman11 ~]#  ping salman12
[root@salman11 ~]#  ping salman12-priv

From node salman12
[root@salman12 ~]#  ping salman12
[root@salman12 ~]#  ping salman12-priv
[root@salman12 ~]#  ping salman11
[root@salman12 ~]#  ping salman11-priv
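The ping tests above can also be looped; a small sketch, using the host names added to /etc/hosts earlier (-c limits the packet count so the loop terminates):

```shell
# Check every public and private host name from the current node
for h in salman11 salman11-priv salman12 salman12-priv; do
  if ping -c 2 -W 2 "$h" >/dev/null 2>&1; then
    echo "$h: reachable"
  else
    echo "$h: NOT reachable"
  fi
done
```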

We can perform automatic configuration of the nodes using “yum” command. If you want to do manual configuration, skip this step and go to next step (Step 9).
Automatic configuration would perform following tasks
- Installation of required RPM packages
- Setup kernel parameters in /etc/sysctl.conf file
- Creation of OS groups (oinstall, dba) and OS user (oracle)
- Setting limits for installation user “oracle”

For Oracle Linux, follow the steps mentioned in the following documents to access the online yum repository.

On each node, execute following command to perform all prerequisites automatically.

[root@salman11 ~]# yum install -y oracle-database-server-12cR2-preinstall.x86_64
As already mentioned, the above command installs all packages required for the grid infrastructure and/or RDBMS software installation. I have noticed that even with automatic configuration, the 32-bit RPM packages still don’t get installed. Check whether the following 32-bit packages have been installed, and install them manually if they are not.
For Linux 6, install following (or latest version) RPMs manually if not already installed.

compat-libstdc++-33-3.2.3-69.el6 (i686)
glibc-2.12-1.7.el6 (i686)
glibc-devel-2.12-1.7.el6 (i686)
libgcc-4.4.4-13.el6 (i686)
libstdc++-4.4.4-13.el6 (i686)
libstdc++-devel-4.4.4-13.el6 (i686)
libaio-0.3.107-10.el6 (i686)
libaio-devel-0.3.107-10.el6 (i686)
libXtst- (i686)
libX11-1.5.0-4.el6 (i686)
libXau-1.0.6-4.el6 (i686)
libxcb-1.8.1-1.el6 (i686)
libXi-1.3 (i686)

Example (Checking 32-bit version of glibc-devel). 32-bit version is not installed
[root@salman11 ~]# rpm -q glibc-devel

Example (installing 32-bit version of glibc-devel using yum)
[root@salman11 ~]# yum install glibc-devel.i686
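Rather than checking each package by hand, the whole 32-bit list can be checked in one loop; a sketch (adjust the package list to your release; rpm -q exits nonzero when a package is absent):

```shell
# Report, for each required package, whether its 32-bit (i686) build is installed
for p in glibc glibc-devel libgcc libstdc++ libstdc++-devel \
         libaio libaio-devel libX11 libXau libxcb libXi libXtst; do
  if rpm -q "$p.i686" >/dev/null 2>&1; then
    echo "$p.i686 installed"
  else
    echo "$p.i686 MISSING - install with: yum install $p.i686"
  fi
done
```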

For Linux 7, install following (or latest version) 32-bit RPMs manually if not already installed.
compat-libstdc++-33-3.2.3-71.el7 (i686)
glibc-2.17-36.el7 (i686)
glibc-devel-2.17-36.el7 (i686)
libaio-0.3.109-9.el7 (i686)
libaio-devel-0.3.109-9.el7 (i686)
libX11-1.6.0-2.1.el7 (i686)
libXau-1.0.8-2.1.el7 (i686)
libXi-1.7.2-1.el7 (i686)
libXtst-1.2.2-1.el7 (i686)
libgcc-4.8.2-3.el7 (i686)
libstdc++-4.8.2-3.el7 (i686)
libstdc++-devel-4.8.2-3.el7 (i686)
libxcb-1.9-5.el7 (i686)

Example (Checking 32-bit version of glibc-devel). 32-bit version is not installed
[root@salman11 ~]# rpm -q glibc-devel

Example (installing 32-bit version of glibc-devel using yum)
[root@salman11 ~]# yum install glibc-devel.i686

Automatic configuration creates the default OS groups, i.e. oinstall and dba (with group IDs 54321 and 54322 respectively), and the OS user (oracle) with user ID 54321. If you plan to implement role separation by creating a “grid” user to install and manage the grid infrastructure, and use the “oracle” user for the database software, you need to create more OS groups and the “grid” user manually, as follows (on both nodes).

Add groups
[root@salman11 ~]#  groupadd -g 54323 oper
[root@salman11 ~]#  groupadd -g 54325 asmdba
[root@salman11 ~]#  groupadd -g 54328 asmadmin
[root@salman11 ~]#  groupadd -g 54329 asmoper

Add user
[root@salman11 ~]# useradd -u 54322 -g oinstall -G dba,asmdba,asmadmin,asmoper grid

Set passwords for both users (oracle and grid)
[root@salman11 ~]# passwd oracle
[root@salman11 ~]# passwd grid

Make user oracle member of oper and asmdba groups
[root@salman1 ~]#  usermod  -g oinstall -G dba,oper,asmdba oracle

Automatic configuration also sets the resource limits for the default user “oracle” by creating the oracle-database-server-12cR2-preinstall.conf file under the /etc/security/limits.d directory. If you have created the user “grid” for the grid infrastructure installation, you also need to set these limits manually for the grid user. You can create a separate .conf file under /etc/security/limits.d (the .conf file name can be anything), or append the following lines to oracle-database-server-12cR2-preinstall.conf.
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728
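After creating the groups and users, the memberships can be verified with the id command (group names as created above):

```shell
# Each command prints the uid, primary group, and supplementary groups
id oracle
id grid
```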
Skip this step if you followed Step 8 above; otherwise perform the following tasks on each node.
For Linux 6, install the following RPM packages (or their latest versions) from either the yum repository or the Linux 6 media

binutils- (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (i686)
e2fsprogs-1.41.12-14.el6 (x86_64)
e2fsprogs-libs-1.41.12-14.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (i686)
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (i686)
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6 (i686)
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6 (i686)
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6 (i686)
libXtst- (x86_64)
libXtst- (i686)
libX11-1.5.0-4.el6 (i686)
libX11-1.5.0-4.el6 (x86_64)
libXau-1.0.6-4.el6 (i686)
libXau-1.0.6-4.el6 (x86_64)
libxcb-1.8.1-1.el6 (i686)
libxcb-1.8.1-1.el6 (x86_64)
libXi-1.3 (x86_64)
libXi-1.3 (i686)
net-tools-1.60-110.el6_2.x86_64 (for Oracle RAC and Oracle Clusterware)
nfs-utils-1.2.3-15.0.1 (for Oracle ACFS)
sysstat-9.0.4-11.el6 (x86_64)

Example (yum)
[root@salman11 ~]# yum install glibc

Example (Linux Media)
[root@salman11 ~]# rpm -i glibc

Example (Check after install)
[root@salman11 ~]# rpm -q glibc

For Linux 7, install following RPM packages (or latest version) from either yum repository or from Linux 7 media
binutils- (x86_64)
compat-libcap1-1.10-3.el7 (x86_64)
compat-libstdc++-33-3.2.3-71.el7 (i686)
compat-libstdc++-33-3.2.3-71.el7 (x86_64)
glibc-2.17-36.el7 (i686)
glibc-2.17-36.el7 (x86_64)
glibc-devel-2.17-36.el7 (i686)
glibc-devel-2.17-36.el7 (x86_64)
libaio-0.3.109-9.el7 (i686)
libaio-0.3.109-9.el7 (x86_64)
libaio-devel-0.3.109-9.el7 (i686)
libaio-devel-0.3.109-9.el7 (x86_64)
libX11-1.6.0-2.1.el7 (i686)
libX11-1.6.0-2.1.el7 (x86_64)
libXau-1.0.8-2.1.el7 (i686)
libXau-1.0.8-2.1.el7 (x86_64)
libXi-1.7.2-1.el7 (i686)
libXi-1.7.2-1.el7 (x86_64)
libXtst-1.2.2-1.el7 (i686)
libXtst-1.2.2-1.el7 (x86_64)
libgcc-4.8.2-3.el7 (i686)
libgcc-4.8.2-3.el7 (x86_64)
libstdc++-4.8.2-3.el7 (i686)
libstdc++-4.8.2-3.el7 (x86_64)
libstdc++-devel-4.8.2-3.el7 (i686)
libstdc++-devel-4.8.2-3.el7 (x86_64)
libxcb-1.9-5.el7 (i686)
libxcb-1.9-5.el7 (x86_64)
make-3.82-19.el7 (x86_64)
nfs-utils-1.3.0-0.21.el7.x86_64 (for Oracle ACFS)
net-tools-2.0-0.17.20131004git.el7 (x86_64) (for Oracle RAC and Oracle Clusterware)
smartmontools-6.2-4.el7 (x86_64)
sysstat-10.1.5-1.el7 (x86_64)
Example (yum)
[root@salman11 ~]# yum install glibc
Example (Linux Media)
[root@salman11 ~]# rpm -i glibc
Example (Check after install)
[root@salman11 ~]# rpm -q glibc

On each node, edit /etc/sysctl.conf and add the following entries to set the kernel parameters
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

Execute the following command after adding the above lines
/sbin/sysctl -p
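After reloading with sysctl -p, the live values can be spot-checked; /proc/sys mirrors every parameter set above (dots in a parameter name become directory separators):

```shell
# Each file under /proc/sys holds the current value of the matching parameter
cat /proc/sys/fs/file-max
cat /proc/sys/kernel/sem
cat /proc/sys/net/ipv4/ip_local_port_range
```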

On each node, add groups and users for the Grid Infrastructure and Database software. The “grid” user will be the owner and administrator of the Grid Infrastructure, and the “oracle” user will be the owner and administrator of the database.
Add groups
[root@salman11 ~]# groupadd -g 54321 oinstall
[root@salman11 ~]# groupadd -g 54322 dba
[root@salman11 ~]# groupadd -g 54323 oper
[root@salman11 ~]# groupadd -g 54325 asmdba
[root@salman11 ~]# groupadd -g 54328 asmadmin
[root@salman11 ~]# groupadd -g 54329 asmoper

Add users
[root@salman11 ~]# useradd -u 54321 -g oinstall -G dba,oper,asmdba oracle
[root@salman11 ~]# useradd -u 54322 -g oinstall -G dba,asmdba,asmadmin,asmoper grid

Set passwords for both users (oracle and grid)
[root@salman11 ~]# passwd oracle
[root@salman11 ~]# passwd grid

If you don’t want role separation between the grid infrastructure and the database, you may create a single user, i.e. “oracle”, and do both the Grid Infrastructure and Oracle RDBMS installations with this single user.
[root@salman11 ~]# useradd -u 54321 -g oinstall -G dba,oper,asmdba,asmadmin,asmoper oracle

On each node, create a .conf file (the file name can be anything) under the /etc/security/limits.d directory to set shell limits for the grid and oracle users. For example, create the file oracleusers.conf with the following entries. Alternatively, you can set the limits in /etc/security/limits.conf, but I prefer to set them under the /etc/security/limits.d directory.
# Grid user
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728

# Oracle user
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728
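The limits.d settings apply to new login sessions only; after logging in again as each user, the soft limits can be checked with ulimit (values should match the .conf entries above):

```shell
# -n = open files, -u = max user processes, -s = stack size (KB)
su - grid   -c 'ulimit -n -u -s'
su - oracle -c 'ulimit -n -u -s'
```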

As per Oracle’s recommendation, we need to disable Transparent Huge Pages. On my Oracle Linux 7, it is already disabled. Oracle’s official documentation covers this task.
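Whether Transparent Huge Pages is disabled can be checked through sysfs; the bracketed word is the active mode (path as used by RHEL/OL 6 and 7; Oracle recommends “never”):

```shell
# Extract the active mode, i.e. the word inside the square brackets
active=$(sed -n 's/.*\[\(.*\)\].*/\1/p' /sys/kernel/mm/transparent_hugepage/enabled)
echo "Transparent Huge Pages mode: $active"
```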

Starting with 12.1, we can use the ASM Filter Driver for labelling and configuring our ASM disks, and ASMLib configuration is no longer required in that case. If you want to use ASMLib (as I will be doing in this installation), continue; otherwise you can skip the details of this step and go straight to the next step (step 12). I will explain how to configure the ASM Filter Driver later, during the installation wizard.

For continuing with the ASMLib, perform following configuration.
On each node, install oracleasm-support and oracleasmlib, and then configure oracleasm.
The oracleasm kernel driver is built into the Oracle Linux kernel and does not need to be installed; after installing the oracleasm-support and oracleasmlib packages, the oracleasm driver starts working automatically.
If you are using another flavour of Linux, for example Red Hat Linux, you need to install all 3 packages (the oracleasm driver, oracleasm-support, and oracleasmlib).

For Oracle Linux, as root, install oracleasm-support from the yum repository or from the Linux media, then download the oracleasmlib package and install it.
For Linux 6, download from the following URL.

For Linux 7, download from the following URL.
Perform following on all nodes.
Install oracleasmlib and oracleasm-support
[root@salman11 ~]#  yum install oracleasm-support
[root@salman11 ~]#  rpm -i oracleasmlib*

Configure oracleasm (my inputs are shown after each prompt)
[root@salman11 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Check Configuration
[root@salman11 ~]# /usr/sbin/oracleasm configure

On each node
, add the following in the /etc/pam.d/login file if not present already
session required /lib64/security/pam_limits.so
session required pam_limits.so

On each node,
make sure the /etc/resolv.conf file contains entries similar to the following. Replace the values with your own domain name and your name server’s IP address.

search <your-domain>
nameserver <name-server-ip>

On each node
, create directories and change ownership to respective users (grid and oracle).
[root@salman11 ~]# mkdir -p /u01/app/12.2.0/grid
[root@salman11 ~]# mkdir -p /u01/app/grid
[root@salman11 ~]# mkdir -p /u01/app/oracle
[root@salman11 ~]# mkdir -p /u01/app/oracle/product/12.2.0/dbhome_1
[root@salman11 ~]# chown -R grid:oinstall /u01
[root@salman11 ~]# chown -R oracle:oinstall /u01/app/oracle

On each node
, add the following in the /home/<username>/.bash_profile file of both users, “grid” and “oracle”. At this point, we do not need to set other environment variables such as ORACLE_BASE or ORACLE_HOME.
umask 022

From any node,
partition the shared disk(s). As stated in the beginning, I have one 40G disk for storing the OCR and voting disk. /dev/sdb (the last disk listed below) is the disk I will be using to create partitions and then create ASM disks using oracleasm.
[root@salman11~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8,  0 Apr  9 16:04 /dev/sda
brw-rw----. 1 root disk 8,  1 Apr  9 16:04 /dev/sda1
brw-rw----. 1 root disk 8,  2 Apr  9 16:04 /dev/sda2
brw-rw----. 1 root disk 8,  3 Apr  9 16:04 /dev/sda3
brw-rw----. 1 root disk 8, 16 Apr  9 16:04 /dev/sdb

Use the following steps to partition the disk. My inputs are shown after each prompt.
[root@salman11~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xe3dc19a6.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4194303, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4194303, default 4194303):
Using default value 4194303
Partition 1 of type Linux and of size 2 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
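The interactive dialogue above can also be fed from a pipe when the same partitioning has to be repeated on several disks; a sketch (destructive: double-check the device name before running; the answers mirror the n/p/1/defaults/w sequence above):

```shell
# n = new partition, p = primary, 1 = partition number,
# two empty lines accept the default first/last sectors, w = write the table
printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb
```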

Skip this step if you plan to configure and use ASM Filter Driver during installation.
From any node, use oracleasm command to create ASM disks.
[root@salman11 ~]# oracleasm createdisk CRS1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@salman11 ~]# oracleasm listdisks
Execute the “oracleasm scandisks” and “oracleasm listdisks” commands on all other nodes as well, and you should be able to see all the ASM disks.

From the other node(s), issue following as root
[root@salman12 oracle]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "CRS1"

[root@salman12 oracle]# oracleasm listdisks

On each node:
Oracle recommends the Deadline I/O scheduler for optimum performance of ASM disks. The following command tells whether the Deadline scheduler is already configured. Here sdb is the disk I used to create the ASM disk in the previous step. You may check all of the disks that you will be using.
[root@salman11 ~]# cat /sys/block/sdb/queue/scheduler
noop [deadline] cfq

For me, the Deadline scheduler is already configured. If not, following is the procedure for configuring the Deadline scheduler as the default.
--Create a new rules file
[root@salman11 ~]# vi /etc/udev/rules.d/60-oracle-schedulers.rules

--Add following line into this rules file
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"

--Reload the rules file
[root@salman11 ~]# udevadm control --reload-rules
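The active scheduler for every disk can be listed in one pass rather than one cat per device; a sketch (the bracketed entry is the scheduler in use):

```shell
# One line per sd* device with its scheduler list
for f in /sys/block/sd*/queue/scheduler; do
  printf '%s: %s\n' "${f%/queue/scheduler}" "$(cat "$f")"
done
```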

Now we need to extract the downloaded zip file for the GI installation. Starting with 12.2, Grid Infrastructure uses an image-based installation: we extract the zip file into the GI ORACLE_HOME, and the software is then considered installed. After extracting the zip file, we still need to run the setup to perform all the configuration work.
On node 1, log in as the grid user and extract the zip file into the GI home directory that we already created.
[grid@salman11 ~]$ cd /u02/12.2/software/
[grid@salman11 software]$ unzip linuxx64_12201_grid_home.zip -d /u01/app/12.2.0/grid/

On each node
, install cvuqdisk RPM
This is required for cluvfy (Cluster Verification Utility) to properly discover shared disks; otherwise an error is reported while running cluvfy. Log in as root and execute the following steps.
Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, which is the oinstall group
[root@salman11 software]# export CVUQDISK_GRP=oinstall
[root@salman11 software]# cd /u01/app/12.2.0/grid/cv/rpm/
[root@salman11 rpm]# rpm -i cvuqdisk-1.0.10-1.rpm

On other node(s), copy this rpm using scp command and install.
[root@salman12 ~]# scp root@salman11:/u01/app/12.2.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm .
root@salman11's password:
cvuqdisk-1.0.10-1.rpm                                                                     100% 8860     8.7KB/s   00:00
[root@salman12 ~]# export CVUQDISK_GRP=oinstall
[root@salman12 ~]# rpm -i cvuqdisk-1.0.10-1.rpm

Run cluvfy (Optional)
From node1
, open a terminal, log in as the “grid” user, and execute the cluster verification utility (cluvfy). Click here to see the output of cluvfy. Normally cluvfy is initiated from one node and the list of all other nodes is provided with the -n option of the command, but this requires passwordless SSH connectivity (for the grid and oracle users) to already be enabled among all nodes. Since I have not enabled passwordless SSH connectivity (I will enable it during installation), I need to run it individually on each node, copying and extracting the GI software on each node.
[grid@salman11 ~]$ cd /u01/app/12.2.0/grid/
[grid@salman11 ~]$ ./runcluvfy.sh stage -pre crsinst -n salman11 -fixup -verbose
If you have SSH connectivity enabled, provide list of nodes in the command

[grid@salman11 ~]$ ./runcluvfy.sh stage -pre crsinst -n salman11,salman12 -fixup -verbose

To learn how to configure SSH connectivity prior to the installation,
click here. Once SSH connectivity is enabled, you can run cluvfy from one node to verify the configuration on all nodes.

From node1
, initiate the setup. Log in as the “grid” user through the desktop, or use any X Window System. Start the setup from the GRID_HOME where the software was already extracted in a previous step.

[grid@salman11 ~]$ cd /u01/app/12.2.0/grid
[grid@salman11 grid]$ ./gridSetup.sh

Leave the default option selected and click Next. Click “Help” to find out about the other types of installations.

Provide/modify the cluster name and SCAN (which we already put in the /etc/hosts file) and port (default is 1521). Click Next.

Click “add” button. A dialogue box will open where you can add your other node(s) of RAC

Provide other node(s) details one by one and click OK. Leave Node Role to “HUB”.

Once all nodes are added, click on “SSH connectivity”.

Provide the OS user (grid) and its password, and click the “Setup” button so that the passwordless SSH connectivity required for the installation can be set up. Once done, you can click the “Test” button to test the connectivity. Click Next.

Interface enp0s8 is for public network use and enp0s9 is for ASM and private traffic. Make your selections based on the interface names that you have. For any extra network interfaces listed, you can select “Do Not Use” from the drop-down menu. Click Next.

Since I will be using ASM, I will leave the first option selected (remember that we already have created oracleasm disks above). Click next

Select “No” and click next.

Initially you might not see any disks listed on this screen. Click on “Disk Discovery Path”, change the path from “/dev/sd*” to “ORCL:*”, and click OK.
Now previously created ASM disks should be visible to you.

Provide the diskgroup name and select “External” redundancy. If you want to use ASM redundancy, select the redundancy level and number of disks accordingly.
Starting 12.1, you can also use ASMFD (ASM Filter Driver) instead of ASMLib by clicking the checkbox on this screen.
Click next.

Provide privileged account’s passwords. Click next.

Click next with the default selection.

Click next

Click Next

Provide the ORACLE_BASE directory. ORACLE_HOME is already hard coded here as the folder from which we initiated the setup. Click Next.

Click next

Select automatic execution of the scripts by the configuration wizard and provide either root or a sudo-privileged user and password. Alternatively, you may leave this unselected and manually run the scripts as root when prompted by the wizard. Click Next.

Resolve any problems listed on the prerequisite check screen (this screen actually shows the output of cluvfy). For me, it lists 4 issues. The first is low physical memory, which I can ignore as I have limited memory in my VirtualBox. The rest of the warnings appear because I don’t have a valid DNS server with entries in resolv.conf, so I am ignoring all of them. Click Next.

Click install.

During the installation, the wizard will prompt for the execution of configuration scripts as root. Click “Yes” in the dialogue box to execute the configuration scripts as root (we already provided the root password in the step above for automatic execution of the required scripts).

In the end, the cluster verification utility will fail again, for the same reasons explained during the prerequisite checks above, so I can just ignore this. Click OK to close the dialogue box, then click Next.

Click “close” to finish the installation.

On both nodes, add the following environment variables in the .bash_profile file of the grid user. On node2, the ORACLE_SID value should be +ASM2, and so on.

##grid user environment variables
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/12.2.0/grid; export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib; export LD_LIBRARY_PATH
export TEMP=/tmp
export TMPDIR=/tmp

Check status of all services. All seems OK and working properly.
[grid@salman11 ~]$ crsctl stat res -t
Name           Target  State        Server                   State details
Local Resources
               ONLINE  ONLINE       salman11                 STABLE
               ONLINE  ONLINE       salman12                 STABLE
               ONLINE  ONLINE       salman11                 STABLE
               ONLINE  ONLINE       salman12                 STABLE
               ONLINE  ONLINE       salman11                 STABLE
               ONLINE  ONLINE       salman12                 STABLE
               ONLINE  ONLINE       salman11                 STABLE
               ONLINE  ONLINE       salman12                 STABLE
               ONLINE  ONLINE       salman11                 STABLE
               ONLINE  ONLINE       salman12                 STABLE
               ONLINE  ONLINE       salman11                 STABLE
               ONLINE  ONLINE       salman12                 STABLE
               OFFLINE OFFLINE      salman11                 STABLE
               OFFLINE OFFLINE      salman12                 STABLE
Cluster Resources
      1        ONLINE  ONLINE       salman11                 STABLE
      1        ONLINE  ONLINE       salman11        10.10
      1        ONLINE  ONLINE       salman11                 Started,STABLE
      2        ONLINE  ONLINE       salman12                 Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
      1        ONLINE  ONLINE       salman11                 STABLE
      1        ONLINE  ONLINE       salman11                 Open,STABLE
      1        ONLINE  ONLINE       salman11                 STABLE
      1        ONLINE  ONLINE       salman11                 STABLE
      1        ONLINE  ONLINE       salman12                 STABLE
      1        ONLINE  ONLINE       salman11                 STABLE

On node 1, log into the OS as the “oracle” user and extract the downloaded software
[oracle@salman11 ~]$ cd /u02/12.2/software/
[oracle@salman11 software]$ unzip linuxx64_12201_database.zip

Log in as the “oracle” user using either X Window or the Linux desktop, and initiate the installation
[oracle@salman11 ~]$cd /u02/12.2/software/database
[oracle@salman1 database]$ ./runInstaller
Uncheck the box and click next

I am selecting the second option here, as I want to install the software only; the database can be created later. Click Next.

Second option is automatically selected since this is a RAC installation. Click next

The nodes for installation are already selected. Click the “SSH Connectivity” button to set up passwordless SSH connectivity (as done during the GI installation above).

Click Next

Provide ORACLE_BASE and ORACLE_HOME locations. We already have created these directories above. Click next

Click Next

I am ignoring all warnings, as the warnings appearing here are because I don’t have a DNS server in my virtual environment. If you see any other issue/warning, resolve it before going forward. Click Next.

Click install

Execute the root.sh script on ALL NODES as the “root” user when prompted. After the script execution completes, click OK. Click Close after the installation completes.

We have successfully completed a two-node RAC installation of Oracle 12c R2. The next step is to create a RAC database using DBCA. You can follow this article for the creation of a database.

Now you can add the following environment variables in the .bash_profile file of the “oracle” user.
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1; export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib; export LD_LIBRARY_PATH
export TEMP=/tmp
export TMPDIR=/tmp
