
Sunday, January 11, 2015

Configuring Oracle Data Guard

The following is a step-by-step guide to configuring Oracle Data Guard with a physical standby database. If you have a RAC environment, you can still follow this document to configure Data Guard. The OS platform for this example is Oracle Linux 6 (x86_64), but the same steps can be followed to configure Data Guard on other platforms.
  1. For this guide, the database name (DB_NAME) of both the primary and the standby is “proddb”.
  2. The unique database name (DB_UNIQUE_NAME) of the primary is “proddb” and that of the standby is “proddb_stby”.
  3. Both hosts have an ASM diskgroup DATA for datafiles and a diskgroup FRA set as DB_RECOVERY_FILE_DEST. If you are using a filesystem, the method for setting up Data Guard is still the same.
  4. The primary database is running in ARCHIVELOG mode. If not, enable ARCHIVELOG mode first (a sketch follows this list).
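
If the primary is not yet in ARCHIVELOG mode, the standard way to enable it (note that this requires a short outage) looks like the following; the LOG_MODE query simply confirms the current and resulting mode:

SQL> SELECT log_mode FROM v$database;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;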

Steps for setting up Data Guard

1) Log in as the “oracle” user on both the primary and standby hosts (e.g. using PuTTY).
2) On the primary database, enable force logging.

SQL> alter database force logging;
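
To confirm the setting took effect, you can check the FORCE_LOGGING column of V$DATABASE:

SQL> SELECT force_logging FROM v$database;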

3) On the primary database host, create a staging directory that will hold the backup of the primary database. Instead of this directory, you may use an NFS mount that can later be mounted on the standby server to access the RMAN backup files. This backup will be used to create the standby database.

$mkdir -p /u01/dgbackup

4) On the standby host, create the exact same path, to which this backup will be copied later.

$ mkdir -p /u01/dgbackup


5) Log in to the primary database and create a PFILE from the SPFILE in the staging directory.

SQL> CREATE PFILE='/u01/dgbackup/initproddb.ora' FROM SPFILE;

6) On the primary host, perform an RMAN backup of the primary database that places the backup pieces into the staging directory. 
You may skip this step if you want to use the DUPLICATE command in step 15 with the “FROM ACTIVE DATABASE” option.

$ rman target /
RMAN> BACKUP DEVICE TYPE DISK FORMAT '/u01/dgbackup/%U' DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP DEVICE TYPE DISK FORMAT '/u01/dgbackup/control_stby.ctl' CURRENT CONTROLFILE FOR STANDBY;

7) From the standby host, copy the contents of the staging directory on the primary host to the staging directory on the standby host.

$ cd /u01/dgbackup
$ scp <primary_host_ip>:/u01/dgbackup/* .

8) Create the following two TNS entries in the TNSNAMES.ORA files on both the primary and standby hosts. Adjust the TNS service names and host IPs to match your environment (a quick connectivity check follows the entries).

PRODDB=
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <PRIMARY_HOST_IP>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PRODDB)
    )
  )

PRODDB_STBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <STANDBY_HOST_IP>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PRODDB_STBY)
    )
  )
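
As a quick sanity check (assuming the listeners on both hosts are already up), you can verify that both aliases resolve and the listeners respond:

$ tnsping proddb
$ tnsping proddb_stby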

9) From the primary host, copy the password file to the standby host. Also note that whenever you change the SYS password on the primary database, you need to copy the password file to the standby host again; otherwise you may see “ORA-01031: insufficient privileges” errors in the alert log and problems with redo shipping to the standby site.

cd $ORACLE_HOME/dbs
scp orapwproddb <standby_host_IP>:/u01/app/oracle/product/11203/db1/dbs/

10) On the standby host, move the PFILE copied to /u01/dgbackup in step 7 above to the $ORACLE_HOME/dbs directory.

$ cd /u01/dgbackup
$ mv initproddb.ora $ORACLE_HOME/dbs/

11) Connect to the ASM instance on the standby host, and create directories in the +DATA and +FRA diskgroups that have the same name as the DB_UNIQUE_NAME of the physical standby database.

SQL> ALTER DISKGROUP  DATA ADD DIRECTORY '+DATA/PRODDB_STBY';
SQL> ALTER DISKGROUP FRA ADD DIRECTORY '+FRA/PRODDB_STBY';
If you are using a filesystem, create the corresponding directories on the file system instead.
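
Alternatively, assuming the Grid Infrastructure environment is set for the ASM instance, the same directories can be created with asmcmd:

$ asmcmd
ASMCMD> mkdir +DATA/PRODDB_STBY
ASMCMD> mkdir +FRA/PRODDB_STBY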

12) On the standby host, modify the copied PFILE so that it contains the following parameters.
control_files='+DATA/proddb_stby/control01','+FRA/proddb_stby/control02'

##Primary Role Parameters
db_unique_name='proddb_stby'
log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST mandatory reopen=120 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=proddb_stby'
log_archive_dest_2='SERVICE=proddb lgwr async VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=proddb'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_format = proddb%t%s%r.arc

##Standby Role Parameters
standby_file_management=auto
fal_client='proddb_stby'
fal_server='proddb'
standby_archive_dest='USE_DB_RECOVERY_FILE_DEST'
## Enable and modify following parameters if standby database has
##different paths of datafiles and redo log files as compared to primary.
#db_file_name_convert=<VALUE>
#log_file_name_convert=<VALUE>
  
13) On the standby host, create the “adump” directory for AUDIT_FILE_DEST.

$ mkdir -p /u01/app/oracle/admin/proddb/adump/

14) On standby host, after setting up the appropriate environment variables, such as ORACLE_SID, ORACLE_HOME, and PATH, start the physical standby database instance without mounting the control file.
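
For example (assuming the ORACLE_HOME path from step 9 and the instance name used throughout this guide), the environment could be set as follows before starting SQL*Plus:

$ export ORACLE_SID=proddb
$ export ORACLE_HOME=/u01/app/oracle/product/11203/db1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus / as sysdba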


SQL> STARTUP NOMOUNT


15) From the standby host, connect to RMAN as follows and execute the DUPLICATE command to create the standby database.
If you skipped step 6 above (taking the RMAN backup), use the “FROM ACTIVE DATABASE” option with the DUPLICATE command instead (a sketch follows the note below). The point to note is that to use FROM ACTIVE DATABASE, you need to connect to both the target and the auxiliary instance using a TNS entry.

$ rman target sys/password@proddb auxiliary /
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;

Note: NOFILENAMECHECK needs to be specified if the standby database file locations are the same as on the primary (i.e. db_file_name_convert and log_file_name_convert are not specified).
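
If you chose the active duplication route, a minimal sketch (assuming the TNS entries from step 8 and the password file copied in step 9, with “password” standing in for your real SYS password) would look like this:

$ rman target sys/password@proddb auxiliary sys/password@proddb_stby
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;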

16) Connect to the physical standby database, and create the standby redo logs to support real-time apply on the standby. The standby redo logs must be the same size as the primary database online redo logs. The recommended number of standby redo log groups is:
(maximum # of logfiles per thread + 1) * maximum # of threads
For this example, the primary database has a single thread with 3 redo log groups of 100 MB each, so the number of standby redo logs should be (3 + 1) * 1 = 4. A query to verify them follows the ADD STANDBY LOGFILE command below.

SQL> ALTER DATABASE ADD STANDBY LOGFILE
GROUP 4 ('+DATA','+FRA') SIZE 100M,
GROUP 5 ('+DATA','+FRA') SIZE 100M,
GROUP 6 ('+DATA','+FRA') SIZE 100M,
GROUP 7 ('+DATA','+FRA') SIZE 100M;
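
Once created, the standby redo logs (the same query also works on the primary in step 19) can be verified with:

SQL> SELECT group#, thread#, bytes/1024/1024 AS size_mb, status FROM v$standby_log;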

17) Create an SPFILE for the standby database and restart it with the new SPFILE.

SQL> create spfile='+DATA/proddb_stby/spfileproddb.ora' from pfile;

Replace $ORACLE_HOME/dbs/initproddb.ora with an SPFILE parameter pointing to the new SPFILE using the following commands, then restart the instance in mount mode.

$ cd $ORACLE_HOME/dbs
$ echo "SPFILE='+DATA/proddb_stby/spfileproddb.ora'" > initproddb.ora
SQL> Shutdown
SQL> Startup mount
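
After the restart, you can confirm that the instance came up using the new SPFILE:

SQL> SHOW PARAMETER spfile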

18) Add/configure the following parameters in the primary database. Use the ALTER SYSTEM command to modify these parameters where needed (an example follows the parameter list below).

##Primary Role Parameters
*.db_unique_name='proddb'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST mandatory reopen=120 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=proddb'
*.log_archive_dest_2='SERVICE=proddb_stby lgwr async VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=proddb_stby'

##Standby Role Parameters
*.standby_file_management=auto
*.fal_client='proddb'
*.fal_server='proddb_stby'
*.standby_archive_dest='USE_DB_RECOVERY_FILE_DEST'

## Enable and modify following parameters if standby database has
##different paths of datafiles and redo log files as compared to primary
#db_file_name_convert=<VALUE>
#log_file_name_convert=<VALUE>
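
For the dynamically modifiable parameters, a sketch of the corresponding ALTER SYSTEM commands would be as follows (DB_UNIQUE_NAME already defaults to the DB_NAME “proddb” on the primary, so it does not need to be changed):

SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=proddb_stby lgwr async VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=proddb_stby' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_state_2=enable SCOPE=BOTH;
SQL> ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH;
SQL> ALTER SYSTEM SET fal_server='proddb_stby' SCOPE=BOTH;
SQL> ALTER SYSTEM SET fal_client='proddb' SCOPE=BOTH;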

19) Create standby redo logs on the primary database to support the standby role (after a future switchover). The standby redo logs must be the same size as the primary database online redo logs. Please see step 16 above for more details about standby redo log files.

SQL> ALTER DATABASE ADD STANDBY LOGFILE 
GROUP 4 ('+DATA','+FRA') SIZE 100M,
GROUP 5 ('+DATA','+FRA') SIZE 100M,
GROUP 6 ('+DATA','+FRA') SIZE 100M,
GROUP 7 ('+DATA','+FRA') SIZE 100M;

20) Start managed recovery and real-time apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
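
To confirm that the managed recovery process (MRP0) is running and applying redo, you can query V$MANAGED_STANDBY on the standby:

SQL> SELECT process, status, thread#, sequence# FROM v$managed_standby;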

21) Verify the Data Guard environment

On the physical standby database, query the V$ARCHIVED_LOG view to identify existing files in the archived redo log.

SQL> SELECT SEQUENCE#, FIRST_TIME, NAME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

On the primary database, issue the following SQL statement to force a log switch and archive the current online redo log file group:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

On the physical standby database, query the V$ARCHIVED_LOG view to verify that the redo data was received and archived on the standby database:
SQL> SELECT SEQUENCE#, FIRST_TIME, NAME FROM V$ARCHIVED_LOG
ORDER BY SEQUENCE#;
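
As an additional check on the standby, V$DATAGUARD_STATS reports how far the standby lags behind the primary in redo transport and apply:

SQL> SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag','apply lag');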














