Sunday, February 18, 2018

Unplugging and Plugging in of a Pluggable Database

In a multitenant environment we can unplug a pluggable database (PDB) from a container database (CDB) and then plug it into another CDB. After we unplug a PDB, it needs to be dropped from the CDB because it becomes unusable there. We can also plug the same PDB back into the same CDB. In this article I will explain how to perform this operation: I will unplug a database and plug it back into the same CDB; the method for plugging it into a different CDB is essentially the same.
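As a quick preview, the core statements look like the following (the PDB name pdb1 and the manifest path are placeholders; the detailed steps and checks are in the post):

ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/pdb1.xml';
DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;
-- plug it back into the same (or another) CDB using the XML manifest
CREATE PLUGGABLE DATABASE pdb1 USING '/u01/app/oracle/pdb1.xml' NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb1 OPEN;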

Monday, February 5, 2018

Point in Time Recovery of a Pluggable Database

In this article I will explain how we can perform a point-in-time recovery of a single pluggable database. Point-in-time recovery of a single pluggable database has no effect on the other pluggable databases in the same container database, or on the container database itself. In the following, a real-world pluggable database point-in-time recovery scenario is explained. Following are the points to consider in this scenario.
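Before getting into those points, here is a minimal RMAN sketch of the operation itself, assuming the PDB is named pdb1 and /u01/aux is used as the auxiliary destination (both names are placeholders):

ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
RUN {
  SET UNTIL TIME "TO_DATE('2018-02-05 09:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE PLUGGABLE DATABASE pdb1;
  RECOVER PLUGGABLE DATABASE pdb1 AUXILIARY DESTINATION '/u01/aux';
}
ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;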

Monday, January 29, 2018

Testing ASM Disk Failure Scenario and disk_repair_time


When a disk failure occurs for an ASM disk, the behavior of ASM differs depending on the kind of redundancy in use for the diskgroup. If the diskgroup has EXTERNAL REDUNDANCY, it will keep working as long as there is redundancy at the external RAID level. If there is no RAID at the external level, the diskgroup is immediately dismounted, the disk needs to be repaired or replaced, the diskgroup may then need to be dropped and re-created, and the data on it will require recovery.
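For NORMAL and HIGH redundancy diskgroups, the disk_repair_time attribute decides how long ASM waits for a failed disk to come back before dropping it. Below is a minimal sketch for checking and adjusting it, using a hypothetical diskgroup DATA and disk DATA_0001:

-- current disk_repair_time setting per diskgroup
SELECT dg.name diskgroup, a.value repair_time
FROM   v$asm_diskgroup dg
JOIN   v$asm_attribute a ON a.group_number = dg.group_number
WHERE  a.name = 'disk_repair_time';

-- extend the repair window to 8 hours
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8h';

-- after the disk is repaired/replaced within the window, bring it back online
ALTER DISKGROUP data ONLINE DISK data_0001;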

Monday, January 22, 2018

ORA-00845: MEMORY_TARGET not supported on this system

If we face ORA-00845 during database startup, it means that the /dev/shm file system is not sized appropriately for starting the database instance. We need to mount /dev/shm with a size equal to or greater than the total memory we want to allocate to all instances/SGAs (ASM as well as database instances) that will run on this host.
You may find the following type of warning in the alert log file.
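Whatever the exact warning text, the underlying fix is to enlarge /dev/shm. A minimal sketch as root, assuming 16G covers all SGAs on the host (the size is only an example):

# resize the tmpfs on the fly
mount -o remount,size=16G /dev/shm

# make it persistent across reboots with a line like this in /etc/fstab
tmpfs  /dev/shm  tmpfs  defaults,size=16G  0  0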

Monday, January 8, 2018

Slow RMAN Performance for CROSSCHECK and DELETE OBSOLETE

I recently faced an issue where my CROSSCHECK and DELETE OBSOLETE commands were extremely slow to execute; in fact, they were hanging. The reason was an IP address change of my NFS server, from which a share was mounted on the database server and used for backups. After the NFS server's IP address changed, RMAN would still go to the old IP address to look for the old/obsolete backups whenever the CROSSCHECK and DELETE OBSOLETE commands were executed.
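For reference, these are the housekeeping commands involved; while the NFS share was unreachable they appeared hung, because RMAN was trying to access the backup pieces on the stale path:

CROSSCHECK BACKUP;
CROSSCHECK ARCHIVELOG ALL;
DELETE EXPIRED BACKUP;
DELETE OBSOLETE;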

Thursday, December 28, 2017

ORA-01586: database must be mounted EXCLUSIVE and not open for this operation

As the message clearly explains, this error means that the operation that returned it requires the database to be mounted EXCLUSIVE, i.e. by only one instance. In my case, I was trying to drop a RAC database when I faced ORA-01586. I realized that another instance of my RAC database was still up, and all instances of a RAC database need to be down before we can drop it.
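A minimal sketch of the drop sequence, assuming the database is registered in Clusterware as orcl (the name is a placeholder):

# stop all RAC instances via Clusterware first
srvctl stop database -d orcl

-- then, from SQL*Plus as SYSDBA on one node:
STARTUP MOUNT RESTRICT;
ALTER SYSTEM SET cluster_database = FALSE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT EXCLUSIVE RESTRICT;
DROP DATABASE;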

Sunday, November 19, 2017

CRS-4700: The Cluster Time Synchronization Service is in Observer mode

It is very important to keep the time of the cluster nodes synchronized across the cluster. A time difference among nodes can cause issues, and sometimes it can even cause node restarts.

There are two ways of keeping the time synchronized across the RAC nodes: using NTP at the OS level, or using the Cluster Time Synchronization Service (CTSS) that runs as a RAC resource.
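A quick way to check which mode CTSS is currently running in on a node (Observer mode when an active NTP configuration is detected, Active mode otherwise):

crsctl check ctss
cluvfy comp clocksync -n all -verbose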

Sunday, November 12, 2017

Segment Space Growth History and Forecast

I have already written articles on how to get tablespace space usage history and forecast (10g and 11g, 12c and above) and database space usage history and forecast (10g and 11g, 12c and above). Here, I will explain how we can get the same information for segments and forecast their future growth.
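The history itself comes from AWR (so the Diagnostics Pack license applies); below is a minimal sketch of the kind of query involved, using the hypothetical segment SCOTT.EMP:

-- daily space usage history of one segment from AWR
SELECT o.owner, o.object_name,
       TRUNC(s.begin_interval_time)       AS snap_day,
       MAX(g.space_used_total)/1024/1024  AS used_mb
FROM   dba_hist_seg_stat     g
JOIN   dba_hist_seg_stat_obj o ON  o.dbid = g.dbid
                               AND o.obj# = g.obj#
                               AND o.dataobj# = g.dataobj#
JOIN   dba_hist_snapshot     s ON  s.dbid = g.dbid
                               AND s.snap_id = g.snap_id
                               AND s.instance_number = g.instance_number
WHERE  o.owner = 'SCOTT'
AND    o.object_name = 'EMP'
GROUP  BY o.owner, o.object_name, TRUNC(s.begin_interval_time)
ORDER  BY snap_day;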

Sunday, November 5, 2017

ORA-39510 While Starting Cluster Health Monitor Repository (MGMTDB)

MGMTDB, also called the Management Repository Database, was introduced in 12c Grid Infrastructure. This repository database is used to store Cluster Health Monitor data so that the monitoring data can be used for Grid Infrastructure troubleshooting. The database resource is set to start automatically when Grid Infrastructure starts, but there can be scenarios where MGMTDB does not start because of some issue. I faced one such problem, because of which MGMTDB could not start automatically. When I tried to start the Management Repository Database manually, it failed to start and threw the following error messages.
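For reference, MGMTDB is managed through srvctl rather than being started like a regular database; checking it and starting it manually looks like this:

srvctl status mgmtdb
srvctl start mgmtdb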

Sunday, October 29, 2017

ORA-00313 ORA-00312 ORA-17503 ORA-15173 in Standby Alert Log File

Errors in file /u01/app/oracle/diag/rdbms/standby_db/standby_db/trace/standby_db_lgwr_393229.trc:
ORA-00313: open failed for members of log group 12 of thread 2
ORA-00312: online log 12 thread 2: '+FRA_DG/standby_db/redolog12_02.rdo'
ORA-17503: ksfdopn:2 Failed to open file +FRA_DG/standby_db/redolog12_02.rdo
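The actual fix depends on why the member is missing; if the group is not CURRENT or ACTIVE, one common approach on a standby is to clear the affected group so that its members are recreated. A hedged sketch, using group 12 from the errors above:

-- stop managed recovery first if it is running on the standby
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- check the state of the affected group
SELECT group#, thread#, status FROM v$log WHERE group# = 12;

-- recreate the missing members by clearing the group
ALTER DATABASE CLEAR LOGFILE GROUP 12;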
