Sunday, December 30, 2018

Error 3113 in Primary Database while connecting Standby


Sometimes you may see the following error stack in your primary database alert log file.
Wed May 31 08:02:37 2017
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3113)
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 3113 for archive log file 9 to 'standby_db'
Errors in file D:\ORACLE\11203\diag\rdbms\standby_db\mydb\trace\mydb_nsa2_3544.trc:
ORA-03113: end-of-file on communication channel
LNS: Failed to archive log 9 thread 1 sequence 61239 (3113)
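A useful first check on the primary, as a quick diagnostic sketch, is the status and last error recorded for the standby destination (destination 2 here, matching LOG_ARCHIVE_DEST_2):
SQL> SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;
If the destination shows an error, verify network connectivity and the standby listener before expecting log shipping to resume.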

Thursday, December 6, 2018

Fatal NI connect error 12541

If you see this error message in the alert log file of your primary database in a dataguard environment, it is most likely that the listener on the standby database host(s) is not running. As a result, the primary database will not be able to ship redo data to the standby database. This log shipping failure will add lines similar to the following to the alert log file of the primary database; a quick check of the standby listener is sketched below.
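As a minimal sketch (assuming a default listener setup and that standby_db is the alias used by LOG_ARCHIVE_DEST_2):
# On the standby host: check whether the listener is running, and start it if it is not
$ lsnrctl status
$ lsnrctl start
# From the primary host: confirm the standby alias resolves and the listener answers
$ tnsping standby_db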

Saturday, November 17, 2018

Fatal NI connect error 12547, connecting to:


Sometimes you may see the following error message in your alert log file. There is a similar Fatal NI error 12537 that appears in the alert log file when a session exits during the session creation phase, but error 12547 appears when a session terminates abnormally; for example, a client application crashes without properly exiting the database, or a session gets disconnected because of a network disconnection.

Sunday, November 11, 2018

How to Enable Block Change Tracking


In an RMAN incremental backup strategy, enabling block change tracking significantly improves incremental backup performance by helping RMAN quickly identify only the blocks that have changed since the previous backup, because only those blocks need to be read and backed up during an incremental backup. The list of changed blocks is maintained in the block change tracking file. In this article I will explain how to enable block change tracking.
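As a quick preview, enabling and verifying block change tracking takes just a couple of commands (the file path below is only an example):
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/mydb/bct.chg';
SQL> SELECT status, filename FROM v$block_change_tracking;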

Friday, October 26, 2018

ORA-16179: incremental changes to "log_archive_dest_1" not allowed with SPFILE


This error comes when you use the wrong syntax while setting the log archive destination using the log_archive_dest_n parameter. I faced this error while trying to set the log_archive_dest_1 parameter using the ALTER SYSTEM command, and the reason was that I missed the keyword “LOCATION=” while specifying the destination directory. The error may also come because of other syntax errors in the command while setting the parameter.
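For reference, the working syntax includes the LOCATION= (or SERVICE=) keyword; the directory below is only an example:
SQL> ALTER SYSTEM SET log_archive_dest_1='/u01/archive' SCOPE=BOTH;            -- missing keyword, raises ORA-16179
SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=/u01/archive' SCOPE=BOTH;   -- correct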

Friday, October 19, 2018

ORA-01261: Parameter db_recovery_file_dest destination string cannot be translated

The DB_RECOVERY_FILE_DEST parameter sets the Fast Recovery Area for the instance; archived logs, flashback logs, and RMAN backups are kept at this location and are automatically managed by the database. While using the ALTER SYSTEM command to set the db_recovery_file_dest parameter, you might face the following error; a typical working setup is sketched below.
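A typical FRA setup sets the size first and then the destination (the path and size are only examples; the directory must already exist and be writable by the oracle user):
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE=BOTH;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u01/fra' SCOPE=BOTH;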

Sunday, October 14, 2018

ORA-00257 archiver error


There are several archived log errors, and there are several different reasons for them as well. As we know, redo log groups are used in a circular fashion and redo logs are reused again and again. After every log switch, the previous redo log file is archived (if the database is running in archivelog mode). If Oracle is not able to create an archived log file after a redo log switch, ORA-00257 appears, and after a few more log switches the turn of that redo log file comes around again. Since Oracle was not able to archive this redo log previously, it will not overwrite it, and the database hangs; the redo log must be archived before it can be reused. The alert log shows errors like the following when an archived log file cannot be created; a quick way to confirm and relieve the condition is sketched below.
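A common cause is a full archive destination, often the Fast Recovery Area. As a sketch (which remedy applies depends on why the destination is full and on your retention policy):
-- Check how full the Fast Recovery Area is (if archiving to the FRA)
SQL> SELECT space_limit, space_used, space_reclaimable FROM v$recovery_file_dest;
-- Either back up and remove archived logs ...
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
-- ... or enlarge the FRA
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH;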

Friday, October 5, 2018

ORA-48913: Writing into trace file failed, file size limit...


In some environments DBAs limit the size of trace files generated by the database. This includes all trace files that can be generated under USER_DUMP_DEST/DIAGNOSTIC_DEST. The parameter that limits trace file size is MAX_DUMP_FILE_SIZE, and its numeric value is specified in OS blocks. After setting this value, if any trace file grows beyond the size specified in this parameter, ORA-48913 is recorded in the alert log file.
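The limit can be checked and raised (or removed) dynamically, for example (the values are illustrative):
SQL> SHOW PARAMETER max_dump_file_size
SQL> ALTER SYSTEM SET max_dump_file_size = '100M' SCOPE=BOTH;
SQL> ALTER SYSTEM SET max_dump_file_size = 'UNLIMITED' SCOPE=BOTH;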

Friday, September 28, 2018

Creating services with srvctl


In a RAC environment we can create services and run them on a specific RAC instance/node (or a set of instances/nodes). This makes it possible for applications to connect to specific instances. In this article I will explain how to create a service, make it run on the instance(s) of our choice, and have it automatically fail over to another instance/node if the current node goes down.
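As a preview, creating and starting such a service with the 12c-style full length options looks roughly like this (the database, service and instance names are placeholders):
$ srvctl add service -db mydb -service app_svc -preferred mydb1 -available mydb2
$ srvctl start service -db mydb -service app_svc
$ srvctl status service -db mydb -service app_svc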

Sunday, September 23, 2018

How to Add and Remove database to CRS or Oracle Restart

In a RAC or Oracle Restart environment we can use the “srvctl” command to add resources to CRS for automatic management of those resources. In this article I will explain how to remove and add a database resource to CRS in a RAC environment. The same process can be used for a single instance database in an Oracle Restart environment. These steps need to be executed as the RDBMS software owner “oracle”. Starting with 12c, single letter options (-d, -i, etc.) can still be used, but they will probably be made obsolete in coming releases because full length options (-db, -instance, etc.) have been introduced, and I will use these full length options here.
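In outline, with the full length options (the names and paths below are placeholders):
$ srvctl remove database -db mydb
$ srvctl add database -db mydb -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 -spfile +DATA/mydb/spfilemydb.ora
$ srvctl start database -db mydb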

Wednesday, September 19, 2018

CRS-0245: User doesn't have enough privilege to perform the operation


The reason for CRS-0245 is that you are executing the “srvctl” command to perform a task that the currently logged in user is not authorized for. For example, I was trying to remove a RAC database from CRS using srvctl and faced this error because I was executing the command as the “grid” user, whereas I was supposed to execute it as the “oracle” user, which is the RDBMS software owner.

Friday, September 7, 2018

PRKO-3141 : Database could not be removed because it was running


If we use the “srvctl” command to remove a database from CRS (in the case of RAC) or Oracle Restart (in the case of a single server GI installation), we might face this error if the database is currently running. To successfully remove a database from CRS using srvctl, either shut down the database (all instances), or alternatively use the “-f” option (for versions below 12c) or “-force” (for 12c and above).
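In other words, either of the following works (the database name is a placeholder):
$ srvctl stop database -db mydb
$ srvctl remove database -db mydb
or, while the database is still running:
$ srvctl remove database -db mydb -force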

Saturday, September 1, 2018

OEM Target Status Pending – OEM Metric Collection Error


Oracle Enterprise Manager Cloud Control is a very handy tool for monitoring database servers. Sometimes Cloud Control does not show the actual status of a target. If the status of your database is either “Pending” or “Metric Collection Error”, you can try the following troubleshooting steps to resolve the problem. Similar steps may also be followed if targets other than database targets face the same issue.

Friday, August 24, 2018

ORA-01145: offline immediate disallowed unless media recovery enabled


I faced this error while trying to take a datafile offline; at that time my database was in mount state and also in noarchivelog mode. Opening the database would return ORA-01157 for this datafile.
SQL> alter database datafile 'd:\oracle\oradata\data01.dbf' offline;
alter database datafile 'd:\oracle\oradata\data01.dbf' offline
*
ERROR at line 1:
ORA-01145: offline immediate disallowed unless media recovery enabled
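In noarchivelog mode (media recovery disabled) the options are limited: either enable archivelog mode, or take the file offline for drop, for example (the path is from the example above; note that a datafile taken offline for drop in noarchivelog mode generally cannot be brought back online later):
SQL> ALTER DATABASE DATAFILE 'd:\oracle\oradata\data01.dbf' OFFLINE FOR DROP;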

Wednesday, August 15, 2018

Creating Standby Database using Duplicate from Active Database


I have already written articles about how to create dataguard and how to create dataguard in a RAC environment. In those documents I used the DUPLICATE DATABASE command with a backup to build the standby database. In this article I will provide some tips about building a standby database using DUPLICATE with the ACTIVE DATABASE option.
Please also refer to this article about duplicate using ACTIVE DATABASE.
Follow the same steps as mentioned in how to create dataguard to build the standby, but remember the following points/steps; the heart of the approach is the duplicate command sketched below.
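As a minimal sketch (the connect strings and service names are placeholders, and the usual SPFILE/file name conversion clauses are omitted):
$ rman target sys/password@primary_db auxiliary sys/password@standby_db
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;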

Friday, August 10, 2018

Slow tnsping Response and Slow Connections to the Database

We can use the TNSPING utility to check connectivity to the database from the client side. If you observe that TNSPING is responding very slowly, and application connections are also very slow, there are a couple of things you can check. The first thing is network speed. You can use the OS command “ping” to check whether the network is stable and the response is fast enough. You can also seek help from your network team to verify that the network is working as expected, and that no latency or packet loss is being observed.

Monday, August 6, 2018

ORA-12170 : TNS:Connect timeout occurred


If your application (or SQLPLUS) is returning ORA-12170 while connecting to the database, there are a few things that you can verify in order to find the root cause of the issue and implement a solution. Although sometimes it is quite difficult to find the root cause, I will discuss some major and quite common causes of this error message that prevent the application from connecting to the database.

Friday, July 27, 2018

Resuming Database Upgrade Assistant after Error

If your database upgrade with Database Upgrade Assistant hits a glitch, a dialog box may appear with the options “Ignore” or “Abort”. There could be a situation where you can ignore the error and continue with the upgrade process, but before you do, you need to resolve the problem that was encountered and then click the Ignore button to continue.

Sunday, July 15, 2018

Installing Oracle 12c R2 Database Software on Linux 6 and Linux 7


In this article I will explain how to install Oracle 12c R2 database software on Linux 6 and Linux 7. If you want to use ASM as storage and/or the Oracle Restart feature, you need to install 12c R2 Grid Infrastructure before proceeding to install the RDBMS software. You must be using at least Linux 6.4 for installation on Linux 6.

Friday, July 6, 2018

ORA-16191: Primary log shipping client not logged on standby

A password file difference is one of the biggest causes of log shipping failure from the primary database to the standby. Follow this document to resolve ORA-16191. The following is what would appear in the alert log file if log shipping fails because of password file differences between the primary site and the standby site; the fix is sketched below.
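The standard remedy is to give the standby an exact copy of the primary's password file (the paths, SIDs and hostname below are placeholders; on 12c the password file may also live in ASM instead of $ORACLE_HOME/dbs):
$ scp $ORACLE_HOME/dbs/orapwPRIMDB standby_host:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwSTBYDB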

Thursday, July 5, 2018

Fractured block found during backing up datafile


This error means that RMAN found an inconsistent block during the backup of a datafile. It is quite likely that on a second attempt to read the block RMAN finds a consistent image of it, in which case no further action is required. But if you do need to perform a block recovery, this article will be worth reading. The alert log file shows a message similar to the following when this error is reported; a block recovery sketch follows.
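If a genuine corruption is confirmed (for example the block appears in V$DATABASE_BLOCK_CORRUPTION), RMAN block media recovery is the usual remedy; the file and block numbers below are placeholders:
SQL> SELECT file#, block#, blocks, corruption_type FROM v$database_block_corruption;
RMAN> RECOVER DATAFILE 5 BLOCK 123;
RMAN> RECOVER CORRUPTION LIST;    # alternatively, recover every block listed in the view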

Monday, July 2, 2018

ASM Disk not Accessible and not Appearing in V$ASM_DISK

You may be using ASMLib for your ASM storage and it has been working perfectly fine; then, after adding a new ASM disk, you suddenly realize that although the new disk is listed by the “oracleasm listdisks” command, it is still not appearing in the V$ASM_DISK view. As a result, you are not able to add this new disk to your diskgroup because the disk is not accessible to the ASM instance. One reason for this could be that the ASMLib configuration is not as it should be, for example after OS patching or an oracleasm kernel driver update.
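A couple of quick checks, as a sketch:
# As root on the node, re-run ASMLib discovery and list the disks
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks
# In the ASM instance, confirm the discovery string covers ASMLib disks (for example ORCL:* or the default)
SQL> SHOW PARAMETER asm_diskstring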

Saturday, June 30, 2018

Installing Oracle 12c R2 Grid Infrastructure on Linux 6 and Linux 7

You need to install the 12c R2 Grid Infrastructure (for a standalone server) software in order to use ASM and/or the Oracle Restart feature. In the following I will explain how to install GI for a standalone server on Linux 6 and Linux 7. You must use at least Linux 6.4 for installation on Linux 6. For this installation, I am using Oracle Linux 7.

There are only a couple of differences between the installation procedures on Linux 6 and Linux 7 which I will mention along the way.

Sunday, June 10, 2018

Creating Weblogic Domain for Weblogic Server

After the installation of weblogic server, the next step is to create a new weblogic domain. If you have installed weblogic server with examples, there will already be a domain configured during the installation and an application deployed. For the creation of other domain(s), you may follow the steps explained in this article.

Sunday, June 3, 2018

Installing Weblogic Server 12.2 on Linux 6 Linux 7 and Windows

If you want to install the full Fusion Middleware, this article will help you; otherwise keep reading if you wish to install weblogic server only. In this article I will explain how to install Weblogic and Coherence, which are used to deploy Java applications. I did this installation on Linux, but the same steps can be followed to install weblogic server on the Windows platform.

Saturday, May 26, 2018

Starting and Stopping Weblogic Server

After you have created a weblogic domain for Fusion Middleware, or a weblogic domain for Weblogic Server, the next step is to start the weblogic components, which include the node manager, the administration server and the managed server(s).

Starting Node Manager

First we start the node manager using the following method. For this example, $DOMAIN_HOME points to /home/oracle/config/domains.
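One common way, as a sketch (assuming a per-domain node manager and a domain named mydomain under that directory; your domain name and layout may differ):
$ cd $DOMAIN_HOME/mydomain/bin
$ nohup ./startNodeManager.sh > nm.out 2>&1 &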

Wednesday, May 23, 2018

Creating Weblogic Domain for Fusion Middleware

After we have installed Fusion Middleware and completed its initial configuration, the next step is to configure a weblogic domain. A weblogic domain is a logically related group of weblogic resources. When we create a domain, an Administration Server instance also gets created that is used to manage this domain. After this, we can add Managed Servers by logging into the Administration Server with an admin user account through the console. Web applications are then deployed onto the managed server(s). We can create multiple managed servers to work as a cluster for load balancing and fault tolerance.

Friday, May 11, 2018

Initial Configuration of Fusion Middleware after the Installation

After completing the Fusion Middleware installation, the next step is its initial configuration. If you have done a Fusion Middleware installation, you need to configure certain schemas (this is not needed for a Weblogic and Coherence installation using the generic installer) before you can create domains and start using Fusion Middleware. Click here for the official documentation.
Following are the schemas that need to be created.

Monday, May 7, 2018

ORA-02095: specified initialization parameter cannot be modified


This is a generic error message that may be returned while setting an initialization parameter. There are a few other related error messages that we may face, and in this article I will discuss those as well. The most common reason for this error is trying to modify a parameter with the ALTER SYSTEM command when that parameter is not dynamically modifiable. In this case, we need to set the parameter in the spfile (or pfile), and a bounce of the instance then brings the new value into effect.
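For a static parameter the change goes into the spfile only and takes effect after a restart, for example (the parameter and value are only illustrative):
SQL> ALTER SYSTEM SET processes = 500 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP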

Tuesday, May 1, 2018

Installing Oracle Fusion Middleware 12.2 on Linux 6, Linux 7 and Windows

Oracle Fusion Middleware contains many products, tools, and services, and provides Java EE and developer tools, integration services, identity services, business intelligence and several other Oracle products.
In this article I will demonstrate how to install Oracle Fusion Middleware 12.2 (I am using 12.2.1.2.0 for this article) on Linux 6 and Linux 7. Following is the official document that I used while writing this article.

Monday, April 23, 2018

ORA-01013: user requested cancel of current operation


If ORA-01013 appears in the alert log file, it means that your application is closing a session forcefully and the database records this as an action canceled by the user/application (as the description of this error suggests). Most of the time it is not something that can be handled from the database side to stop this message from appearing in the alert log file. But sometimes this error may appear because Oracle internally kills a process that crosses an already defined time limit. In the following I will provide further detail about this.

Tuesday, April 17, 2018

Error 12154 received logging on to the standby

There are several error messages related to the dataguard environment and many of them have been discussed in my blog; you can find links to those articles at the bottom of this article. In this article I will explain Error 12154, which I received in one of my dataguard environments. This error message may appear in the alert log file of your primary database and would look similar to the following.
Error 12154 received logging on to the standby
FAL[client, USER]: Error 12154 connecting to <FAL_SERVER> for fetching gap sequence

Friday, April 13, 2018

Error 1033 received logging on to the standby



Troubleshooting a dataguard environment is sometimes not very straightforward. If you come to know that the physical standby database is out of sync with the primary database, the first thing to do is check the alert log files of the primary and standby databases for any error messages. For example, if error 1033 is reported, you may see entries similar to the following in the primary database’s alert log file.
Mon Feb 06 00:10:40 2017
Error 1033 received logging on to the standby
PING[ARC1]: Heartbeat failed to connect to standby 'MY_STANDBY_DB'. Error is 1033.

Monday, April 2, 2018

Starting and Stopping Grid Infrastructure on a Standalone GI Installation

This document explains how to start and stop Oracle Grid Infrastructure for a standalone installation. The commands are slightly different from the ones used for a cluster; using the same commands as RAC to start and stop the GI resources returns errors as follows.
[root@~]$ /u01/app/11203/grid/bin/crsctl stop crs
CRS-4013: This command is not supported in a single-node configuration.
CRS-4000: Command Stop failed, or completed with errors.
[root@ ~]$ /u01/app/11203/grid/bin/crsctl stop cluster -all
CRS-4013: This command is not supported in a single-node configuration.
CRS-4000: Command Stop failed, or completed with errors.
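For a standalone (Oracle Restart) installation, the High Availability Services commands are used instead, for example:
[root@~]$ /u01/app/11203/grid/bin/crsctl stop has
[root@~]$ /u01/app/11203/grid/bin/crsctl start has
[root@~]$ /u01/app/11203/grid/bin/crsctl check has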

Monday, March 26, 2018

Finding IO statistics in Linux Environment for Slow Disks


I have written two articles (AWR 1, AWR 2) in relation to disk latency and how to read AWR reports to investigate IO slowness. In this article I will explain how to check disk IO performance on Linux systems. We will check how the disks where our datafiles and redo log files are stored are performing. These disks could be simple Linux mount points, or ASM disks. In the case of ASM, we first need to find out which disks are part of the ASM diskgroups so that we can check the performance of those disks. For example, the following is how we can find the disks that are part of an ASM diskgroup.
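A simple query in the ASM instance gives the disk paths per diskgroup (a sketch; with ASMLib the ORCL:* paths can then be mapped to the underlying /dev devices and checked with a tool such as iostat -x):
SQL> SELECT g.name AS diskgroup, d.name AS disk, d.path
       FROM v$asm_diskgroup g, v$asm_disk d
      WHERE g.group_number = d.group_number
      ORDER BY 1, 2;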

Monday, March 19, 2018

Reading and Understanding AWR Report for IO or Disk latency - 2



This is the second article about investigating IO latency issues using AWR. The first article can be found here. In this article I will further explain how to check IO latencies at the OS level in Linux.

Log file sync

We had a production server running on a virtual machine (VMware), and after a downtime we started receiving complaints about a slow database. The AWR report showed that the “log file sync” wait event, which comes under the COMMIT wait class, was at the top, and the database was spending more than 30% of its time on log file sync waits. The log file sync wait event can be observed in a very busy OLTP database, but it should not consume as much time as we were seeing, and it should normally sit near the bottom of the list of top wait events.

Monday, March 12, 2018

Reading and Understanding AWR Report for IO or Disk latency - 1

Recently I performed a failover of my Oracle database (running on Linux) to my standby database, and after the failover the application team started complaining about extreme slowness. I was using OEM Cloud Control; the graph was showing high waits for “free buffer waits” and the alert log started showing Checkpoint not Complete. Since I had never seen these waits on my previous primary server (now standby), the first thing that came to mind was that the disks on the standby server (now primary) were probably very slow, because the hardware of my servers was very old. The servers also had internal disks (not SAN or NAS). I generated an AWR report for a time when the database was running fine without any performance issue, and then a report based on the latest snapshots, to see what was going wrong with the IO.

Tuesday, March 6, 2018

ORA-00742: Log read detects lost write in thread %d sequence %d block %

During real-time apply on one of my physical standby RAC databases, the managed recovery process crashed with this error message; following is the entry in the alert log file.
CORRUPTION DETECTED: In redo blocks starting at block 169592 count 142 for thread 4 sequence 157
Sat Jul 02 19:12:25 2016
MRP0: Background Media Recovery terminated with error 742

Monday, February 26, 2018

Oracle Patching in Multitenant Environment with Minimal Downtime

Starting with 12c, it is even easier to patch/upgrade existing databases if you have already adopted the multitenant architecture. To patch in a multitenant environment, we can install a new Oracle home, patch it to the level we want, and then run the required post-patch scripts in the container database (CDB). At this point you will have two Oracle homes: the new one, and the old one where your current pluggable database(s) are running (and they remain available while we install and patch the new Oracle home).

Sunday, February 18, 2018

Unplugging and Plugging in of a Pluggable Database

In a multitenant environment we can unplug a pluggable database (PDB) from a container database (CDB) and then plug it into another CDB. After we unplug a PDB, it needs to be dropped from the CDB because it becomes unusable in that CDB. We can also plug the same PDB back into the same CDB. In this article I will explain how to perform this operation. I will unplug a database and plug it back into the same CDB; the method for plugging it into a different CDB is essentially the same.
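In outline, the operation looks like this (the PDB name and XML path are placeholders):
SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/stage/pdb1.xml';
SQL> DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;
SQL> CREATE PLUGGABLE DATABASE pdb1 USING '/u01/stage/pdb1.xml' NOCOPY TEMPFILE REUSE;
SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;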

Monday, February 5, 2018

Point in Time Recovery of a Pluggable Database

In this article I will explain how to perform a point in time recovery for a single pluggable database. Point in time recovery of a single pluggable database does not have any effect on other pluggable databases in the same container database, or on the container database itself. In the following, a real scenario of a pluggable database point in time recovery is explained, along with the points to consider for this scenario.
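The core RMAN sequence, as a minimal sketch (the PDB name, target time and auxiliary destination are placeholders), looks like this:
RMAN> RUN {
        SET UNTIL TIME "TO_DATE('2018-02-05 10:00:00','YYYY-MM-DD HH24:MI:SS')";
        RESTORE PLUGGABLE DATABASE pdb1;
        RECOVER PLUGGABLE DATABASE pdb1 AUXILIARY DESTINATION '/u01/aux';
      }
SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;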

Monday, January 29, 2018

Testing ASM Disk Failure Scenario and disk_repair_time


When a disk failure occurs for an ASM disk, the behavior of ASM differs based on what kind of redundancy the diskgroup uses. If the diskgroup has EXTERNAL REDUNDANCY, it keeps working as long as you have redundancy at the external RAID level. If there is no RAID at the external level, the diskgroup immediately gets dismounted, the disk needs to be repaired/replaced, the diskgroup might then need to be dropped and re-created, and the data on this diskgroup would require recovery.
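For NORMAL or HIGH redundancy diskgroups, the grace period before ASM drops a failed disk is controlled by the disk_repair_time attribute, which can be checked and changed like this (the diskgroup name and value are examples):
SQL> SELECT name, value FROM v$asm_attribute WHERE name = 'disk_repair_time';
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8h';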

Monday, January 22, 2018

ORA-00845: MEMORY_TARGET not supported on this system

If we face ORA-00845 during database startup, it means that the /dev/shm file system is not configured with the size required to start the database instance. We need to mount /dev/shm with a size that is equal to or greater than the total memory we want to allocate to all instances/SGAs (ASM as well as database instances) that will run on this host.
You may find the following type of warning in the alert log file; the fix is sketched below.
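The remedy is to remount /dev/shm with a large enough size and make the change permanent (the 16g value is just an illustration; size it for the sum of your memory targets):
[root@host ~]# mount -o remount,size=16g /dev/shm
Add a matching entry to /etc/fstab so it persists across reboots:
tmpfs   /dev/shm   tmpfs   defaults,size=16g   0 0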

Monday, January 8, 2018

Slow RMAN Performance for CROSSCHECK and DELETE OBSOLETE

I recently faced an issue where my CROSSCHECK and DELETE OBSOLETE commands were extremely slow to execute; in fact, they were hung. The reason for this issue was an IP address change of my NFS server, from which a drive was mounted on the database server for backups, and backups were being taken on this NFS share. After the NFS server’s IP address change, RMAN would go and check the old IP address to search for the old/obsolete backups when we executed the CROSSCHECK and DELETE OBSOLETE commands.
