Posts for all those DBAs out there who are more fond of Google than Metalink :)

Sunday, December 20, 2009

Discoverer Preferences Issue


Scope:

Oracle Discoverer - Version: 10.1.2.1 to 10.1.2.2

Issue:

After the Discoverer 10g server has been started, users are unable to run any Discoverer reports. In the browser, they see the message "Request Progress - processing your request" without any further activity.



opmnctl status shows all processes are up and running.

Analysis:

On checking system performance, we saw that the Discoverer Preference Server process, dis51pr, was consuming very high CPU. Investigating further, we stumbled upon bug “6388696 - DISCOVERER TAKES MORE THEN AN HOUR TO START WORKING AFTER STARTUP”.

The bug states that the Preference server takes a really long time to initialize as it reads all the preferences from the file .reg_key.dc. In our case, this file was more than 3 GB!!

Solution:

Since we were on the base version, the permanent fix, as always suggested by support, was to apply the cumulative patch. But we chose the workaround instead, which is to clear the preferences.

Steps to follow:

1. Stop services using opmnctl stopall
2. Back up the .reg_key.dc file
3. Run dis51pr -clean
4. Re-apply the preferences (applypreferences.sh)
5. Start services using opmnctl startall
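Put together as a script, the workaround looks roughly like this. The paths here are assumptions for a typical 10.1.2 Discoverer mid-tier install; verify the locations of dis51pr and applypreferences.sh against your own ORACLE_HOME before running anything.

```shell
#!/bin/sh
# Sketch only -- ORACLE_HOME and binary locations below are assumptions
# for a typical 10.1.2 Discoverer mid-tier; adjust to match your install.
ORACLE_HOME=/mnt/discoverer/10.1.2   # hypothetical mid-tier home

# 1. Stop all OPMN-managed services
$ORACLE_HOME/opmn/bin/opmnctl stopall

# 2. Back up the bloated preference store before touching it
cp $ORACLE_HOME/discoverer/.reg_key.dc \
   $ORACLE_HOME/discoverer/.reg_key.dc.$(date +%Y%m%d)

# 3. Clear the preferences
$ORACLE_HOME/discoverer/bin/dis51pr -clean

# 4. Re-apply the default preferences from pref.txt
$ORACLE_HOME/discoverer/util/applypreferences.sh

# 5. Restart services
$ORACLE_HOME/opmn/bin/opmnctl startall
```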

Wednesday, September 30, 2009

DataBroker Demystified


Assumptions:

Primary IP – 10.0.0.1
Standby IP – 10.0.0.2

Changes/Additions to Primary Listener.ora:

SID_LIST_PRIMARY =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME= /mnt/primarydb/10.2.0)
(SID_NAME = PRIMARY)
(GLOBAL_DBNAME = PRIMARY)
)
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /mnt/primarydb/10.2.0)
(PROGRAM = extproc)
)
(SID_DESC =
(SID_NAME = PRIMARY)
(GLOBAL_DBNAME = PRIMARY_DGMGRL)
(ORACLE_HOME = /mnt/primarydb/10.2.0)
)
(SID_DESC =
(SID_NAME = PRIMARY)
(GLOBAL_DBNAME = PRIMARY_XPT)
(ORACLE_HOME = /mnt/primarydb/10.2.0)
)
)

Changes/Additions to Standby Listener.ora:

SID_LIST_PRIMARY =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME= /mnt/standbydb/10.2.0)
(SID_NAME = PRIMARY)
)
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /mnt/standbydb/10.2.0)
(PROGRAM = extproc)
)
(SID_DESC =
(SID_NAME = PRIMARY)
(GLOBAL_DBNAME = STANDBY_DGMGRL)
(ORACLE_HOME = /mnt/standbydb/10.2.0)
)
(SID_DESC =
(SID_NAME = PRIMARY)
(GLOBAL_DBNAME = STANDBY_XPT)
(ORACLE_HOME = /mnt/standbydb/10.2.0)
)
)

To Enable DG Broker Service:

alter system set dg_broker_start=true;

To Bounce DG Broker Service:

alter system set dg_broker_start=false;
alter system set dg_broker_start=true;
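A quick sanity check that the broker actually came up is to look for its DMON background process; this uses only the standard v$process view:

```sql
-- Once dg_broker_start=true takes effect, the Data Guard broker
-- monitor (DMON) background process should appear
SELECT program
  FROM v$process
 WHERE program LIKE '%DMON%';
```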

Steps to create a DRC (Disaster Recovery Configuration):

At Primary

DGMGRL> CONNECT sys/manager@primary
Connected.
DGMGRL> CREATE CONFIGURATION PRIMARY AS
> PRIMARY DATABASE IS primary
> CONNECT IDENTIFIER IS primary;

Configuration "primary" created with primary database "primary"

DGMGRL> ADD DATABASE standby AS
> CONNECT IDENTIFIER IS standby
> MAINTAINED AS PHYSICAL;

Database "standby" added

DGMGRL> enable configuration;
Enabled.

DGMGRL> show configuration;

Configuration
Name: primary
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
primary - Primary database
standby - Physical standby database

Current status for "primary":
SUCCESS

DGMGRL> show database verbose standby;

Database
Name: standby
Role: PHYSICAL STANDBY
Enabled: YES
Intended State: ONLINE
Instance(s):
PRIMARY

Properties:
InitialConnectIdentifier = 'standby'
LogXptMode = 'ARCH'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '5'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'hostname'
SidName = 'PRIMARY'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.2)(PORT=1521))'
StandbyArchiveLocation = '/mnt/standbydb/10.2.0/dbs/arch/'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'

Current status for "standby":
SUCCESS

DGMGRL> show database verbose primary;

Database
Name: primary
Role: PRIMARY
Enabled: YES
Intended State: ONLINE
Instance(s):
PRIMARY

Properties:
InitialConnectIdentifier = 'primary'
LogXptMode = 'ARCH'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'manual'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '5'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'hostname'
SidName = 'PRIMARY'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.1)(PORT=1521))'
StandbyArchiveLocation = '/mnt/primarydb/10.2.0/dbs/arch/'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'

Current status for "primary":
SUCCESS

In case some database property is not set properly, use the following example to set it:

DGMGRL> edit database "standby" set property 'LocalListenerAddress'='(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.2)(PORT=1521))';
Error: ORA-16703: cannot set property while the database is enabled

Failed.
DGMGRL> disable database standby;
Disabled.
DGMGRL> edit database "standby" set property 'LocalListenerAddress'='(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.2)(PORT=1521))';
Property "LocalListenerAddress" updated

Important Logs:

1. $ORACLE_HOME/admin//bdump/drc.log
2. $ORACLE_HOME/admin//bdump/alert_.log



Friday, September 4, 2009

Dataguard Configurations Simplified


Assumptions:

Primary IP – 10.0.0.1
Standby IP – 10.0.0.2

Changes/Additions to Primary PFILE (initPRIMARY.ora):

*.fal_server=STANDBY
*.fal_client=PRIMARY
*.log_archive_config='DG_CONFIG=(PRIMARY,STANDBY)'
*.log_archive_dest_1='location=/mnt/primarydb/10.2.0/dbs/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) db_unique_name=PRIMARY'
*.log_archive_dest_2='SERVICE=standby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.LOG_ARCHIVE_MAX_PROCESSES=5 [optional]

Changes/Additions to Standby PFILE (initSTANDBY.ora):

*.fal_server='PRIMARY'
*.fal_client='STANDBY'
*.log_archive_config='DG_CONFIG=(PRIMARY,STANDBY)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/mnt/standbydb/10.2.0/dbs/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
*.LOG_ARCHIVE_DEST_2='SERVICE=primary LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'

Changes/Additions to Primary Tnsnames.ora:

STANDBY=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.2)(PORT=1521))
(CONNECT_DATA=
(SID=PRIMARY)
)
)

Changes/Additions to Standby Tnsnames.ora:

PRIMARY=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.1)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=PRIMARY)
(INSTANCE_NAME=PRIMARY)
)
)

STANDBY=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.0.2)(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=PRIMARY)
(INSTANCE_NAME=PRIMARY)
)
)
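With the parameters and TNS aliases in place, log shipping can be sanity-checked with standard v$ views before going further; for example:

```sql
-- On the primary: dest 2 (the standby service) should show VALID
-- with an empty error column
SELECT dest_id, status, error
  FROM v$archive_dest_status
 WHERE dest_id IN (1, 2);

-- On the standby: check which sequences have been received and applied
SELECT applied, MAX(sequence#)
  FROM v$archived_log
 GROUP BY applied;
```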



XLIFFLoader class failed



Issue:

An instance-specific translated sync patch was applied for an MLS implementation for Polish, and the patch failed with the following error:

oracle.apps.xdo.XDOException: No template found with application short name XDO and template code BENTCSRPT

Analysis:


The problem was determined to be the "-APPS_SHORT_NAME" parameter used in the U driver. The job should be run against the "BEN" product instead of "XDO".

Solution:

Run the job manually with "-APPS_SHORT_NAME=BEN" instead of
"-APPS_SHORT_NAME=XDO", then skip the job with adctrl.

Here is the manual command:

adjava -mx128m -nojit oracle.apps.xdo.oa.util.XLIFFLoader \
UPLOAD -DB_USERNAME apps -DB_PASSWORD xxxxx \
-JDBC_CONNECTION myhost.mydomain:1521:PROD \
-APPS_SHORT_NAME BEN -TEMPLATE_CODE BENTCSRPT \
-FILE_NAME $BEN_TOP/patch/115/publisher/templates/D/BENTCSRPT.xlf

Tuesday, August 25, 2009

‘View Log’ generates 'Internal Server Error'



Issue:

On clicking ‘View Log’ for any concurrent request, we hit ‘Internal Server Error’.

Analysis:

The Apache error_log shows:

[Fri Dec 5 14:29:34 2008] [error] [client x.x.x.x] client denied by server configuration: /mnt/comn/java/oracle/dms
ld.so.1: FNDWRR.exe: fatal: libm.so.2: open failed: No such file or directory
[Fri Dec 5 14:29:41 2008] [error] [client x.x.x.x] Premature end of script headers: /mnt/comn/html/bin/FNDWRR.exe

The last run of autoconfig had apparently ended in error, as the adconfig log shows:

Uploading Metadata file /mnt/appl/ad/11.5.0/admin/template/adctxinf.tmp
Metadata file /mnt/appl/ad/11.5.0/admin/template/adctxinf.tmp upload failed.
Check log file created by FNDCPUCF program.
ERRORCODE = 137 ERRORCODE_END
.end std out.
ld.so.1: FNDCPUCF: fatal: libm.so.2: open failed: No such file or directory
Killed

ld.so.1: FNDCPUCF: fatal: libm.so.2: open failed: No such file or directory
Killed

Solution:

We first created a soft link pointing libm.so.2 to ld.so.1, but still got the error in the adconfig log:

Uploading Metadata file /t5/oracle/eexitdevappl/ad/11.5.0/admin/template/adctxinf.tmp
Metadata file /t5/oracle/eexitdevappl/ad/11.5.0/admin/template/adctxinf.tmp upload failed.
Check log file created by FNDCPUCF program.
ERRORCODE = 137 ERRORCODE_END
.end std out.
ld.so.1: FNDCPUCF: fatal: libm.so.2: version `SUNW_1.1' not found (required by file /t5/oracle/eexitdevappl/fnd/11.5.0/bin/FNDCPUCF)
ld.so.1: FNDCPUCF: fatal: libm.so.2: open failed: No such file or directory
Killed
ld.so.1: FNDCPUCF: fatal: libm.so.2: version `SUNW_1.1' not found (required by file /t5/oracle/eexitdevappl/fnd/11.5.0/bin/FNDCPUCF)
ld.so.1: FNDCPUCF: fatal: libm.so.2: open failed: No such file or directory
Killed

We then created a soft link pointing libm.so.2 to libm.so.1:

ln -s libm.so.1 libm.so.2

And finally through the maze of dreaded lib files, we managed to resolve the issue.
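One way to confirm the runtime linker is now satisfied is to check the executable's library resolution; the $FND_TOP path below is an assumption, so substitute the actual location of FNDWRR.exe from the Apache error:

```shell
# Verify that libm.so.2 now resolves through the new symlink instead of
# reporting "open failed"; $FND_TOP is an assumed environment variable
ldd $FND_TOP/bin/FNDWRR.exe | grep libm
```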


Sunday, July 5, 2009

Open Document Link - Oracle Purchasing


Scope:

Oracle Purchasing - Version: 11.5.10 to 11.5.10.4

Issue:

On performing the following steps:

- Create a purchase order or purchase requisition that is over the requestor's approval limit
- Submit the document for approval.
- Approver logs into i-Procurement or Oracle Purchasing
- Selects the notifications option to look at all approval notifications in the queue.
- The approver tries to open the document via the "Open Document" link.

The page takes forever to process the request and nothing happens (the document does not open in the expected Oracle Form).

We faced this issue while upgrading from 11.5.10 to 11.5.10.2, and it recurred when we moved our single-node instance to two nodes by separating the Web and Forms tiers.

Cause:

The form appears to 'hang' because the link executes in a hidden iframe, so you never see the error that is actually occurring. The actual error behind the scenes is:

Oracle Forms Web CGI: Error detected
Your request cannot be serviced at this time due to the following error:

Error:
URL Contains One or More Restricted Characters
This error is occurring because some characters are incorrectly being blocked.

Solution:


There is a patch for the issue (5447522), but we went for the workaround, which is to comment out the variable FORMS60_BLOCK_URL_CHARACTERS in apps.conf and bounce Apache.


BneInvalidException - Server TimeZone


Scope:

Oracle Applications 11.5.10.2

Issue:

Users face a BneInvalidException error while uploading data through Web ADI.



Analysis:


The bne.log shows:

BneUserLocaleUtils.getTimeZone CLIENT_TIMEZONE_ID has not been set

Solution:

The profile options ‘Server Timezone’ and ‘Client Timezone’ were incorrectly set; setting them correctly resolves the error.

Note: In our case, we only needed to set ‘Server Timezone’.


Poor Discoverer Performance


During the very start of my career, I used to be very intimidated by the oh-so-famous Discoverer, though the reason for this phobia always slipped my mind! But there are instances, like the one I discuss here, which might pass as a substantial reason!

Scope:


Oracle Discoverer - Version: 10.1.2.2

Issue:

On one of our production systems, we upgraded Discoverer 4 to 10, which seemed like a good move till the users started mailing like crazy that reports that used to take 45 minutes were now taking 6 hours!!

Analysis:

On checking the SQL generated by Discoverer, we noticed that it was appending the hint /*+ NOREWRITE */ to every query. When we executed the same queries without this hint through a client such as TOAD, performance improved.

So we concluded that Discoverer was quite conveniently not using the CBO (even though all our tables were analyzed).

Solution:

We made the following changes to pref.txt:

1. UseOptimizerHints = 1 (initially 0)
2. UseNoRewriteHint = 0 (initially not present, so it defaulted to 1)


After editing the file, applypreferences.sh needs to be run for the changes to take effect; bouncing the Discoverer server is not necessary. With these settings, Discoverer started using the ALL_ROWS hint and stopped using the NOREWRITE hint.

Though Oracle's recommendation runs quite contrary to the changes we made, they worked really well for us!! The only catch is that the tables concerned have to be analyzed regularly, or performance will degrade manifold. Hope that helps!

Wednesday, June 24, 2009

OPatch Detects Platform ID 453


Scope:

Enterprise Manager Grid Control - Version: 10.2.0.2 to 10.2.0.4
Sun Solaris SPARC (64-bit)
Solaris Operating System (SPARC) (64-bit)
Can affect any OPatch patching on Solaris SPARC 64-bit

Issue:

While upgrading Enterprise Manager Grid Control 10.2.0.2 to 10.2.0.4, I faced an interesting issue which I later realized was, though not documented as one, a bug in the OPatch utility.

As a prerequisite of the upgrade, one needs to apply patch 4329444 to the repository database. But on applying the patch through the OPatch utility, the session fails with the message:

Apply Session failed: ApplySession failed to prepare the system. OUI-67020:Interim Patch '4329444' is not applicable to OH '/oracle/grid/db10g'... Platforms not compatible
OPatch detects Platform ID 453 while this patch supports the following platforms: [ 23: Sun SPARC Solaris (64-BIT), ]
System intact, OPatch will not attempt to restore the system


So here OPatch stops being smart: it fails to recognize the platform ID correctly as 23 and instead picks 453.

Solution:

Before running OPatch, set the OPATCH_PLATFORM_ID environment variable to 23:

export OPATCH_PLATFORM_ID=23

And OPatch is smart again!


Simplifying RMAN


Till some time back, RMAN really eluded me. But personally I think it's overrated (no offence, RMAN fans!!)

Here I discuss simple steps for setting up RMAN. The settings mentioned are the very basic ones; tweaking them depends heavily on the environment, requirements and trade-offs.

Let's suppose our requirements are the ones listed below.

1. Retain only 1 backup of each datafile and controlfile.
2. Optimize backup by skipping the backup of files in certain circumstances if the identical file or an identical version of the file has already been backed up.
3. Backup database to disk, instead of tape
4. Backup controlfile along with the datafiles
5. Backup the controlfile to a specific location on disk, say, /mount/cntFileBkp/
6. Parallelism of say, 4
7. Compressed backup
8. Backup in the form of backup sets, instead of image copies
9. Each backup set to be max of 2000 M
10. Backup datafiles to a specific location on disk, say, /mount/dataFileBkp/

The above mentioned requirements can be implemented by the following simple steps:

1. Start the rman utility
2. On the rman prompt, connect to the target database

RMAN> connect target;

connected to target database: TEST (DBID=528243886)

3. Start configuring to suit your requirements

• For point 1(requirements)
CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
• For point 2
CONFIGURE BACKUP OPTIMIZATION ON;
• For point 3
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
• For point 4
CONFIGURE CONTROLFILE AUTOBACKUP ON;
• For point 5
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'/mount/cntFileBkp/%F';

• For points 6, 7 and 8
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
• For points 9 and 10
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/mount/dataFileBkp/%U'
MAXPIECESIZE 2000 M;

4. Confirm your settings by

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/mount/cntFileBkp/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/mount/dataFileBkp/%U' MAXPIECESIZE 2000 M;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/mount/oracle/testdb/10.2.0/dbs/snapcf_TEST.f'; # default

5. And now we are ready to take an online backup of our DB, under the following assumptions:

• DB is in archivelog mode
• Directory structures for backup locations are in place

6. Run the following backup script to take an online backup of your DB

run {
allocate channel backup_disk1 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
allocate channel backup_disk2 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
allocate channel backup_disk3 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
allocate channel backup_disk4 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
backup incremental level 0 cumulative filesperset = 10 as COMPRESSED BACKUPSET tag '%TAG' database;
release channel backup_disk1;
release channel backup_disk2;
release channel backup_disk3;
release channel backup_disk4;
}

7. incremental level 0 here refers to a FULL backup, which serves as the base for subsequent incremental backups
8. cumulative (on a later level 1 backup) means copying all data blocks changed since the most recent level 0 backup
9. filesperset caps the number of datafiles included in one backup set
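With the level 0 in place, a later cumulative level 1 run would then pick up only the blocks changed since that level 0. This is a sketch reusing the same hypothetical disk path and piece size from the script above:

```text
run {
allocate channel backup_disk1 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
backup incremental level 1 cumulative as COMPRESSED BACKUPSET database;
release channel backup_disk1;
}
```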

Now, the catch here is that though it all seems very easy to set up and implement, tweaking it to suit your needs and trade-offs can be quite tricky.
For instance, the purpose of a compressed backup is to save disk space, but during a compressed backup the CPU utilization is extremely high.
So basically it's a trade-off between performance and affordable disk space.

Using Log Miner Utility


Using the LogMiner utility, you can query the contents of online redo log files and archived log files. And that's not all: if you have the knack for it, you can also use it as a data audit tool and for sophisticated data analysis. (Nice, isn't it!)

Now let's concentrate on the very basics. Suppose you have a primary database that is troubling you by generating a lot of archives, and you need to find out which tables/objects are the culprits. The safest way to use LogMiner is on the standby (assuming that you have one!).

Follow the simple steps below.

1. Logon to the standby
2. Open the database in read-only mode
   SQL> ALTER DATABASE OPEN; 
[Since it’s a standby, this command would open it in Read-Only]
3. On the sql prompt
   SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/mount1/archive/arch1.arc',OPTIONS => DBMS_LOGMNR.NEW);

This statement tells LogMiner: ‘I want to know what's inside arch1.arc’.

   SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/mount1/archive/arch2.arc',OPTIONS => DBMS_LOGMNR.ADDFILE);

‘And this is one more that I want to know about’ [Notice the difference in OPTIONS]

4. Start the Log Miner
   SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR

5. The Log Miner dumps all the info gathered into a view called V$LOGMNR_CONTENTS. The most useful columns of this view are
OPERATION [VARCHAR2(32)] - User level SQL operation that made the change: 
o INSERT - Change was caused by an insert statement
o UPDATE - Change was caused by an update statement
o DELETE - Change was caused by a delete statement
o DDL - Change was caused by a DDL statement
o START - Change was caused by the start of a transaction
o COMMIT - Change was caused by the commit of a transaction
o ROLLBACK - Change was caused by a full rollback of a transaction
o LOB_WRITE - Change was caused by an invocation of DBMS_LOB.WRITE
o LOB_TRIM - Change was caused by an invocation of DBMS_LOB.TRIM
o LOB_ERASE - Change was caused by an invocation of DBMS_LOB.ERASE
o SELECT_FOR_UPDATE - Operation was a SELECT FOR UPDATE statement
o SEL_LOB_LOCATOR - Operation was a SELECT statement that returned a LOB locator
o MISSING_SCN - LogMiner encountered a gap in the redo records.
o INTERNAL - Change was caused by internal operations initiated by the database
o XML DOC BEGIN - Beginning of a change to an XMLType column or table
o XML DOC WRITE - Data for an XML document
o XML DOC END - End of the Data for an XML document
o UNSUPPORTED - Change was caused by operations not currently supported by LogMiner

OPERATION_CODE [NUMBER] - Number of the operation code:
o 0 - INTERNAL
o 1 - INSERT
o 2 - DELETE
o 3 - UPDATE
o 5 - DDL
o 6 - START
o 7 - COMMIT
o 9 - SELECT_LOB_LOCATOR
o 10 - LOB_WRITE
o 11 - LOB_TRIM
o 25 - SELECT_FOR_UPDATE
o 28 - LOB_ERASE
o 34 - MISSING_SCN
o 36 - ROLLBACK
o 68 - XML DOC BEGIN
o 70 - XML DOC WRITE
o 71 - XML DOC END
o 255 - UNSUPPORTED

ROLLBACK [NUMBER] - 1 if the redo record was generated because of a partial or full rollback of the associated transaction, 0 otherwise
TABLE_NAME [VARCHAR2(32)] - Name of the modified table (in case the redo pertains to a table modification)
SEG_NAME [VARCHAR2(256)] - Name of the modified data segment
SEG_TYPE [NUMBER] - Type of the modified data segment:
o 0 - UNKNOWN
o 1 - INDEX
o 2 - TABLE
o 19 - TABLE PARTITION
o 20 - INDEX PARTITION
o 34 - TABLE SUBPARTITION
o All other values – UNSUPPORTED

SEG_TYPE_NAME [VARCHAR2(32)] - Segment type name:
o UNKNOWN
o INDEX
o TABLE
o TABLE PARTITION
o INDEX PARTITION
o TABLE SUBPARTITION
o UNSUPPORTED

ROW_ID [VARCHAR2(18)] - Row ID of the row modified by the change (only meaningful if the change
pertains to a DML).
This will be NULL if the redo record is not associated with a DML.
USERNAME [VARCHAR2(30)] - Name of the user who executed the transaction
OS_USERNAME [VARCHAR2(4000)] - Name of the operating system user
SEQUENCE# [NUMBER] - Sequence number of the redo log that contained the redo record corresponding
to the database change
SQL_REDO [VARCHAR2(4000)] - Reconstructed SQL statement that is equivalent to
the original SQL statement that made the change. LogMiner does not generate SQL redo
for temporary tables. In such a case, this column will contain the string "/* No
SQL_REDO for temporary tables */".
SQL_UNDO [VARCHAR2(4000)] - Reconstructed SQL statement that can be used to
undo the effect of the original statement that made the change. DDL statements have
no corresponding SQL_UNDO. LogMiner does not generate SQL undo for temporary tables.
In such a case, this column will contain the string "/* No SQL_UNDO for temporary
tables */".

6. End the Log Miner session after your queries are answered
SQL> EXECUTE DBMS_LOGMNR.STOP_LOGMNR

7. Remember that the view's contents exist only for the current session and not for subsequent sessions, so you have to start LogMiner each time you wish to query the view.
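To answer the original question (which objects are generating all the redo?), the operation and segment columns described above can be combined into a single aggregate while the LogMiner session is active; for example:

```sql
-- Rank tables by the number of DML redo records in the mined logs
SELECT seg_owner, table_name, operation, COUNT(*) AS redo_records
  FROM v$logmnr_contents
 WHERE operation_code IN (1, 2, 3)   -- INSERT, DELETE, UPDATE
 GROUP BY seg_owner, table_name, operation
 ORDER BY redo_records DESC;
```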

Tuesday, June 23, 2009

The Notorious NullPointerException

Personally, I'm really fond of errors... but this one can get a little messy!! I'll pick out one particular case here.

Scope - Oracle Applications 11.5.10.2

Issue - One particular user gets the NullPointerException right after login.

Analysis - The below query

select * from icx_custom_menu_entries where user_id = (select user_id from fnd_user where user_name ='user')

returned two entries corresponding to the user, where one of the entries had the Prompt and URL fields null (yes, the culprits!)

A similar issue is documented in bug 4866881; this is a data corruption issue.

Solution - Create a backup of table icx_custom_menu_entries

create table icx.icx_custom_menu_entries_bak as select * from icx_custom_menu_entries

Extract the user id of the concerned user

select user_id from fnd_user where user_name ='user'

Delete the offending entry from the table

delete from icx_custom_menu_entries where user_id='userid' and prompt is null

And life’s good again!!