Posts for all those DBAs out there who are more fond of Google than Metalink :)

Wednesday, June 24, 2009

OPatch Detects Platform ID 453


Scope:

Enterprise Manager Grid Control - Version: 10.2.0.2 to 10.2.0.4
Sun Solaris SPARC (64-bit)
Solaris Operating System (SPARC) (64-bit)
Can affect any OPatch patching on Solaris SPARC (64-bit)

Issue:

While upgrading Enterprise Manager Grid Control from 10.2.0.2 to 10.2.0.4, I faced an interesting issue that, though not documented as one, I later realized is a bug in the OPatch utility.

As a prerequisite of the upgrade, one needs to apply patch 4329444 to the repository database. But on applying the patch through the OPatch utility, the session fails with the message:

Apply Session failed: ApplySession failed to prepare the system. OUI-67020:Interim Patch '4329444' is not applicable to OH '/oracle/grid/db10g'... Platforms not compatible
OPatch detects Platform ID 453 while this patch supports the following platforms: [ 23: Sun SPARC Solaris (64-BIT), ]
System intact, OPatch will not attempt to restore the system


So here OPatch stops being smart: it fails to recognize the platform ID correctly as 23 and instead picks 453.

Solution:

Before running OPatch, set the OPATCH_PLATFORM_ID environment variable to 23 with the following command:
 export OPATCH_PLATFORM_ID=23

And OPatch is smart again!
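A minimal sketch of the workaround in context (the patch staging path and opatch invocation below are illustrative, not from the original session):

```shell
# Force the platform ID OPatch should have detected on Solaris SPARC (64-bit)
export OPATCH_PLATFORM_ID=23
echo "OPATCH_PLATFORM_ID is now $OPATCH_PLATFORM_ID"

# Then invoke OPatch as usual from the patch staging area, e.g.:
#   cd /stage/4329444                  # illustrative path
#   $ORACLE_HOME/OPatch/opatch apply
```

Only the export matters; it must be set in the same shell session that later runs opatch.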


Simplifying RMAN


Till some time back, RMAN really eluded me. But personally, I think it's overrated (no offence, RMAN fans!!)

Here, I discuss simple steps for setting up RMAN. The settings I mention here are the very basic ones; tweaking them depends heavily on the environment, requirements, and trade-offs.

Let's suppose our requirements are the ones listed below.

1. Retain only 1 backup of each datafile and controlfile.
2. Optimize backups by skipping a file if an identical file, or an identical version of the file, has already been backed up.
3. Back up the database to disk instead of tape.
4. Back up the controlfile along with the datafiles.
5. Back up the controlfile to a specific location on disk, say, /mount/cntFileBkp/
6. Parallelism of, say, 4.
7. Compressed backups.
8. Backups in the form of backup sets, instead of image copies.
9. Each backup piece to be a maximum of 2000 MB.
10. Back up datafiles to a specific location on disk, say, /mount/dataFileBkp/

The above-mentioned requirements can be implemented with the following simple steps:

1. Start the RMAN utility
2. At the RMAN prompt, connect to the target database

RMAN> connect target;

connected to target database: TEST (DBID=528243886)

3. Start configuring to suit your requirements

• For point 1(requirements)
CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
• For point 2
CONFIGURE BACKUP OPTIMIZATION ON;
• For point 3
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
• For point 4
CONFIGURE CONTROLFILE AUTOBACKUP ON;
• For point 5
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO
'/mount/cntFileBkp/%F';

• For points 6, 7 and 8
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
• For points 9 and 10
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/mount/dataFileBkp/%U'
MAXPIECESIZE 2000 M;

4. Confirm your settings by running

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/mount/cntFileBkp/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/mount/dataFileBkp/%U' MAXPIECESIZE 2000 M;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/mount/oracle/testdb/10.2.0/dbs/snapcf_TEST.f'; # default

5. And now we are ready to take an online backup of our DB, under the following assumptions

• DB is in archivelog mode
• Directory structures for backup locations are in place

6. Run the following backup script to take an online backup of your DB

run {
allocate channel backup_disk1 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
allocate channel backup_disk2 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
allocate channel backup_disk3 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
allocate channel backup_disk4 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
backup incremental level 0 cumulative filesperset = 10 as COMPRESSED BACKUPSET tag 'full_db_level0' database;
release channel backup_disk1;
release channel backup_disk2;
release channel backup_disk3;
release channel backup_disk4;
}

7. incremental level 0 here refers to a FULL backup, to be used as the base for subsequent incremental backups
8. cumulative refers to copying all data blocks changed since the most recent level 0 backup (it is meaningful for level 1 backups; at level 0 it has no effect)
9. filesperset refers to the maximum number of datafiles to be included in one backup set
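Point 8 really pays off at level 1: a later cumulative incremental on top of the level 0 above might look like the following sketch (channel count and tag name are illustrative):

```rman
run {
allocate channel backup_disk1 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
allocate channel backup_disk2 type disk format '/mount/dataFileBkp/%U' maxpiecesize 2000 M;
backup incremental level 1 cumulative filesperset = 10 as COMPRESSED BACKUPSET tag 'db_level1_cum' database;
release channel backup_disk1;
release channel backup_disk2;
}
```

Since a cumulative level 1 captures everything changed since the last level 0, a restore needs only the level 0 plus the most recent level 1.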

Now, the catch here is, though it all seems very easy to set up and implement, tweaking it to suit your needs/trade-offs can be quite tricky.
For instance, the purpose of taking a compressed backup is to save disk space, but during compressed backup, the CPU utilization is extremely high.
So basically it’s a trade-off between performance and affordable disk space.

Using the LogMiner Utility


Using the LogMiner utility, you can query the contents of online redo log files and archived log files. And that's not all: if you have the knack for it, you can also use it as a data audit tool and as a tool for sophisticated data analysis (nice, isn't it!)

Now let's concentrate on the very basics. Suppose you have a primary database that is really troubling you by generating a lot of archives, and you need to find out which are the culprit tables/objects. The safest way to use LogMiner is to run it on the standby (assuming that you have one!)

Follow the simple steps below.

1. Logon to the standby
2. Open the database in read-only mode
   SQL> ALTER DATABASE OPEN; 
[Since it’s a standby, this command would open it in Read-Only]
3. On the sql prompt
   SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/mount1/archive/arch1.arc',OPTIONS => DBMS_LOGMNR.NEW);

This statement tells LogMiner: ‘I want to know what’s inside arch1.arc’

   SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/mount1/archive/arch2.arc',OPTIONS => DBMS_LOGMNR.ADDFILE);

‘And this is one more that I want to know about’ [Notice the difference in OPTIONS]

4. Start LogMiner
   SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR;

5. LogMiner exposes all the information gathered through a view called V$LOGMNR_CONTENTS. The most useful columns of this view are
OPERATION [VARCHAR2(32)] - User level SQL operation that made the change: 
o INSERT - Change was caused by an insert statement
o UPDATE - Change was caused by an update statement
o DELETE - Change was caused by a delete statement
o DDL - Change was caused by a DDL statement
o START - Change was caused by the start of a transaction
o COMMIT - Change was caused by the commit of a transaction
o ROLLBACK - Change was caused by a full rollback of a transaction
o LOB_WRITE - Change was caused by an invocation of DBMS_LOB.WRITE
o LOB_TRIM - Change was caused by an invocation of DBMS_LOB.TRIM
o LOB_ERASE - Change was caused by an invocation of DBMS_LOB.ERASE
o SELECT_FOR_UPDATE - Operation was a SELECT FOR UPDATE statement
o SEL_LOB_LOCATOR - Operation was a SELECT statement that returned a LOB locator
o MISSING_SCN - LogMiner encountered a gap in the redo records.
o INTERNAL - Change was caused by internal operations initiated by the database
o XML DOC BEGIN - Beginning of a change to an XMLType column or table
o XML DOC WRITE - Data for an XML document
o XML DOC END - End of the Data for an XML document
o UNSUPPORTED - Change was caused by operations not currently supported by LogMiner

OPERATION_CODE [NUMBER] - Number of the operation code:
o 0 - INTERNAL
o 1 - INSERT
o 2 - DELETE
o 3 - UPDATE
o 5 - DDL
o 6 - START
o 7 - COMMIT
o 9 - SELECT_LOB_LOCATOR
o 10 - LOB_WRITE
o 11 - LOB_TRIM
o 25 - SELECT_FOR_UPDATE
o 28 - LOB_ERASE
o 34 - MISSING_SCN
o 36 - ROLLBACK
o 68 - XML DOC BEGIN
o 70 - XML DOC WRITE
o 71 - XML DOC END
o 255 - UNSUPPORTED

ROLLBACK [NUMBER] - 1 if the redo record was generated because of a partial or a full rollback of the associated transaction, 0 otherwise
TABLE_NAME [VARCHAR2(32)] - Name of the modified table (in case the redo pertains to a table modification)
SEG_NAME [VARCHAR2(256)] - Name of the modified data segment
SEG_TYPE [NUMBER] - Type of the modified data segment:
o 0 - UNKNOWN
o 1 - INDEX
o 2 - TABLE
o 19 - TABLE PARTITION
o 20 - INDEX PARTITION
o 34 - TABLE SUBPARTITION
o All other values – UNSUPPORTED

SEG_TYPE_NAME [VARCHAR2(32)] - Segment type name:
o UNKNOWN
o INDEX
o TABLE
o TABLE PARTITION
o INDEX PARTITION
o TABLE SUBPARTITION
o UNSUPPORTED

ROW_ID [VARCHAR2(18)] - Row ID of the row modified by the change (only meaningful if the change
pertains to a DML).
This will be NULL if the redo record is not associated with a DML.
USERNAME [VARCHAR2(30)] - Name of the user who executed the transaction
OS_USERNAME [VARCHAR2(4000)] - Name of the operating system user
SEQUENCE# [NUMBER] - Sequence number of the redo log that contained the redo record corresponding
to the database change
SQL_REDO [VARCHAR2(4000)] - Reconstructed SQL statement that is equivalent to
the original SQL statement that made the change. LogMiner does not generate SQL redo
for temporary tables. In such a case, this column will contain the string "/* No
SQL_REDO for temporary tables */".
SQL_UNDO [VARCHAR2(4000)] - Reconstructed SQL statement that can be used to
undo the effect of the original statement that made the change. DDL statements have
no corresponding SQL_UNDO. LogMiner does not generate SQL undo for temporary tables.
In such a case, this column will contain the string "/* No SQL_UNDO for temporary
tables */".
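With a LogMiner session running, finding the culprit tables mentioned earlier boils down to aggregating over V$LOGMNR_CONTENTS. A minimal sketch (the DML-only filter is one reasonable choice, not the only one):

```sql
-- Which objects generated the most redo records in the mined archives?
SELECT seg_owner, table_name, operation, COUNT(*) AS redo_records
  FROM v$logmnr_contents
 WHERE operation IN ('INSERT', 'UPDATE', 'DELETE')
 GROUP BY seg_owner, table_name, operation
 ORDER BY redo_records DESC;
```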

6. End the Log Miner session after your queries are answered
SQL> EXECUTE DBMS_LOGMNR.STOP_LOGMNR

7. Remember that the contents of the view exist only for the current session, not for any subsequent sessions. So you have to start LogMiner each time you wish to query the view.

Tuesday, June 23, 2009

The Notorious NullPointerException

Personally, I'm really fond of errors...but this one can get a little messy!! I'll be picking out one particular case here.

Scope - Oracle Applications 11.5.10.2

Issue - One particular user gets the NullPointerException right after login.

Analysis - The below query

select * from icx_custom_menu_entries where user_id = (select user_id from fnd_user where user_name ='user')

returns two entries corresponding to the user, where one of the entries has the PROMPT and URL fields null (yes, the culprits!)

A similar issue has been documented in bug 4866881; this is a data corruption issue.

Solution - Create a backup of table icx_custom_menu_entries

create table icx.icx_custom_menu_entries_bak as select * from icx_custom_menu_entries

Extract the user id of the concerned user

select user_id from fnd_user where user_name ='user'

Delete the offending entry from the table (substituting the user_id extracted above), and then issue a commit

delete from icx_custom_menu_entries where user_id = <user_id> and prompt is null
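Put together, the fix can be sketched as a single SQL*Plus session (the user name 'user' is a placeholder, as above):

```sql
-- 1. Back up the table first
create table icx.icx_custom_menu_entries_bak as
  select * from icx.icx_custom_menu_entries;

-- 2. Delete the corrupt entries (null PROMPT) for the affected user
delete from icx.icx_custom_menu_entries
 where user_id = (select user_id from fnd_user where user_name = 'user')
   and prompt is null;

-- 3. Make the change permanent
commit;
```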

And life’s good again!!