Wednesday, July 10, 2013

Oracle 12c .. now available for Windows


Today, I was able to see that the download is available for Windows. It provides another round of testing the new features on the Windows environment too.


Enjoy 12c and the lots of new features coming your way for your database systems..!!

Monday, July 1, 2013

Oracle 12c RMAN Features.. !!


Many DBAs already know that the Oracle 12c download is available now. Good to see that many would have already completed testing of the 12c new features by this point (exceptional people like Tanel Poder, Jonathan Lewis and Richard Foote). I would like to quote some important RMAN features I was interested in.

SYSBACKUP Privilege (New) – It’s nice to see that a new privilege has been introduced in order to handle backups/recoveries with respect to PDBs.

select privilege from dba_sys_privs where grantee='SYSBACKUP';

SELECT ANY DICTIONARY – this privilege, included in the grant, enables backups taken from the root to cover the CDB as well as PDB information.

BACKUP DATABASE ROOT;                 -- CDB root
BACKUP PLUGGABLE DATABASE sales, hr;  -- PDBs
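For instance, in 12c a user granted SYSBACKUP can log in for backup duties without needing full SYSDBA rights. A sketch (the user bkpadm and service cdb1 are hypothetical names):

```sql
-- SQL*Plus: connect with the new administrative privilege
SQL> CONNECT bkpadm AS SYSBACKUP

-- RMAN: the same privilege in the connect string
$ rman target '"bkpadm@cdb1 AS SYSBACKUP"'
```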


SQL Interface Improvements (in RMAN) – We can now run SQL statements directly at the RMAN prompt.
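For example, in 12c the following should work straight from the RMAN prompt, without wrapping the statement in the older sql "..." syntax (the views queried are just the standard dictionary views):

```sql
-- 12c: plain SQL and DESCRIBE at the RMAN prompt
RMAN> SELECT name, open_mode FROM v$database;
RMAN> DESCRIBE v$datafile;
```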


Active Database Duplication Improvements – Oracle introduced two different methods to perform the task (an enhancement to the existing active duplication method):


the push-based and the pull-based method.


push-based method – This is the algorithm used up to the Oracle 11g versions, where image copies are used while performing active database duplication. The actual datafile copy – a snapshot of the datafile taken when the task is initiated, including its empty/partially filled blocks – is shipped across.


pull-based method – RMAN uses backup sets when we initiate active database duplication over SQL*Net. Since backup sets are used, features such as compression can be applied during the copy, which reduces the duplication time.
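As a sketch (the auxiliary database name dup is hypothetical), asking for a compressed backup set in 12c should drive the duplication down the pull-based path:

```sql
RMAN> DUPLICATE TARGET DATABASE TO dup
2>   FROM ACTIVE DATABASE
3>   USING COMPRESSED BACKUPSET;
```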


Refer to the Oracle documentation on active database duplication.

Tuesday, June 11, 2013

ORACLE_BASE not set in environment …!


Today, I will talk about the above issue and a minor fundamental concept related to it. At first glance, given the basic error, we check our environment variables. Glad everyone does that; but when it comes to an Oracle RAC environment, we also need to check the srvctl environment variables.

$ srvctl getenv database -d test
test:

If it is not set, then we need to set it as below:

srvctl setenv database -d test -t "ORACLE_BASE=/u01/app/oracle"

In spite of setting the above parameter you might still face the issue; you may then proceed with removing and re-adding the srvctl entries, just in case that resolves it. If it gets resolved, well and good; otherwise the trouble starts. One of my colleagues came across the same issue on one of the development environments. I verified the basic stuff described above, and the issue still persisted (I verified the same using sqlplus, and it works without any issues there).


Next, I reviewed the Grid Infrastructure agent logs ($GRID_HOME/log/server_name/agent/crsd/oraagent_oracle), which provide valuable information about instance startup and the communication with the srvctl utility.

2013-06-10 07:20:12.357: [ora.test.db][2052462912] {1:1116:25189} [start] Utils::getCrsHome crsHome /u01/app/11.2.0.3/grid
2013-06-10 07:20:12.357: [ora.test.db][2052462912] {1:1116:25189} [start] clsnInstConnection::makeConnectStr LIBRARY_PATH1 ,LD_LIBRARY_PATH=
2013-06-10 07:20:12.357: [ora.test.db][2052462912] {1:1116:25189} [start] makeConnectStr = (DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/app/oracle/product/11.2.0.3_test/bin/oracle)(ARGV0=oracletest1)(ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0.3_test,ORACLE_SID=test1,LD_LIBRARY_PATH=,ORACLE_BASE=/u01/app/oracle')(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=test1)))
2013-06-10 07:20:12.358: [ora.test.db][2052462912] {1:1116:25189} [start] Container:start oracle home /u01/app/oracle/product/11.2.0.3_test
2013-06-10 07:20:12.359: [ora.test.db][2052462912] {1:1116:25189} [start] InstConnection::connectInt: server not attached

If we observe carefully, srvctl is trying to communicate with the Oracle background processes (and the instance hangs prior to “alter database mount”), since we never receive confirmation from the Oracle process about the mounting of the instance.



SRVCTL is a tool that talks to the database through the Oracle processes (it needs the listener for that communication – in a RAC environment the local_listener parameter enables it), whereas sqlplus communicates directly, without the listener’s help (we can confirm the same by tracing with the gdb debugger).


So, once I had digested the error information – the communication between srvctl and the Oracle background process was failing (pmon registers with the local_listener, which listens on a port) – I verified the spfile. The local_listener parameter was missing, so I created a “local_listener” entry in the spfile and the corresponding entry in tnsnames.ora, and started the instance with srvctl.
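A minimal sketch of the fix (the alias LISTENER_TEST1 is a hypothetical name – adjust host and port for your environment):

```sql
-- point the instance at its local listener alias
ALTER SYSTEM SET local_listener='LISTENER_TEST1' SCOPE=spfile SID='test1';
```

with a matching tnsnames.ora entry along the lines of LISTENER_TEST1 = (ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521)).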


This time it worked; the cluster instance came up without any issues.


I hope this helps if someone comes across this type of issue during instance startup at the RAC level.

Friday, May 31, 2013

DBCA Silent Installation..PRCR-1070


Today I would like to share a minor issue, perhaps interesting and a lesson learned. I was trying a DBCA silent installation with the help of a template (RAC DB creation on a two-node cluster). I’m not sure how many have tried the silent installation (many would prefer a response-file based installation). Please find below the script (parameters) I used for the silent installation.

dbca -silent -createDatabase \
-templateName /u01/app/oracle/product/11.2.0.3/assistants/dbca/templates/test_new.dbt \
-gdbName test.domain.com \
-sid test \
-sysPassword change_on_install \
-systemPassword manager \
-emConfiguration NONE \
-datafileDestination +DATA \
-recoveryAreaDestination +RECO \
-storageType ASM \
-diskGroupName +DATA \
-recoveryGroupName +RECO \
-nodelist node001,node002 \
-characterSet AL32UTF8 \
-nationalCharacterSet AL16UTF16 \
-variables ORACLE_BASE=/u01/app/oracle

If you observe carefully, I provided the disk group locations with a plus sign. During my first execution, the database creation “completed” within seconds, resulting in the error stack below.

Creating and starting Oracle instance
PRCR-1070 : Failed to check if resource ora.+DATA.dg is registered
Invalid resource name
DBCA_PROGRESS : 100%

When I verified the cluster resource, it mapped to “ora.DATA.dg”; my issue was the “+” sign. Once I removed the “+” sign, dbca completed successfully. Further, I usually spend time reviewing and understanding what goes on behind the scenes; as usual I reviewed the trace log files and found some useful things to digest. Please find them below,
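With the “+” dropped, the two disk group flags look like this – a sketch; only these two lines change, everything else in the command above stays the same:

```shell
-diskGroupName DATA \
-recoveryGroupName RECO \
```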

startup nomount pfile='/u01/app/oracle/product/11.2.0.3/dbs/initDBUA3149591.ora'

If you observe the naming convention carefully, this is the temporary parameter file Oracle creates during the DB creation process.

[main] [ 2013-05-30 06:32:06.321 PDT ] [ASMUtils.loadDiskGroups:783]  sql to be executed:=select NAME||'|'||round(TOTAL_MB)||'|'||round(USABLE_FILE_MB)||'|'||nvl(TYPE, 'DBCA_NULL')||'|'||STATE from v$asm_diskgroup order by NAME using sqlengine
[main] [ 2013-05-30 06:32:07.507 PDT ] [SQLEngine.done:2189] Done called
[main] [ 2013-05-30 06:32:07.508 PDT ] [ASMUtils.loadDiskGroups:818] Loading the diskgroups...
[main] [ 2013-05-30 06:32:07.508 PDT ] [ASMUtils.loadDiskGroups:826] diskgroup: DATA|121595904|40630566|NORMAL|MOUNTED
[main] [ 2013-05-30 06:32:07.510 PDT ] [ASMUtils.loadDiskGroups:826] diskgroup: DBFS|4173120|1936420|NORMAL|MOUNTED
[main] [ 2013-05-30 06:32:07.510 PDT ] [ASMUtils.loadDiskGroups:826] diskgroup: RECO|182401632|12881308|NORMAL|MOUNTED



Internally, DBCA performs checks on the disk group sizes and types.

[main] [ 2013-05-30 06:32:08.852 PDT ] [SQLEngine.doSQLSubstitution:2414]  The substituted sql statement:=select t1.name from v$asm_template t1, v$asm_diskgroup t2 where t1.group_number=t2.group_number and t2.name='DATA'
[main] [ 2013-05-30 06:32:08.852 PDT ] [ASMUtils.setTemplates:898] sql to be executed:=select t1.name from v$asm_template t1, v$asm_diskgroup t2 where t1.group_number=t2.group_number and t2.name='DATA'
[main] [ 2013-05-30 06:32:09.987 PDT ] [Host.executeSteps:5349] disk group DATA checking if mounted
[main] [ 2013-05-30 06:32:09.987 PDT ] [SQLEngine.doSQLSubstitution:2414] The substituted sql statement:=select DATABASE_COMPATIBILITY from v$asm_diskgroup where name='DATA'
[main] [ 2013-05-30 06:32:09.987 PDT ] [ASMUtils.getDiskGroupCompatibility:1252] sql to be executed:=select DATABASE_COMPATIBILITY from v$asm_diskgroup where name='DATA'



[Thread-127] [ 2013-05-30 06:32:26.201 PDT ] [SQLEngine.setSpool:1968]  Setting Spool  = /u01/app/oracle/cfgtoollogs/dbca/test/CreateDB.log
[Thread-127] [ 2013-05-30 06:32:26.201 PDT ] [SQLEngine.setSpool:1969] Is spool appendable? --> true
[Thread-127] [ 2013-05-30 06:32:26.201 PDT ] [DBEntryStep.executeImpl:254] undoMgmt=null
[Thread-127] [ 2013-05-30 06:32:26.202 PDT ] [DBEntryStep.executeImpl:255] undoMgmt=true
[Thread-127] [ 2013-05-30 06:32:26.202 PDT ] [DBEntryStep.executeImpl:274] undoTBS=UNDOTBS1
[Thread-127] [ 2013-05-30 06:32:30.907 PDT ] [DBEntryStep.executeImpl:390] undoTBS create SQL=CREATE SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
[Thread-127] [ 2013-05-30 06:32:30.908 PDT ] [DBEntryStep.executeImpl:439] 0th redoLogText = GROUP 1 SIZE 2097152K
[Thread-127] [ 2013-05-30 06:32:30.908 PDT ] [DBEntryStep.executeImpl:439] 1th redoLogText = GROUP 2 SIZE 2097152K
[Thread-127] [ 2013-05-30 06:32:30.908 PDT ] [DBEntryStep.executeImpl:456] CREATEDB without passwords = CREATE DATABASE "test"
MAXINSTANCES 32
MAXLOGHISTORY 1
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE SIZE 700M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE SIZE 600M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 20M AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 SIZE 2097152K,
GROUP 2 SIZE 2097152K
 

The strange thing, if you verify, is that dbca creates the database with the option “CREATEDB without passwords”. At the end of the day, it was good to pick up many lessons, and I hope it takes my blog visitors a step further in knowledge sharing.


Happy Reading.. !!


Thursday, May 23, 2013

Rman Duplicate (RAC 2-Node)… ASM (same storage/disk group)….


I would like to share some of the issues and the points we should concentrate on while duplicating production/development databases on the same server and same shared storage. Most administrators have performed the task, but many upcoming DBAs still haven’t got a good hold of the RMAN stuff; these are just lessons learned. I tried to play around on the server because I knew the consequences (don’t play as I do – do take your management’s approval).

Initially, I created a static listener and a tns entry for the duplicate database I intended to build on the same cluster.

### dup ####       
dup,dup.test.com =
  (DESCRIPTION =
   (LOAD_BALANCE=ON)
    (ADDRESS_LIST =
     (ADDRESS=(PROTOCOL=TCP)(HOST=node4-vip)(PORT=1536))
     (ADDRESS=(PROTOCOL=TCP)(HOST=node5-vip)(PORT=1536))
    )
   (CONNECT_DATA =
     (SERVICE_NAME = dup)
   )
)
### dup ####       

##########################################################
# listener_dup
##########################################################

listener_dup =
(ADDRESS_LIST=
  (ADDRESS=(PROTOCOL=tcp)
           (HOST=node4-vip.us.oracle.com)
           (QUEUESIZE=256)
           (PORT=1536)
  )
)
sid_list_listener_dup =
(SID_LIST =
  (SID_DESC= (GLOBAL_DBNAME = dup.test.com)
             (SID_NAME=dup1)
             (SDU=32768)
             (ORACLE_HOME =/u01/app/oracle/product/dup )
  )
)

########################################################

At first glance, I thought the entries were good and didn’t have any issue, and I continued with the rman duplicate task. As the next step we need to create an initORACLE_SID.ora file to start the auxiliary instance in nomount mode.

DB_NAME=DUP
DB_UNIQUE_NAME=DUP
DB_FILE_NAME_CONVERT='+DATA/PROD','+DATA/DUP','+RECO/PROD','+RECO/DUP'
LOG_FILE_NAME_CONVERT='+DATA/PROD','+DATA/DUP','+RECO/PROD','+RECO/DUP'
sga_target=18G
pga_aggregate_target=4G
diagnostic_dest=/u02/diag/dup

I started the instance and continued with the rman duplicate (note – I’m using the 11.2.0.3 binaries):

rman target sys/sys1@prod auxiliary /

You will face an issue as below:

Starting Duplicate Db at 22-MAY-13
released channel: prim1
released channel: dup1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/22/2013 02:21:15
RMAN-05501: aborting duplication of target database
RMAN-06217: not connected to auxiliary database with a net service name

Never try to connect to the auxiliary instance using OS (host) authentication – you will face this issue. Next, I continued with the below:

rman target sys/sys1@prod auxiliary sys/sys1@dup (note – copy the password file from production first, else it won’t work)

Now this time it failed with a “no listener” issue. I reviewed my tnsnames.ora and listener.ora (and bounced the listener) many times, and the issue still persisted; I couldn’t figure out the cause. Then, if you carefully observe,

the service_name in tnsnames.ora and the global_dbname in listener.ora mismatch on the domain name, which caused me to pause my work for a while. (To be honest, I changed my entries to upper case and made sure both matched; then I was able to proceed further.)
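In other words, the two entries have to agree exactly, domain included. A sketch based on the entries above (I settled on upper case for both):

```
# listener.ora
(GLOBAL_DBNAME = DUP.TEST.COM)

# tnsnames.ora
(SERVICE_NAME = DUP.TEST.COM)
```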

Now that I had resolved my connection issue, I continued with the task, and the next issue arrived:

Errors in file /u02/oraout/dup/diag/rdbms/dup/dup1/trace/dup1_ora_18406.trc:
ORA-19505: failed to identify file "+DATA/dup/controlfile/control_01.ctl"
ORA-17502: ksfdcre:3 Failed to create file +DATA/dup/controlfile/control_01.ctl
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-15204: database version 11.2.0.0.0 is incompatible with diskgroup DATA

My compatible parameter mapped to a lower version (since I hadn’t placed the parameter in my auxiliary init.ora file at all); to resolve this I added “compatible=11.2.0.3.0” to the initialization parameters and continued with the task.
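The fix is a one-line addition to the auxiliary init.ora created earlier (the version string matches my 11.2.0.3 binaries):

```
compatible=11.2.0.3.0
```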

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/22/2013 03:01:48
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary file name +DATA/admin/prod/db/test_02.dbf conflicts with a file used by the target database
RMAN-05001: auxiliary file name +DATA/admin/prod/db/users_06.dbf conflicts with a file used by the target database

Now, the cause of this issue was that I had missed one of the path pairs (which the production datafiles map to) in db_file_name_convert, due to which the task was aborted.

Note – in case anyone misses entries in these parameters (db_file_name_convert or log_file_name_convert), rman takes care of aborting the process instead of overwriting/corrupting production datafiles.

Now, I have updated my db_file_name_convert parameter as below

DB_FILE_NAME_CONVERT='+DATA/PROD','+DATA/DUP','+RECO/PROD','+RECO_UBA/DUP','+DATA/admin/prod','+DATA/admin/dup'

 

RMAN> run {
2> allocate channel prim1 device type disk;
3> allocate auxiliary channel dup1 device type disk;
4> duplicate database to dup from active database;
5> }

Let me outline the actual process of the duplication.

Task 1 = spfile and control file
    restart of the instance (opens in nomount mode)
    sets db_name and db_unique_name in the spfile
    restart of the instance (opens in nomount mode)
    takes a backup of the controlfile from production
    now mounts the database with that controlfile

 

Task 2 = datafile copy

    set newname for datafile n (setting the new location in the current rman session)
    backup as copy reuse datafile 1 auxiliary format
    Note - AS COPY performs an image copy of the datafile,
    so it copies from prod to dup over the network (via SQL*Net)
    sql 'alter system archive log current';
    Note - the last step makes sure we recover our db up to the last archived online redo log file.


Task 3 = archive files copy and re-use
    backup as copy reuse
       archivelog like  "+RECO/prod/archivelog/2013_05_22/thread_2_seq_248.126795.816051805"
       auxiliary format "/u01/app/oracle/product/dup/dbs/arch2_248_809881982.dbf" 
       .....
       .....
       ...
       catalog clone archivelog "/u01/app/oracle/product/dup/dbs/arch2_248_809881982.dbf" (updating information to controlfile)
    switch clone datafile all;

Task 4 = Last task is recovery

    set until scn  13239404328241;
       recover
       clone database
        delete archivelog ;

    archived log for thread 1 with sequence 303 is already on disk as file /u01/app/oracle/product/dup/dbs/arch1_303_809881982.dbf
    archived log for thread 2 with sequence 249 is already on disk as file /u01/app/oracle/product/dup/dbs/arch2_249_809881982.dbf
    archived log file name=/u01/app/oracle/product/dup/dbs/arch1_303_809881982.dbf thread=1 sequence=303
    archived log file name=/u01/app/oracle/product/dup/dbs/arch2_249_809881982.dbf thread=2 sequence=249
    media recovery complete, elapsed time: 00:00:02
    Finished recover at 22-MAY-13
    Oracle instance started

Note - we can control the archived log destination using the parameter "db_recovery_file_dest" or "log_archive_dest_n"; otherwise it maps to the $ORACLE_HOME/dbs folder, which is what we have seen here.
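A sketch of pointing the archived logs at the fast recovery area instead of $ORACLE_HOME/dbs (the +RECO disk group is from this environment; the size is an arbitrary example, and the size parameter must be set before the destination):

```sql
ALTER SYSTEM SET db_recovery_file_dest_size=100G SCOPE=both;
ALTER SYSTEM SET db_recovery_file_dest='+RECO' SCOPE=both;
```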

Further, I would like to show the importance of alert log monitoring. If you see the messages below:

Reconfiguration complete
ORACLE_BASE from environment = /u01/app/oracle
Wed May 22 03:20:07 2013
alter database mount
Set as converted control file due to db_unique_name mismatch
Changing di2dbun from prod to DUP
Successful mount of redo thread 1, with mount id 1039698535
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: alter database mount
Wed May 22 03:20:17 2013
Starting background process ASMB

If you try to understand it: during the execution of Task 1, Oracle sets the db_name and db_unique_name at the rman level, and the same is written to the alert log; the alert log wording differs, but it is informative. So do monitor the alert log and trace files during the task execution.

I hope the post helps you visualize things in a different way…!!

Let me know your comments and things which you would like to see in upcoming posts.

Thursday, May 16, 2013

V$Log and V$LogFile vs GV$Log and GV$logfile…


Today, I will discuss the basic difference between the gv$log and v$log views, which is known to most experts who have been working on Oracle for the past decades. It’s just a different view/presentation of the same information. Coming back to the topic, v$log and v$logfile provide information about the online redo log files. If you check carefully, v$log provides the thread# – that is, information about the other nodes of the RAC.

If you check the base query of V$LOG – it fetches from GV$LOG, which in turn maps to x$kccle and x$kccrt.

 

GV$LOG:

select le.inst_id, le.lenum, le.lethr, le.leseq, le.lesiz*le.lebsz, le.lebsz, ledup,
       decode(bitand(le.leflg,1),0,'NO','YES'),
       decode(bitand(le.leflg,24),
              8,'CURRENT',
              16,'CLEARING',
              24,'CLEARING_CURRENT',
              decode(sign(leseq),0,'UNUSED',
                     decode(sign((to_number(rt.rtckp_scn)-to_number(le.lenxs))*
                                 bitand(rt.rtsta,2)*(64-bitand(rtsta,64))),
                            -1,'ACTIVE','INACTIVE'))),
       to_number(le.lelos),
       to_date(le.lelot,'MM/DD/RR HH24:MI:SS','NLS_CALENDAR=Gregorian'),
       to_number(le.lenxs),
       to_date(le.lenxt,'MM/DD/RR HH24:MI:SS','NLS_CALENDAR=Gregorian')
from x$kccle le, x$kccrt rt
where le.ledup!=0 and le.lethr=rt.rtnum and le.inst_id = rt.inst_id

 

V$LOG:

select GROUP#, THREAD#, SEQUENCE#, BYTES, BLOCKSIZE, MEMBERS, ARCHIVED, STATUS,
       FIRST_CHANGE#, FIRST_TIME, NEXT_CHANGE#, NEXT_TIME
from GV$LOG where inst_id = USERENV('Instance')


GV$LOGFILE:

select inst_id, fnfno,
       decode(fnflg,0,'',
              decode(bitand(fnflg,1),1,'INVALID',
              decode(bitand(fnflg,2),2,'STALE',
              decode(bitand(fnflg,4),4,'DELETED',
              decode(bitand(fnflg,8+32),8,'',32,'',40,'','UNKNOWN'))))),
       decode(bitand(fnflg,8),0,'ONLINE','STANDBY'),
       fnnam,
       decode(bitand(fnflg,32),0,'NO','YES')
from x$kccfn where fnnam is not null and fntyp=3

 

V$LOGFILE:

select GROUP#, STATUS, TYPE, MEMBER, IS_RECOVERY_DEST_FILE
from GV$LOGFILE where inst_id = USERENV('Instance')

From the above queries we see the table x$kccfn – it provides information about all the files maintained in the database.

SYS@test> select count(*), FNTYP from x$kccfn group by FNTYP;

  COUNT(*)      FNTYP
---------- ----------
        63          4  -- data files
        32          3  -- log files
         1          7  -- temp file
         1          0  -- snapshot



So, if we verify, v$logfile and gv$logfile refer to the same information; however, gv$logfile returns duplexed rows, as below:

SYS@test> select count(*) from v$logfile;

COUNT(*)
----------
32
SYS@test> select count(*) from gv$logfile;

COUNT(*)
----------
64



Let me execute the base query and verify its result set against the above output:

SYS@test> select count(*)
  2  from x$kccfn
  3  where fnnam is not null
  4  and fntyp=3;

COUNT(*)
----------
32


The query works fine, so how come the result from gv$logfile is misleading (duplexed)? The reason is that when we query a gv$ view, the query is executed on the other instances as well and the results are summed up, which makes the output misleading. Coming to GV$LOG, it uses x$kccle and x$kccrt:

x$kccle – provides information about the log file entries


x$kccrt – provides information about the redo threads


First instance
================
SYS@test1> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            +RECO
Oldest online log sequence     59
Next log sequence to archive   66
Current log sequence           66

Second Instance
===============
SYS@test2>  archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            +RECO
Oldest online log sequence     48
Next log sequence to archive   55
Current log sequence           55

   THREAD#     GROUP#  SEQUENCE# STATUS     MEMBER
---------- ---------- ---------- ---------- ------------------------------------------------------------
         2         15         55 CURRENT    +DATA/test/onlinelog/group_15.1269.815360505
         2         15         55 CURRENT    +RECO/test/onlinelog/group_15.18794.815360507
         1         12         66 CURRENT    +DATA/test/onlinelog/group_12.1279.815362377
         1         12         66 CURRENT    +RECO/test/onlinelog/group_12.16544.815362379

Let’s try to fetch the same details from x$kccrt:

   INST_ID      RTNUM      RTSTA RTCKP_SCN      RTCKP_RBA_SEQ RTCKP_RBA_BNO RTCKP_RBA_BOF      RTLFH      RTLFT RTSID
---------- ---------- ---------- -------------- ------------- ------------- ------------- ---------- ---------- ------
         1          1        527 13219244105416            66             2            16          1         13 test1
         1          2        527 13219244105429            55             2            16          5         17 test2

If we verify the column RTCKP_RBA_SEQ, it maps to the archive log list output and to the v$log (CURRENT) record. But the question still stands: why do we find the discrepancy when we query GV$LOG? Look at the query’s “from” clause:


from x$kccle le, x$kccrt rt where le.ledup!=0 and le.lethr=rt.rtnum and  le.inst_id = rt.inst_id

Neither table has a column to map or distinguish records across the instances; both hold database-wide information (a logical container). So the query executes on both nodes, and the dynamic behavior of the x$ tables results in duplexed records.
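A quick way to make the duplexing visible (assuming a two-node RAC like the one above) is to group the gv$ view by inst_id:

```sql
-- each instance contributes its own full copy of the log file entries
select inst_id, count(*) from gv$logfile group by inst_id order by inst_id;
```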

SYS@test> select distinct INST_ID from x$kccfn group by INST_ID;

INST_ID
----------
2

SYS@test> select distinct INST_ID from x$kccfn group by INST_ID;

INST_ID
----------
1

If you observe, I executed the same query on both nodes and got different results, and that explains the duplexed records and the misleading output from the GV$LOG and GV$LOGFILE views.




I hope you find it useful …!!

 
© 2013 Oracle Internals from basics to advanced....