Data … as usual

All things about data by Laurent Leturgez

Monthly Archives: November 2011

Trace Oracle CBO computations for a specific sql_id

In this post, I will explain how to trace CBO computations (aka the 10053 event) for a specific sql_id, and from another session.

To do this, we need to know two things:

1) How do we trace another session? For this, I will use the undocumented Oracle tool “oradebug”. More precisely, I will use the new event declaration syntax, which is no longer based on event numbers.

2) How do we trace CBO computations for a specific sql_id? Here again, the new event declaration syntax does the job (more details in the demonstration below, and see the session-level variant just after this list).
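
As a side note, the same component-based syntax also works at session level with ALTER SESSION, if you only want to trace your own session. A quick sketch (replace <sql_id> with the statement you want to trace):

SQL> alter session set events 'trace[RDBMS.SQL_Optimizer.*][sql:<sql_id>]';
SQL> -- and to disable it:
SQL> alter session set events 'trace[RDBMS.SQL_Optimizer.*] off';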

To demonstrate this trick, I will consider two sessions:

– The first session (S1) is logged in as an application user named LAURENT. This user owns two tables, T1 and T4, and we only want to trace a specific SQL query (select count(*) from t4 where id between 500 and 550;).

– The second session (S2) is logged in as the SYS user, who will launch the oradebug commands.

* S1 (logged in as LAURENT)

SQL> select count(*) from t4 where id between 500 and 550;
COUNT(*)
----------
 51

* S2 (logged in as SYS): query the dictionary to obtain the sql_id associated with the SQL query:

SQL> select sql_id,sql_text from v$sql
 2 where sql_text like 'select count(*) from t4 where id between 500 and 550%';
SQL_ID        SQL_TEXT
------------- --------------------------------------------------------------------------------
2zg40utr7a08n select count(*) from t4 where id between 500 and 550

* S1 (logged in as LAURENT): obtain the Oracle PID and system PID (spid) of the session we will trace. This information will be used by oradebug in the next step.

SQL> select pid,spid from v$process
 2 where addr=(select paddr from v$session
 3 where sid=(select sid from v$mystat where rownum=1));
PID        SPID
---------- ---------
25         4850

* S2 (logged in as SYS): we will use the new oradebug event syntax to trace CBO computations for our specific sql_id (trace[RDBMS.SQL_Optimizer.*]):

SQL> ALTER SYSTEM FLUSH SHARED_POOL;
SQL> -- Set the Oracle PID to trace, and verify it by cross-checking the system pid
SQL> oradebug setorapid 25
Oracle pid: 25, Unix process pid: 4850, image: oracle@oel (TNS V1-V3)
SQL> -- remove the trace file size limit
SQL> oradebug unlimit
Statement processed.
SQL> -- trace SQL_Optimizer computations for a specific sql (here's our sql_id)
SQL> oradebug event trace[RDBMS.SQL_Optimizer.*][sql:2zg40utr7a08n]
Statement processed.
SQL> -- obtain the trace file name
SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4850.trc

NB : Flushing the shared pool is mandatory if a shared cursor for the statement to trace is already in the shared pool, because the optimizer trace is only produced on a hard parse.
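
If you don't want to flush the entire shared pool, an alternative is to purge only the cursor of interest (this is, roughly, the trick described in the update at the end of this post). A minimal sketch, assuming DBMS_SHARED_POOL is installed (run @?/rdbms/admin/dbmspool.sql as SYS if it isn't):

declare
  l_name varchar2(64);
begin
  -- 'address,hash_value' identifies the cursor; the 'C' flag marks it as a cursor
  select address || ',' || hash_value into l_name
  from v$sqlarea
  where sql_id = '2zg40utr7a08n';
  sys.dbms_shared_pool.purge(l_name, 'C');
end;
/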


* S1 (logged in as LAURENT): execute several SQL statements in the session, including our specific statement:

S1 (executes sql_id 2zg40utr7a08n once, along with other SQL statements):
SQL> select count(*) from t4 where id between 500 and 550;
COUNT(*)
----------
 51

SQL> select count(*) from t4 ;
COUNT(*)
----------
 300000

SQL> select count(*) from t1;
COUNT(*)
----------
 294958

Finally, open the generated trace file: it only contains the CBO computations and statistics for our specific sql_id:

[oracle@oel ~]$ vi /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4850.trc
Registered qb: SEL$1 0xe47325b8 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
 signature (): qb_name=SEL$1 nbfros=1 flg=0
 fro(0): flg=4 objn=78460 hint_alias="T4"@"SEL$1"
SPM: statement not found in SMB
**************************
Automatic degree of parallelism (ADOP)
**************************
Automatic degree of parallelism is disabled: Parameter.
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION
******************************************
----- Current SQL Statement for this session (sql_id=2zg40utr7a08n) -----
select count(*) from t4 where id between 500 and 550
*******************************************

.../...

Query Block Registry:
SEL$1 0xe47325b8 (PARSER) [FINAL]
:
 call(in-use=13920, alloc=49184), compile(in-use=88336, alloc=152104), execution(in-use=6016, alloc=8088)
End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================

Update : Bertrand Drouvot has blogged a clever way to flush a specific sql_id before generating its CBO computation trace file. See link : http://bdrouvot.wordpress.com/2013/09/16/flush-a-single-sql-statement-and-capture-a-10053-trace-for-it/


Monitor how your table columns are used

If you want to know how your columns are used when they are involved in SQL queries, you can use a specific function of the DBMS_STATS package: REPORT_COL_USAGE.

This function reports which operations have been executed on your table columns, e.g.:

SQL> set lines 150 pages 400 long 20000000 longchunksize 50000
SQL> select dbms_stats.report_col_usage('SH','SALES') from dual;
DBMS_STATS.REPORT_COL_USAGE('SH','SALES')
-------------------------------------------------------------------------------
LEGEND:
.......
EQ : Used in single table EQuality predicate
RANGE : Used in single table RANGE predicate
LIKE : Used in single table LIKE predicate
NULL : Used in single table is (not) NULL predicate
EQ_JOIN : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER : Used in single table FILTER predicate
JOIN : Used in JOIN predicate
GROUP_BY : Used in GROUP BY expression
...............................................................................
###############################################################################
COLUMN USAGE REPORT FOR SH.SALES
................................
1. AMOUNT_SOLD : EQ RANGE
###############################################################################

In the previous example, we can see that the AMOUNT_SOLD column has been accessed with equality and range predicates.

If I execute a query that filters on the PROD_ID column, the report is updated:

SQL> select count(*) from sh.sales where prod_id=400;
COUNT(*)
----------
 0
SQL> select dbms_stats.report_col_usage('SH','SALES') from dual;
DBMS_STATS.REPORT_COL_USAGE('SH','SALES')
--------------------------------------------------------------------------------
LEGEND:
.......
EQ : Used in single table EQuality predicate
RANGE : Used in single table RANGE predicate
LIKE : Used in single table LIKE predicate
NULL : Used in single table is (not) NULL predicate
EQ_JOIN : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER : Used in single table FILTER predicate
JOIN : Used in JOIN predicate
GROUP_BY : Used in GROUP BY expression
...............................................................................
###############################################################################
COLUMN USAGE REPORT FOR SH.SALES
................................
1. AMOUNT_SOLD : EQ RANGE
2. PROD_ID : EQ
###############################################################################

The function also reports the joins performed on the table columns (have a look at the legend, which lists everything the function can report):

SQL> select count(*) from sh.sales s, sh.products p where s.prod_id=p.prod_id;
COUNT(*)
----------
 918843
SQL> select dbms_stats.report_col_usage('SH','SALES') from dual;
DBMS_STATS.REPORT_COL_USAGE('SH','SALES')
--------------------------------------------------------------------------------
LEGEND:
.......
EQ : Used in single table EQuality predicate
RANGE : Used in single table RANGE predicate
LIKE : Used in single table LIKE predicate
NULL : Used in single table is (not) NULL predicate
EQ_JOIN : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER : Used in single table FILTER predicate
JOIN : Used in JOIN predicate
GROUP_BY : Used in GROUP BY expression
...............................................................................
###############################################################################
COLUMN USAGE REPORT FOR SH.SALES
................................
1. AMOUNT_SOLD : EQ RANGE
2. PROD_ID : EQ EQ_JOIN
###############################################################################

If you want to reset usage statistics, use the undocumented procedure RESET_COL_USAGE:

SQL> exec dbms_stats.reset_col_usage('SH','SALES');
PL/SQL procedure successfully completed.
SQL> select dbms_stats.report_col_usage('SH','SALES') from dual;
DBMS_STATS.REPORT_COL_USAGE('SH','SALES')
--------------------------------------------------------------------------------
LEGEND:
.......
EQ : Used in single table EQuality predicate
RANGE : Used in single table RANGE predicate
LIKE : Used in single table LIKE predicate
NULL : Used in single table is (not) NULL predicate
EQ_JOIN : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER : Used in single table FILTER predicate
JOIN : Used in JOIN predicate
GROUP_BY : Used in GROUP BY expression
...............................................................................
###############################################################################
COLUMN USAGE REPORT FOR SH.SALES
................................
###############################################################################

Note: if you execute a query that uses a function-based index, the reported column name will be the name of the virtual column created for the function:

SQL> exec dbms_stats.reset_col_usage('SH','SALES');
PL/SQL procedure successfully completed.
SQL> drop index idx;
Index dropped.
SQL> create index idx on sh.sales(amount_sold*2);
Index created.
SQL> set autotrace trace
SQL> select count(*) from sh.sales where amount_sold*2>40;

Execution Plan
----------------------------------------------------------
Plan hash value: 875048923
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 25 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | INDEX RANGE SCAN| IDX | 45873 | 582K| 25 (0)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("AMOUNT_SOLD"*2>40)

Statistics
----------------------------------------------------------
 12 recursive calls
 0 db block gets
 1828 consistent gets
 1812 physical reads
 0 redo size
 528 bytes sent via SQL*Net to client
 524 bytes received via SQL*Net from client
 2 SQL*Net roundtrips to/from client
 7 sorts (memory)
 0 sorts (disk)
 1 rows processed
SQL> set autotrace off
SQL> select dbms_stats.report_col_usage('SH','SALES') from dual;
DBMS_STATS.REPORT_COL_USAGE('SH','SALES')
--------------------------------------------------------------------------------
LEGEND:
.......
EQ : Used in single table EQuality predicate
RANGE : Used in single table RANGE predicate
LIKE : Used in single table LIKE predicate
NULL : Used in single table is (not) NULL predicate
EQ_JOIN : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER : Used in single table FILTER predicate
JOIN : Used in JOIN predicate
GROUP_BY : Used in GROUP BY expression
...............................................................................
###############################################################################
COLUMN USAGE REPORT FOR SH.SALES
................................
1. SYS_NC00008$ : RANGE
###############################################################################

If you want to deactivate this feature, because your database contains a lot of tables and columns and you don't want to overload your system, you can set the undocumented parameter “_column_tracking_level” to 0 (default value: 1).
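
For example (a sketch; as with any underscore parameter, test it first and check with Oracle Support before touching production):

SQL> alter system set "_column_tracking_level" = 0;
SQL> -- set it back to 1 to re-enable column usage tracking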

All results of DBMS_STATS.REPORT_COL_USAGE are based on the COL_USAGE$ dictionary table.
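
If you want to look under the hood, you can query this table directly. A sketch, run as SYS (column names as found in 11g):

select c.name as column_name,
       u.equality_preds, u.equijoin_preds, u.range_preds, u.timestamp
from sys.col_usage$ u, sys.obj$ o, sys.col$ c
where o.obj# = u.obj#
and c.obj# = u.obj#
and c.intcol# = u.intcol#
and o.name = 'SALES';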

Finally, you can use this method to decide whether a column needs a histogram, or whether an unindexed column deserves an index.
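
Incidentally, this is also what DBMS_STATS itself relies on: with METHOD_OPT set to FOR ALL COLUMNS SIZE AUTO (the default), Oracle uses the recorded column usage, together with data skew, to decide which columns get a histogram. A quick sketch:

SQL> exec dbms_stats.gather_table_stats('SH','SALES', method_opt => 'FOR ALL COLUMNS SIZE AUTO');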

Read rdbms and listener logs (xml) from the SQL*Plus prompt

Recently, I was looking for a method to read and filter entries in alert log files (the rdbms alert log and listener.log).

A documented method consists of using adrci (the ADR command interpreter), but I wanted an easier one, so I searched the net and found this thread on Tanel Poder's blog (http://blog.tanelpoder.com/2009/03/21/oracle-11g-reading-alert-log-via-sql/).

This method seemed to answer my questions, but it only shows rdbms entries.

So I found another (undocumented) V$ view that solves my problem: V$DIAG_ALERT_EXT.

This view is based on X$DIAG_ALERT_EXT and contains log entries for rdbms, tnslsnr, etc. Now, I just have to write the code to exploit it.

1- First, a view for rdbms log entries:

create or replace view my_db_alert_log as
select ORIGINATING_TIMESTAMP,HOST_ID,HOST_ADDRESS,DETAILED_LOCATION,MODULE_ID,
CLIENT_ID,PROCESS_ID,USER_ID,MESSAGE_ID,MESSAGE_GROUP,MESSAGE_TEXT,PROBLEM_KEY,FILENAME
from V$DIAG_ALERT_EXT WHERE trim(COMPONENT_ID)='rdbms';

2- Another one for listener log entries:

create or replace view my_lsnr_alert_log as
select ORIGINATING_TIMESTAMP,HOST_ID,HOST_ADDRESS,DETAILED_LOCATION,MODULE_ID,
CLIENT_ID,PROCESS_ID,USER_ID,MESSAGE_ID,MESSAGE_GROUP,MESSAGE_TEXT,PROBLEM_KEY,FILENAME
from V$DIAG_ALERT_EXT WHERE trim(COMPONENT_ID)='tnslsnr';

Of course, you can add any column that is useful to you 😉

Now, you can query the alert log directly from SQL*Plus (for example, here are my rdbms alert log entries from the last hour):

SQL> select ORIGINATING_TIMESTAMP,DETAILED_LOCATION,MESSAGE_GROUP,MESSAGE_TEXT
  2  from my_db_alert_log
  3  where ORIGINATING_TIMESTAMP> systimestamp - INTERVAL '0 01:00:00.0' DAY TO SECOND(1)
  4  order by 1
  5  /

ORIGINATING_TIMESTAMP                  DETAILED_LOCATION    MESSAGE_GROUP             MESSAGE_TEXT
-------------------------------------- -------------------- ------------------------- --------------------------------------------------
16-NOV-11 05.33.03.090000000 PM +01:00                                                ALTER SYSTEM: Flushing buffer cache
16-NOV-11 05.57.16.259000000 PM +01:00 /u01/app/oracle/diag                           Errors in file /u01/app/oracle/diag/rdbms/db112/db
                                       /rdbms/db112/db112/t                           112/trace/db112_ora_6377.trc  (incident=139371):
                                       race/db112_ora_6377.                           ORA-00700: erreur logicielle interne, arguments :
                                       trc                                            [kgerev1], [600], [600], [700], [], [], [], [], []
                                                                                      , [], [], []

16-NOV-11 05.57.16.262000000 PM +01:00                                                Incident details in: /u01/app/oracle/diag/rdbms/db
                                                                                      112/db112/incident/incdir_139371/db112_ora_6377_i1
                                                                                      39371.trc

16-NOV-11 05.57.16.943000000 PM +01:00 /u01/app/oracle/diag Generic Internal Error    Errors in file /u01/app/oracle/diag/rdbms/db112/db
                                       /rdbms/db112/db112/t                           112/trace/db112_ora_6377.trc  (incident=139372):
                                       race/db112_ora_6377.                           ORA-00600: code d'erreur interne, arguments : [],
                                       trc                                            [], [], [], [], [], [], [], [], [], [], []

16-NOV-11 05.57.16.946000000 PM +01:00                                                Incident details in: /u01/app/oracle/diag/rdbms/db
                                                                                      112/db112/incident/incdir_139372/db112_ora_6377_i1
                                                                                      39372.trc

16-NOV-11 05.57.17.278000000 PM +01:00                                                Dumping diagnostic data in directory=[cdmp_2011111
                                                                                      6175717], requested by (instance=1, osid=6377), su
                                                                                      mmary=[incident=139371].

16-NOV-11 05.57.17.544000000 PM +01:00                                                Use ADRCI or Support Workbench to package the inci
                                                                                      dent.
                                                                                      See Note 411.1 at My Oracle Support for error and
                                                                                      packaging details.

16-NOV-11 05.57.18.425000000 PM +01:00                                                Dumping diagnostic data in directory=[cdmp_2011111
                                                                                      6175718], requested by (instance=1, osid=6377), su
                                                                                      mmary=[incident=139372].

16-NOV-11 05.57.19.201000000 PM +01:00                      ami_comp                  Sweep [inc][139372]: completed
16-NOV-11 05.57.19.220000000 PM +01:00                      ami_comp                  Sweep [inc][139371]: completed
16-NOV-11 05.57.19.222000000 PM +01:00                      ami_comp                  Sweep [inc2][139372]: completed
16-NOV-11 05.57.19.222000000 PM +01:00                      ami_comp                  Sweep [inc2][139371]: completed

12 rows selected.

And for the listener.log:

SQL> select ORIGINATING_TIMESTAMP,DETAILED_LOCATION,MESSAGE_GROUP,MESSAGE_TEXT
  2  from my_lsnr_alert_log
  3  where ORIGINATING_TIMESTAMP> systimestamp - INTERVAL '0 01:00:00.0' DAY TO SECOND(1)
  4  order by 1
  5  /

ORIGINATING_TIMESTAMP                  DETAILED_LOCATION    MESSAGE_GROUP             MESSAGE_TEXT
-------------------------------------- -------------------- ------------------------- --------------------------------------------------
16-NOV-11 05.11.24.147000000 PM +01:00                                                16-NOV-2011 17:11:24 * service_update * db112 * 0
16-NOV-11 05.11.54.266000000 PM +01:00                                                16-NOV-2011 17:11:54 * service_update * db112 * 0
16-NOV-11 05.12.24.385000000 PM +01:00                                                16-NOV-2011 17:12:24 * service_update * db112 * 0
16-NOV-11 05.12.54.487000000 PM +01:00                                                16-NOV-2011 17:12:54 * service_update * db112 * 0
16-NOV-11 05.13.24.573000000 PM +01:00                                                16-NOV-2011 17:13:24 * service_update * db112 * 0
16-NOV-11 05.13.54.818000000 PM +01:00                                                16-NOV-2011 17:13:54 * service_update * db112 * 0
16-NOV-11 05.14.25.011000000 PM +01:00                                                16-NOV-2011 17:14:25 * service_update * db112 * 0
16-NOV-11 05.14.28.013000000 PM +01:00                                                16-NOV-2011 17:14:28 * service_update * db112 * 0
16-NOV-11 05.14.55.085000000 PM +01:00                                                16-NOV-2011 17:14:55 * service_update * db112 * 0
16-NOV-11 05.15.28.207000000 PM +01:00                                                16-NOV-2011 17:15:28 * service_update * db112 * 0
16-NOV-11 05.15.46.267000000 PM +01:00                                                16-NOV-2011 17:15:46 * service_update * db112 * 0
16-NOV-11 05.15.52.297000000 PM +01:00                                                16-NOV-2011 17:15:52 * service_update * db112 * 0
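
Since these are now plain views, filtering becomes trivial. For example, a quick sketch to show only the ORA- errors recorded in the rdbms alert log:

select ORIGINATING_TIMESTAMP, MESSAGE_TEXT
from my_db_alert_log
where MESSAGE_TEXT like '%ORA-%'
order by 1;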

Monitor Oracle B*Tree index creation

Today, one of my clients asked me how to monitor index creation, and was a little curious about what happens during this process.

To answer him, let's recall the operations executed to build an index:

1- Data needed to build the index is read

2- A sort segment is created

3- The index is progressively built as a temporary segment in the destination tablespace

The first phase can be monitored like a classic read operation: V$SESSION_WAIT, a 10046 event trace, etc.
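
For example, while the data is being read, you can poll the session's current wait event. A sketch (125 stands for the SID of the session building the index):

select event, p1text, p1, p2text, p2, wait_time
from v$session_wait
where sid = 125;  -- SID of the building session (assumed)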

The next phase can be monitored by querying the V$SORT_USAGE view:

SQL> select USERNAME,SESSION_NUM,TABLESPACE,CONTENTS,SEGTYPE,BLOCKS*dbbs/1024/1024 as sizeMb
  2  from v$sort_usage, (select to_number(value) dbbs from v$parameter where name='db_block_size');

USERNAME   SESSION_NUM TABLESPACE CONTENTS  SEGTYPE       SIZEMB
---------- ----------- ---------- --------- --------- ----------
LAURENT             46 TEMP       TEMPORARY SORT             101

Finally, when the sort segment has been created, you will see a new segment appear in the index tablespace. This is a TEMPORARY segment with a numeric name (its final size is usually close to the sort segment size):

select segment_name,segment_type,sum(bytes)/1024/1024/1024 as SIZE_GB
from dba_segments
where tablespace_name='DWH_P_IDX'
group by segment_name,segment_type;
SEGMENT_NAME                   SEGMENT_TYPE       SIZE_GB
------------------------------ ------------------ -------
.../...
14.4483                        TEMPORARY           1.1875
.../...

If you repeat this query, you will see the segment grow. When the operation ends, this temporary segment becomes an INDEX segment, which is … your index.
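
The build also shows up in V$SESSION_LONGOPS, which gives a rough progress indicator. A sketch (again, 125 stands for the SID of the building session):

select opname, target, sofar, totalwork,
       round(sofar/totalwork*100) as pct_done
from v$session_longops
where sid = 125 and totalwork > 0;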

Dump ASM disk header

If you want to dump an ASM disk header, you can use an internal Oracle tool to obtain information about your disks, diskgroups, etc., even if the disk is offline.

This tool is named KFED (Kernel File EDitor). It ships by default with an Oracle 11g installation, but you'll need to build it yourself if you want to use it with Oracle 10g:

[oracle@oel ~]$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ikfed

Well, now let's have a closer look at a feature of this tool.

If you want to read the information stored in an ASM disk header, you can use it like this:

[oracle@oel ~]$ kfed read /dev/oracleasm/disks/ASM3 dsk1.dump

The dsk1.dump file now contains the header of the ASM disk “/dev/oracleasm/disks/ASM3”. This file can easily be read with a text editor:

kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj:              2147483648 ; 0x008: TYPE=0x8 NUMB=0x0
kfbh.check:                  2930000864 ; 0x00c: 0xaea443e0
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:     ORCLDISKASM3 ; 0x000: length=12
kfdhdb.driver.reserved[0]:    860705601 ; 0x008: 0x334d5341
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:          MIRROR_DG_0000 ; 0x028: length=14
kfdhdb.grpname:               MIRROR_DG ; 0x048: length=9
kfdhdb.fgname:           MIRROR_DG_0000 ; 0x068: length=14
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             32959021 ; 0x0a8: HOUR=0xd DAYS=0x11 MNTH=0xa YEAR=0x7db
kfdhdb.crestmp.lo:           3500063744 ; 0x0ac: USEC=0x0 MSEC=0x3af SECS=0x9 MINS=0x34
kfdhdb.mntstmp.hi:             32959382 ; 0x0b0: HOUR=0x16 DAYS=0x1c MNTH=0xa YEAR=0x7db
kfdhdb.mntstmp.lo:            505578496 ; 0x0b4: USEC=0x0 MSEC=0xa1 SECS=0x22 MINS=0x7
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    1019 ; 0x0c4: 0x000003fb
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000

Now, we can read some information about the disk: in the “kfdhdb” structure, at offset 0x048 and 9 bytes long, we find the name of the diskgroup that owns this ASM disk.

The most important fields are detailed below:

* kfbh.endian: endianness used on this disk (1 = little endian).

* kfdhdb.driver.provstr: provision string used by ASMLib (the ORCLDISK tag followed by the disk name, i.e. ORCL:ASM3 in our case)

* kfdhdb.grptyp: type of diskgroup the disk is attached to.

* kfdhdb.hdrsts: header status. Here, the disk is a member of a diskgroup (see the cross-check query after this list).

* kfdhdb.dskname: disk name in the disk group

* kfdhdb.grpname: disk group name

* kfdhdb.fgname: name of the failure group that contains the disk

* kfdhdb.secsize: sector size

* kfdhdb.blksize:  block size

* kfdhdb.ausize: allocation unit size
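
When the ASM instance is up, several of these fields can be cross-checked against the V$ASM_DISK view. A sketch, to be run on the ASM instance:

select name, failgroup, header_status, mount_status, sector_size, block_size
from v$asm_disk;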

If you want to rename the diskgroup a disk belongs to, you can edit the dump file and use KFED's “merge” command to apply the changes to the disk header.

[oracle@oel ~]$ kfed merge /dev/oracleasm/disks/ASM3 text=dsk1.dump

Be careful when you use the “merge” command: the diskgroup and disk names seem to be stored with a fixed length, so if the name you change is a 4-byte word, rename it to another 4-byte word.

Of course, using kfed is not supported by Oracle.