Oracle … as usual

Oracle by Laurent Leturgez

I’m now an Oracle ACE :)

Last week, I received an email from Oracle announcing that I had been accepted into the Oracle ACE Program. And yes, I'm very proud of it. 🙂

For those who don't know what the Oracle ACE program is, it's an award you receive when the community recognizes your expertise in a specific domain.

How does it work? You have to be nominated by someone in the community. This person fills in a nomination form which details your expertise and how you share it with the community (blogs, social networking, conferences, book authoring, etc.).

In my case, I was nominated by Mohamed Houri (@MohamedHouri), and my nomination was supported by many people who are very well known in the community (Tanel Põder, Kellyn Pot'Vin-Gorman, Deiby Gomez, Ludovico Caldara and Franck Pachot). I would like to warmly thank them for their support.

Now I will continue to be active in the community: writing on Oracle forums, presenting at Oracle conferences (I would like to present at at least two conferences this year), blogging and social networking (mainly on Twitter: @lleturgez).

My Oracle ACE profile is now available at this URL : https://apex.oracle.com/pls/apex/f?p=19297:4:::NO:4:P4_ID:15320 .

Thank you Oracle ACE (@oracleace).


When an Oracle 12c common user turns into a local user … but keeps his username.

In the Oracle 12c multitenant architecture, you have two kinds of users: those declared in the root container, and those declared in a pluggable database.

The first ones are named common users, and the others are named local users. The particularity of common users is that they are synchronized with the pluggable database(s), so they are known in each of them.

Oracle makes the distinction between common users and local users by their name. Common usernames have a constraint: they have to start with "C##" (or the value of the parameter "COMMON_USER_PREFIX"). For example, C##LAURENT is a common username whereas LAURENT is a local username.

Oracle enforces this: if you try to create a common user in a PDB, you will get an "ORA-65094: invalid local user or role name". Conversely, you will get an "ORA-65096: invalid common user or role name" if you try to create a local user in the CDB$ROOT container.

A weird behaviour appears when you plug in a PDB which contains the synced definition of a common user.

Below is a demo of what I mean (this example was taken from an SE2 database, so with only one PDB, but I reproduced the problem with a PDB which was unplugged from one container and plugged into another one).

  • In the root container, common user creation

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> create user c##laurent identified by laurent container=ALL;

User created.

SQL> grant connect to c##laurent container=ALL;

Grant succeeded.

  • In the PDB, the common user is synced

SQL> conn sys/oracle@pdb1 as sysdba
Connected.

SQL> select username,common from dba_users where username like 'C##%';

USERNAME                   COM
-------------------------- ---
C##LAURENT                 YES


  • Now I unplug my PDB and drop it from the container

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 unplug into '/home/oracle/laurent.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1;

Pluggable database dropped.

  • Now, let’s drop the common user in the root container to simulate a new and fresh container database
SQL> select con_id,username,common from cdb_users where username like 'C##%';

    CON_ID USERNAME      COM
---------- ------------- ---
         1 C##LAURENT    YES

SQL> drop user c##laurent cascade;

User dropped.
  • Finally I plug my PDB in the container, open it, and query CDB_USERS to see what happened to my common user c##LAURENT
SQL> create pluggable database newone
  2  using '/home/oracle/laurent.xml'
  3  nocopy tempfile reuse;

Pluggable database created.

SQL> alter pluggable database newone open;

Pluggable database altered.

SQL> select con_id,username,common from cdb_users where username like 'C##%';

    CON_ID USERNAME       COM
---------- -------------- ---
         3 C##LAURENT     YES

SQL> conn c##laurent/laurent@oel6:1521/NEWONE
ERROR:
ORA-28000: the account is locked

Well, if you have a closer look at the CON_ID, my "common user" has been recreated as a local user (because it is now defined in the PDB). So, by the definition of a local user, it is indeed a local user … but with a common username … oh dear!

Following this URL in the documentation (https://docs.oracle.com/database/121/SQLRF/statements_8003.htm#SQLRF54735), Oracle recommends setting a value for COMMON_USER_PREFIX, even with this particular behaviour.

UPDATE: A friend of mine and fellow Oracle consultant, Pierre Labrousse, gave me a clue. In MOS Note 1511619.1, Oracle says that if you face a common user that is locked in a newly plugged PDB, you can do one of the following:

  • Leave the user account locked and use the objects of its schema.
  • Use Oracle Data Pump to copy these objects to another schema, and then drop the locked user account.
  • Close the PDB, connect to the root, and then create a common user with the same name as the locked account. When you re-open the PDB, Oracle Database resolves the differences in the roles and privileges that were commonly granted to the locked user. Afterward, you can unlock this user account in the PDB. Privileges and roles that were locally granted to the user will remain unchanged.


Oracle 11g, frequency histograms, and character columns …

Recently I worked on a performance issue on two Oracle 11g instances (11.2.0.3 and 11.2.0.4) and hit a bug concerning histograms on character columns. Oracle 11g is now about 7 years old, but it's a version that is still used in lots of systems … that's why I decided to write this blog post.

Well, in Oracle 11g, you have two kinds of histograms:

  • Frequency histograms: they are calculated when the number of distinct values (NDV) in a column is lower than the number of buckets in the histogram. Frequency histograms are the most accurate histograms, because they count the exact number of occurrences of each value.
  • Height-balanced histograms: they are calculated when the NDV is greater than the number of buckets in the histogram. Height-balanced histograms are less accurate, but they still give the optimizer precious information on data distribution.
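The rule above can be sketched in a few lines of Python (a simplification for illustration, not Oracle's internal logic):

```python
def histogram_type(ndv: int, buckets: int) -> str:
    """Approximate 11g rule: a frequency histogram needs one bucket per
    distinct value; otherwise Oracle falls back to height-balanced."""
    return "FREQUENCY" if ndv <= buckets else "HEIGHT BALANCED"

print(histogram_type(10, 11))    # the FOO example below: 10 NDV, 11 buckets
print(histogram_type(1000, 254)) # too many distinct values for the buckets
```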

Histograms help the cost-based optimizer (CBO) make the right decision when it has to choose an access path or a join method. (You can find very interesting papers on this topic by Jonathan Lewis or Mohamed Houri.)

The problem I met was related to frequency histograms on character columns (CHAR, VARCHAR2 and NVARCHAR2).

In the dictionary, there are two views that give interesting information about how data is distributed:

  • DBA_TAB_COL_STATISTICS: this view gives information about column statistics (density, NDV, sample_size, etc.) and whether there's a histogram on the column.
  • DBA_HISTOGRAMS: this view gives information on column histograms and how data values are distributed.

Let's take an example:

LAURENT @ orcl > create table foo (id number);

Table created.

LAURENT @ orcl > begin
  2  for i in 1..10
  3  loop
  4  insert into foo select i from dual connect by level <=i;
  5  end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

LAURENT @ orcl > commit;

Commit complete.

LAURENT @ orcl > select id,count(*) from foo group by id order by 1;

	ID   COUNT(*)
---------- ----------
	 1	    1
	 2	    2
	 3	    3
	 4	    4
	 5	    5
	 6	    6
	 7	    7
	 8	    8
	 9	    9
	10	   10

10 rows selected.

Now, we gather statistics with a frequency histogram (that’s why I chose 11 buckets):

LAURENT @ orcl > exec dbms_stats.gather_table_stats(ownname=>USER, tabname=>'FOO', method_opt=>'FOR COLUMNS ID SIZE 11', estimate_percent=>100);

PL/SQL procedure successfully completed.

LAURENT @ orcl > select table_name,column_name,num_distinct,density,num_nulls,sample_size,histogram
 2 from user_tab_col_statistics
 3 where table_name='FOO';

TABLE_NAME COLUMN_NAM NUM_DISTINCT    DENSITY  NUM_NULLS SAMPLE_SIZE HISTOGRAM
---------- ---------- ------------ ---------- ---------- ----------- ---------------
FOO        ID                   10 .009090909          0          55 FREQUENCY

Now let's have a look at the DBA_HISTOGRAMS view (or the USER_/ALL_ variants depending on the context):


LAURENT @ orcl > select table_name,column_name,
  2  endpoint_number,endpoint_value,
  3  endpoint_actual_value
  4  from user_histograms
  5  where table_name='FOO';

TABLE_NAME COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_A
---------- ---------- --------------- -------------- ----------
FOO        ID                       1              1
FOO        ID                       3              2
FOO        ID                       6              3
FOO        ID                      10              4
FOO        ID                      15              5
FOO        ID                      21              6
FOO        ID                      28              7
FOO        ID                      36              8
FOO        ID                      45              9
FOO        ID                      55             10

ENDPOINT_VALUE represents the column value. Because it's a frequency histogram, if we'd like to know how many rows hold the value 7, we take the endpoint number for this value (28) and subtract the endpoint number of the preceding endpoint value, 6 (that is, 21) … so there are 28-21=7 rows with the value 7.
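The arithmetic above is easy to sketch: ENDPOINT_NUMBER is a cumulative row count, so per-value frequencies are the successive differences (a small Python illustration using the USER_HISTOGRAMS rows shown above; the `frequencies` helper is hypothetical, not an Oracle API):

```python
# (endpoint_value, endpoint_number) pairs from USER_HISTOGRAMS, ordered
endpoints = [(1, 1), (2, 3), (3, 6), (4, 10), (5, 15),
             (6, 21), (7, 28), (8, 36), (9, 45), (10, 55)]

def frequencies(pairs):
    """Cumulative endpoint numbers -> exact count per value."""
    counts, previous = {}, 0
    for value, endpoint_number in pairs:
        counts[value] = endpoint_number - previous
        previous = endpoint_number
    return counts

print(frequencies(endpoints)[7])  # 28 - 21 = 7
```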

Now that you've understood the basics … let's take another example:


LAURENT @ orcl > begin
  2  for i in 1..10000
  3  loop
  4  case mod(i,10)
  5    when 0 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'01'));
  6    when 1 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'02'));
  7    when 2 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'03'));
  8    when 3 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'04'));
  9    when 4 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'05'));
 10    when 5 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'06'));
 11    when 6 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'07'));
 12    when 7 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'08'));
 13    when 8 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'09'));
 14    when 9 then insert into t values(mod(i,10), concat(lpad('X',30,'X'),'10'));
 15  end case;
 16  end loop;
 17  commit;
 18  end;
 19  /

PL/SQL procedure successfully completed.

LAURENT @ orcl > insert into t select 10,'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' from dual connect by level <=100000; 

100000 rows created. 

LAURENT @ orcl > commit;

Commit complete.

LAURENT @ orcl > exec dbms_stats.gather_table_stats(ownname=>USER, tabname=>'T', method_opt=>'for all columns size 12', estimate_percent=>100);

PL/SQL procedure successfully completed.

LAURENT @ orcl > select table_name,column_name,density,histogram,sample_size from user_tab_col_statistics where table_name='T';

TABLE_NAME COLUMN_NAM    DENSITY HISTOGRAM       SAMPLE_SIZE
---------- ---------- ---------- --------------- -----------
T          ID         4.5455E-06 FREQUENCY            110000
T          V          4.5455E-06 FREQUENCY            110000

And if we have a look at the data distribution of column V, no problem …


LAURENT @ orcl > select table_name,column_name,
  2  endpoint_number,endpoint_value,
  3  endpoint_actual_value
  4  from user_histograms
  5  where table_name='T' and column_name='V'
  6  order by ENDPOINT_ACTUAL_VALUE;

TABLE_NAME COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_ACTUAL_VALUE
---------- ---------- --------------- -------------- --------------------------------
T          V                     1000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX01
T          V                     2000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX02
T          V                     3000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX03
T          V                     4000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX04
T          V                     5000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX05
T          V                     6000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX06
T          V                     7000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX07
T          V                     8000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX08
T          V                     9000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX09
T          V                    10000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX10
T          V                   110000     4.5871E+35 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

11 rows selected.

The problem comes when the size of the character column exceeds 32 bytes:
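The root cause can be illustrated outside the database: once two values only differ after the 32nd byte, their 32-byte prefixes are identical, so a histogram that only keeps that prefix can no longer tell them apart (a Python illustration under that assumption, not Oracle's actual code):

```python
# Two distinct values from the demo below: they share a 32-'X' prefix
v05 = 'X' * 32 + '05'   # one of the 1000-row values
v10 = 'X' * 36          # the popular value inserted 100000 times

# Truncated to 32 bytes, both collapse to the same histogram key:
print(v05[:32] == v10[:32])  # True: the endpoints merge into one bucket
```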


LAURENT @ orcl > drop table t purge;

Table dropped.

LAURENT @ orcl > create table t (id number, v varchar2(64));

Table created.

LAURENT @ orcl > begin
for i in 1..10000
loop
case mod(i,10)
  when 0 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'01'));
  when 1 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'02'));
  when 2 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'03'));
  when 3 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'04'));
  when 4 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'05'));
  when 5 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'06'));
  when 6 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'07'));
  when 7 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'08'));
  when 8 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'09'));
  when 9 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'10'));
end case;
end loop;
commit;
end;
/

LAURENT @ orcl > insert into t select 10,'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' from dual connect by level <=100000;

100000 rows created.

LAURENT @ orcl > commit;

Commit complete.

LAURENT @ orcl > exec dbms_stats.gather_table_stats(ownname=>USER, tabname=>'T', method_opt=>'for all columns size 12', estimate_percent=>100);

PL/SQL procedure successfully completed.

LAURENT @ orcl > select table_name,column_name,density,histogram,sample_size from user_tab_col_statistics where table_name='T';

TABLE_NAME COLUMN_NAM    DENSITY HISTOGRAM       SAMPLE_SIZE
---------- ---------- ---------- --------------- -----------
T          ID         4.5455E-06 FREQUENCY            110000
T          V          4.5455E-06 FREQUENCY            110000

But … if we have a look at the DBA_HISTOGRAMS view …


LAURENT @ orcl > select table_name,column_name,
  2  endpoint_number,endpoint_value,
  3  endpoint_actual_value
  4  from user_histograms
  5  where table_name='T'
  6  order by COLUMN_NAME,ENDPOINT_ACTUAL_VALUE;

TABLE_NAME COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_ACTUAL_VALUE
---------- ---------- --------------- -------------- --------------------------------
T          ID                    1000              0
T          ID                    2000              1
T          ID                    3000              2
T          ID                    4000              3
T          ID                    5000              4
T          ID                    6000              5
T          ID                    7000              6
T          ID                    8000              7
T          ID                    9000              8
T          ID                   10000              9
T          ID                  110000             10
T          V                   110000     4.5871E+35

LAURENT @ orcl > select count(*) from t where v=lpad('X',32,'X')||'05';

  COUNT(*)
----------
      1000

LAURENT @ orcl > select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  aass8rg64h38b, child number 0
-------------------------------------
select count(*) from t where v=lpad('X',32,'X')||'05'

Plan hash value: 2966233522

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      2 |        |      2 |00:00:00.01 |    1416 |
|   1 |  SORT AGGREGATE    |      |      2 |      1 |      2 |00:00:00.01 |    1416 |
|*  2 |   TABLE ACCESS FULL| T    |      2 |   110K |   2000 |00:00:00.01 |    1416 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("V"='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX05')

19 rows selected.

LAURENT @ orcl > select count(*) from t where v='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';

  COUNT(*)
----------
    100000

LAURENT @ orcl > select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  adnfzsc6buqa3, child number 0
-------------------------------------
select count(*) from t where v='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

Plan hash value: 2966233522

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     708 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     708 |
|*  2 |   TABLE ACCESS FULL| T    |      1 |   110K |   100K |00:00:00.01 |     708 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("V"='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')

19 rows selected.

So, with this kind of difference between E-Rows and A-Rows, you can easily face performance problems, especially if this column is indexed. We have a frequency histogram, and yet it doesn't help us determine the cardinality in the character column …

This behaviour has been reported in Bug 18377553: POOR CARDINALITY ESTIMATE WITH HISTOGRAMS AND VALUES > 32 BYTES.

In my case, the problem occurred with NVARCHAR2 columns (the national character set was AL16UTF16), so performance problems appeared as soon as a column value contained more than 16 characters.
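The 16-character threshold follows directly from the encoding: AL16UTF16 stores each (BMP) character on 2 bytes, so 16 characters already fill the 32-byte limit. A quick check in Python (using 'utf-16-be', which matches AL16UTF16 for this purpose):

```python
# AL16UTF16 is big-endian UTF-16 without a BOM: 2 bytes per BMP character
print(len(('X' * 16).encode('utf-16-be')))  # 32 bytes: right at the limit
print(len(('X' * 17).encode('utf-16-be')))  # 34 bytes: over the 32-byte limit
```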

Fortunately, Oracle has published a one-off patch for this bug (available for 11.2.0.3 and 11.2.0.4 on Linux x86-64). As a workaround, you can remove the histogram on this kind of column (with the method_opt: FOR COLUMNS … SIZE 1).

In 12.1, the problem occurs only when the size of the character column exceeds 64 bytes (32 characters under the AL16UTF16 character set):


SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production              0
PL/SQL Release 12.1.0.2.0 - Production                                                    0
CORE	12.1.0.2.0	Production                                                        0
TNS for Linux: Version 12.1.0.2.0 - Production                                            0
NLSRTL Version 12.1.0.2.0 - Production                                                    0

SQL> create table t (id number, v nvarchar2(64));

Table created.

SQL> begin
for i in 1..10000
loop
case mod(i,10)
  when 0 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'01'));
  when 1 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'02'));
  when 2 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'03'));
  when 3 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'04'));
  when 4 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'05'));
  when 5 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'06'));
  when 6 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'07'));
  when 7 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'08'));
  when 8 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'09'));
  when 9 then insert into t values(mod(i,10), concat(lpad('X',32,'X'),'10'));
end case;
end loop;
commit;
end;
/

PL/SQL procedure successfully completed.

SQL> insert into t select 10,'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' from dual connect by level <=100000;

100000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>USER, tabname=>'T', method_opt=>'for all columns size 12', estimate_percent=>100);

PL/SQL procedure successfully completed.

SQL> select table_name,column_name,density,histogram,sample_size from user_tab_col_statistics where table_name='T';

TABLE_NAME COLUMN_NAM    DENSITY HISTOGRAM       SAMPLE_SIZE
---------- ---------- ---------- --------------- -----------
T          ID         4.5455E-06 FREQUENCY            110000
T          V          4.5455E-06 FREQUENCY            110000

SQL> select table_name,column_name,
  2  endpoint_number,endpoint_value,
  3  endpoint_actual_value
  4  from user_histograms
  5  where table_name='T'
  6  order by COLUMN_NAME,ENDPOINT_NUMBER;

TABLE_NAME COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_ACTUAL_VALUE
---------- ---------- --------------- -------------- ----------------------------------
T          ID                    1000              0 0
T          ID                    2000              1 1
T          ID                    3000              2 2
T          ID                    4000              3 3
T          ID                    5000              4 4
T          ID                    6000              5 5
T          ID                    7000              6 6
T          ID                    8000              7 7
T          ID                    9000              8 8
T          ID                   10000              9 9
T          ID                  110000             10 10
T          V                   110000     1.7849E+33  X X X X X X X X X X X X X X X X X
                                                      X X X X X X X X X X X X X X X

12 rows selected.

The problem with 12.1 databases is that the one-off patch has not been released yet; the bug will be fixed in 12.2.

Slides of my presentation at UKOUG TECH15 conference

The conference was last December, but I was a little bit busy this month …

So here are the slides of my presentation about SIMD extensions inside and outside Oracle 12c, given during the Super Sunday at the UKOUG TECH15 conference.

It was my pleasure to have some great attendees in the audience: Frits Hoogland (ACE Director and Oak Table Member), Ludovico Caldara (Oracle Ace), Luca Canali (Oak Table Member), Mark Fielding (Oracle ACE), Bertrand Drouvot (Oracle Ace), Franck Pachot (Oracle Ace) and all the French guys who were at the conference!

Average Active Sessions trending in Statspack

Average Active Sessions (AAS) is one of the first statistics you have to look at when you analyze a performance issue.

In Oracle Standard Edition (or Enterprise Edition without the Diagnostic Pack), you cannot query the V$ACTIVE_SESSION_HISTORY and DBA_HIST_ACTIVE_SESS_HISTORY views, nor run the ashrpt script.

If you have installed Statspack and take snapshots, you can compute the Average Active Sessions between two snapshots.

AAS is calculated with the formula DB time / elapsed time. So, if you have recorded a DB time of 300 seconds over 900 seconds (the elapsed time between two snapshots), AAS is 0.33.
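In Python, the computation is trivial (a sketch using the figures from the sentence above; note that DB time in the time model views is in microseconds, so it has to be converted first):

```python
def average_active_sessions(db_time_us: float, elapsed_s: float) -> float:
    """AAS = DB time / elapsed time, with DB time converted from microseconds."""
    return (db_time_us / 1_000_000) / elapsed_s

# 300 s of DB time recorded over a 900 s snapshot interval:
print(round(average_active_sessions(300 * 1_000_000, 900), 2))  # 0.33
```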

So it's easy to calculate AAS between two snapshots, but the most important thing is to see the AAS trend over a period.

To do that, I wrote a small PL/SQL function based on the SYS_TIME_MODEL_DIF function available in the STATSPACK package definition (in the spcpkg.sql script).

create or replace function SYS_TIME_MODEL_DIF(i_name varchar2, bid number, eid number, db_ident number, inst_num number) RETURN number is

/* Returns the difference between statistics for the Time Model statistic
   name specified for the interval between the begin and end
   snapshot id's (bid, eid)

   In the case the data being queried is from a prior release
   which did not have the statistic requested, this function
   returns 0.
*/

beg_val_missing   boolean := false;
end_val_missing   boolean := false;
bval           number;
eval           number;

cursor SY (i_snap_id number) is
select value
  from stats$sys_time_model      stm
     , stats$time_model_statname tms
where stm.snap_id         = i_snap_id
   and stm.dbid            = db_ident
   and stm.instance_number = inst_num
   and stm.stat_id         = tms.stat_id
   and tms.stat_name       = i_name;

begin
   /* Get start value */
   open SY (bid); fetch SY into bval;
   if SY%notfound then
      beg_val_missing := true;
   end if; close SY;

   /* Get end value */
   open SY (eid); fetch SY into eval;
   if SY%notfound then
      end_val_missing := true;
   end if; close SY;

   if     beg_val_missing = true
      and end_val_missing = true      then

         /* this is likely a newer statistic which did not exist for this database version    */
        return 0;

   elsif     beg_val_missing = true
         and end_val_missing = false  then
         return -1;
   elsif     beg_val_missing = false
         and end_val_missing = true   then
         return -1;
   else

        /* Return difference */
        return eval - bval;

   end if;

end SYS_TIME_MODEL_DIF;
/

Once this function has been created, you can call it to get the DB time (or any value in the time model) between two snapshots (identified by their snapshot ids: bid (begin id) and eid (end id)). For example:

SQL> select SYS_TIME_MODEL_DIF('DB time',791,792,1386309985,1) from dual;

SYS_TIME_MODEL_DIF('DBTIME',791,792,1386309985,1)
-------------------------------------------------
 10245630292

SQL> select SYS_TIME_MODEL_DIF('DB CPU',791,792,1386309985,1) from dual;

SYS_TIME_MODEL_DIF('DBCPU',791,792,1386309985,1)
------------------------------------------------
 3698356113

SQL> select SYS_TIME_MODEL_DIF('parse time elapsed',791,792,1386309985,1) from dual;

SYS_TIME_MODEL_DIF('PARSETIMEELAPSED',791,792,1386309985,1)
-----------------------------------------------------------
 211981805

Note that time is given in microseconds.

Next step: I wrote an anonymous PL/SQL block that executes this function across all the snapshots taken between two snap_ids and prints DB time, DB CPU, and AAS in CSV format (you can adapt this script to change the output format):

declare
 -- Constants
 usec_to_sec number := 1000000;
 day_to_sec number := 86400;
 db_id number := &dbid;
 --
 dbtim number;
 dbcpu number;
 cursor SN_tim(sn_id number) is select snap_time from stats$snapshot where snap_id=sn_id;
 cursor snaps(sn1 number, sn2 number) is select snap_id from stats$snapshot where snap_id between sn1 and sn2 order by snap_id;
 b_time date;
 e_time date;
 bid number;
 eid number;
 cur_nf number;
begin
dbms_output.put_line('BEGIN_TIME;END_TIME;DBTIME;DBCPU;AAS');
open snaps(&start,&end);
loop
fetch snaps into bid;
exit when snaps%notfound;
 cur_nf := 0;
 fetch snaps into eid;

 open SN_tim(bid); fetch SN_tim into b_time;
 if SN_tim%notfound then
 cur_nf := 1;
 end if;
 close SN_tim;

 open SN_tim(eid); fetch SN_tim into e_time;
 if SN_tim%notfound then
 cur_nf := 1;
 end if;
 close SN_tim;

 dbtim := SYS_TIME_MODEL_DIF('DB time',bid,eid,db_id,1);
 dbcpu := SYS_TIME_MODEL_DIF('DB CPU',bid,eid,db_id,1);
 if ((dbtim>0 or dbcpu>0) and cur_nf = 0) then
 dbms_output.put_line(b_time||';'||e_time||';'||round(dbtim/usec_to_sec,2) ||';'||round(dbcpu/usec_to_sec,2) ||';'|| round((dbtim/usec_to_sec) / ((e_time-b_time)*day_to_sec),2) );
 else
 dbms_output.put_line(b_time||';'||e_time||';N/A;N/A' );
 end if;
end loop;
end;
/

Then you can graph it in Excel or Tableau (for example) and you will get the AAS trend:

