Oracle … as usual

Oracle by Laurent Leturgez

Insert data in Oracle with Java … Batched mode.

Problem / Context

Today, I worked on a performance issue at a customer site. The context is a Java application that does some processing and, at the end of it, inserts tons of data into an Oracle database.

The customer was complaining about slow performance during the insertion phase and asked me to have a look at it.

The database was I/O bound, and many sessions were doing one-row inserts into various tables (INSERT INTO … VALUES (:1, :2, …)). These one-row inserts were of course executed many times, and the Oracle session statistics reported a high number of executions.

On the Java side

As Oracle DBAs or developers, we all know that, in PL/SQL, it's easy to write a FORALL statement that bulk inserts the data and thus reduces execution time. To do that, you read a cursor in bulk (BULK COLLECT) and insert each chunk into the database. This reduces the number of operations, loops, etc.


DECLARE
  CURSOR s_cur IS
    SELECT *
    FROM servers;

  TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
  s_array fetch_array;
BEGIN
  OPEN s_cur;
  LOOP
    FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;

    FORALL i IN 1..s_array.COUNT
      INSERT INTO servers2 VALUES s_array(i);

    EXIT WHEN s_cur%NOTFOUND;
  END LOOP;
  CLOSE s_cur;
  COMMIT;
END;
/

Source: http://psoug.org/snippet/FORALL-Insert_499.htm

In Java, the problem can be different because the data processed by your application does not necessarily come from a database.

Usually, a statement is prepared, bind variables are attached and the statement is executed. When it's done in a loop, it looks something like this:


try{
         DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver());
         Connection c = DriverManager.getConnection ("jdbc:oracle:thin:@192.168.99.8:1521:orcl","laurent","laurent");
         PreparedStatement stmt = c.prepareStatement("insert into emp(employee_id,first_name,last_name,salary) values (?,?,?,?)");
         start=System.currentTimeMillis();

         for (int i=0 ; i < 50000 ; i++) {
                  stmt.setInt(1,i);
                  stmt.setString(2,generateRandomChars("01234567890ABCDEF",16));
                  stmt.setString(3,generateRandomChars("01234567890ABCDEF",16));
                  stmt.setInt(4,r.nextInt(10000+1)+5000);
                  stmt.executeUpdate();
         }
         stop=System.currentTimeMillis();
         System.out.println("time = "+(stop-start));
         stmt.close();
         c.close();
   }
   catch(Exception e){
      System.err.println(e);
      e.printStackTrace();
   }

The problem is that the statement is sent to the database 50,000 times, which increases the number of round trips between the application and the database. As a result, execution time is high (this demo was executed on a VM on my laptop):

[oracle@oel6 JAVA]$ time java -cp $CLASSPATH:ojdbc7.jar:. TestNoBatchedInsert
time = 75430

real    1m16.514s
user    0m5.279s
sys     0m1.080s

As in PL/SQL, there is a way to write Java code that processes inserts in batches, reducing the number of loops and round trips between the application and the database.

To do that, instead of calling the executeUpdate method X times, we can use the addBatch method to build a “batch” that is executed on the database when the PreparedStatement's executeBatch method is called.

If you're familiar with FORALL in PL/SQL, you probably know that the bigger the batch size, the more memory you use. It's the same in Java, where the result is a bigger heap; that's why it's recommended to define a batch size and execute the batch each time it reaches that maximum. This can be done like this:

try{
        DriverManager.registerDriver (new oracle.jdbc.driver.OracleDriver());
        Connection c = DriverManager.getConnection ("jdbc:oracle:thin:@192.168.99.8:1521:orcl","laurent","laurent");
        PreparedStatement stmt = c.prepareStatement("insert into emp(employee_id,first_name,last_name,salary) values (?,?,?,?)");
        start=System.currentTimeMillis();

        int batchSize=5000;
        int count=0;
        for (int i=0 ; i < 50000 ; i++) {
                stmt.setInt(1,i);
                stmt.setString(2,generateRandomChars("01234567890ABCDEF",16));
                stmt.setString(3,generateRandomChars("01234567890ABCDEF",16));
                stmt.setInt(4,r.nextInt(10000+1)+5000);
                stmt.addBatch();

                if (++count % batchSize == 0) {
                        stmt.executeBatch();
                }
        }

        stop=System.currentTimeMillis();
        System.out.println("time = "+(stop-start));
        stmt.close();
        c.close();
}
catch(Exception e){
  System.err.println(e);
  e.printStackTrace();
}

Now, if I run this program, the wall time is not on the same scale at all:

[oracle@oel6 JAVA]$ time java -cp $CLASSPATH:ojdbc7.jar:. TestBatchedInsert
time = 598

real    0m1.964s
user    0m3.393s
sys     0m0.269s

On the database side

AWR Reports show many things:

  • First, if we run the Java sample code as written above, the non-batched mode commits after each executeUpdate (JDBC connections auto-commit by default). As a result, the number of log file sync waits increases:

Non batched inserts:

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
log file sync                       49,999         64       1.28   86.8 Commit
DB CPU                                            9.3              12.6
db file sequential read                 47          0       0.97     .1 User I/O
SQL*Net message to client           50,012          0       0.00     .1 Network
log file switch completion               1          0      27.60     .0 Configur
undo segment extension                   2          0       5.16     .0 Configur
direct path write                        3          0       1.08     .0 User I/O
direct path sync                         1          0       3.15     .0 User I/O
Disk file operations I/O                19          0       0.04     .0 User I/O
control file sequential read           194          0       0.00     .0 System I

Batched inserts:

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
DB CPU                                             .3              72.9
SQL*Net more data from client          268          0       0.17   10.1 Network
log file sync                           13          0       3.29    9.5 Commit
db file sequential read                 49          0       0.37    4.0 User I/O
Disk file operations I/O                17          0       0.03     .1 User I/O
control file sequential read           134          0       0.00     .1 System I
SQL*Net more data to client             10          0       0.02     .1 Network
SQL*Net message to client               22          0       0.00     .0 Network
log file single write                    0          0                .0 System I
asynch descriptor resize                 0          0                .0 Other
  • Then, thanks to batching, the number of logical reads on the EMP segment is much lower when batched inserts are used:

Non batched inserts:

Segments by Logical Reads                   DB/Inst: ORCL/orcl  Snaps: 200-201
-> Total Logical Reads:         212,288
-> Captured Segments account for   28.4% of Total
-> When ** MISSING ** occurs, some of the object attributes may not be available

                     Tablespace
Owner                   Name
-------------------- ----------
                     Subobject  Obj.                             Logical
Object Name            Name     Type        Obj#   Dataobj#        Reads  %Total
-------------------- ---------- ----- ---------- ---------- ------------ -------
LAURENT              USERS
EMP                             TABLE      93857      93860       52,576   24.77

Batched inserts:
Segments by Logical Reads                   DB/Inst: ORCL/orcl  Snaps: 201-202
-> Total Logical Reads:          11,792
-> Captured Segments account for   83.0% of Total
-> When ** MISSING ** occurs, some of the object attributes may not be available

                     Tablespace
Owner                   Name
-------------------- ----------
                     Subobject  Obj.                             Logical
Object Name            Name     Type        Obj#   Dataobj#        Reads  %Total
-------------------- ---------- ----- ---------- ---------- ------------ -------
LAURENT              USERS
EMP                             TABLE      93857      93860        3,504   29.72
  • The same goes for buffer gets at the statement level, and the number of executions is far lower for batched inserts (it depends on your batch size):

Non batched inserts:

     Buffer                 Gets              Elapsed
      Gets   Executions   per Exec   %Total   Time (s)  %CPU   %IO    SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
    152,587      50,000          3.1   71.9        2.1  11.3     0 7c9ws5uvx9nyc
Module: JDBC Thin Client
insert into emp(employee_id,first_name,last_name,salary) values (:1 ,:2 ,:3 ,:4
)

Batched inserts:

     Buffer                 Gets              Elapsed
      Gets   Executions   per Exec   %Total   Time (s)  %CPU   %IO    SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
      3,669          10        366.9   31.1        0.1    57     0 7c9ws5uvx9nyc
Module: JDBC Thin Client
insert into emp(employee_id,first_name,last_name,salary) values (:1 ,:2 ,:3 ,:4)

 

When developers have to insert tons of data from a Java application, they should use batched inserts: it takes a lot of pressure off the database and dramatically reduces execution time.
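The AWR report above also shows one commit per row for the non-batched run: JDBC connections are in auto-commit mode by default, so every executeUpdate() is committed individually. Below is a minimal sketch of the same loop with auto-commit disabled and one explicit commit per executed batch; it assumes the same Connection c, Random r and generateRandomChars() helper as in the examples above, and it also flushes the last, possibly partial, batch after the loop:

c.setAutoCommit(false);              // JDBC auto-commit is on by default
PreparedStatement stmt = c.prepareStatement("insert into emp(employee_id,first_name,last_name,salary) values (?,?,?,?)");

int batchSize=5000;
int count=0;
for (int i=0 ; i < 50000 ; i++) {
        stmt.setInt(1,i);
        stmt.setString(2,generateRandomChars("01234567890ABCDEF",16));
        stmt.setString(3,generateRandomChars("01234567890ABCDEF",16));
        stmt.setInt(4,r.nextInt(10000+1)+5000);
        stmt.addBatch();

        if (++count % batchSize == 0) {
                stmt.executeBatch();
                c.commit();          // one commit per batch instead of one per row
        }
}
stmt.executeBatch();                 // flush the remaining rows of the last batch
c.commit();
stmt.close();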

That’s it for today 😉


Profiling Java Applications with Systemtap

A couple of days ago, I read a tweet from Tanel Poder about this kind of Java profiling.

I'm not a JVM internals geek, but I was sure there was a way to do the job without restarting the JVM, and I found some cool stuff with Systemtap.

To do this, you have to install two packages on your Linux distribution, systemtap and systemtap-runtime-java, and configure your user environment correctly:

[root@spark ~]# yum install systemtap systemtap-runtime-java

Please note that I used a CentOS 7.4 distribution.

Then, for the demo, I wrote a very small piece of Java that does these steps:

  1. Prints the JVM PID
  2. Waits for a key to be pressed. During this time, you will have to start the systemtap script described later.
  3. Runs a loop ten times; each iteration prints a message and waits one second. This last step is implemented in a method named “loop_and_wait”.

Here’s the sample code:


package com.premiseo;

import java.lang.*;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.IOException;

class Example {
   public static void loop_and_wait(int n) throws InterruptedException{
           System.out.println("Waiting "+n+"ms... Tick");
           Thread.sleep(n);
        }

   public static void main(String[] args) {

      System.out.println("PID = "+java.lang.management.ManagementFactory.getRuntimeMXBean().getName().split("@")[0]);
      System.out.println("Press any key when ready ...");

      try {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String next = in.readLine();
      }
      catch (IOException ioe) {
        ioe.printStackTrace();
      }

      try {
        for (int i=0;i<10;i++) {
           loop_and_wait(1000);
        }
      }
      catch (InterruptedException ie) {
        ie.printStackTrace();
      }
   }
}

Then, compile and execute it … very basic, as I said 😉

[spark@spark java]$ javac -cp $CLASSPATH:. com/premiseo/Example.java
[spark@spark java]$ java -cp $CLASSPATH:. com.premiseo.Example
PID = 9928
Press any key when ready ...

Waiting 1000ms... Tick
Waiting 1000ms... Tick
Waiting 1000ms... Tick
Waiting 1000ms... Tick
Waiting 1000ms... Tick
Waiting 1000ms... Tick
Waiting 1000ms... Tick
Waiting 1000ms... Tick
Waiting 1000ms... Tick
Waiting 1000ms... Tick

Now, to answer Tanel, I used a short systemtap script to profile the program and, specifically, the loop_and_wait method. It counts the number of times loop_and_wait has been called and accounts for the time spent executing it.

To do that, I had to write two probes, each of which references:

  • the full name of the class, including the package name: com.premiseo.Example
  • the class name where the method is defined: Example
  • the method name I want to profile: loop_and_wait

The first probe fires when the program enters the targeted method (java("com.premiseo.Example").class("Example").method("loop_and_wait")); the second one fires when the method returns (java("com.premiseo.Example").class("Example").method("loop_and_wait").return).

The related systemtap script is given below:

#!/usr/bin/env stap

global counter,timespent,t

probe begin {
  printf("Press Ctrl+C to stop profiling\n")
  counter=0
  timespent=0
}

probe java("com.premiseo.Example").class("Example").method("loop_and_wait")
{
  counter++
  t=gettimeofday_ms()
}

probe java("com.premiseo.Example").class("Example").method("loop_and_wait").return
{
  timespent+=gettimeofday_ms()-t
}

probe end {
   printf("Number of calls for loop_and_wait method: %ld \n",    counter)
   printf("Time Spent in method loop_and_wait: %ld msecs \n", timespent)
}

Executing this systemtap script while the demo program runs, then stopping it with Ctrl+C once the program has finished, prints the number of calls to loop_and_wait and the total time spent in it (for this demo, ten calls of roughly one second each).

Is it dynamic? Yes, there is no need to restart the running JVM process you want to target. If you want to target a specific JVM process id, you can use stap's “-x” option and modify your probe definition like this:

probe java("com.premiseo.Example").class("Example").method("loop_and_wait")
{
  if (pid() == target())
    counter++
    t=gettimeofday_ms()
}

There's a limitation: you cannot use wildcards in the java probe definition (java("com.premiseo.Example").class("Example").method("loop*"), for example). That would have been useful to profile a set of methods in the same class, but it's not possible currently.


And … that’s all for today !! 😉

 

 

Brute forcing the Oracle Password file

If you want to connect remotely to your Oracle instance as SYSDBA, by default (i.e. remote_login_passwordfile=EXCLUSIVE), you will probably use the password file (located in $ORACLE_HOME/dbs) to authenticate the SYS user.

Below, I used auditd to show that the password file is read by the server process when connecting remotely:

[oracle@oel6 ~]$ sudo service auditd status
auditd (pid  2422) is running...

[oracle@oel6 ~]$ sudo auditctl -l
No rules

[oracle@oel6 ~]$ sudo auditctl -w $ORACLE_HOME/dbs/orapworcl -p r
[oracle@oel6 ~]$ sudo auditctl -l
-w /u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapworcl -p r
[oracle@oel6 ~]$ sqlplus sys@orcl as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Fri Dec 15 09:14:52 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

[oracle@oel6 ~]$ sudo ausearch -f $ORACLE_HOME/dbs/orapworcl | grep -A 2 '^type=PATH'
type=PATH msg=audit(1513326435.361:310): item=0 name="/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapworcl" inode=109527 dev=fc:02 mode=0100640 ouid=500 ogid=500 rdev=00:00
type=CWD msg=audit(1513326435.361:310):  cwd="/u01/app/oracle/product/12.2.0/dbhome_1/dbs"
type=SYSCALL msg=audit(1513326435.361:310): arch=c000003e syscall=2 success=yes exit=7 a0=7ffe8aaf9c18 a1=0 a2=0 a3=3 items=1 ppid=1 pid=7840 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 ses=3 tty=(none) comm="oracle_7840_orc" exe="/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle" key=(null)
--
type=PATH msg=audit(1513326435.361:311): item=0 name="/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapworcl" inode=109527 dev=fc:02 mode=0100640 ouid=500 ogid=500 rdev=00:00
type=CWD msg=audit(1513326435.361:311):  cwd="/u01/app/oracle/product/12.2.0/dbhome_1/dbs"
type=SYSCALL msg=audit(1513326435.361:311): arch=c000003e syscall=2 success=yes exit=7 a0=7ffe8aaf9c18 a1=0 a2=0 a3=3 items=1 ppid=1 pid=7840 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 ses=3 tty=(none) comm="oracle_7840_orc" exe="/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle" key=(null)
--
type=PATH msg=audit(1513326435.361:312): item=0 name="/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapworcl" inode=109527 dev=fc:02 mode=0100640 ouid=500 ogid=500 rdev=00:00
type=CWD msg=audit(1513326435.361:312):  cwd="/u01/app/oracle/product/12.2.0/dbhome_1/dbs"
type=SYSCALL msg=audit(1513326435.361:312): arch=c000003e syscall=2 success=yes exit=7 a0=797bfbd8 a1=0 a2=0 a3=7ffe8aaf9c70 items=1 ppid=1 pid=7840 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 ses=3 tty=(none) comm="oracle_7840_orc" exe="/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle" key=(null)
--
type=PATH msg=audit(1513326435.361:313): item=0 name="/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapworcl" inode=109527 dev=fc:02 mode=0100640 ouid=500 ogid=500 rdev=00:00
type=CWD msg=audit(1513326435.361:313):  cwd="/u01/app/oracle/product/12.2.0/dbhome_1/dbs"
type=SYSCALL msg=audit(1513326435.361:313): arch=c000003e syscall=2 success=yes exit=7 a0=797bfbd8 a1=1002 a2=0 a3=797bfdf0 items=1 ppid=1 pid=7840 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 ses=3 tty=(none) comm="oracle_7840_orc" exe="/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle" key=(null)
--
type=PATH msg=audit(1513326435.366:314): item=0 name="/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapworcl" inode=109527 dev=fc:02 mode=0100640 ouid=500 ogid=500 rdev=00:00
type=CWD msg=audit(1513326435.366:314):  cwd="/u01/app/oracle/product/12.2.0/dbhome_1/dbs"
type=SYSCALL msg=audit(1513326435.366:314): arch=c000003e syscall=2 success=yes exit=7 a0=7ffe8aae9288 a1=0 a2=0 a3=3 items=1 ppid=1 pid=7840 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 ses=3 tty=(none) comm="oracle_7840_orc" exe="/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle" key=(null)
--
type=PATH msg=audit(1513326435.366:315): item=0 name="/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapworcl" inode=109527 dev=fc:02 mode=0100640 ouid=500 ogid=500 rdev=00:00
type=CWD msg=audit(1513326435.366:315):  cwd="/u01/app/oracle/product/12.2.0/dbhome_1/dbs"
type=SYSCALL msg=audit(1513326435.366:315): arch=c000003e syscall=2 success=yes exit=7 a0=7ffe8aae9288 a1=0 a2=0 a3=3 items=1 ppid=1 pid=7840 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 ses=3 tty=(none) comm="oracle_7840_orc" exe="/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle" key=(null)
--
type=PATH msg=audit(1513326435.366:316): item=0 name="/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapworcl" inode=109527 dev=fc:02 mode=0100640 ouid=500 ogid=500 rdev=00:00
type=CWD msg=audit(1513326435.366:316):  cwd="/u01/app/oracle/product/12.2.0/dbhome_1/dbs"
type=SYSCALL msg=audit(1513326435.366:316): arch=c000003e syscall=2 success=yes exit=7 a0=797bfbd8 a1=0 a2=0 a3=0 items=1 ppid=1 pid=7840 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 ses=3 tty=(none) comm="oracle_7840_orc" exe="/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle" key=(null)
--
type=PATH msg=audit(1513326435.366:317): item=0 name="/u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapworcl" inode=109527 dev=fc:02 mode=0100640 ouid=500 ogid=500 rdev=00:00
type=CWD msg=audit(1513326435.366:317):  cwd="/u01/app/oracle/product/12.2.0/dbhome_1/dbs"
type=SYSCALL msg=audit(1513326435.366:317): arch=c000003e syscall=2 success=yes exit=7 a0=797bfbd8 a1=1002 a2=0 a3=797bfdf0 items=1 ppid=1 pid=7840 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 ses=3 tty=(none) comm="oracle_7840_orc" exe="/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle" key=(null)
[oracle@oel6 ~]$ ps -ef | grep 7840
oracle    7840     1  0 09:27 ?        00:00:00 oracleorcl (LOCAL=NO)

So, if we have a closer look at this binary file, we can find various password hashes. In my example, I configured sqlnet.ora with SQLNET.ALLOWED_LOGON_VERSION_SERVER=11. As a result, my password file contains 10g, 11g and 12c password hashes for the SYS user.

Below, I used the Linux xxd command with specific offsets into my password file to extract the password hashes:

  • 10g sys password hash
[oracle@oel6 ~]$ xxd -c16 -g0 -s +0x0484 -l 16 $ORACLE_HOME/dbs/orapworcl
0000484: 32363235343335323638333944303441 262543526839D04A
  • 11g sys password hash
[oracle@oel6 ~]$ xxd -c30 -g0 -s +0x04ac -l 30 $ORACLE_HOME/dbs/orapworcl
00004ac: 2adaa0a90bf26f339c49fe9948ab88a20baf82f93ef3c5da13ca5eb95314 *.....o3.I..H.......>.....^.S.

First part (the first 20 bytes): the SHA1 digest

[oracle@oel6 ~]$ xxd -c20 -g0 -s +0x04ac -l 20 $ORACLE_HOME/dbs/orapworcl
00004ac: 2adaa0a90bf26f339c49fe9948ab88a20baf82f9 *.....o3.I..H.......

Second part (the next 10 bytes): the salt used by Oracle (a short verification sketch combining both parts is shown after this list)

[oracle@oel6 ~]$ xxd -c10 -g0 -s +0x4C0 -l 10 $ORACLE_HOME/dbs/orapworcl
00004c0: 3ef3c5da13ca5eb95314 >.....^.S.
  • 12c sys password hash
[oracle@oel6 ~]$ xxd -c80 -g0 -s +0x04CA -l 80 $ORACLE_HOME/dbs/orapworcl
00004ca: ad84face7a337c03baacca0bc63f97068e51edd0d6c53826ce8c347594a2800f92c736b4c83239fa47414ff2f68f45304b016ae215ed595c8b71c3c5a0ca3a0630e931d0f7d3929c9a6fb131f2fa0427 ....z3|......?...Q....8&..4u......6..29.GAO...E0K.j...Y\.q....:.0.1......o.1...'

Note: if you use the default configuration of Oracle 12.2, you will only find the 11g and 12c hashes.
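
To make the 11g hash layout concrete, here is a minimal Java sketch (not one of the scripts used below) that checks a candidate password against the S: verifier extracted above. It assumes the 11g verifier is simply SHA-1 computed over the password bytes followed by the 10-byte salt:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Check11gHash {
   static byte[] hexToBytes(String hex) {
      byte[] out = new byte[hex.length() / 2];
      for (int i = 0; i < out.length; i++)
         out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
      return out;
   }

   public static void main(String[] args) throws Exception {
      // 60 hex characters read at offset 0x04ac: 20-byte SHA-1 digest + 10-byte salt
      String stored = "2adaa0a90bf26f339c49fe9948ab88a20baf82f93ef3c5da13ca5eb95314";
      byte[] digest = hexToBytes(stored.substring(0, 40));
      byte[] salt   = hexToBytes(stored.substring(40));

      String candidate = "RocKyoU";
      MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
      sha1.update(candidate.getBytes(StandardCharsets.UTF_8));
      sha1.update(salt);
      System.out.println(MessageDigest.isEqual(sha1.digest(), digest) ? "match" : "no match");
   }
}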

 

Once these hashes have been extracted, I wrote some Python scripts that brute force them with a dictionary file to guess the password. Those scripts are available at this URL: https://github.com/lolo115/oracrack

First of all, I used ora10g_hash_bf.py against the 10g hash to find the case-insensitive password:

[oracle@oel6 sec]$ ./ora10g_hash_bf.py dict.txt sys 262543526839D04A
DICTFILE = dict.txt
USERNAME = sys
HASH     = 262543526839D04A
PASSWORD FOUND FOR USER sys !!! PASSWORD IS: rockyou
------
Be careful, the found password is case insensitive. The real password can include upper character(s)
Now generate all combinations for this password and run ora11g_hash_bf.py script to find the case sensitive password

OK, the password is weak and the program found it in the dictionary file. But if we try it on our database, it fails, because I used a SQL*Plus 12.2 client which uses case-sensitive passwords (and my SQL*Net client configuration is the default one):

$ sqlplus sys/rockyou@orcl as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Fri Dec 15 10:06:37 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

So, now that I know the SYS password is something like “rockyou” with some characters possibly in upper case, I will use my “gen_all_comb.py” script to generate a dictionary file that contains all the case combinations of “rockyou”:

[oracle@oel6 sec]$ ./gen_all_comb.py rockyou > mydict.txt
[oracle@oel6 sec]$ head mydict.txt
rockyou
rockyoU
rockyOu
rockyOU
rockYou
rockYoU
rockYOu
rockYOU
rocKyou
rocKyoU
.../...
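
The idea behind gen_all_comb.py is simply to enumerate every upper/lower-case combination of the word (2^n candidates for a word of n letters). A minimal Java sketch of the same idea (hypothetical, not the author's Python script, and the output order may differ) could look like this:

public class GenAllComb {
   public static void main(String[] args) {
      String word = args.length > 0 ? args[0] : "rockyou";
      int n = word.length();
      for (long mask = 0; mask < (1L << n); mask++) {
         StringBuilder sb = new StringBuilder(n);
         for (int i = 0; i < n; i++) {
            char ch = word.charAt(i);
            sb.append(((mask >> i) & 1) == 1 ? Character.toUpperCase(ch) : Character.toLowerCase(ch));
         }
         System.out.println(sb);
      }
   }
}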

As the last operation, I run the ora11g_hash_bf.py script against the 11g hash (which is case sensitive) with the previously generated dictionary (mydict.txt), and the script finds the correct password:

[oracle@oel6 sec]$ ./ora11g_hash_bf.py mydict.txt sys 2adaa0a90bf26f339c49fe9948ab88a20baf82f93ef3c5da13ca5eb95314
DICTFILE = mydict.txt
USERNAME = sys
HASH     = 2adaa0a90bf26f339c49fe9948ab88a20baf82f93ef3c5da13ca5eb95314
PASSWORD FOUND FOR USER sys !!! PASSWORD IS: RocKyoU

So let’s try:

[oracle@oel6 sec]$ sqlplus sys/RocKyoU@orcl as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Fri Dec 15 10:12:17 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>

In this blog post, I described how to brute force the SYS password stored in the password file, without repeatedly trying to connect to the database and locking the account. Please note that I never read the USER$ table either.

To avoid this, you can simply disable password file authentication by setting remote_login_passwordfile to NONE (it's a static parameter, so it has to be changed with SCOPE=SPFILE and the instance restarted) and removing the password file. You should also set a SYS password that is not weak (or use a password verify function with a profile, etc.); too many customers still set the SYS password to oracle, oracle123 or welcome1 …

Finally, I didn't write anything about the 12c hash (SHA-512) because it's a bit more complicated. Oracle 12c uses a SCRAM authentication dialog, so first you have to capture the client/server network dialog, extract some authentication keys and then run your own script. Moreover, the 12c hash generation process uses the PBKDF2 key derivation function, which is deliberately slow; as a consequence, it slows down the execution rate of your script. So if you want to secure your 12c authentication process and avoid password brute forcing, use only 12c password hashes (in your password file and in the USER$ table), encrypt your network traffic with the AES192 algorithm (starting with 12c, this can be done even in Standard Edition), and use strong passwords.

I will end this blog post with the traditional disclaimer: use everything written here (including the scripts) at your own risk. If you use this stuff to hack a SYS password without authorization, you alone will be responsible for the consequences! 😉

Install a Standalone Spark Environment on Oracle Linux 7

Spark is one of the trendiest projects in the Apache Foundation.

Until now, I have usually used it directly on Hadoop clusters, but each time I wanted to play with Spark without a complete Hadoop cluster, or just test some basic pieces of code, it was hard to do, especially on my laptop: running a 3-node CDH cluster on a laptop requires a lot of CPU and memory!

So in this post, I describe how you can set up a small Linux virtual machine and install the latest Spark version in standalone mode.

First of all, you need a fully operational Linux box … I chose an Oracle Linux 7.4 one with the 3.8.13-118 UEK kernel.

[spark@spark ~]$ sudo uname -r
3.8.13-118.19.4.el7uek.x86_64

Once it is installed and configured, you need to install Java. In my case, I installed a JDK 8 SE:

[spark@spark ~]$ sudo yum localinstall /home/spark/jdk-8u121-linux-x64.rpm -y
[spark@spark ~]$ java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

Then, create the required directories for the Spark installation and download the sources (if you need another version of Spark, you will find it at this URL: https://spark.apache.org/downloads.html):

[spark@spark ~]$ sudo mkdir /usr/local/share/spark
[spark@spark ~]$ sudo chown spark:spark /usr/local/share/spark
[spark@spark ~]$ curl -O https://d3kbcqa49mib13.cloudfront.net/spark-2.2.0.tgz
[spark@spark ~]$ tar -xvzf spark-2.2.0.tgz -C /usr/local/share/spark/
[spark@spark ~]$ cd /usr/local/share/spark/spark-2.2.0/

If you are behind a proxy server, you have to create a settings.xml file in the $HOME/.m2 directory (you'll probably have to create the directory too). You have to do this even if you have set the http_proxy variable in your environment, because Maven, which is used during the build process, will use it.

Below, you’ll see what my settings.xml file looks like:

[spark@spark ~]$ cat ~/.m2/settings.xml
<settings>
 <proxies>
 <proxy>
 <id>example-proxy</id>
 <active>true</active>
 <protocol>http</protocol>
 <host>10.239.9.20</host>
 <port>80</port>
 </proxy>
 </proxies>
</settings>

Then, you are ready to configure the Maven environment and launch the build process:

[spark@spark ~]$ cd /usr/local/share/spark/spark-2.2.0/
[spark@spark spark-2.2.0]$ export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"
[spark@spark spark-2.2.0]$ ./build/mvn -DskipTests clean package

At the end of the process, a summary report is printed.

[spark@spark spark-2.2.0]$ ./build/mvn -DskipTests clean package

.../...

[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.0.jar with /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.0-shaded.jar
[INFO] Dependency-reduced POM written at: /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/dependency-reduced-pom.xml
[INFO]
[INFO] --- maven-source-plugin:3.0.1:jar-no-fork (create-source-jar) @ spark-streaming-kafka-0-10-assembly_2.11 ---
[INFO] Building jar: /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.0-sources.jar
[INFO]
[INFO] --- maven-source-plugin:3.0.1:test-jar-no-fork (create-source-jar) @ spark-streaming-kafka-0-10-assembly_2.11 ---
[INFO] Building jar: /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.0-test-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [01:04 min]
[INFO] Spark Project Tags ................................. SUCCESS [ 26.598 s]
[INFO] Spark Project Sketch ............................... SUCCESS [ 6.316 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 17.129 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 6.836 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 9.039 s]
[INFO] Spark Project Launcher ............................. SUCCESS [ 21.286 s]
[INFO] Spark Project Core ................................. SUCCESS [02:24 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 20.021 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 13.117 s]
[INFO] Spark Project Streaming ............................ SUCCESS [ 33.581 s]
[INFO] Spark Project Catalyst ............................. SUCCESS [01:22 min]
[INFO] Spark Project SQL .................................. SUCCESS [02:56 min]
[INFO] Spark Project ML Library ........................... SUCCESS [02:08 min]
[INFO] Spark Project Tools ................................ SUCCESS [ 3.084 s]
[INFO] Spark Project Hive ................................. SUCCESS [ 51.106 s]
[INFO] Spark Project REPL ................................. SUCCESS [ 4.365 s]
[INFO] Spark Project Assembly ............................. SUCCESS [ 2.109 s]
[INFO] Spark Project External Flume Sink .................. SUCCESS [ 8.062 s]
[INFO] Spark Project External Flume ....................... SUCCESS [ 9.350 s]
[INFO] Spark Project External Flume Assembly .............. SUCCESS [ 2.087 s]
[INFO] Spark Integration for Kafka 0.8 .................... SUCCESS [ 12.043 s]
[INFO] Kafka 0.10 Source for Structured Streaming ......... SUCCESS [ 12.758 s]
[INFO] Spark Project Examples ............................. SUCCESS [ 19.236 s]
[INFO] Spark Project External Kafka Assembly .............. SUCCESS [ 5.637 s]
[INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [ 9.345 s]
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [ 3.909 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14:54 min
[INFO] Finished at: 2017-09-14T12:22:31+02:00
[INFO] Final Memory: 86M/896M
[INFO] ------------------------------------------------------------------------

At this point, if you run some scripts, you'll get an error because, even if you have installed Spark in standalone mode, you still need the Hadoop libraries.

It's an easy thing to do: we just have to download Hadoop and configure our environment this way (download the Hadoop version you need; I chose 2.8.1, which is the latest stable Hadoop 2 version; I didn't test with Hadoop 3 as it's still in beta):

[spark@spark ~]$ cd /usr/local/share/
[spark@spark share]$ sudo mkdir hadoop
[spark@spark share]$ sudo chown spark:spark hadoop/
[spark@spark share]$ cd hadoop/
[spark@spark hadoop]$ curl -O http://apache.mirrors.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz
[spark@spark hadoop]$ tar -xzf hadoop-2.8.1.tar.gz
[spark@spark hadoop]$ cat >> ~/.bashrc
export HADOOP_HOME=/usr/local/share/hadoop/hadoop-2.8.1
export LD_LIBRARY_PATH=${HADOOP_HOME}/lib/native:${LD_LIBRARY_PATH}
export SPARK_HOME=/usr/local/share/spark/spark-2.2.0
export PATH=${SPARK_HOME}/bin:${PATH}
[spark@spark hadoop]$ . ~/.bashrc
[spark@spark hadoop]$ env | egrep 'HADOOP|PATH|SPARK'
SPARK_HOME=/usr/local/share/spark/spark-2.2.0
HADOOP_HOME=/usr/local/share/hadoop/hadoop-2.8.1
LD_LIBRARY_PATH=/usr/local/share/hadoop/hadoop-2.8.1/lib/native:/usr/local/share/hadoop/hadoop-2.8.1/lib/native:
PATH=/usr/local/share/spark/spark-2.2.0/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/spark/.local/bin:/home/spark/bin

Now, we can run the SparkPi example:

[spark@spark ~]$ run-example SparkPi 500
Pi is roughly 3.141360702827214

Note: if you want to remove all those noisy INFO messages from the output, run the command below to configure the log4j properties:

[spark@spark hadoop]$ cd $SPARK_HOME/conf
[spark@spark conf]$ sed 's/log4j\.rootCategory=INFO, console/log4j\.rootCategory=WARN, console/g' log4j.properties.template > log4j.properties

 

That's done; now you're ready to run your code on Spark. Below, I load a small script written in Scala (jdbc_sample.scala) that creates a DataFrame from an Oracle JDBC data source and runs a groupBy on it.

[spark@spark ~]$ spark-shell --driver-class-path ojdbc7.jar --jars ojdbc7.jar
Spark context Web UI available at http://192.168.99.14:4040
Spark context available as 'sc' (master = local[*], app id = local-1505397247969).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.

scala> :load jdbc_sample.scala
Loading jdbc_sample.scala...
import java.util.Properties
connProps: java.util.Properties = {}
res0: Object = null
res1: Object = null
df: org.apache.spark.sql.DataFrame = [PROD_ID: decimal(6,0), PROD_NAME: string ... 20 more fields]

scala> df.printSchema
root
 |-- PROD_ID: decimal(6,0) (nullable = false)
 |-- PROD_NAME: string (nullable = false)
 |-- PROD_DESC: string (nullable = false)
 |-- PROD_SUBCATEGORY: string (nullable = false)
 |-- PROD_SUBCATEGORY_ID: decimal(38,10) (nullable = false)
 |-- PROD_SUBCATEGORY_DESC: string (nullable = false)
 |-- PROD_CATEGORY: string (nullable = false)
 |-- PROD_CATEGORY_ID: decimal(38,10) (nullable = false)
 |-- PROD_CATEGORY_DESC: string (nullable = false)
 |-- PROD_WEIGHT_CLASS: decimal(3,0) (nullable = false)
 |-- PROD_UNIT_OF_MEASURE: string (nullable = true)
 |-- PROD_PACK_SIZE: string (nullable = false)
 |-- SUPPLIER_ID: decimal(6,0) (nullable = false)
 |-- PROD_STATUS: string (nullable = false)
 |-- PROD_LIST_PRICE: decimal(8,2) (nullable = false)
 |-- PROD_MIN_PRICE: decimal(8,2) (nullable = false)
 |-- PROD_TOTAL: string (nullable = false)
 |-- PROD_TOTAL_ID: decimal(38,10) (nullable = false)
 |-- PROD_SRC_ID: decimal(38,10) (nullable = true)
 |-- PROD_EFF_FROM: timestamp (nullable = true)
 |-- PROD_EFF_TO: timestamp (nullable = true)
 |-- PROD_VALID: string (nullable = true)

scala> df.groupBy("PROD_CATEGORY").count.show
+--------------------+-----+
|       PROD_CATEGORY|count|
+--------------------+-----+
|      Software/Other|   26|
|               Photo|   10|
|         Electronics|   13|
|Peripherals and A...|   21|
|            Hardware|    2|
+--------------------+-----+

And … that’s it … have fun with Spark 😉

 

How user/role password security is managed in Oracle vs. Postgres

First of all, I would like to say that I am not comparing a licensed product (Oracle) with a freely distributed product (Postgres). This article describes how password security is managed in both products, not which one is the better product.

OK, that said, I analyzed how password security is managed based on four topics: password hashes, the salt used, password length and case sensitivity.

For all these tests, I used an Oracle 12.2 database (EE) and the latest Postgres server version, 9.6.2, which I built from source.

HASHES

Both RDBMS, Oracle and Postgres, use hash algorithms to store passwords in a hashed, non-reversible form.

Oracle

In Oracle 12.2, depending on the value of SQLNET.ALLOWED_LOGON_VERSION_SERVER in sqlnet.ora, the password can be stored using different hash algorithms.

To sum up:

  • Before Oracle 11gR1: an algorithm based on DES is used to generate the password hash. This algorithm is weak because the key length is too short and because passwords are not case sensitive. The password hash is stored in USER$.PASSWORD in the dictionary.
  • From Oracle 11gR1: Oracle uses the SHA1 algorithm. Nowadays, SHA1 is considered broken and collisions are possible. The SHA1 password hash is stored in USER$.SPARE4 (it is the string that starts with S:).
  • Starting with Oracle 12c: Oracle uses SHA2(512) combined with the PBKDF2 function (see https://en.wikipedia.org/wiki/PBKDF2). At the time of writing, these algorithms haven't been broken. The SHA2(512) password hash is stored in USER$.SPARE4 (it is the string that starts with T:). A generic Java illustration of PBKDF2 is shown after the SQL example below.

Please note that in Oracle 12.1, an HTTP digest verifier was also generated, but it has been removed in Oracle 12.2. By default, Oracle 12c stores both SHA1 and SHA2(512) hashes:

SQL> create user laurent identified by laurent;

User created.

SQL> select name,spare4 from user$ where name='LAURENT';

NAME                 SPARE4
-------------------- ------------------------------------------------------------
LAURENT              S:5F0BAF84F6E102D0B1B7AEA78A5C1234E39F131EEC9C5C5E6B07CF3482
                     AF;T:B340F958041950CD2977301A15A2781D5EA3F71D51E25E9432EBC78
                     D369F13D9D568AC230422FDD2E28F1360D90F360E0A52B76B6F32522EF2B
                     3C1625489A5FBC93D710F50C4F0DDE7BFB8EB4CD4C729
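
PBKDF2 is a deliberately slow key-derivation function: the iteration count multiplies the work an attacker has to do for each candidate password. As a generic illustration only (this is not Oracle's exact 12c scheme; the salt, iteration count and output length below are arbitrary example values), a PBKDF2-HMAC-SHA512 derivation in Java looks like this:

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Pbkdf2Demo {
   public static void main(String[] args) throws Exception {
      char[] password = "SomePassword".toCharArray();
      byte[] salt = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08};   // example salt
      int iterations = 4096;                                            // example iteration count
      int keyLengthBits = 512;

      SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
      byte[] derived = skf.generateSecret(new PBEKeySpec(password, salt, iterations, keyLengthBits)).getEncoded();

      StringBuilder hex = new StringBuilder();
      for (byte b : derived) hex.append(String.format("%02x", b));
      System.out.println(hex);
   }
}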

Postgres

In Postgres, password hashes are generated with the MD5 algorithm. The hashed password is stored in the pg_authid table (in pg_catalog); the first 3 characters are set to 'md5', followed by the hash:

postgres=# create role test with encrypted password 'test' login;

postgres=# select rolname,rolpassword from pg_authid where rolname='test';
 rolname |             rolpassword
---------+-------------------------------------
 test    | md505a671c66aefea124cc08b76ea6d30bb

Please note that, even if it's not the default, passwords can be stored unencrypted in the catalog.

postgres=# alter role test with unencrypted password 'passwd';
ALTER ROLE
postgres=# select rolname,rolsuper,rolpassword from pg_authid where rolname='test';
 rolname | rolsuper | rolpassword
---------+----------+-------------
 test    | f        | passwd

 

Match score

Oracle wins the hashes game, because Oracle 12 is the only one of the two to offer a strong hash algorithm (SHA2(512) + PBKDF2) that hasn't been broken yet; SHA1 and MD5 are both weak algorithms.

SALT

As a reminder, a salt is random data used as an additional input to the function that hashes a password. As a result, the stored hash is different each time the password is set, even if the same password is used.

Oracle

  • Before Oracle 11g, Oracle doesn't use a real salt. The algorithm (you can find it easily on the internet) uses DES encryption and an initial key of 0x0123456789ABCDEF.
  • From Oracle 11gR1, the password is concatenated with a (supposedly) random 10-byte salt and hashed with the SHA1 algorithm.
  • From Oracle 12c onwards, a (supposedly) random salt is used as well.

As a result, if we set the same password as the previous one, the password hash changes:

SQL> alter user laurent identified by passwd;

User altered.

SQL> select name,spare4 from user$ where name='LAURENT';

NAME                           SPARE4
------------------------------ ------------------------------------------------------------------------------------------------------------------------
LAURENT                        S:3F2A7144E07CE3631F88D00C9851703D107A4230B3C8A820BAC49372BFE6;T:47A8D24FFC2465100B8C5097278DDDF012EFD24303680DB8C7151ED
                               54D3DE0E2BD649A3788FDB50316B843903D686034BEA6E5FC51AF914F2C30547DCF89BF507CDEB5D9B14BA70B8572C35251DCF861


SQL> alter user laurent identified by passwd;

User altered.

SQL> select name,spare4 from user$ where name='LAURENT';

NAME                           SPARE4
------------------------------ ------------------------------------------------------------------------------------------------------------------------
LAURENT                        S:6073B325C321F6CAFD97297ECC5CD7C423F3F381A1E0111400AC8BFBA87F;T:003F805AA6CA9E86393D84FB09E0B66BEBDEC3CBBA0DCC4EE59E83F
                               85E980BF86AB262D05BC4C424392383EE6FA4A0B88A5E0C7D1A76A6A4EBE1900CF5B37C5F20ED1D42F0D2FA912DD222E72E716391

Postgres

If you have a look at the Postgres C code, the password is hashed in the src/backend/commands/user.c file, inside a function called CreateRole or AlterRole depending on what you are doing with your role.

Let's take the AlterRole function as an example. In this function, a call is made to the pg_md5_encrypt function, whose code is available in the src/backend/libpq/md5.c file.

By looking at the pg_md5_encrypt call and how it's designed, it's easy to conclude that the role name is used as the salt.

So Postgres adds a string to the password before hashing it, but it's not random. As a consequence, if you repeat the operation I did previously for Oracle, you can see below that the password hash remains the same, which is, in my opinion, weak.

postgres=# alter role test with encrypted password 'passwd';
ALTER ROLE
postgres=# select rolname,rolsuper,rolpassword from pg_authid where rolname='test';
 rolname | rolsuper |             rolpassword
---------+----------+-------------------------------------
 test    | f        | md50104560cc42670b87acd04a5bba33344
(1 row)

postgres=# alter role test with encrypted password 'passwd';
ALTER ROLE
postgres=# select rolname,rolsuper,rolpassword from pg_authid where rolname='test';
 rolname | rolsuper |             rolpassword
---------+----------+-------------------------------------
 test    | f        | md50104560cc42670b87acd04a5bba33344
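
For illustration, here is a minimal Java sketch that rebuilds the Postgres MD5 verifier, i.e. the string "md5" followed by the hex MD5 of the password concatenated with the role name; with password test and role test it should reproduce the md505a671c66aefea124cc08b76ea6d30bb value shown earlier:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PgMd5Verifier {
   public static void main(String[] args) throws Exception {
      String password = "test";
      String rolename = "test";   // the role name acts as the (non-random) salt
      MessageDigest md5 = MessageDigest.getInstance("MD5");
      byte[] digest = md5.digest((password + rolename).getBytes(StandardCharsets.UTF_8));
      StringBuilder sb = new StringBuilder("md5");
      for (byte b : digest) sb.append(String.format("%02x", b));
      System.out.println(sb);
   }
}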

 

Match score

Oracle wins the salt match because it uses what is really defined as a salt … random data.

PASSWORD LENGTH

As we all know, the longer the password, the harder it is to guess (especially through a brute-force attack): every extra character multiplies the search space by the size of the alphabet.

Oracle

Oracle has a hard limit on password length, set to 30 characters.

Above 30 characters, an error is thrown:

SQL> alter user laurent identified by abcdefghijklmnopqrstuvwxyz0123;

User altered.

SQL> alter user laurent identified by abcdefghijklmnopqrstuvwxyz01234;
alter user laurent identified by abcdefghijklmnopqrstuvwxyz01234
                                 *
ERROR at line 1:
ORA-00972: identifier is too long

Postgres

For Postgres, I was able to set a password of unlimited length (the ALTER ROLE statement completes correctly), but I could only log in to the database with a password of up to 995 characters, and only by using the PGPASSWORD environment variable (see below). Above 995 characters, the message “LOG:  invalid message length” is logged in the server logfile (I didn't find any reference to this limit in the code, but it deserves further analysis):

Please note that the psql binary uses a C function named “simple_prompt” to read the password from standard input, and this function is called with a limit of 100 characters. So if you use psql to log in, you cannot directly type a password longer than 100 characters (see src/bin/psql/startup.c and src/port/sprompt.c).

postgres=# alter role test with encrypted password 'abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123';
ALTER ROLE
postgres=# \q


laurent@pg:~/pgsrc/postgresql-9.6.2$ export PGPASSWORD=abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123abcdefghijklmnopqrstuvwxyz0123

laurent@pg:~/pgsrc/postgresql-9.6.2$ psql -d laurent -U test
psql (9.6.2)
Type "help" for help.

laurent=>

Match score

On password length, Postgres wins the game. Indeed, with PG you can set a very long password, and cracking it can become a very long task (but be careful with the client tools you use). However, above 100 characters, logging in to Postgres becomes painful depending on the client you use.

With Oracle or Postgres, the best thing to do is to use long passwords that contain all kinds of characters (upper and lower case letters, numbers, special characters).

CASE SENSITIVITY

In this area, (modern) Oracle and Postgres both use case-sensitive passwords.

But with Oracle:

  • 10g
  • 11g with sec_case_sensitive_logon parameter set to FALSE
  • 12c with SQLNET.ALLOWED_LOGON_VERSION_SERVER set to 8, 9, 10 or 11.

With these settings, passwords become case-insensitive, which is not very secure.