Oracle … as usual

Oracle by Laurent Leturgez

Install a Standalone Spark Environment on Oracle Linux 7

Spark is one of the trendiest projects in the Apache Foundation.

Until now, I have usually used it directly on Hadoop clusters, but every time I wanted to play with Spark without needing a complete Hadoop cluster, or to test some basic pieces of code, it was hard to do, especially on my laptop! Running a 3-node CDH cluster on your laptop requires a lot of CPU and memory!

So in this post, I describe how to set up a small Linux virtual machine and install the latest Spark version in standalone mode.

First of all, you need a fully operational Linux box. I chose an Oracle Linux 7.4 one with the 3.8.13-118 UEK kernel.

[spark@spark ~]$ sudo uname -r
3.8.13-118.19.4.el7uek.x86_64

Once the box is installed and configured, you need to install Java. In my case, I installed a JDK 8 SE:

[spark@spark ~]$ sudo yum localinstall /home/spark/jdk-8u121-linux-x64.rpm -y
[spark@spark ~]$ java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

Then, create the required directories for the Spark installation and download the sources (if you need another version of Spark, you will find it at this URL: https://spark.apache.org/downloads.html):

[spark@spark ~]$ sudo mkdir /usr/local/share/spark
[spark@spark ~]$ sudo chown spark:spark /usr/local/share/spark
[spark@spark ~]$ curl -O https://d3kbcqa49mib13.cloudfront.net/spark-2.2.0.tgz
[spark@spark ~]$ tar -xvzf spark-2.2.0.tgz -C /usr/local/share/spark/
[spark@spark ~]$ cd /usr/local/share/spark/spark-2.2.0/

If you are behind a proxy server, you have to create a settings.xml file in the $HOME/.m2 directory (you’ll probably have to create the directory too). You have to do this even if you have set the http_proxy variable in your environment, because Maven, which is used during the build process, reads its proxy configuration from this file instead.

Below, you’ll see what my settings.xml file looks like:

[spark@spark ~]$ cat ~/.m2/settings.xml
<settings>
  <proxies>
    <proxy>
      <id>example-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>10.239.9.20</host>
      <port>80</port>
    </proxy>
  </proxies>
</settings>

Then, you are ready to configure the Maven environment and launch the build process:

[spark@spark ~]$ cd /usr/local/share/spark/spark-2.2.0/
[spark@spark spark-2.2.0]$ export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"
[spark@spark spark-2.2.0]$ ./build/mvn -DskipTests clean package

At the end of the process, a summary report is printed.

[spark@spark spark-2.2.0]$ ./build/mvn -DskipTests clean package

.../...

[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.0.jar with /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.0-shaded.jar
[INFO] Dependency-reduced POM written at: /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/dependency-reduced-pom.xml
[INFO]
[INFO] --- maven-source-plugin:3.0.1:jar-no-fork (create-source-jar) @ spark-streaming-kafka-0-10-assembly_2.11 ---
[INFO] Building jar: /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.0-sources.jar
[INFO]
[INFO] --- maven-source-plugin:3.0.1:test-jar-no-fork (create-source-jar) @ spark-streaming-kafka-0-10-assembly_2.11 ---
[INFO] Building jar: /usr/local/share/spark/spark-2.2.0/external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.0-test-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [01:04 min]
[INFO] Spark Project Tags ................................. SUCCESS [ 26.598 s]
[INFO] Spark Project Sketch ............................... SUCCESS [ 6.316 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 17.129 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 6.836 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 9.039 s]
[INFO] Spark Project Launcher ............................. SUCCESS [ 21.286 s]
[INFO] Spark Project Core ................................. SUCCESS [02:24 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 20.021 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 13.117 s]
[INFO] Spark Project Streaming ............................ SUCCESS [ 33.581 s]
[INFO] Spark Project Catalyst ............................. SUCCESS [01:22 min]
[INFO] Spark Project SQL .................................. SUCCESS [02:56 min]
[INFO] Spark Project ML Library ........................... SUCCESS [02:08 min]
[INFO] Spark Project Tools ................................ SUCCESS [ 3.084 s]
[INFO] Spark Project Hive ................................. SUCCESS [ 51.106 s]
[INFO] Spark Project REPL ................................. SUCCESS [ 4.365 s]
[INFO] Spark Project Assembly ............................. SUCCESS [ 2.109 s]
[INFO] Spark Project External Flume Sink .................. SUCCESS [ 8.062 s]
[INFO] Spark Project External Flume ....................... SUCCESS [ 9.350 s]
[INFO] Spark Project External Flume Assembly .............. SUCCESS [ 2.087 s]
[INFO] Spark Integration for Kafka 0.8 .................... SUCCESS [ 12.043 s]
[INFO] Kafka 0.10 Source for Structured Streaming ......... SUCCESS [ 12.758 s]
[INFO] Spark Project Examples ............................. SUCCESS [ 19.236 s]
[INFO] Spark Project External Kafka Assembly .............. SUCCESS [ 5.637 s]
[INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [ 9.345 s]
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [ 3.909 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14:54 min
[INFO] Finished at: 2017-09-14T12:22:31+02:00
[INFO] Final Memory: 86M/896M
[INFO] ------------------------------------------------------------------------

At this point, if you run some scripts, you’ll hit an error because, even though you have installed Spark in standalone mode, you still need the Hadoop libraries.

It’s an easy thing to do: we just have to download Hadoop and configure our environment as follows (download the Hadoop version you need; I chose 2.8.1, the latest stable version of Hadoop 2. I didn’t test with Hadoop 3 as it’s still in beta):

[spark@spark ~]$ cd /usr/local/share/
[spark@spark share]$ sudo mkdir hadoop
[spark@spark share]$ sudo chown spark:spark hadoop/
[spark@spark share]$ cd hadoop/
[spark@spark hadoop]$ curl -O http://apache.mirrors.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz
[spark@spark hadoop]$ tar -xzf hadoop-2.8.1.tar.gz
[spark@spark hadoop]$ cat >> ~/.bashrc
export HADOOP_HOME=/usr/local/share/hadoop/hadoop-2.8.1
export LD_LIBRARY_PATH=${HADOOP_HOME}/lib/native:${LD_LIBRARY_PATH}
export SPARK_HOME=/usr/local/share/spark/spark-2.2.0
export PATH=${SPARK_HOME}/bin:${PATH}
[spark@spark hadoop]$ . ~/.bashrc
[spark@spark hadoop]$ env | egrep 'HADOOP|PATH|SPARK'
SPARK_HOME=/usr/local/share/spark/spark-2.2.0
HADOOP_HOME=/usr/local/share/hadoop/hadoop-2.8.1
LD_LIBRARY_PATH=/usr/local/share/hadoop/hadoop-2.8.1/lib/native:/usr/local/share/hadoop/hadoop-2.8.1/lib/native:
PATH=/usr/local/share/spark/spark-2.2.0/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/spark/.local/bin:/home/spark/bin

Now, we can run the SparkPi example:

[spark@spark ~]$ run-example SparkPi 500
Pi is roughly 3.141360702827214

Note: If you want to remove all those crappy INFO messages from the output, run the commands below to configure the log4j properties:

[spark@spark hadoop]$ cd $SPARK_HOME/conf
[spark@spark conf]$ sed 's/log4j\.rootCategory=INFO, console/log4j\.rootCategory=WARN, console/g' log4j.properties.template > log4j.properties

That’s done: now you’re ready to run your code on Spark. Below, I wrote some sample code in Scala to create a DataFrame from an Oracle JDBC data source and run a groupBy function on it.
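
Here is a minimal sketch of what the jdbc_sample.scala script loaded below can look like (the JDBC URL, the credentials and the SH.PRODUCTS sample table are placeholders to adapt to your environment):

import java.util.Properties

// Connection properties for the Oracle JDBC driver (placeholders to adapt)
val connProps = new Properties()
connProps.put("user", "sh")
connProps.put("password", "sh")

// Build a DataFrame on top of an Oracle table through the JDBC driver passed
// with --driver-class-path/--jars; "spark" is the SparkSession provided by spark-shell
val df = spark.read.jdbc("jdbc:oracle:thin:@//oracle-server:1521/orcl", "SH.PRODUCTS", connProps)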

[spark@spark ~]$ spark-shell --driver-class-path ojdbc7.jar --jars ojdbc7.jar
Spark context Web UI available at http://192.168.99.14:4040
Spark context available as 'sc' (master = local[*], app id = local-1505397247969).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.

scala> :load jdbc_sample.scala
Loading jdbc_sample.scala...
import java.util.Properties
connProps: java.util.Properties = {}
res0: Object = null
res1: Object = null
df: org.apache.spark.sql.DataFrame = [PROD_ID: decimal(6,0), PROD_NAME: string ... 20 more fields]

scala> df.printSchema
root
 |-- PROD_ID: decimal(6,0) (nullable = false)
 |-- PROD_NAME: string (nullable = false)
 |-- PROD_DESC: string (nullable = false)
 |-- PROD_SUBCATEGORY: string (nullable = false)
 |-- PROD_SUBCATEGORY_ID: decimal(38,10) (nullable = false)
 |-- PROD_SUBCATEGORY_DESC: string (nullable = false)
 |-- PROD_CATEGORY: string (nullable = false)
 |-- PROD_CATEGORY_ID: decimal(38,10) (nullable = false)
 |-- PROD_CATEGORY_DESC: string (nullable = false)
 |-- PROD_WEIGHT_CLASS: decimal(3,0) (nullable = false)
 |-- PROD_UNIT_OF_MEASURE: string (nullable = true)
 |-- PROD_PACK_SIZE: string (nullable = false)
 |-- SUPPLIER_ID: decimal(6,0) (nullable = false)
 |-- PROD_STATUS: string (nullable = false)
 |-- PROD_LIST_PRICE: decimal(8,2) (nullable = false)
 |-- PROD_MIN_PRICE: decimal(8,2) (nullable = false)
 |-- PROD_TOTAL: string (nullable = false)
 |-- PROD_TOTAL_ID: decimal(38,10) (nullable = false)
 |-- PROD_SRC_ID: decimal(38,10) (nullable = true)
 |-- PROD_EFF_FROM: timestamp (nullable = true)
 |-- PROD_EFF_TO: timestamp (nullable = true)
 |-- PROD_VALID: string (nullable = true)

scala> df.groupBy("PROD_CATEGORY").count.show
+--------------------+-----+
|       PROD_CATEGORY|count|
+--------------------+-----+
|      Software/Other|   26|
|               Photo|   10|
|         Electronics|   13|
|Peripherals and A...|   21|
|            Hardware|    2|
+--------------------+-----+

And … that’s it … have fun with Spark 😉

Install and configure DTrace on Oracle Linux

DTrace is one of the best tools to perform dynamic tracing of program execution.

DTrace was initially released on Solaris, and it has now been ported to Linux.

In this post, I will describe very briefly how to install and configure the DTrace port on an Oracle Linux 6 box with a UEK4 kernel.

First, download the dtrace-utils and dtrace-utils-devel packages. These packages are available at this URL: http://www.oracle.com/technetwork/server-storage/linux/downloads/linux-dtrace-2800968.html. You just have to download the correct releases depending on your UEK kernel version and your Oracle Linux distribution.

In my case, I chose “DTrace utilities, Oracle Linux 6 (x86_64)” for UEK4 kernel.

[root@oel6 dtrace]# ls
dtrace-utils-0.5.1-3.el6.x86_64.rpm  dtrace-utils-devel-0.5.1-3.el6.x86_64.rpm

Then, I used yum to install both packages, but first you have to configure (if not already done) your yum repo for UEK4:

[public_ol6_UEKR4]
name=Latest Unbreakable Enterprise Kernel Release 4 for Oracle Linux $releasever ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/UEKR4/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

Now, install the packages:

[root@oel6 dtrace]# yum localinstall dtrace-utils-*
Loaded plugins: auto-update-debuginfo, refresh-packagekit, security, ulninfo
Setting up Local Package Process
Examining dtrace-utils-0.5.1-3.el6.x86_64.rpm: dtrace-utils-0.5.1-3.el6.x86_64
Marking dtrace-utils-0.5.1-3.el6.x86_64.rpm to be installed
Examining dtrace-utils-devel-0.5.1-3.el6.x86_64.rpm: dtrace-utils-devel-0.5.1-3.el6.x86_64
Marking dtrace-utils-devel-0.5.1-3.el6.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package dtrace-utils.x86_64 0:0.5.1-3.el6 will be installed
--> Processing Dependency: dtrace-modules-shared-headers for package: dtrace-utils-0.5.1-3.el6.x86_64
--> Processing Dependency: libdtrace-ctf for package: dtrace-utils-0.5.1-3.el6.x86_64
--> Processing Dependency: libdtrace-ctf.so.1(LIBDTRACE_CTF_1.0)(64bit) for package: dtrace-utils-0.5.1-3.el6.x86_64
--> Processing Dependency: libdtrace-ctf.so.1()(64bit) for package: dtrace-utils-0.5.1-3.el6.x86_64
---> Package dtrace-utils-devel.x86_64 0:0.5.1-3.el6 will be installed
--> Processing Dependency: libdtrace-ctf-devel > 0.4.0 for package: dtrace-utils-devel-0.5.1-3.el6.x86_64
--> Running transaction check
---> Package dtrace-modules-shared-headers.x86_64 0:0.5.3-2.el6 will be installed
---> Package libdtrace-ctf.x86_64 0:0.5.0-3.el6 will be installed
---> Package libdtrace-ctf-devel.x86_64 0:0.5.0-3.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================================================================================================================================================================================================
 Package                                                           Arch                                       Version                                            Repository                                                                  Size
==================================================================================================================================================================================================================================================
Installing:
 dtrace-utils                                                      x86_64                                     0.5.1-3.el6                                        /dtrace-utils-0.5.1-3.el6.x86_64                                           786 k
 dtrace-utils-devel                                                x86_64                                     0.5.1-3.el6                                        /dtrace-utils-devel-0.5.1-3.el6.x86_64                                      76 k
Installing for dependencies:
 dtrace-modules-shared-headers                                     x86_64                                     0.5.3-2.el6                                        public_ol6_UEKR4                                                            30 k
 libdtrace-ctf                                                     x86_64                                     0.5.0-3.el6                                        public_ol6_UEKR4                                                            28 k
 libdtrace-ctf-devel                                               x86_64                                     0.5.0-3.el6                                        public_ol6_UEKR4                                                            15 k

Transaction Summary
==================================================================================================================================================================================================================================================
Install       5 Package(s)

Total size: 935 k
Total download size: 73 k
Installed size: 1.0 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): dtrace-modules-shared-headers-0.5.3-2.el6.x86_64.rpm                                                                                                                                                                |  30 kB     00:00
(2/3): libdtrace-ctf-0.5.0-3.el6.x86_64.rpm                                                                                                                                                                                |  28 kB     00:00
(3/3): libdtrace-ctf-devel-0.5.0-3.el6.x86_64.rpm  
.../...

Installing those packages is not sufficient: you also have to install a package containing the kernel modules for DTrace, and as the version of this package depends on your kernel, you have to run the yum command below:

[root@oel6 dtrace]# yum install dtrace-modules-`uname -r`
Loaded plugins: auto-update-debuginfo, refresh-packagekit, security, ulninfo
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package dtrace-modules-4.1.12-37.5.1.el6uek.x86_64 0:0.5.2-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================================================================================================================================================================================================
 Package                                                                     Arch                                           Version                                                Repository                                                Size
==================================================================================================================================================================================================================================================
Installing:
 dtrace-modules-4.1.12-37.5.1.el6uek                                         x86_64                                         0.5.2-1.el6                                            public_ol6_UEKR4                                         1.2 M

Transaction Summary
==================================================================================================================================================================================================================================================
Install       1 Package(s)

Total download size: 1.2 M
Installed size: 6.1 M
Is this ok [y/N]: y

Once the packages are installed, you have to load a bunch of modules into the kernel.

You can do that manually by running the command below:

[root@oel6 dtrace]# modprobe -a dtrace profile systrace sdt dt_test

Or, you can configure your Linux box to load those modules during startup. In my case, as I run this on an OL6 box, I created a file in /etc/sysconfig/modules/dtrace.modules and changed its permissions.

[root@oel6 dtrace]# cat > /etc/sysconfig/modules/dtrace.modules
#!/bin/sh

if [ ! -c /dev/dtrace/dtrace ] ; then
        exec /sbin/modprobe -a dtrace profile systrace sdt dt_test  >/dev/null 2>&1
fi

[root@oel6 dtrace]# chmod 755 /etc/sysconfig/modules/dtrace.modules

After a reboot, dtrace runs fine with the root user:

[root@oel6 dtrace]# dtrace -l | wc -l
670

But not with the oracle user:

[oracle@oel6 ~]$ dtrace -l
dtrace: failed to initialize dtrace: DTrace requires additional privileges

To fix that, we have to apply two small tricks:
1) Create a dtrace unix group and assign this group to the oracle user (or to any user you want to allow to use DTrace)

[root@oel6 ~]# id -a oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),48(apache),54322(dba)
[root@oel6 ~]# groupadd dtrace
[root@oel6 ~]# usermod -a -G dtrace oracle
[root@oel6 ~]# id -a oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),48(apache),54322(dba),54325(dtrace)

2) Configure the /dev/dtrace/dtrace Unix device to have the correct group ownership:

[root@oel6 ~]# cat /etc/udev/rules.d/10-dtrace.rules
kernel=="dtrace/dtrace", GROUP="dtrace" MODE="0660"

After a last reboot, it works fine even for my oracle user, and I can trace pmon and every other process of my Oracle instances:

[oracle@oel6 ~]$ dtrace -l | wc -l
670

[oracle@oel6 ~]$ ps -ef | grep pmon
oracle    3436     1  0 18:16 ?        00:00:00 ora_pmon_orcl11
oracle    3519  3381  0 18:16 pts/0    00:00:00 grep pmon

[oracle@oel6 ~]$ cat test.d
#!/usr/sbin/dtrace -qs
syscall:::entry
/pid == $1/
{
  @num[probefunc] = count();
}

[oracle@oel6 ~]$ ./test.d 3436


  mmap                                                              1
  munmap                                                            1
  newfstat                                                          1
  getrusage                                                         4
  poll                                                              4
  times                                                             6
  close                                                            19
  open                                                             19
  read                                                             19

That’s all for today 🙂 .

Store your TNS entries in MS Active Directory (only for full Windows platforms)

When you manage a lot of Oracle clients, it can be difficult to maintain as many tnsnames.ora files as you have Oracle clients.

In that case, it can be useful to set up a solution that centralizes a single tnsnames.ora. Another solution is to use a Microsoft Active Directory to store your TNS entries (this solution is for computers running on MS Windows).

The job is done in two steps:

  • First one is to configure Active Directory (AD) server
  • Second one is to configure your Oracle clients to query the AD server

In my lab, I have many boxes:

  • a MS Windows 2008 R2 server (ok ok, it’s an old box … but it works fine, and I assume it will work fine on a 2012 R2 server too). This server acts as a domain controller for the domain example.com. The server name is windows1.example.com. It hosts the Active Directory for the domain example.com and a DNS server.
  • a MS Windows box where an Oracle Client is installed (don’t try to do this configuration on a Linux box … it won’t work).

Configure AD to store Oracle TNS entries

To do that, you first have to install an Oracle Client on the server. I won’t describe how to do that … you’re smart enough to do it yourself :). You just have to install the administrative client.

Then, create a work directory (on my server I used C:\AD) and copy all the files from $ORACLE_HOME/ldap/schema/ad to this work directory.

PS C:\> md AD
PS C:\> copy C:\app\product\11.2.0\dbhome_1\ldap\schema\ad\* C:\AD
PS C:\> dir C:\AD


    Directory: C:\AD


Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---        09/11/2004     16:44        470 adContextCreate.lst
-a---        09/11/2004     16:44       2122 adContextCreateCommon.sbs
-a---        01/08/2001     04:16        591 adContextUpgradeFrom81600.lst
-a---        01/08/2001     04:17        407 adContextUpgradeFrom81600Common.sbs
-a---        23/05/2001     17:13        676 adDisplaySpecifiersCreate.lst
-a---        23/05/2001     17:13       5106 adDisplaySpecifiers_de.sbs
-a---        23/05/2001     17:13       5163 adDisplaySpecifiers_es.sbs
-a---        23/05/2001     17:13       5385 adDisplaySpecifiers_fr.sbs
-a---        23/05/2001     17:13       5092 adDisplaySpecifiers_it.sbs
-a---        23/05/2001     17:13       5562 adDisplaySpecifiers_ja.sbs
-a---        23/05/2001     17:13       5406 adDisplaySpecifiers_ko.sbs
-a---        23/05/2001     17:13      76035 adDisplaySpecifiers_other.sbs
-a---        23/05/2001     17:13       5453 adDisplaySpecifiers_pt_BR.sbs
-a---        23/05/2001     17:13       5069 adDisplaySpecifiers_us.sbs
-a---        23/05/2001     17:13       5225 adDisplaySpecifiers_zh_CN.sbs
-a---        01/08/2001     04:16        576 adSchemaCreate.lst
-a---        04/08/2003     17:51        219 adSchemaCreateAux.lst
-a---        13/11/2001     16:10        224 adSchemaCreateAux.sbs
-a---        09/11/2004     16:44       5445 adSchemaCreateBase.sbs
-a---        01/08/2001     04:17      11925 adSchemaCreateNet.sbs
-a---        01/08/2001     04:17       7462 adSchemaCreateRDBMS.sbs
-a---        01/08/2001     04:16        570 adSchemaUpgradeFrom81600.lst
-a---        01/08/2001     04:17        585 adSchemaUpgradeFrom81600Base.sbs
-a---        01/08/2001     04:17        509 adSchemaUpgradeFrom81600Net.sbs
-a---        01/08/2001     04:17        690 adSchemaUpgradeFrom81600RDBMS.sbs


Next, we will replace some values in some of these files. But first, we need to note several DNs (distinguished names):

  • DN of the root container. Usually this DN represents the domain where we will create what is called the “Oracle Context” (see below). In my case, my domain’s DN is: “DC=example,DC=com”.
  • DN of the Users branch in the AD container. In my case, it’s “CN=Users,DC=example,DC=com”.
  • DN of the user you are logged in as. Basically, I will do this as the domain administrator, whose DN is: “CN=Administrator,CN=Users,DC=example,DC=com”.
  • DN of the Oracle Context; in my case its DN is: “CN=OracleContext,DC=example,DC=com”. It’s in this Oracle Context that the TNS entries will be created.

I defined all these values in a set of PowerShell variables:

PS C:\> cd AD
PS C:\AD> $rootCont="DC=example,DC=com"
PS C:\AD> $usersBranch="CN=Users,DC=example,DC=com"
PS C:\AD> $userLogged="CN=Administrator,CN=Users,DC=example,DC=com"
PS C:\AD> $oracleContext="CN=OracleContext,DC=example,DC=com"

Then, the files used below will be “SEDed” to replace the patterns with the correct values, producing new LDIF files (I did it with PowerShell, but you can do it with the tool you want: sed/cygwin, notepad, etc.).

PS C:\AD> cat adSchemaCreateBase.sbs | %{$_ -replace "%s_AdDomainDN%",$rootCont} | Out-File adSchemaCreateBase.ldif -Encoding UTF8
PS C:\AD> cat adSchemaCreateNet.sbs | %{$_ -replace "%s_AdDomainDN%",$rootCont} | Out-File adSchemaCreateNet.ldif -Encoding UTF8
PS C:\AD> cat adSchemaCreateRDBMS.sbs | %{$_ -replace "%s_AdDomainDN%",$rootCont} | Out-File adSchemaCreateRDBMS.ldif -Encoding UTF8
PS C:\AD> cat adSchemaUpgradeFrom81600BASE.sbs | %{$_ -replace "%s_AdDomainDN%",$rootCont} | Out-File adSchemaUpgradeFrom81600BASE.ldif -Encoding UTF8
PS C:\AD> cat adSchemaUpgradeFrom81600NET.sbs | %{$_ -replace "%s_AdDomainDN%",$rootCont} | Out-File adSchemaUpgradeFrom81600NET.ldif -Encoding UTF8
PS C:\AD> cat adSchemaUpgradeFrom81600RDBMS.sbs | %{$_ -replace "%s_AdDomainDN%",$rootCont} | Out-File adSchemaUpgradeFrom81600RDBMS.ldif -Encoding UTF8
PS C:\AD> cat adDisplaySpecifiers_us.sbs | %{$_ -replace "%s_AdDomainDN%",$rootCont} | Out-File adDisplaySpecifiers_us.ldif -Encoding UTF8
PS C:\AD> cat adDisplaySpecifiers_other.sbs | %{$_ -replace "%s_AdDomainDN%",$rootCont} | Out-File adDisplaySpecifiers_other.ldif -Encoding UTF8

PS C:\AD> cat adContextCreateCommon.sbs | %{$_ -replace "%s_OracleContextDN%",$oracleContext} `
>> | %{$_ -replace "%s_AdUsersDomainDN%",$usersBranch} `
>> | %{$_ -replace "%s_CurrentUserDN%",$userLogged} | Out-File adContextCreateCommon.ldif -Encoding UTF8
>>
PS C:\AD>

PS C:\AD> cat adContextUpgradeFrom81600Common.sbs | %{$_ -replace "%s_OracleContextDN%",$oracleContext} | Out-File adContextUpgradeFrom81600Common.ldif -Encoding UTF8

Please note that if your AD is installed in another supported language, you have to modify the DisplaySpecifiers file related to the installation language (for example, adDisplaySpecifiers_fr.sbs if it’s installed in French).

After that, a bunch of simple ldapmodify commands, and the entries for the Oracle Context will be created in the AD:

PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adSchemaCreateBase.ldif
PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adSchemaCreateNet.ldif
PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adSchemaCreateRDBMS.ldif
PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adSchemaUpgradeFrom81600BASE.ldif
PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adSchemaUpgradeFrom81600NET.ldif
PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adSchemaUpgradeFrom81600RDBMS.ldif
PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adDisplaySpecifiers_us.ldif
PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adDisplaySpecifiers_other.ldif
PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adContextCreateCommon.ldif
PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\adContextUpgradeFrom81600Common.ldif

During execution, I had this kind of output … you can safely ignore it (it’s probably due to PowerShell encoding, which defaults to UTF-8 with a BOM, while the original files used plain UTF-8):

ldapmodify.exe: no attributes to change or add (entry ´╗┐#)

Once done, you can check that you have the correct number of objects in your directory. I did this check with a basic ldapsearch command:

PS C:\AD> ldapsearch -D "cn=Administrator,cn=users,dc=example,dc=com" -b "CN=Schema,CN=Configuration,dc=example,dc=com" -w "YOUR_PASSWORD" "(&(objectClass=attributeSchema)(CN=orcl*))" DN | Measure-Object -line

 Lines    Words    Characters    Property
 -----    -----    ----------    --------
    37    


PS C:\AD> ldapsearch -D "cn=Administrator,cn=users,dc=example,dc=com" -b "CN=Schema,CN=Configuration,dc=example,dc=com" -w "YOUR_PASSWORD" "(&(objectClass=classSchema)(CN=orcl*))" DN | Measure-Object -line

 Lines    Words    Characters    Property
 -----    -----    ----------    --------
    14    

In the AD MMC Users and Computers plugin, a new branch has appeared, “OracleContext”, along with three groups related to the security management of the Oracle Context:

(screenshot: the OracleContext branch in the Active Directory Users and Computers console)

Then, if you want to add some entries, you have to create an ldap.ora file first, and then create the entries. To finish the configuration steps, we have to grant permissions on specific branches to allow anonymous access to the TNS entries. This is mandatory because the Oracle client doesn’t bind to the directory with a specific user; it reads the TNS entries anonymously.

LDAP.ORA

This file has to be created in $OH/network/admin. It must contain three parameters: the directory server name and port, the directory server type, and the DN of the parent branch which contains the Oracle Context.
Here’s the content of my ldap.ora file (refer to the Oracle documentation, Net Services Reference, for more information).

DIRECTORY_SERVERS=windows1.example.com:389
DIRECTORY_SERVER_TYPE=AD
DEFAULT_ADMIN_CONTEXT="DC=example,DC=com"

CREATE TNS ENTRIES IN ACTIVE DIRECTORY

The easiest way to do that is to use Net Manager and add entries in the directory branch.

But I prefer to import entries from an LDIF file. To do that, you have to create an LDIF file which contains all the necessary properties for the TNS entries. Below is a sample of what it can contain:

PS C:\AD> cat orcl.ldif
dn: CN=orcl,CN=OracleContext,DC=example,DC=com
changetype: add
objectClass: top
objectClass: orclNetService
orclNetDescString: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=windows1.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=orcl)))

dn: CN=coucou,CN=OracleContext,DC=example,DC=com
changetype: add
objectClass: top
objectClass: orclNetService
orclNetDescString:
 (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.99.15)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=orcl)))

And to finish, it’s easy to add them to the directory, the same way we imported our LDIF files during the AD configuration:

PS C:\AD> ldapmodify -c -D "cn=Administrator,cn=users,dc=example,dc=com" -w "YOUR_PASSWORD" -f C:\AD\orcl.ldif

adding new entry CN=orcl,CN=OracleContext,DC=example,DC=com

adding new entry CN=coucou,CN=OracleContext,DC=example,DC=com

SPECIFIC PERMISSIONS

Grant read permission to “anonymous logon” on the entries we’ve created:

C:\> dsacls "CN=orcl,CN=OracleContext,DC=example,DC=com" /G "anonymous logon":GR
C:\> dsacls "CN=coucou,CN=OracleContext,DC=example,DC=com" /G "anonymous logon":GR

Client configuration

The client configuration is very easy: there are only two lines to configure in the SQLNET.ora file.

NAMES.DIRECTORY_PATH= (LDAP)
NAMES.LDAP_AUTHENTICATE_BIND=1

A call to tnsping will show you that the given alias is resolved through your LDAP configuration:

C:\>tnsping orcl

TNS Ping Utility for 64-bit Windows: Version 11.2.0.4.0 - Production on 12-JAN-2017 15:41:49

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:
C:\app\product\11.2.0\dbhome_1\network\admin\sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=windows1.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=orcl
OK (0 msec)

Another way to verify that the alias is resolved through LDAP is to enable SQL*Net client tracing by adding these lines in the sqlnet.ora file:

TRACE_LEVEL_CLIENT=ADMIN
TRACE_UNIQUE_CLIENT=ON
TRACE_TIMESTAMP_CLIENT=ON
TRACE_DIRECTORY_CLIENT=c:\temp\client_trace
LOG_DIRECTORY_CLIENT=c:\temp\client_trace
DIAG_ADR_ENABLED=OFF

In the trace file, we see information that proves LDAP was used to resolve the name:

[12-JAN-2017 15:45:21:752] nnflilc:  Opening sync conn to windows1.example.com:389
[12-JAN-2017 15:45:21:752] nnflalc: entry
[12-JAN-2017 15:45:21:752] nnflalc: native bind CN=Administrator,CN=Users,DC=example,DC=com returns 0
[12-JAN-2017 15:45:21:752] nnflalc: bind CN=Administrator,CN=Users,DC=example,DC=com returns 0x0
.../...
[12-JAN-2017 15:45:21:752] nnflrne1: Quering the directory for dn: cn=orcl,cn=OracleContext,DC=example,DC=com
[12-JAN-2017 15:45:21:752] nnflqbf: entry
[12-JAN-2017 15:45:21:752] nnflqbf: Search: Attrs[0]: objectclass
[12-JAN-2017 15:45:21:752] nnflqbf: Search:  Base: cn=orcl,cn=OracleContext,DC=example,DC=com; Scope: 0; filter: (objectclass=*) returns 0x0
[12-JAN-2017 15:45:21:752] nnflqbf: exit
[12-JAN-2017 15:45:21:752] nnflgne: entry
[12-JAN-2017 15:45:21:752] nnflgne:   DN : cn=orcl,cn=OracleContext,DC=example,DC=com
[12-JAN-2017 15:45:21:752] nnflgne: exit
.../...
[12-JAN-2017 15:45:21:752] nigtrm: Count in the NI global area is now 1
[12-JAN-2017 15:45:21:752] nigtrm: Count in the NL global area is now 1
[12-JAN-2017 15:45:21:752] nigini: entry
[12-JAN-2017 15:45:21:752] nigini: Count in the NL global area is now 2
[12-JAN-2017 15:45:21:752] nigini: Count in NI gbl area now: 2
[12-JAN-2017 15:45:21:752] nigini: exit
[12-JAN-2017 15:45:21:752] niqname: Hst is already an NVstring.
[12-JAN-2017 15:45:21:752] niqname: Inserting CID.
[12-JAN-2017 15:45:21:752] nigtrm: Count in the NI global area is now 1
[12-JAN-2017 15:45:21:752] nigtrm: Count in the NL global area is now 1
[12-JAN-2017 15:45:21:752] nigini: entry
[12-JAN-2017 15:45:21:752] nigini: Count in the NL global area is now 2
[12-JAN-2017 15:45:21:752] nigini: Count in NI gbl area now: 2
[12-JAN-2017 15:45:21:752] nigini: exit
[12-JAN-2017 15:45:21:752] niqname: Hst is already an NVstring.
[12-JAN-2017 15:45:21:752] niqname: Inserting CID.
[12-JAN-2017 15:45:21:752] niotns: entry
[12-JAN-2017 15:45:21:752] niotns: niotns: setting up interrupt handler...
[12-JAN-2017 15:45:21:752] niotns: Not trying to enable dead connection detection.
[12-JAN-2017 15:45:21:752] niotns: Calling address: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=windows1.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=orcl)(CID=(PROGRAM=C:\app\product\11.2.0\dbhome_1\bin\sqlplus.exe)(HOST=clientWin)(USER=Administrator))))
[12-JAN-2017 15:45:21:752] nsgettrans_bystring: entry
[12-JAN-2017 15:45:21:752] nsgettrans_bystring: exit
[12-JAN-2017 15:45:21:752] nsmal: 280 bytes at 0x6baaf0
[12-JAN-2017 15:45:21:752] nscall: connecting...

How to Get Oracle Global Temporary Table (GTT) size

A very quick post to detail a script I wrote about GTTs, and more precisely how to know the size occupied by a session GTT (Global Temporary Table).

GTTs are allocated as temporary segments in the temporary tablespace, so we can get information about them in the V$SORT_USAGE view.

The problem is that this view gives you neither the name of your table nor its object id, but having a look at the underlying objects made me discover the X$KTSSO view, which has this information.

I wrote the following query (very quickly) to retrieve this information (the script has been tested on Oracle 11.2.0.4).


set lines 400 pages 500 trimspool on
select s.sid, s.serial#, s.program,o.obj#,o.name,decode(bitand(o.flags, 2), 0, 'N', 2, 'Y', 'N') temporary,
temp_obj.ktssoexts extents, temp_obj.ktssoblks blocks, temp_obj.ktssoblks*blk_sz.bs bytes
from obj$ o, 
	(select * from x$ktsso) temp_obj,
	(select value bs from v$parameter where name='db_block_size') blk_sz, 
	v$session s, 
	tab$ t
where o.obj# =temp_obj.KTSSOOBJN
and t.obj#=o.obj# 
and bitand(o.flags, 2)=2
and s.saddr=temp_obj.ktssoses;

       SID    SERIAL# PROGRAM                                                OBJ# NAME                           T    EXTENTS     BLOCKS      BYTES
---------- ---------- ------------------------------------------------ ---------- ------------------------------ - ---------- ---------- ----------
        43       1209 sqlplus@oel6.localdomain (TNS V1-V3)                  88956 T_SALES                        Y         35       4480   36700160
        43       1209 sqlplus@oel6.localdomain (TNS V1-V3)                   5187 PLAN_TABLE$                    Y          1        128    1048576
        35       1317 SQL Developer                                          5187 PLAN_TABLE$                    Y          1        128    1048576


This script can give you some clues about the values to use when you set statistics on these GTTs, when they are used across many sessions and you don’t really know their size.
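
For example, once you know a realistic size, here is a sketch of how you could fix the statistics with DBMS_STATS.SET_TABLE_STATS (the owner and the values, taken from the T_SALES line above, are assumptions to adapt):

-- Set representative statistics on the GTT so the optimizer stops guessing
-- (owner, row count and block count are assumptions to adapt)
begin
  dbms_stats.set_table_stats(ownname => 'SH',
                             tabname => 'T_SALES',
                             numrows => 100000,
                             numblks => 4480);
end;
/

Remember that in 11g, statistics set on a GTT are shared by all the sessions using it (session-private GTT statistics only appeared in 12c).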

That’s it for today.

Database Load heatmap with AWR and Python

Recently, one of my customers asked me to design a tool to quickly see the database load. He had already done that for another RDBMS with a heatmap.

To do that, I split the work into two distinct parts: data extraction and data visualization.

Data Extraction from AWR

As he has licensed his databases with the Diagnostic Pack, I decided to use the AWR repository, and as I’m a bit lazy, I took a query from Marcin Przepiorowski (@pioro) and modified it to pivot the data.


col "00-01_ " for 90.99
col "01-02_ " for 90.99
col "02-03_ " for 90.99
col "03-04_ " for 90.99
col "04-05_ " for 90.99
col "05-06_ " for 90.99
col "06-07_ " for 90.99
col "07-08_ " for 90.99
col "08-09_ " for 90.99
col "09-10_ " for 90.99
col "10-11_ " for 90.99
col "11-12_ " for 90.99
col "12-13_ " for 90.99
col "13-14_ " for 90.99
col "14-15_ " for 90.99
col "15-16_ " for 90.99
col "16-17_ " for 90.99
col "17-18_ " for 90.99
col "18-19_ " for 90.99
col "19-20_ " for 90.99
col "20-21_ " for 90.99
col "21-22_ " for 90.99
col "22-23_ " for 90.99
col "23-24_ " for 90.99

WITH t AS
  (SELECT TO_CHAR(mtime,'YYYY/MM/DD') mtime,
    TO_CHAR(mtime,'HH24') d,
    LOAD AS value
  FROM
    (SELECT to_date(mtime,'YYYY-MM-DD HH24') mtime,
      ROUND(SUM(c1),2) AAS_WAIT,
      ROUND(SUM(c2),2) AAS_CPU,
      ROUND(SUM(cnt),2) AAS,
      ROUND(SUM(load),2) LOAD
    FROM
      (SELECT TO_CHAR(sample_time,'YYYY-MM-DD HH24') mtime,
        DECODE(session_state,'WAITING',COUNT(*),0)/360 c1,
        DECODE(session_state,'ON CPU',COUNT( *),0) /360 c2,
        COUNT(                               *)/360 cnt,
        COUNT(                               *)/360/cpu.core_nb load
      FROM dba_hist_active_sess_history,
        (SELECT value AS core_nb FROM v$osstat WHERE stat_name='NUM_CPU_CORES'
        ) cpu
      WHERE sample_time > sysdate - 30
      GROUP BY TO_CHAR(sample_time,'YYYY-MM-DD HH24'),
        session_state,
        cpu.core_nb
      )
    GROUP BY mtime
    )
  )
SELECT mtime,
  NVL("00-01_ ",0) "00-01_ ",
  NVL("01-02_ ",0) "01-02_ ",
  NVL("02-03_ ",0) "02-03_ ",
  NVL("03-04_ ",0) "03-04_ ",
  NVL("04-05_ ",0) "04-05_ ",
  NVL("05-06_ ",0) "05-06_ ",
  NVL("06-07_ ",0) "06-07_ ",
  NVL("07-08_ ",0) "07-08_ ",
  NVL("08-09_ ",0) "08-09_ ",
  NVL("09-10_ ",0) "09-10_ ",
  NVL("10-11_ ",0) "10-11_ ",
  NVL("11-12_ ",0) "11-12_ ",
  NVL("12-13_ ",0) "12-13_ ",
  NVL("13-14_ ",0) "13-14_ ",
  NVL("14-15_ ",0) "14-15_ ",
  NVL("15-16_ ",0) "15-16_ ",
  NVL("16-17_ ",0) "16-17_ ",
  NVL("17-18_ ",0) "17-18_ ",
  NVL("18-19_ ",0) "18-19_ ",
  NVL("19-20_ ",0) "19-20_ ",
  NVL("20-21_ ",0) "20-21_ ",
  NVL("21-22_ ",0) "21-22_ ",
  NVL("22-23_ ",0) "22-23_ ",
  NVL("23-24_ ",0) "23-24_ "
FROM t pivot( SUM(value) AS " " FOR d IN ('00' AS "00-01",'01' AS "01-02",'02' AS "02-03",'03' AS "03-04",'04' AS "04-05",'05' AS "05-06",'06' AS "06-07",'07' AS "07-08",
                                          '08' AS "08-09",'09' AS "09-10",'10' AS "10-11", '11' AS "11-12",'12' AS "12-13",'13' AS "13-14",'14' AS "14-15",'15' AS "15-16",
                                          '16' AS "16-17",'17' AS "17-18",'18' AS "18-19",'19' AS "19-20",'20' AS "20-21",'21' AS "21-22", '22' AS "22-23",'23' AS "23-24") 
            )
ORDER BY mtime

Nota: this query can be used to extract the database load, the Average Active Sessions (AAS), the CPU part of AAS, or the wait part of AAS. You just have to replace the value alias in the WITH block, as shown below.
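
For example, to build the heatmap on the CPU component of AAS instead of the load, the beginning of the WITH block becomes:

WITH t AS
  (SELECT TO_CHAR(mtime,'YYYY/MM/DD') mtime,
    TO_CHAR(mtime,'HH24') d,
    AAS_CPU AS value   -- instead of "LOAD AS value"
  FROM
.../...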

Another point: I didn’t look at the plan, so I didn’t tune the statement … if you have some tricks to tune it, feel free to comment 😉

This query gives this kind of result:

MTIME      00-01_  01-02_  02-03_  03-04_  04-05_  05-06_  06-07_  07-08_  08-09_  09-10_  10-11_  11-12_  12-13_  13-14_  14-15_  15-16_  16-17_  17-18_  18-19_  19-20_  20-21_  21-22_  22-23_  23-24_
---------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
2016/11/23    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.03    0.03    0.00    0.04    0.02    0.00    0.00    0.04    0.01    0.01    0.01    0.04    0.01
2016/11/24    0.02    0.03    0.01    0.03    0.05    0.02    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
2016/12/06    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.01    0.01    0.02    0.02    0.00    0.00    0.00    0.00
2016/12/07    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.01    0.00    0.00    0.04    0.03    0.01    0.02    0.01    0.02    0.01    0.01    0.06    0.03
2016/12/08    0.03    0.05    0.02    0.02    0.03    0.03    0.02    0.02    0.03    0.02    0.01    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
2016/12/12    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.03    0.02    0.05    0.02
2016/12/13    0.03    0.01    0.01    0.01    0.02    0.03    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
2016/12/14    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.02    0.02    0.02    0.02    0.18    0.03    0.02    0.01    0.01    0.00    0.00    0.00    0.00    0.00

The load value is the number of average active sessions divided by the number of CPU cores, so when the result is greater than 1, your database is loaded, and if it’s greater than or equal to 2, it’s really overloaded. In both cases, you have to analyze why.

The problem with this kind of result is that it’s only numbers, and you have to scan the complete output to detect when a problem occurred. Here comes the data visualization part 😉

Data Visualization with Python

In the Oracle database area, one guy has done a lot for the visualization of performance data: Kyle Hailey (@dbavigil). Visualizing your data is very important to quickly identify bottlenecks. So, if you’re interested in data visualization, you can visit his blog, where you will find a lot of resources to extract AWR data and visualize it (and other stuff).

So I decided to use Python to visualize the database load data extracted previously. To do that, I used Python 3.4.5 with a bunch of packages installed (see the sketch after this list):

  • cx_Oracle to connect to the Oracle Database and execute queries on it
  • numpy for arrays manipulation
  • plotly to graph the heatmap
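
Below is a minimal sketch of the approach, assuming the pivot query above is saved in a db_load.sql file (without the SQL*Plus col commands and without a trailing semicolon); the connection string and the output path are placeholders to adapt, and the full script linked below does more:

import cx_Oracle
import numpy as np
import plotly.offline as py
import plotly.graph_objs as go

# Run the pivot query shown above (saved without the col formatting commands)
conn = cx_Oracle.connect("system/manager@oracle-server:1521/orcl")  # placeholder connection string
cur = conn.cursor()
cur.execute(open("db_load.sql").read())
rows = cur.fetchall()

days = [r[0] for r in rows]                           # MTIME column, one row per day
load = np.array([r[1:] for r in rows], dtype=float)   # 24 hourly load values per day

# Build the heatmap and write it to an HTML file, then open it in a browser
hours = ["{0:02d}-{1:02d}".format(h, h + 1) for h in range(24)]
fig = go.Figure(data=[go.Heatmap(z=load, x=hours, y=days, colorscale="YlOrRd")])
py.plot(fig, filename="/var/tmp/db_load_heatmap.html")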

The full Python script is available at this URL: https://app.box.com/s/irjzi64lbne6xhw3t57aixqiy95toxht. Then, you have to modify it to enter your connection string (line 33).

My customer asked that the script generate an HTML file to visualize the heatmap in a browser (one of the reasons I used plotly).

This script generates the file in /var/tmp (but you can modify that at the end of the script) and then opens the browser locally (so don’t forget to export your X11 display if you run it on a Linux server).

Below you can see the heatmap (the X axis is the hour range, the Y axis represents the analyzed days; the most recent day is on the first line of the heatmap).

(screenshot: the database load heatmap)

This heatmap will help my customer quickly visualize his database load; I hope it will help you too ;).

Of course, you can use the tool of your choice to visualize the data (Tableau, Excel, Kibana, etc.).