Data … as usual

All things about data by Laurent Leturgez


Colors identification for images stored in the Cloud with Python

I recently worked on some Python code to detect the main colors in an image.

The images were stored in an Oracle Cloud Infrastructure (OCI) Object Storage bucket.

The process had to be done in 3 steps:

  • First, I had to retrieve the images by using the “oci” Python package.
  • Then I had to convert the unstructured binary image into a structured numpy array.
  • Finally, I used an unsupervised ML algorithm (KMeans clustering) to analyze the numpy array and detect the main colors in the image.

Reading images stored in an OCI Object Storage bucket

To read images, or more generally files stored in an OCI Object Storage bucket, you need to have configured your client environment to access OCI.

To do that, you will need various OCIDs (user, tenancy) and some keys (private and public). I will not detail this part because I already covered it in a previous post … see here!

Once your configuration is OK, you have to load it into your Python script, get an ObjectStorageClient object from the configuration, and request the namespace data of your ObjectStorageClient.

After that, it becomes easy to read an object (a file) inside a bucket referenced in the namespace.

This is done by the following code:


import oci

# Load the OCI configuration (from ~/.oci/config)
config = oci.config.from_file()

compartment_id = config["tenancy"]
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data

bucket_name = "python-bucket"
object_name = "union_jack.jpg"
my_object = object_storage.get_object(namespace,
                                      bucket_name,
                                      object_name)

print("type(my_object.data.content) = ", type(my_object.data.content))

As you can see, I printed the class type of the object content … and, without any surprise, it’s the “bytes” class.

type(my_object.data.content) =  <class 'bytes'>

Note: if your images are stored with another cloud provider, it usually offers a Python SDK to do the same thing 😉
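
For instance, reading the same kind of object from an AWS S3 bucket could look like the sketch below, using the boto3 package (the bucket and key names are hypothetical, and standard AWS credentials are assumed to be configured):

# A quick sketch with AWS's boto3 SDK (names are hypothetical)
import boto3

s3 = boto3.client("s3")
response = s3.get_object(Bucket="python-bucket", Key="union_jack.jpg")
content = response["Body"].read()   # bytes, just like my_object.data.content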

Converting an unstructured binary image to a numpy array

Once I had done that, I had to convert the image into a usable data structure in order to process it. With Python, the best data structure for processing images is a numpy array, so I had to find a way to convert my binary soup (bytes) into a structured numpy array.

As I didn’t want to use a temporary file for that, I used a BytesIO object to process the bytes directly in memory. At the end of the stream, I built a Pillow Image (Pillow is the maintained fork of the deprecated PIL package) from the BytesIO stream.

After that, a conversion to a numpy array was possible. Please note that I had to adjust the structure of the numpy array a bit. As you may know, an image is represented as a multi-dimensional array.

The first two dimensions represent the pixels of your image (rows and columns). Added to that, a third dimension encodes the Red, Green and Blue values of each pixel. Sometimes a fourth value is added for what is called “alpha”, which is used for transparency. As I didn’t know how the images were encoded, and as I didn’t need to process the alpha channel, I converted my 3- or 4-channel array into a 3-channel array (R, G and B only).

The following code does the job:


import numpy as np
from PIL import Image
from io import BytesIO

# Build a Pillow Image from the in-memory bytes, then convert it to a numpy
# array and keep only the first three channels (R, G, B)
im = Image.open(BytesIO(my_object.data.content))
img = np.array(im)[:, :, :3]
print("img.shape=", img.shape)

This will produce the result below:

img.shape= (640, 1280, 3)

So my image is represented by a numpy array (ndarray): its height is 640 pixels (first dimension), its width is 1280 pixels (second dimension), and each pixel is encoded by 3 values for Red, Green and Blue.
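
For example, you can inspect a single pixel to see this structure (a quick illustration on the img array built above):

# Each pixel is a triplet of 8-bit values: [R, G, B]
print(img[0, 0])     # the top-left pixel
print(img.dtype)     # typically uint8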

Using a clustering ML algorithm to detect colors

Next, and most important, step: we have to choose a method to detect the colors in the image.

First, I thought about computing the “average” color, but that is not a good approach: if your image is equally colored with yellow, blue, red and green, the average color will be a muddy brown that is not representative of the image at all.
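
To illustrate, computing that average color on the numpy array built above is a one-liner, and the result rarely matches any color actually present in the image:

# Average R, G, B values over all pixels: usually a muddy, unrepresentative color
avg_color = img.reshape(-1, 3).mean(axis=0)
print("average color (RGB) =", avg_color)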

The best way to get the colors is to run an unsupervised machine learning algorithm (KMeans) to group all the pixels into clusters based on their R, G and B values. Whatever ML framework you use to run the KMeans, you will get the center point of each cluster, which represents the color associated with that cluster, and the label of the cluster each pixel belongs to. By counting the occurrences of each label, you get the number of pixels inside each cluster.

Counting the number of pixels per color is therefore easy, and it is the most important part of the algorithm. The other key point is how to structure your data as input for the KMeans.

This is simply solved by flattening the image representation (the numpy array) into a one-dimensional list of triplets (representing the RGB values).

In the following code, I used OpenCV (the cv2 package), which is often used for image detection and capture. This package ships with a kmeans implementation that is optimized for image processing.


import cv2

# pixels is the 1D array resulting from flattening img (done by the reshape function)
pixels = np.float32(img.reshape(-1, 3))
print("Pixel shape = ", pixels.shape)

# Number of colors we are trying to detect
n_colors = 5

# OpenCV kmeans parameters (see the following URL for more information:
# https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_ml/py_kmeans/py_kmeans_opencv/py_kmeans_opencv.html)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 200, .1)
flags = cv2.KMEANS_RANDOM_CENTERS

# palette contains the cluster centers (the colors)
# labels contains the cluster label of each pixel
#   As we have 5 colors, labels are 0, 1, 2, 3, 4
_, labels, palette = cv2.kmeans(pixels, n_colors, None, criteria, 10, flags)
# counts contains the number of occurrences of each label
_, counts = np.unique(labels, return_counts=True)

# The dominant color is the palette entry with the maximum number of occurrences in "counts"
dominant = palette[np.argmax(counts)]
print("dominant color (RGB) =", dominant)

If you prefer to use tensorflow, the code below will do the same thing (note that it uses the TensorFlow 1.x API):


import tensorflow as tf
# this is for removing all the tensorflow INFO and WARN messages
tf.logging.set_verbosity(tf.logging.ERROR)

# pixels is the 1D array, results of the img flattening process (made by reshape function)
pixels = np.float32(img.reshape(-1, 3))
print("Pixel shape = ", pixels.shape)

def input_fn():
    return tf.train.limit_epochs(tf.convert_to_tensor(pixels, dtype=tf.float32), num_epochs=1)

n_colors = 5

kmeans = tf.contrib.factorization.KMeansClustering(num_clusters=n_colors, 
                                                   use_mini_batch=False)

num_iterations = 20
for _ in range(num_iterations):
    kmeans.train(input_fn)
    print('Training ... score:', kmeans.score(input_fn))
    cluster_centers = kmeans.cluster_centers()

# Cluster index (label) of each pixel
cluster_indices = list(kmeans.predict_cluster_index(input_fn))
# Number of pixels in each cluster, and the cluster centers (the colors)
counts = np.unique(cluster_indices, return_counts=True)[1]
palette = cluster_centers

dominant = palette[np.argmax(counts)]
print("dominant =", dominant)

Now that we have our results, we can produce a nice plot with:

  • the initial picture,
  • the dominant colors gradient,
  • the main dominant color,
  • the second dominant color (I added this one because, in the code I worked on, many pictures had a white background which was detected as the main color in 99% of the cases).

And to do that, I used the matplotlib library:


import matplotlib as mpl
%matplotlib notebook
from matplotlib import pyplot as plt

indices = np.argsort(counts)[::-1]  
freqs = np.cumsum(np.hstack([[0], counts[indices]/counts.sum()]))
rows = np.int_(img.shape[0]*freqs)

dom_patch = np.zeros(shape=img.shape, dtype=np.uint8)
main_patch=np.ones(shape=img.shape, dtype=np.uint8)*np.uint8(palette[indices[0]])
second_patch=np.ones(shape=img.shape, dtype=np.uint8)*np.uint8(palette[indices[1]])

for i in range(len(rows) - 1):
    dom_patch[rows[i]:rows[i + 1], :, :] += np.uint8(palette[indices[i]])

fig, (ax0, ax1, ax2, ax3) = plt.subplots(1, 4, figsize=(9, 6))
ax0.imshow(img)
ax0.set_title('Original')
ax0.axis('off')

ax1.imshow(dom_patch)
ax1.set_title('Dominant colors')
ax1.yaxis.set_major_locator(plt.NullLocator())
ax1.xaxis.set_major_locator(plt.NullLocator())

ax2.imshow(main_patch)
ax2.set_title('Main color')
ax2.yaxis.set_major_locator(plt.NullLocator())
ax2.xaxis.set_major_locator(plt.NullLocator())

ax3.imshow(second_patch)
ax3.set_title('Second color')
ax3.yaxis.set_major_locator(plt.NullLocator())
ax3.xaxis.set_major_locator(plt.NullLocator())
                                                                                                              
plt.show()

Please note that this code was running inside a Jupyter notebook, so adapt it if you want to run it in another context (for example, as sketched below).
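
For instance, in a plain Python script you would drop the %matplotlib notebook magic and save the figure to a file instead of displaying it inline (a small sketch, with a hypothetical file name):

# Save the figure to disk instead of relying on the notebook's inline display
fig.savefig("dominant_colors.png", dpi=150, bbox_inches="tight")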

This will produce this kind of result: a 4-panel figure showing the original picture, the dominant colors gradient, the main color and the second color.


Dealing with Oracle Cloud Infrastructure and Python

Oracle provides various SDKs to create and manage resources in OCI.

Recently, I played with the Python SDK for OCI. In this blog post, I will show you the basics: how to create a simple bucket in the Object Storage part of OCI, and how to put a file into this bucket.

OCI Client configuration

First, you will need to install the Python OCI package. The best way to do that is to create a Python virtual environment, activate it, and install all the packages you need inside it.


mbp:python_venv $ python -m virtualenv oci
Using base prefix '/Users/leturgezl/miniconda3/envs/general'
New python executable in /Users/leturgezl/python_venv/oci/bin/python
Installing setuptools, pip, wheel...
done.

mbp:python_venv $ source oci/bin/activate


(oci) mbp:python_venv leturgezl$ pip install oci numpy pandas

Now that packages are installed, we have to configure the client to access OCI.

To do that, we need several things:

  • User OCID: this can be found in the User’s Page in OCI
  • Tenancy OCID: this can be found on the Tenancy’s page in OCI
  • Your OCI region
  • A private key file, its public key, and the related fingerprint.

The keys have been generated like this (I used a key without a passphrase):


# Private key generation

$ mkdir ~/.oci
$ openssl genrsa -out ~/.oci/oci_api_key.pem 2048
$ chmod go-rwx ~/.oci/oci_api_key.pem

# Public key generation
$ openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem

# Fingerprint generation
$ openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c

Once you have done that, you will have to configure your user in OCI and add the public key to it; the fingerprint displayed in the interface must match the one produced by the previous command:

OCI User 1

 

It’s important to keep your private key “private” (don’t send it to other people, and don’t leave it unprotected on your laptop).

Now that your local environment is configured, we need a dictionary structure in our Python script to use the SDK.

This dictionary can be built manually and embedded in the code; you will then have to fill in the required fields (the key file location is the private key location):

config = {
    "user": "ocid1.user.oc1..aaaaaaaamcel7xygkvhe....aaaaaaaaaaaaaaaaaaaaa" ,
    "key_file": "~/.oci/oci_api_key.pem",
    "fingerprint": "35:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa",
    "tenancy": "ocid1.tenancy.oc1..aaaaaaaahgagkf7xygkvhe....aaaaaaaaaaaaaaaaaaaaa",
    "region": "eu-frankfurt-1"
}

Or you can configure a local “config” file in your ~/.oci/ directory and then load it with the Python code given below:


$ cat ~/.oci/config
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaamcel7xygkvhe....aaaaaaaaaaaaaaaaaaaaa
fingerprint=35:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaahgagkf7xygkvhe....aaaaaaaaaaaaaaaaaaaaa
region=eu-frankfurt-1

Note: you can embed more than one user profile in this file. The only one required is the DEFAULT profile (an example with an additional profile is sketched below).
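
For example, a config file with an additional profile could look like this (all values below are placeholders):

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaamcel7xygkvhe....aaaaaaaaaaaaaaaaaaaaa
fingerprint=35:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaahgagkf7xygkvhe....aaaaaaaaaaaaaaaaaaaaa
region=eu-frankfurt-1

[laurent]
user=ocid1.user.oc1..bbbbbbbbbbbb....bbbbbbbbbbbbbbbbbbbbb
fingerprint=46:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb:bb
key_file=~/.oci/oci_api_key_laurent.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaahgagkf7xygkvhe....aaaaaaaaaaaaaaaaaaaaa
region=eu-frankfurt-1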


>>> import oci
>>> import pandas as pd
>>> config=oci.config.from_file()
>>> df=pd.DataFrame.from_dict(config, orient='index')
>>> df
                                                                       0
log_requests                                                       False
additional_user_agent
pass_phrase                                                         None
user                             ocid1.user.oc1..aaaaaaaamcel7xygkvhe...
fingerprint              35:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa
key_file                                          ~/.oci/oci_api_key.pem
tenancy                     ocid1.tenancy.oc1..aaaaaaaahgagkf7xygkvhe...
region                                                    eu-frankfurt-1

You can read the OCI config file and select another profile by using this:


config = oci.config.from_file(profile_name="laurent")

Or use another config file by using this parameter:


config = oci.config.from_file(file_location="~/OCI_config.uat")

You can see that there are more parameters in the dictionary; you can find the details by reading this: https://docs.cloud.oracle.com/iaas/Content/API/Concepts/sdkconfig.htm.

Creating an Object Storage bucket in OCI

Now our client is configured to access OCI through a user and their keys.

It’s really easy to create a bucket: we have to get an ObjectStorageClient object and use it to create the bucket:

import oci
from oci.object_storage.models import CreateBucketDetails

# Load the OCI configuration (see above) and create the Object Storage client
config = oci.config.from_file()
compartment_id = config["tenancy"]
object_storage = oci.object_storage.ObjectStorageClient(config)

namespace = object_storage.get_namespace().data
bkt_name = "python-bucket"
object_name = "python_file"

print("Creating a new bucket {!r} in compartment {!r}".format(bkt_name, compartment_id))
request = CreateBucketDetails()
request.compartment_id = compartment_id
request.name = bkt_name
bucket = object_storage.create_bucket(namespace, request)

This will produce this kind of output:

Creating a new bucket 'python-bucket' in compartment 'ocid1.tenancy.oc1..aaaaaaaahgagkf7xygkvhe...'

And in the OCI web interface, our bucket appeared:

OCI Bucket 1
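
You can also double-check from Python itself by listing the buckets of the compartment (a quick sketch, reusing the object_storage client, namespace and compartment_id from above):

# List the buckets of the compartment to verify that the new bucket is there
for bucket_summary in object_storage.list_buckets(namespace, compartment_id).data:
    print(bucket_summary.name)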

Put a file into the Object storage bucket

Now that we have a bucket created in our compartment, it’s easy to put a file into it (I’ll put a binary file, in this case a PNG image).

To do that, the code below is enough (assuming the variables have been initialized by the previous code snippets … see above):


with open("images/myimage.png", mode='rb') as file:
my_data = file.read()

obj = object_storage.put_object(
namespace,
bkt_name,
object_name,
my_data)

In the OCI console, inside the previously created bucket, the file has been created and is available:

OCI Bucket 2

As you can see, deploying resources on OCI is easy, and you can deploy your full infrastructure with a bunch of code.

More investigations will come soon, especially on deploying virtual machines, storage and, of course, databases.

That’s it for today 🙂