Automatic Colorization of Black and White images using ML in Python

In this tutorial, we will learn how to convert an old black & white image into a colored image automatically by using Python with OpenCV's DNN module and a pre-trained Caffe model. This project takes a black and white image as its input and returns an automatically colored image as the output.

To proceed with the colorization of black & white images using Python, we first need to download three files:

  1. colorization_release_v2.caffemodel: It is a pre-trained model stored in the Caffe framework’s format that can be used to predict new unseen data.
  2. colorization_deploy_v2.prototxt: It defines the network architecture and its parameters, and it is used to deploy the Caffe model.
  3. pts_in_hull.npy: It is a NumPy file that stores the cluster center points in NumPy format. It contains 313 cluster centers, indexed 0-312.

You can download these files from the link below:
Download the Caffe model, Prototxt, and NumPy file.

Now, let's begin the step-by-step explanation for the conversion of a black & white image into a colored image.

Automatic Colorization of Black and White images – the code

import numpy as np
import matplotlib.pyplot as plt
import cv2

First, we import the libraries that we will be using in this code.
In case you don't have these libraries installed, you can install them from the command prompt (or terminal) using the commands below.

OpenCV - pip install opencv-python
Matplotlib - pip install matplotlib
NumPy - pip install numpy

image = 'test_sample.jpg'

The name of the test sample (a black and white image) is stored in the variable “image”. We keep the name in a separate variable so that the same name can be used later to save the colored version of the test sample.

prototxt = "../b&w_to_color/model/colorization_deploy_v2.prototxt"
caffe_model = "../b&w_to_color/model/colorization_release_v2.caffemodel"
pts_npy = "../b&w_to_color/model/pts_in_hull.npy"

Next, we provide the paths where the “.caffemodel”, “.prototxt”, and “.npy” files are located. These paths will be used to load the model from the specified location.

test_image = "../b&w_to_color/" + image

Now, we define the path where the test image is located and merge it with the variable “image”. By merging the specified path with the variable “image”, we get the full path of the test sample.
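As a side note, hard-coded path strings with forward slashes work fine, but if you prefer, the same paths can be built with os.path.join. A minimal sketch, reusing the placeholder folder layout from above:

import os

# Hypothetical base folder - adjust it to wherever you stored the files
base_dir = "../b&w_to_color"

prototxt = os.path.join(base_dir, "model", "colorization_deploy_v2.prototxt")
caffe_model = os.path.join(base_dir, "model", "colorization_release_v2.caffemodel")
pts_npy = os.path.join(base_dir, "model", "pts_in_hull.npy")
test_image = os.path.join(base_dir, image)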

net = cv2.dnn.readNetFromCaffe(prototxt, caffe_model)
pts = np.load(pts_npy)

Next, we load our Caffe model. The function cv2.dnn.readNetFromCaffe() accepts two parameters.

  • prototxt – path to “.prototxt” file
  • caffe_model – path to “.caffemodel” file

After that, we load the “.npy” file using NumPy.

layer1 = net.getLayerId("class8_ab")
print(layer1)
layer2 = net.getLayerId("conv8_313_rh")
print(layer2)
pts = pts.transpose().reshape(2, 313, 1, 1)
net.getLayer(layer1).blobs = [pts.astype("float32")]
net.getLayer(layer2).blobs = [np.full([1, 313], 2.606, dtype="float32")]

The next step is to get the layer IDs from the Caffe model by using the function “.getLayerId()”, which takes one parameter: the name of the layer.

Example: net.getLayerId(“name of the layer”)

Here we fetch the layer IDs of the two output layers (“class8_ab”, “conv8_313_rh”) at the end of the network. Then we transpose our NumPy array, reshape the 313 cluster centers into 1×1 convolution kernels, and load them into the model as the blobs of those layers; the shape transformations are sketched below.
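To make the reshaping concrete, here are the array shapes at each step (a quick sketch; the shapes assume the standard pts_in_hull.npy file):

pts = np.load(pts_npy)            # shape (313, 2): 313 cluster centers, each an (a, b) pair
pts = pts.transpose()             # shape (2, 313)
pts = pts.reshape(2, 313, 1, 1)   # shape (2, 313, 1, 1): 313 values per channel as 1x1 kernels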

To understand where the above two output names come from, refer to the snippet below from the “.prototxt” file.

layer {
  name: "class8_ab"
  type: "Convolution"
  bottom: "class8_313_rh"
  top: "class8_ab"
  convolution_param {
    num_output: 2
    kernel_size: 1
    stride: 1
    dilation: 1
  }
}

This is the last layer defined in our “.prototxt” file. As we can see, the number of outputs (num_output) is two, and the layer name “class8_ab” appears above it.
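If you prefer to inspect the network programmatically instead of opening the “.prototxt” file, OpenCV can list the layer names for you. A minimal sketch:

# Print every layer name in the network; "conv8_313_rh" and "class8_ab"
# should appear near the end of the list
for name in net.getLayerNames():
    print(name)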
Now moving forward with our code.

# Read image from the path
test_image = cv2.imread(test_image)
# Convert image into gray scale
test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)
# Convert image from gray scale to RGB format
test_image = cv2.cvtColor(test_image, cv2.COLOR_GRAY2RGB)
# Check image using matplotlib
plt.imshow(test_image)
plt.show()

Next, we use OpenCV to read our test image from the path. We then convert the image from BGR format to grayscale, and convert it back from grayscale to RGB format so that it has three channels again. After the conversion, we use the Matplotlib library to display the image.
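One practical caveat: cv2.imread() does not raise an error for a wrong path; it silently returns None, and the failure only surfaces later inside cvtColor(). A small guard placed right after the read makes this easier to debug (a sketch):

test_image = cv2.imread(test_image)
if test_image is None:
    # imread() returns None when the file is missing or unreadable
    raise FileNotFoundError("Could not read the test image - check the path")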

# Normalizing the image
normalized= test_image.astype("float32") / 255.0
# Converting the image into LAB
lab_image = cv2.cvtColor(normalized, cv2.COLOR_RGB2LAB)
# Resizing the image
resized = cv2.resize(lab_image, (224, 224))
# Extracting the value of L for LAB image
L = cv2.split(resized)[0]
L -= 50   # OR we can write L = L - 50

Now, we perform the scaling operation by normalizing the image pixels to the range 0-1. Then we convert the image from RGB to LAB format. To learn more about the LAB color space, please visit LAB Color Space. Next, we resize the image to 224×224, the input size the network expects. The cv2.split() function splits the image into its three channels, i.e. L, A, B, and we extract the L-channel using its index (0). Finally, we subtract 50 to center the lightness values around zero, which is what the model expects.
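As a quick sanity check: for a float32 image scaled to [0, 1], OpenCV's RGB-to-LAB conversion returns L in roughly [0, 100] and a/b in roughly [-127, 127], so subtracting 50 centers the lightness around zero. A sketch to verify the ranges:

# cv2.split() returns copies, so 'resized' still holds the original values
print(cv2.split(resized)[0].min(), cv2.split(resized)[0].max())  # roughly 0 to 100
print(L.min(), L.max())                                          # roughly -50 to 50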

# Setting input
net.setInput(cv2.dnn.blobFromImage(L))
# Finding the values of 'a' and 'b'
ab = net.forward()[0, :, :, :].transpose((1, 2, 0))
# Resizing
ab = cv2.resize(ab, (test_image.shape[1], test_image.shape[0]))

Here we provide the L-channel as the input to our model and predict the “a” and “b” values with a forward pass. Then we resize “a” and “b” to the shape of our input image.
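To see why the transpose and the resize are needed, it helps to look at the array shapes. Assuming the model's standard 224×224 input, they come out as follows (a sketch):

blob = cv2.dnn.blobFromImage(L)    # shape (1, 1, 224, 224): batch, channel, height, width
net.setInput(blob)
out = net.forward()                # shape (1, 2, 56, 56): a and b maps at reduced resolution
ab = out[0].transpose((1, 2, 0))   # shape (56, 56, 2)
ab = cv2.resize(ab, (test_image.shape[1], test_image.shape[0]))  # back to the input size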

L = cv2.split(lab_image)[0]
# Combining L,a,b
LAB_colored = np.concatenate((L[:, :, np.newaxis], ab), axis=2)
# Checking the LAB image
plt.imshow(LAB_colored)
plt.title('LAB image')
plt.show()

Next, the L-channel is extracted again, but this time from the original LAB image, because the dimensions of all three planes (L, a, b) must match. Then we combine the L-channel with “a” and “b” using NumPy to get the LAB colored image, and we use Matplotlib to show it.
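The np.newaxis trick is what makes the concatenation work: L has shape (H, W), and adding an axis turns it into (H, W, 1), which can then be stacked with ab of shape (H, W, 2) along axis 2. A one-line check:

print(L[:, :, np.newaxis].shape, ab.shape, LAB_colored.shape)  # (H, W, 1) (H, W, 2) (H, W, 3)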

# Converting LAB image to RGB_colored
RGB_colored = cv2.cvtColor(LAB_colored, cv2.COLOR_LAB2RGB)
# Limits the values in array
RGB_colored = np.clip(RGB_colored, 0, 1)
# Changing the pixel intensity back to [0,255]
RGB_colored = (255 * RGB_colored).astype("uint8")
# Checking the image
plt.imshow(RGB_colored)
plt.title('Colored Image')
plt.show()

We have obtained a LAB colored image, but it cannot be displayed correctly as it is, so we need to convert it into RGB format, which we do first. In the next line, we use np.clip() to clip the RGB values between “0” and “1”. Clipping means that, for the interval [0, 1], all values smaller than zero become zero and all values larger than one become one.
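A tiny worked example makes the behavior of np.clip() obvious:

print(np.clip(np.array([-0.2, 0.5, 1.3]), 0, 1))  # prints [0.  0.5 1. ]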

If we remember, we normalized our image pixels to the range 0-1 earlier, so now we scale the pixel values back to the range 0-255.

Now, after plotting the RGB image by using Matplotlib, we will get a perfectly colored image for our black and white test image.

# Converting RGB to BGR
RGB_BGR = cv2.cvtColor(RGB_colored, cv2.COLOR_RGB2BGR)
# Saving the image in desired path
cv2.imwrite("../results/"+image, RGB_BGR)

To save the colored image, it is first converted from RGB format to BGR format, and then OpenCV is used to save it at the specified path. As we can see, cv2.imwrite() takes two arguments: the path where the file should be saved, and the image itself (RGB_BGR).
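Note that cv2.imwrite() also fails silently: it simply returns False if, for example, the output folder does not exist. A small defensive sketch (the folder name matches the relative path used above):

import os

os.makedirs("../results", exist_ok=True)  # create the output folder if it is missing
if not cv2.imwrite("../results/" + image, RGB_BGR):
    raise IOError("Could not save the colored image")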

Python program to convert a black and white picture to color

# Importing libraries
import numpy as np
import matplotlib.pyplot as plt
import cv2

# Name of testing image
image = 'test_sample.jpg'

# Path of our caffemodel, prototxt, and numpy files
prototxt = "C:/Users/faisa_er1g244/Desktop/B&W_to_Color/colorization_deploy_v2.prototxt"
caffe_model = "C:/Users/faisa_er1g244/Desktop/B&W_to_Color/colorization_release_v2.caffemodel"
pts_npy = "C:/Users/faisa_er1g244/Desktop/B&W_to_Color/pts_in_hull.npy"

test_image =  "C:/Users/faisa_er1g244/Desktop/B&W_to_Color/test_samples/"+image

# Loading our model
net = cv2.dnn.readNetFromCaffe(prototxt, caffe_model)
pts = np.load(pts_npy)
 
layer1 = net.getLayerId("class8_ab")
print(layer1)
layer2 = net.getLayerId("conv8_313_rh")
print(layer2)
pts = pts.transpose().reshape(2, 313, 1, 1)
net.getLayer(layer1).blobs = [pts.astype("float32")]
net.getLayer(layer2).blobs = [np.full([1, 313], 2.606, dtype="float32")]

# Converting the image into RGB and plotting it
# Read image from the path
test_image = cv2.imread(test_image)
# Convert image into gray scale
test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)
# Convert image from gray scale to RGB format
test_image = cv2.cvtColor(test_image, cv2.COLOR_GRAY2RGB)
# Check image using matplotlib
plt.imshow(test_image)
plt.show()

# Converting the RGB image into LAB format
# Normalizing the image
normalized = test_image.astype("float32") / 255.0
# Converting the image into LAB
lab_image = cv2.cvtColor(normalized, cv2.COLOR_RGB2LAB)
# Resizing the image
resized = cv2.resize(lab_image, (224, 224))
# Extracting the value of L for LAB image
L = cv2.split(resized)[0]
L -= 50   # OR we can write L = L - 50

# Predicting a and b values
# Setting input
net.setInput(cv2.dnn.blobFromImage(L))
# Finding the values of 'a' and 'b'
ab = net.forward()[0, :, :, :].transpose((1, 2, 0))
# Resizing
ab = cv2.resize(ab, (test_image.shape[1], test_image.shape[0]))

# Combining L, a, and b channels
L = cv2.split(lab_image)[0]
# Combining L,a,b
LAB_colored = np.concatenate((L[:, :, np.newaxis], ab), axis=2)
# Checking the LAB image
plt.imshow(LAB_colored)
plt.title('LAB image')
plt.show()

# Converting LAB image to RGB
RGB_colored = cv2.cvtColor(LAB_colored, cv2.COLOR_LAB2RGB)
# Limits the values in array
RGB_colored = np.clip(RGB_colored, 0, 1)
# Changing the pixel intensity back to [0,255],as we did scaling during pre-processing and converted the pixel intensity to [0,1]
RGB_colored = (255 * RGB_colored).astype("uint8")
# Checking the image
plt.imshow(RGB_colored)
plt.title('Colored Image')
plt.show()

# Saving the colored image
# Converting RGB to BGR
RGB_BGR = cv2.cvtColor(RGB_colored, cv2.COLOR_RGB2BGR)
# Saving the image in desired path
cv2.imwrite("C:/Users/faisa_er1g244/OneDrive/Desktop/B&W_to_Color/output_images/"+image, RGB_BGR)

So, this was the step-by-step guide to automatically convert any black and white image into a colored image. I hope you were able to understand the code thoroughly. Thank you.
