2022 T2 Lab 1 Specification
1. Contrast Stretching
Contrast is a measure of the range of intensity values in an image and is defined as the difference between the maximum pixel value and minimum pixel value. The full contrast of an 8-bit image is 255 (max) – 0 (min) = 255. Any value less than that means the image has lower contrast than possible. Contrast stretching attempts to improve the contrast of the image by stretching the range of intensity values using linear scaling.
Assume that $f$ is the original input image and $g$ is the output image. Let $a$ and $b$ be the minimum and maximum pixel values allowed (for an 8-bit image that means $a = 0$ and $b = 255$) and let $c$ and $d$ be the minimum and maximum pixel values found in $f$. Then the contrast-stretched image $g$ is given by the function:

$$g(x, y) = \big(f(x, y) - c\big)\,\frac{b - a}{d - c} + a \qquad (1)$$
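A minimal sketch of how Equation (1) might be implemented in Python with NumPy and OpenCV (the function name contrast_stretch and the float64 intermediates are choices made here, not part of the specification):

```python
import cv2
import numpy as np

def contrast_stretch(f, a=0, b=255):
    """Linearly stretch the intensity range of a gray-scale image to [a, b]."""
    f = f.astype(np.float64)       # work in float to avoid integer overflow
    c, d = f.min(), f.max()        # minimum and maximum values found in f
    g = (f - c) * (b - a) / (d - c) + a   # Equation (1); assumes d > c
    return np.clip(g, a, b).astype(np.uint8)

img = cv2.imread("Kitten.png", cv2.IMREAD_GRAYSCALE)
stretched = contrast_stretch(img)
cv2.imwrite("Kitten_stretched.png", stretched)
```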
Question 1: Write an algorithm that performs contrast stretching as per Equation (1) above. Read the given gray-scale image Kitten.png and run your algorithm to see whether it indeed improves the image quality. The result should look like this:
Also write an algorithm that finds the coordinates of the minimum pixel value and the coordinates of the maximum pixel value in an image. Do not use the built-in OpenCV functions for these tasks but write your own code. Run it on both the input image and the output image and print the values of these pixels to confirm whether your contrast stretching algorithm works correctly.
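One possible way to locate these extrema without OpenCV helpers is a plain double loop over all pixels (a sketch; the function and variable names are illustrative). It can be run on both Kitten.png and the stretched output to print the extreme values and their positions:

```python
import cv2

def find_min_max_coords(img):
    """Scan every pixel and return the minimum and maximum values
    together with their (row, column) coordinates."""
    min_val, max_val = 256, -1
    min_pos = max_pos = (0, 0)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            v = int(img[y, x])
            if v < min_val:
                min_val, min_pos = v, (y, x)
            if v > max_val:
                max_val, max_pos = v, (y, x)
    return min_val, min_pos, max_val, max_pos

img = cv2.imread("Kitten.png", cv2.IMREAD_GRAYSCALE)
print(find_min_max_coords(img))
```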
2. Intensity Histogram
The histogram of an image shows the counts of the intensity values. It gives only statistical information about the pixels and removes the location information. For a digital image with $L$ gray levels, from 0 to $L - 1$, the histogram is a discrete function $h(r_k) = n_k$, where $r_k \in [0, L - 1]$ is the $k$th gray level and $n_k$ is the number of pixels with that gray level.
Question 2: Write an algorithm that computes and plots the histogram of an image. Do not use the built-in OpenCV functions for computing the histogram but write your own code to perform this task. Then run your algorithm on Kitten.png and its contrast-stretched version from Question 1 and visually compare the histograms.
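A sketch of a hand-rolled histogram; matplotlib is assumed for the plotting step, since the specification does not prescribe a plotting library:

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

def compute_histogram(img, levels=256):
    """Count how many pixels take each gray level 0 .. levels-1."""
    h = np.zeros(levels, dtype=np.int64)
    for v in img.ravel():
        h[v] += 1
    return h

img = cv2.imread("Kitten.png", cv2.IMREAD_GRAYSCALE)
plt.bar(range(256), compute_histogram(img), width=1.0)
plt.xlabel("Gray level")
plt.ylabel("Pixel count")
plt.show()
```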
3. Image Edges
Edges are an important source of semantic information in images. They occur in human visual perception at divisions between areas of different intensity, colour, or texture. A gray-scale image can be thought of as a 2D landscape with areas of different intensity living at different heights. A transition between areas of different intensity in an image $f$ means there must be a steep slope, which we formalise as the gradient (vector):

$$\nabla f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right)$$
As the image $f$ is discrete, we need to approximate the continuous derivatives $\partial f/\partial x$ and $\partial f/\partial y$ by finite differences. Simple examples of convolution kernels that perform finite differencing are the Sobel filters, defined as follows:

$$S_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \qquad S_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}$$
Question 3: Write an algorithm that computes the two Sobel images $\partial f/\partial x \approx f \ast S_x$ and $\partial f/\partial y \approx f \ast S_y$ from an input image. Use the given image CT.png to test your algorithm. Do not use the built-in OpenCV functions for computing the Sobel images but write your own code to perform this task. You may verify the output of your own algorithm by comparing it with the output of the built-in functions.
Notice that the calculations may produce negative output pixel values. Thus, make sure you use the right data types for the calculations and the output image.
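A sketch of how the two Sobel images could be computed with a naive convolution loop; the kernel names Sx/Sy and the zero padding at the border are assumptions made here, and float64 is used so that negative values are preserved rather than clipped by uint8:

```python
import cv2
import numpy as np

# Sobel kernels approximating the horizontal and vertical derivatives
Sx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=np.float64)
Sy = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=np.float64)

def convolve(img, kernel):
    """Naive 2D convolution with zero padding.
    The float64 output keeps the negative values that uint8 would lose."""
    rows, cols = img.shape
    k = kernel.shape[0] // 2
    padded = np.pad(img.astype(np.float64), k, mode="constant")
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    out = np.zeros((rows, cols), dtype=np.float64)
    for y in range(rows):
        for x in range(cols):
            out[y, x] = np.sum(padded[y:y + 2 * k + 1, x:x + 2 * k + 1] * flipped)
    return out

img = cv2.imread("CT.png", cv2.IMREAD_GRAYSCALE)
dfdx = convolve(img, Sx)
dfdy = convolve(img, Sy)
```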
After that, compute the gradient magnitude image

$$|\nabla f| = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}$$

In other words, create a new output image having the same size as the input image and the Sobel images, and then for every pixel in the output image compute the value as the square root of the sum of the squared value of the Sobel image $\partial f/\partial x$ and the squared value of the Sobel image $\partial f/\partial y$ at that pixel position.
Here again, notice that the calculations may produce intermediate values outside the 8-bit range. Thus, make sure you use the right data types for the calculations.
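A sketch of the gradient magnitude step, continuing from the Sobel sketch above; the final rescaling to the 8-bit range for saving is one possible choice and is not prescribed by the specification:

```python
import numpy as np
import cv2

# dfdx and dfdy are the float64 Sobel images from the previous sketch
magnitude = np.sqrt(dfdx ** 2 + dfdy ** 2)   # float64: values can exceed 255

# One possible way to bring the result back into the 8-bit range for saving
magnitude_8u = np.clip(magnitude / magnitude.max() * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("CT_gradient_magnitude.png", magnitude_8u)
```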
The final result should look like this: