Is the entropy calculated directly from the gray image the same as the entropy feature extracted from the GLCM (a texture feature)?
No, these two entropies are quite different:
skimage.filters.rank.entropy(grayImg, disk(5)) yields an array of the same size as grayImg, containing for each pixel the local entropy computed over a circular disk of radius 5 centered at that pixel. Take a look at Entropy (information theory) to find out how entropy is calculated. The values in this array are useful for segmentation (follow this link for an example of entropy-based object detection). If your goal is instead to describe the entropy of the image through a single (scalar) value, you can use skimage.measure.shannon_entropy(grayImg). This function basically applies the following formula to the full image:

$$H = -\sum_{k=0}^{M-1} p_k \log_b(p_k)$$

where $M$ is the number of gray levels (256 for 8-bit images), $p_k$ is the probability that a pixel has gray level $k$, and $b$ is the base of the logarithm function. When $b$ is set to 2, the returned value is measured in bits.

The gray-level co-occurrence matrix (GLCM) is a histogram of co-occurring gray-scale values at a given offset over an image. Features such as entropy, energy, contrast, correlation, etc., computed from several co-occurrence matrices calculated for different offsets, are commonly used to describe the texture of an image. In this case, entropy is defined as follows:

$$H = -\sum_{i=0}^{M-1}\sum_{j=0}^{M-1} p_{ij} \log_b(p_{ij})$$

where $M$ and $b$ are again the number of gray levels and the base of the logarithm function, respectively, and $p_{ij}$ stands for the probability that two pixels separated by the specified offset have intensities $i$ and $j$. Unfortunately, entropy is not one of the GLCM properties that you can calculate through scikit-image*. If you wish to compute this feature, you need to pass the GLCM to skimage.measure.shannon_entropy.
* At the time of the last edit of this answer, the latest version of scikit-image was 0.13.1.
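As a sketch of that idea, the double sum above can also be evaluated directly from a normalized GLCM. The random test image below is a hypothetical stand-in for grayImg and is not part of the original answer; the try/except handles the later renaming of greycomatrix to graycomatrix in scikit-image 0.19:

```python
import numpy as np

try:
    from skimage.feature import greycomatrix               # scikit-image < 0.19
except ImportError:
    from skimage.feature import graycomatrix as greycomatrix  # renamed in 0.19

# Hypothetical 8-bit test image standing in for grayImg.
rng = np.random.default_rng(0)
grayImg = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# One normalized GLCM for the offset (distance=1, angle=0).
glcm = greycomatrix(grayImg, distances=[1], angles=[0],
                    symmetric=True, normed=True)

# Evaluate H = -sum_ij p_ij * log2(p_ij), skipping empty cells
# (0 * log(0) is taken as 0).
p = glcm[:, :, 0, 0]
p = p[p > 0]
glcm_entropy = -np.sum(p * np.log2(p))
print(glcm_entropy)
```

Because the GLCM is computed with normed=True, its entries are exactly the probabilities $p_{ij}$ from the formula above.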
If not, what is the correct way to extract all the texture features from an image?
There are many possibilities for describing the texture of an image, for example local binary patterns, Gabor filters, wavelets, Laws' masks, and many others. Haralick's GLCM is one of the most popular texture descriptors. One possible approach to describing image texture through GLCM features is to compute the GLCM for different offsets (each offset is defined by a distance and an angle), and extract various properties from each GLCM.
Consider, for example, three distances (1, 2, and 3 pixels), four angles (0, 45, 90, and 135 degrees) and two properties (energy and homogeneity). This results in $3 \times 4 = 12$ offsets (and hence 12 GLCMs) and a feature vector of $12 \times 2 = 24$ components. Here is the code:
```python
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import greycomatrix, greycoprops
from sklearn.metrics.cluster import entropy

rgbImg = io.imread('/img/01626ffdcd9476a0adbe66921077a4ee.jpg')
grayImg = img_as_ubyte(color.rgb2gray(rgbImg))

distances = [1, 2, 3]
angles = [0, np.pi/4, np.pi/2, 3*np.pi/4]
properties = ['energy', 'homogeneity']

glcm = greycomatrix(grayImg,
                    distances=distances,
                    angles=angles,
                    symmetric=True,
                    normed=True)

feats = np.hstack([greycoprops(glcm, prop).ravel() for prop in properties])
```
These are the results obtained using this sample image:
```python
In [56]: entropy(grayImg)
Out[56]: 5.3864158185167534

In [57]: np.set_printoptions(precision=4)

In [58]: print(feats)
[ 0.026   0.0207  0.0237  0.0206  0.0201  0.0207  0.018   0.0206  0.0173
  0.016   0.0157  0.016   0.3185  0.2433  0.2977  0.2389  0.2219  0.2433
  0.1926  0.2389  0.1751  0.1598  0.1491  0.1565]
```
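Since greycoprops returns an array of shape (len(distances), len(angles)) for each property, the 24 printed values can be unraveled back into per-offset tables. The reshaping below is my own illustration (not part of the original answer), using the values printed above:

```python
import numpy as np

# The 24 values printed above: 12 energies followed by 12 homogeneities.
feats = np.array([0.026, 0.0207, 0.0237, 0.0206, 0.0201, 0.0207,
                  0.018, 0.0206, 0.0173, 0.016, 0.0157, 0.016,
                  0.3185, 0.2433, 0.2977, 0.2389, 0.2219, 0.2433,
                  0.1926, 0.2389, 0.1751, 0.1598, 0.1491, 0.1565])

# Each block of 12 unravels to a 3x4 (distance x angle) table,
# matching the (len(distances), len(angles)) shape returned by greycoprops.
energy, homogeneity = feats.reshape(2, 3, 4)
print(energy[0, 2])  # energy at distance=1, angle=90 degrees -> 0.0237
```

Rows index the distance (1, 2, 3 pixels) and columns the angle (0, 45, 90, 135 degrees), in the order the lists were passed to greycomatrix.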