Content-Based Image Retrieval Using a Combination of Texture and Color Features
  • Hee-Hyung Bu1, Nam-Chul Kim2,*, and Sung-Ho Kim1

Human-centric Computing and Information Sciences volume 11, Article number: 23 (2021)
https://doi.org/10.22967/HCIS.2021.11.023

Abstract

Image retrieval is headed toward the ultimate goal of achieving performance similar to human cognitive ability. As an attempt at such work, this paper proposes a content-based image retrieval method using a combination of texture features, extracted from the Gabor local correlation and the uniform magnitude local binary pattern of the value component, and color features, extracted from the color autocorrelogram of the hue and saturation components. The texture features have multi-resolution and multi-direction characteristics, whereas the color features carry rotation-invariant spatial structural information for color. Further, the HSV color space used herein is similar to the human visual system. In particular, the two-dimensional (2D) Gabor transform, used to extract part of the texture features, mimics the biological visual strategy of embedding angular and spectral analysis within global spatial coordinates, using as weighting functions empirical 2D receptive field profiles obtained from orientation-selective neurons in the cat visual cortex. Based on the experimental results, we confirm that the proposed combined method outperforms the compared existing methods, as well as the methods using subsets of the proposed features, in terms of retrieval performance.


Keywords

Content-Based Image Retrieval, Gabor Local Correlation, Uniform Magnitude Local Binary Pattern, Color Autocorrelogram, HSV Color Space


Introduction

Currently, we live in an age of information in which data, particularly images, exist in digital format. In industry, several image retrieval systems have been developed; for example, web services are utilized to ease the user experience. However, methods that use keywords are limited in application because many uploaded images have no keywords or because users have different perspectives.
Content-based image retrieval systems automatically retrieve images using features extracted from image content. Image features of interest include texture, color, and shape [1-4]. Accordingly, we propose herein a method that utilizes a combination of texture and color features. The texture features are extracted using the Gabor local complex correlation and the uniform magnitude local binary pattern (UMLBP). Gabor wavelets are modeled on the receptive fields of orientation-selective simple cells [5], which is significant from the perspective of the human visual system. The set of two-dimensional (2D) Gabor wavelets proposed by Daugman [5] samples the frequency domain in a log-polar manner. Gabor wavelets efficiently reduce image redundancy and are robust to noise. Multi-resolution and multi-direction Gabor representations have received special attention because the receptive fields of simple cells in the primary visual cortex of mammals are oriented and respond to local spatial frequencies. In [6], the authors proposed Gabor features that provide high pattern retrieval accuracy and distinctively describe images; an adaptive filter selection strategy was also suggested to reduce image processing computations while maintaining a reasonable level of retrieval performance. Studying Gabor wavelets is important because they contribute significantly to research areas such as texture analysis and image processing. In content-based image retrieval systems, a set of 2D Gabor wavelets is often used to extract texture features. Joshi and Mukherjee [7] proposed a fusion technique using Gabor and scale-invariant feature transform descriptors. A content-based image retrieval (CBIR) system using collective color and texture feature extraction with linear discriminant analysis was proposed by Jain and Salankar [8]. In [9], the authors proposed a CBIR method using color moments and Gabor texture features. The image retrieval approach using Gabor features proposed by Manjunath and Ma [6] outperforms approaches that use pyramid-structured wavelet transform features, tree-structured wavelet transform features, and multi-resolution simultaneous autoregressive model features. Rotation-invariant and scale-invariant Gabor features for texture image retrieval have been proposed by Han and Ma [10], Rahman et al. [11], and Chen et al. [12]; these use energy features but not correlation features. Although Gabor transforms implemented with Gabor wavelets are well suited to extracting texture features, thanks to the strong response of Gabor wavelets to edges and texture changes, the existing Gabor features yield relatively low retrieval performance. In this paper, we propose an advanced Gabor local complex correlation feature, unlike existing Gabor features based on the magnitude or real part. In addition, one of our goals is to select features well matched to the Gabor features and to improve the retrieval performance of the fused features. Specifically, we adopt the UMLBP because it harmonizes with the Gabor local complex correlation as a texture feature. For color features, we adopt the color autocorrelogram [13], which is extracted in the HSV color space and harmonizes with our texture features. As a result, the fused feature demonstrates high retrieval performance.
Recently, CBIR using deep learning has become a major stream. Ahmed et al. [14] proposed a CBIR method that fuses spatial color information with shape-extracted features and object recognition, employing a bag-of-words (BoW) approach for retrieval. Shakarami and Tarrah [15] proposed a method combining deep features with handcrafted PCA (principal component analysis) features, where the deep features are extracted from an improved AlexNet convolutional neural network (CNN) and the handcrafted features from the histogram of oriented gradients (HOG) and local binary patterns (LBP). Kan et al. [16] proposed supervised deep feature embedding with a handcrafted feature model, which includes a new loss function combining the distance metric with label information. Wang et al. [17] proposed enhancing sketch-based image retrieval by CNN semantic re-ranking, which uses two CNNs, Q-Net and N-Net, for classification; the obtained category information of sketches and natural images is used to re-rank the initial retrieval results. Ghrabat et al. [18] proposed greedy learning of a deep Boltzmann machine's variance and a search algorithm for efficient image retrieval, which classifies an image with optimized features using a classification algorithm. These classification-based methods can be used only if the grouping information of the images is already known, and in most cases low-level features play a role of vital importance.
In this paper, we concentrate on extracting low-level local features that can also be adopted in deep learning. Their combination can be used for CBIR directly, without prior knowledge of class information. As our contribution, we suggest an effective combination of the advanced Gabor local complex correlation and the scale-based UMLBP for texture features, together with the color autocorrelogram for color features. The proposed method has multi-resolution and multi-direction characteristics for texture and rotation-invariant spatial structural information for color. In the next section, we describe the proposed image retrieval system and explain the details of the feature extraction methods. The experimental processes and results are discussed in Section 3. Finally, in Section 4, we present the conclusions of the paper.


The Proposed Image Retrieval System

Our proposed system is built on combined features extracted from the Gabor local complex correlation, the UMLBP, and the color autocorrelogram. Gabor wavelets represent specific frequencies in specific directions; herein, we use a set of Gabor wavelets to extract texture features. The objective of this paper is to propose a method using Gabor wavelets enhanced for image retrieval performance and to combine the Gabor feature with other harmonized features. Fig. 1 shows the block diagram of our proposed content-based image retrieval system. The proposed system operates in the following four steps:
Step 1 converts an input RGB query image to an HSV image.
Step 2 conducts the feature extraction processes. First, it performs the Gabor transform on the value (V) component and then extracts the local correlation features. Second, it extracts the UMLBP features from the V component. Finally, it extracts the color autocorrelogram features in the domain of the hue (H) and saturation (S) components.
Step 3 combines the features to obtain a feature vector.
Step 4 computes the similarity between two feature vectors, i.e., the feature vector of the query image and each of the feature vectors of the test images stored in the image database, and retrieves the most similar images. (A high-level sketch of these steps is given after Fig. 1.)

Fig. 1. The proposed content-based image retrieval block diagram.
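To make the data flow concrete, here is a minimal Python sketch of Steps 1–3, assuming the extractor functions (gabor_bank, gabor_transform, gabor_correlation_features, umlbp_histogram, hs_autocorrelogram) sketched in the following subsections; the use of OpenCV for the RGB-to-HSV conversion is likewise our assumption, not part of the paper.

```python
import numpy as np
import cv2  # assumed here only for the RGB-to-HSV conversion of Step 1

def extract_features(rgb_image):
    """Steps 1-3: convert to HSV, extract texture (V) and color (H, S)
    features, and concatenate them into one feature vector."""
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)             # Step 1
    h, s, v = [hsv[..., i].astype(float) for i in range(3)]
    responses = gabor_transform(v, gabor_bank())                 # Step 2: texture on V
    f_texture = np.concatenate([gabor_correlation_features(responses),
                                umlbp_histogram(v)])
    f_color = hs_autocorrelogram(h, s)                           # Step 2: color on H, S
    return np.concatenate([f_texture, f_color])                  # Step 3
```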

Gabor Wavelets Transform
The 2D Gabor wavelets are Gaussian functions modulated by sine plane waves with specific frequencies and directions. They provide the local spatial frequency information. Herein, the 2D Gabor wavelet proposed by Han and Ma [10] is expressed as follows:

$g(x,y)=\dfrac{1}{2\pi σ_x σ_y}\exp\!\left[-\dfrac{1}{2}\left(\dfrac{x^2}{σ_x^2}+\dfrac{y^2}{σ_y^2}\right)+2\pi jWx\right]$  (1)

where $σ_x$ and $σ_y$ represent the spatial variances of the Gabor wavelet, the pair x and y denotes the spatial location of the kernel, $j=\sqrt{-1}$, and $W$ represents the modulation frequency of the Gaussian function. Moreover, the 2D Gabor wavelet set derived from g(x,y) for multi-resolution and multi-direction analysis is as follows:

$g_{s,n}(x,y)=a^{-s}\,g(x',y')$  (2)

where $x'=a^{-s}(x\cos θ_n+y\sin θ_n)$, $y'=a^{-s}(-x\sin θ_n+y\cos θ_n)$, $a>1$, and $θ_n=nπ/K$ for $s=0,1,…,S-1$ and $n=0,1,…,K-1$. The symbol S represents the number of scales and K the number of directions. The parameters a, $σ_x$, and $σ_y$ are given by [10]:

$a=\left(\dfrac{U_h}{U_l}\right)^{\frac{1}{S-1}},\qquad σ_x=\dfrac{1}{2\pi σ_u},\qquad σ_y=\dfrac{1}{2\pi σ_v}$  (3)

where $U_l$ and $U_h$ (=W) denote the lower and upper center frequencies, respectively, and $σ_u$ and $σ_v$ are variances in the frequency domain [10] as follows:

$σ_u=\dfrac{(a-1)U_h}{(a+1)\sqrt{2\ln 2}},\qquad σ_v=\tan\!\left(\dfrac{\pi}{2K}\right)\left[U_h-\dfrac{2\ln 2\,σ_u^2}{U_h}\right]\left[2\ln 2-\dfrac{(2\ln 2)^2 σ_u^2}{U_h^2}\right]^{-1/2}$  (4)

Spatial frequency components provide essential information that cannot be obtained directly from pixels. The Gabor transform effectively represents the energy characteristics of the local frequency content and is relatively unaffected by changes in object size and illumination. The 2D Gabor transform is the convolution of the input image with a 2D Gabor kernel. For a given input image I(x,y) and the 2D Gabor kernel with scale s and direction n, the convolution is given as follows:

$J_{s,n}(x,y)=I(x,y)*g_{s,n}(x,y)$  (5)

where the operator * stands for convolution and $g_{s,n}$ is called the point spread function or impulse response. The convolution outputs S × K complex images, each the same size as the input image I.
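As an illustration of Eqs. (1)–(5), the following Python sketch builds the multi-resolution, multi-direction filter bank and applies it by FFT-based convolution. The kernel size and the center frequencies Ul and Uh are illustrative assumptions; the parameter formulas follow the filter design of [10].

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, sigma_x, sigma_y, W, s, theta, a):
    """Mother wavelet of Eq. (1) in the rotated/dilated coordinates of Eq. (2)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = a ** (-s) * (x * np.cos(theta) + y * np.sin(theta))   # x'
    yr = a ** (-s) * (-x * np.sin(theta) + y * np.cos(theta))  # y'
    g = (1.0 / (2 * np.pi * sigma_x * sigma_y)) * np.exp(
        -0.5 * (xr ** 2 / sigma_x ** 2 + yr ** 2 / sigma_y ** 2)
        + 2j * np.pi * W * xr)
    return a ** (-s) * g  # a^{-s} factor of Eq. (2)

def gabor_bank(S=4, K=8, Ul=0.05, Uh=0.4, size=31):
    """Filter bank whose a, sigma_x, sigma_y follow Eqs. (3)-(4)."""
    a = (Uh / Ul) ** (1.0 / (S - 1))
    log2x2 = 2 * np.log(2)
    sigma_u = (a - 1) * Uh / ((a + 1) * np.sqrt(log2x2))
    sigma_v = np.tan(np.pi / (2 * K)) * (Uh - log2x2 * sigma_u ** 2 / Uh) \
        / np.sqrt(log2x2 - (log2x2 * sigma_u / Uh) ** 2)
    sigma_x, sigma_y = 1 / (2 * np.pi * sigma_u), 1 / (2 * np.pi * sigma_v)
    return [[gabor_kernel(size, sigma_x, sigma_y, Uh, s, n * np.pi / K, a)
             for n in range(K)] for s in range(S)]

def gabor_transform(image, bank):
    """Eq. (5): complex response images J_{s,n} = I * g_{s,n}."""
    return [[fftconvolve(image, g, mode='same') for g in row] for row in bank]
```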

Gabor Correlation Feature Extraction
The correlation coefficient is a covariance between two random variables scaled by the product of their standard deviations. It represents the strength of the relationship between the relative movements of the two variables [19]. Given two images A and B, the mean operation for the computation of a complex correlation coefficient is expressed as

$\mathrm{MEAN}(A)=\dfrac{1}{|T|}\sum_{p\in T}A(p)$  (6)

where the symbol T represents a 3×3 window, p is a pixel position, and |T| stands for the size of T. The VAR (variance) operation is written as

$\mathrm{VAR}(A)=\dfrac{1}{|T|}\sum_{p\in T}\left|A(p)-\mathrm{MEAN}(A)\right|^2$  (7)

Using the above expression, we define the complex COR (correlation coefficient) as follows:

$\mathrm{COR}(A,B)=\dfrac{\mathrm{MEAN}(A\,B^{*})-\mathrm{MEAN}(A)\,\mathrm{MEAN}(B)^{*}}{\sqrt{\mathrm{VAR}(A)\,\mathrm{VAR}(B)}}$  (8)

where $B^{*}$ denotes the complex conjugate of B.

The details of the proposed correlation feature extraction are as follows.
Step 1 transforms a query image into a gray image.
Step 2 creates a Gabor kernel with scale s and direction n.
Step 3 conducts the Gabor transform on the gray image resulting from Step 1.
Step 4 creates the correlation coefficient image from the complex image derived in Step 3, expressed as follows:

$ρ_{s,n}(p)=\mathrm{Re}\left\{\mathrm{COR}\!\left(J_{s,n}(T(p)),\,J_{s,n}(T(p-s\,δθ_n))\right)\right\}$  (9)

where Re{∙} represents the real part of a complex number and $δθ_n=(\cos θ_n,\sin θ_n)$ is the unit direction vector. $J_{s,n}$ is the Gabor-transformed image with scale s and direction n; $ρ_{s,n}(p)$ is the correlation coefficient over T between the window centered at pixel p and the one centered at $p-s∙δθ_n$.
Step 5 calculates the global average $μ_{s,n}^ρ$ and global standard deviation $σ_{s,n}^ρ$ for the result derived in Step 4, and they are expressed as follows:

$μ_{s,n}^ρ=\dfrac{1}{|P|}\sum_{p\in P}ρ_{s,n}(p)$  (10)

$σ_{s,n}^ρ=\sqrt{\dfrac{1}{|P|}\sum_{p\in P}\left(ρ_{s,n}(p)-μ_{s,n}^ρ\right)^2}$  (11)

The superscript ρ denotes local correlation, the operator std denotes the standard deviation computed over the image, and p represents the pixel position.
The Gabor correlation feature vector is expressed as follows:

$f^ρ=\left[\,[μ_{s,n}^ρ],\;[σ_{s,n}^ρ]\,\right]$  (12)

where $[μ_{s,n}^ρ]$ and $[σ_{s,n}^ρ]$ denote the vectors of the global averages and global standard deviations, respectively, of the local correlation.
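Below is a sketch of the correlation feature extraction of Eqs. (6)–(12), with local means and variances computed by box filtering over the 3×3 window T. The complex conjugation in the covariance, the bilinear interpolation at the displaced window, and the scale index starting at 1 are our assumptions; note also that Table 1 reports 32 dimensions for the Gabor correlation, suggesting one statistic per (scale, direction) pair may be kept, whereas this sketch returns both.

```python
import numpy as np
from scipy.ndimage import shift, uniform_filter

def _local_mean(A, win=3):
    """MEAN of Eq. (6): box average over a win x win window, complex-safe."""
    return uniform_filter(A.real, win) + 1j * uniform_filter(A.imag, win)

def local_complex_correlation(J, s, theta, win=3):
    """Eqs. (6)-(9): Re{COR} between the window centered at p and the one
    centered at p - s*delta_theta (bilinear interpolation at the borders)."""
    # Jd(p) = J(p - s*delta_theta), with delta_theta = (cos theta, sin theta)
    d = (s * np.sin(theta), s * np.cos(theta))        # (row, col) displacement
    Jd = shift(J.real, d, order=1) + 1j * shift(J.imag, d, order=1)
    mJ, mJd = _local_mean(J, win), _local_mean(Jd, win)
    cov = _local_mean(J * np.conj(Jd), win) - mJ * np.conj(mJd)  # Eq. (8) numerator
    vJ = np.maximum(uniform_filter(np.abs(J) ** 2, win) - np.abs(mJ) ** 2, 0)   # Eq. (7)
    vJd = np.maximum(uniform_filter(np.abs(Jd) ** 2, win) - np.abs(mJd) ** 2, 0)
    rho = cov / np.sqrt(vJ * vJd + 1e-12)
    return rho.real                                    # Eq. (9)

def gabor_correlation_features(responses):
    """Eqs. (10)-(12): global means and standard deviations of rho_{s,n}."""
    mus, sigmas = [], []
    for s, row in enumerate(responses, start=1):       # scale index assumed to start at 1
        for n, J in enumerate(row):
            rho = local_complex_correlation(J, s, n * np.pi / len(row))
            mus.append(rho.mean())
            sigmas.append(rho.std())
    return np.array(mus + sigmas)
```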

Uniform Magnitude Local Binary Pattern Feature Extraction
The UMLBP refers to the uniform local binary pattern (ULBP) of magnitude images, and it has rotation-invariant characteristics. The magnitude local binary pattern (MLBP) is based on the absolute local difference vector, which represents the local image structure better than the LBP [20, 21], which uses only the sign vector. The details of the UMLBP texture feature extraction are as follows:
Step 1 converts a query image to a gray image.
Step 2 calculates the absolute difference $y_{r,θ_n}$ between the value at position p and the value at the position $2^{r-1}$ distant from p in direction θ, where r is the resolution level.
Step 3 calculates the average $μ_r$ of the magnitude components of $y_{r,θ_n}$ at resolution r.
Therefore, the MLBP with N directions on an image of resolution r is expressed as follows:

$\mathrm{MLBP}_{r,N}(p)=\sum_{n=0}^{N-1}u\!\left(\left|y_{r,θ_n}(p)\right|-μ_r\right)2^{n}$  (13)

$u(x)=\begin{cases}1,& x\ge 0\\ 0,& x<0\end{cases}$  (14)

where p denotes the pixel position, N the total number of directions, and $|y_{r,θ}(p)|$ the absolute difference between the value at position p and the value at the position $2^{r-1}$ distant from p in direction θ. The symbol r satisfies r∈{1,2,⋯,M}, where M is the total number of resolution levels, $θ=(2π∙n)/N$, and $μ_r$ is the average of the magnitude components at resolution r.
Step 4 evaluates the UMLBP as follows:

$\mathrm{UMLBP}_{r,N}(p)=\begin{cases}\sum_{n=0}^{N-1}u\!\left(\left|y_{r,θ_n}(p)\right|-μ_r\right),& U\!\left(\mathrm{MLBP}_{r,N}(p)\right)\le 2\\ N+1,& \text{otherwise}\end{cases}$  (15)

where the operator U counts the bitwise 0/1 transitions in the circular MLBP pattern, so that a uniform pattern (U ≤ 2) is encoded by the sum of its bits in the MLBP, and all other patterns map to N+1.
The UMLBP features are extracted from the normalized histogram of UMLBP in each resolution as follows:

$h_r(i)=\dfrac{1}{|P|}\sum_{p\in P}δ\!\left(\mathrm{UMLBP}_{r,N}(p)-i\right)$  (16)

where i∈{0,1,2,⋯,N,N+1}, |P| is the image size, and δ is the Kronecker delta.
Thus, the dimension of the features is M(N+2).
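A sketch of the UMLBP feature of Eqs. (13)–(16) follows, using wrap-around borders and integer neighbor offsets as simplifying assumptions; with M = 4 and N = 8 it yields the 40 dimensions reported in Table 1.

```python
import numpy as np

def umlbp_histogram(gray, M=4, N=8):
    """Eqs. (13)-(16): normalized UMLBP histograms over M resolutions."""
    gray = np.asarray(gray, dtype=float)
    feats = []
    for r in range(1, M + 1):
        d = 2 ** (r - 1)                               # neighbor distance at resolution r
        diffs = []
        for n in range(N):
            theta = 2 * np.pi * n / N
            dy, dx = int(round(d * np.sin(theta))), int(round(d * np.cos(theta)))
            neighbor = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
            diffs.append(np.abs(neighbor - gray))      # |y_{r,theta_n}(p)|
        diffs = np.stack(diffs)                        # shape (N, H, W)
        bits = (diffs >= diffs.mean()).astype(int)     # threshold by mu_r, Eqs. (13)-(14)
        # U: number of 0/1 transitions in the circular N-bit pattern
        transitions = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
        code = np.where(transitions <= 2, bits.sum(axis=0), N + 1)      # Eq. (15)
        hist = np.bincount(code.ravel(), minlength=N + 2) / code.size   # Eq. (16)
        feats.append(hist)
    return np.concatenate(feats)                       # M*(N+2) = 40 dims for M=4, N=8
```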

Color Autocorrelogram Feature Extraction in 2D of H and S Components
The HSV color model, similar to the human visual system, is often used to separate chrominance components from luminance components because of its robustness to illumination variation. The HSV color space comprises the chrominance components H and S and the luminance component V. The color autocorrelogram [13] expresses the spatial correlation of colors as a function of distance, unlike histograms, which capture only global statistics. It robustly tolerates the substantial appearance changes caused by variations in viewing position and camera zoom. The color autocorrelogram features used in this paper are extracted in the 2D domain after quantizing the H and S components. The components of the 2D domain can be expressed as follows:

$I_{QHS}(p)=\left(Q(H(p)),\,Q(S(p))\right)$  (17)

where Q denotes the uniform quantizer and the subscript QHS denotes the quantized image composed of the H and S components. Our experiment uses eight levels each for the hue and saturation components; the quantization levels are uniformly spaced between the minimum and maximum values. The color autocorrelogram computes the probability that a center pixel has a neighbor of the same color at distance k, as follows:

$α_{c}^{(k)}(I_{QHS})=\Pr_{p_1\in I_{c},\,p_2\in I_{QHS}}\!\left[\,p_2\in I_{c}\ \middle|\ |p_1-p_2|=k\,\right]$  (18)

where c denotes a pixel value (color) on the image $I_{QHS}$, $I_c$ is the set of pixels of color c, and $p_1$ and $p_2$ are pixel positions.
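A sketch of the color autocorrelogram of Eqs. (17)–(18) over the 8×8-quantized (H, S) plane follows; the distance set and the four-neighbor sampling per distance are illustrative assumptions, and the distances are averaged so the output keeps the 64 dimensions of Table 1.

```python
import numpy as np

def hs_autocorrelogram(h, s, levels=8, distances=(1, 3, 5, 7)):
    """Eqs. (17)-(18): autocorrelogram on the quantized (H, S) domain."""
    def quantize(x):
        x = np.asarray(x, dtype=float)
        span = np.ptp(x) + 1e-12                       # uniform 8-level quantizer
        return np.minimum((x - x.min()) / span * levels, levels - 1).astype(int)
    img = quantize(h) * levels + quantize(s)           # combined color index, Eq. (17)
    n_colors = levels * levels
    feats = np.zeros(n_colors)
    for k in distances:
        same = np.zeros(n_colors)
        count = np.zeros(n_colors)
        for dy, dx in ((0, k), (0, -k), (k, 0), (-k, 0)):
            nb = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            np.add.at(same, img.ravel(), (img == nb).ravel().astype(float))
            np.add.at(count, img.ravel(), 1.0)
        feats += same / np.maximum(count, 1.0)         # Pr[same color at distance k], Eq. (18)
    return feats / len(distances)                      # 64-dimensional color feature
```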


Experimental Results

The method proposed in this paper combines the Gabor local complex correlation, the UMLBP, and the color autocorrelogram. Its retrieval performance is compared with those of existing methods. The compared methods include methods that use partial features of our proposed method as well as existing methods such as the color histogram, color structure descriptor (CSD), scalable color descriptor (SCD), ULBP, CLBP, color autocorrelogram, and UMLBP. Some of these features are extracted from the RGB space. In our experiment, the Corel [22], VisTex [23], and Corel-1K [24] databases are used. The Corel database has 990 color images of 192×128, mostly of artificial objects, comprising 11 groups with 90 images each, such as cars, flowers, airplanes, and houses. The VisTex database has 1,200 color images of 128×128 with mostly homogeneous patterns; it contains 75 groups with 16 images each, such as bark, fabric, tile, and water. The Corel-1K database has 1,000 color images of 384×256 or 256×384; it includes 10 groups of 100 images each, such as Africans, beaches, and buildings.
The feature vector of our proposed method has 136 dimensions, composed of 32 dimensions of Gabor correlation, 40 dimensions of UMLBP, and 64 dimensions of color autocorrelogram. The existing methods used herein include the color histogram; the CLBP; the SCD and CSD of MPEG-7; and the ULBP, which has often been referenced recently in image processing research. Table 1 shows the dimensions and color spaces of the methods used in our experiments.

Table 1. Dimensions and color spaces of the methods used in the experiments

Methods | Dimension | Color space | Remark
Color histogram | 128 | RGB | –
SCD | 128 | HSV | –
CSD | 128 | HMMD | –
ULBP | 40 | V | –
Color autocorrelogramHS | 64 | HS | 8:8
Color autocorrelogramRGB | 216 | RGB | 6:6:6
UMLBP | 40 | V | Scales 1–4
CLBPRGB | 186 | RGB | –
CLBPRGB + Color autocorrelogramRGB | 250 (186, 64) | RGB | –
Gabor correlation | 32 | V | Scales 1–4, directions 8
UMLBP + Color autocorrelogramHS | 104 (40, 64) | V, HS | –
UMLBP + Gabor correlation | 72 (40, 32) | V | –
Gabor correlation + Color autocorrelogramHS | 96 (32, 64) | V, HS | –
Proposed | 136 | HSV | –

The similarity between a feature vector of a target image and the query feature vector is measured by the Mahalanobis distance [25], defined as follows:

$D(f^q,f^d)=\sum_{i=1}^{n}\left|\dfrac{f_i^q-f_i^d}{σ_i}\right|^{M}$  (19)

where | ∙ | represents the absolute value, $f_i^q$ is the i-th component of the feature vector $f^q$ extracted from the query image q, $f_i^d$ is the i-th component of the feature vector $f^d$ extracted from a database image and stored in the feature database, M denotes the metric order, n represents the feature vector dimension, and $σ_i$ is the standard deviation of the i-th components over the entire prebuilt feature database.
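A minimal sketch of Eq. (19) and of ranking a feature database with it; the metric order M = 1 and the array names are illustrative assumptions.

```python
import numpy as np

def feature_distance(f_query, f_db, sigma, M=1):
    """Eq. (19): sum of sigma-normalized absolute differences of order M."""
    return np.sum(np.abs((f_query - f_db) / sigma) ** M)

# usage sketch: rank all database images by ascending distance to the query
# features_db: (num_images, dim) array built offline; sigma from that database
# sigma = features_db.std(axis=0) + 1e-12
# ranking = np.argsort([feature_distance(f_query, f, sigma) for f in features_db])
```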
The generally used measures of retrieval performance [26] are as follows:

$\mathrm{Precision}(q)=\dfrac{|A(q)\cap R(q)|}{|A(q)|}$  (20)

$\mathrm{Recall}(q)=\dfrac{|A(q)\cap R(q)|}{|R(q)|}$  (21)

where | ∙ | denotes the size of the set, q is a query image, A(q) represents the retrieved image set for the query image, and R(q) is the relevant image set for the query image.
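The two measures reduce to a few lines; a sketch assuming the retrieved and relevant sets are given as collections of image identifiers:

```python
def precision_recall(retrieved, relevant):
    """Eqs. (20)-(21): precision = |A(q) ∩ R(q)| / |A(q)|,
    recall = |A(q) ∩ R(q)| / |R(q)|."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# example: of 10 retrieved images, 7 are relevant; the group holds 90 images,
# so precision = 7/10 = 0.70 and recall = 7/90 ≈ 0.078
```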
The proposed Gabor local complex correlation method shows better retrieval performance than the one using the absolute value (magnitude), as shown in Fig. 2; the difference is approximately 6.6% when retrieving 10 images in the Corel database.
Fig. 2. Precision against recall for Gabor correlations using complex and absolute, respectively in Corel database.
Tables 2–4 show the average gains in precision and recall. The average gains of the proposed method are 29.43% in precision and 16.39% in recall in the Corel database, and 34.2% and 22.58% in the VisTex database, over the methods using one of our partial features. Over the methods using pairs of our partial features, the average gains are 5.75% and 3.31% for precision and recall, respectively, in the Corel database, and 4.26% and 3.07% in the VisTex database. Over the existing methods, the average gains are 21.81% and 12.21% for precision and recall, respectively, in the Corel database, and 7.07% and 5.15% in the VisTex database.

Table 2. Average gains (%) for precision and recall of the methods using one of our partial features
Methods | Corel precision | Corel recall | VisTex precision | VisTex recall
Gabor correlation | 18.14 | 10.16 | 9.03 | 6.43
UMLBP | 55.76 | 30.8 | 73.77 | 48.6
Color autocorrelogramHS | 14.38 | 8.2 | 19.8 | 13.57

Table 3. Average gains (%) for precision and recall of the methods using pairs of our partial features
Methods | Corel precision | Corel recall | VisTex precision | VisTex recall
Gabor correlation + UMLBP | 11.24 | 6.52 | 3.67 | 2.83
UMLBP + Color autocorrelogramHS | 5.2 | 3.12 | 5.83 | 4.07
Gabor correlation + Color autocorrelogramHS | 0.82 | 0.28 | 3.27 | 2.3

Table 4. Average gains (%) for precision and recall of the existing methods
Methods | Corel precision | Corel recall | VisTex precision | VisTex recall
Color histogram | 33.98 | 19.38 | 11.6 | 8.37
SCD | 11.98 | 6.44 | 7.27 | 5.17
CSD | 16.92 | 9.24 | 5.8 | 4.3
ULBP | 22.66 | 12.52 | 10.03 | 7.13
Color autocorrelogramRGB | 33.88 | 19.16 | 2.8 | 1.9
CLBPRGB | 21.8 | 12.1 | 10.3 | 7.7
CLBPRGB + Color autocorrelogramRGB | 15.84 | 8.94 | 3.6 | 3

Figs. 3–5 show graphs of precision against recall. The marked points correspond to 10, 30, 50, 70, and 89 retrieved images in the Corel database and to 5, 10, and 15 in the VisTex database. In Fig. 3(a) and 3(b), the 2D color autocorrelogram produces 64.44% and 67.33% average precision in the Corel and VisTex databases, respectively, making it the best color feature in the experiments. In Fig. 4(a) and 4(b), among the methods using a pair of our partial features, "Gabor correlation + UMLBP" produces 67.58% and 83.47% average precision in the Corel and VisTex databases, respectively, which is the best harmony of texture features. "UMLBP + Color autocorrelogramHS" provides 73.62% and 81.3% average precision, and "Gabor correlation + Color autocorrelogramHS" provides 78% and 83.87% average precision in the Corel and VisTex databases, respectively.
Fig. 5(a) and 5(b) show the retrieval performance of the proposed method against the existing methods. As shown in these figures, the combination of CLBPRGB and Color autocorrelogramRGB in the RGB color space yields relatively high retrieval performance, with 62.98% and 83.53% average precision in the Corel and VisTex databases, respectively; however, our method using the HSV color space outperforms these earlier methods. Furthermore, we confirmed that our method is superior by 3.27% in the Corel-1K database to the performance of the method demonstrated in a recent study [27]. Fig. 6(a) and 6(b) show the top five retrieved images in the Corel and VisTex databases, respectively; all retrieved images are similar to their queries.
Fig. 3. Precision against recall for ones of our partial features in (a) Corel and (b) VisTex databases.
Fig. 4. Precision against recall for pairs of our partial features in (a) Corel and (b) VisTex databases.
Fig. 5. Precision against recall for the proposed method and the existing methods in (a) Corel and (b) VisTex databases.
Fig. 6. Examples of query images and their top-five retrieved images in (a) Corel and (b) VisTex databases.


Conclusion

In this paper, we proposed a content-based image retrieval method using combined features extracted from the Gabor local complex correlation, UMLBP, and color autocorrelogram. The Gabor local complex correlation and UMLBP features were used as texture features, and color autocorrelogram features were used as color features. The experimental results show that the proposed method achieves excellent retrieval performance over the compared methods, which include the methods using subsets of our proposed features as well as the existing methods. This can be attributed mainly to the use of Gabor wavelets and to the harmony of the combined features.
In future research, we hope to develop advanced methods with lower complexity and improved retrieval performance by adding shape features and adopting recently popular deep learning and bag-of-words semantic technologies. We also plan to apply the method to image databases with different types of content.


Author’s Contributions

Hee-Hyung Bu wrote the manuscript. All the authors have reviewed the manuscript.


Funding

None.


Competing Interests

The authors declare that they have no competing interests.


Author Biography

Name : Hee-Hyung Bu
Affiliation : The School of Computer Science and Engineering, Kyungpook National University
Biography :
She received her B.S., M.S., and Ph.D. degrees in Computer Engineering from Mokpo National University, Jeonnam, Chonnam National University, Gwangju, and Kyungpook National University, Daegu, Korea, in 2004, 2006, and 2013, respectively. Since September 2019, she has been with the School of Computer Science and Engineering at Kyungpook National University, where she works as an invited professor. Her research interests include image retrieval, video compression, and image processing.

Name : Nam-Chul Kim
Affiliation : The School of Electronic Engineering, Kyungpook National University
Biography :
He received his B.S. degree in Electronic Engineering from Seoul National University, in 1978, and M.S. and Ph.D. degrees in Electrical Engineering from Korea Advanced Institute of Science and Technology, Seoul, Korea, in 1980 and 1984, respectively. Since March 1984, he has been with the School of Electronics Engineering at Kyungpook National University, Daegu, Korea, where he is currently a professor emeritus. During 1991–1992, he was a visiting scholar in the Department of Electrical and Computer Engineering, Syracuse University, Syracuse, NY. His research interests are image processing and computer vision, biomedical image processing, and image and video coding.

Name : Sung-Ho Kim
Affiliation : The School of Computer Science and Engineering, Kyungpook National University
Biography :
He received his B.S. degree in Electronics from Kyungpook National University, Korea, in 1981, and his M.S. and Ph.D. degrees in Computer Science from KAIST (Korea Advanced Institute of Science and Technology), Korea, in 1983 and 1994, respectively. He has been a faculty member of the School of Computer Science & Engineering at Kyungpook National University since 1986. His research interests include real-time image processing and telecommunication, multimedia systems, etc.


References

[1] C. Li, S. Zhao, K. Xiao, and Y. Wang, “Face recognition based on the combination of enhanced local texture feature and DBN under complex illumination conditions,” Journal of Information Processing Systems, vol. 14, no. 1, pp. 191-204, 2018.
[2] J. Huang, X. Wang, and J. Wang, “Gait recognition algorithm based on feature fusion of GEI dynamic region and Gabor wavelets,” Journal of Information Processing Systems, vol. 14, no. 4, pp. 892-903, 2018.
[3] S. Zhou and S. Xiao, “3D face recognition: a survey,” Human-centric Computing and Information Sciences, vol. 8, article no. 35, 2018. https://doi.org/10.1186/s13673-018-0157-2
[4] A. Sun, Y. Li, Y. M. Huang, Q. Li, and G. Lu, “Facial expression recognition using optimized active regions,” Human-centric Computing and Information Sciences, vol. 8, article no. 33, 2018. https://doi.org/10.1186/s13673-018-0156-3
[5] J. G. Daugman, “Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, no. 7, pp. 1169-1179, 1988.
[6] B. S. Manjunath and W. Y. Ma, “Texture features for browsing and retrieval of image data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 837-842, 1996.
[7] C. Joshi and S. Mukherjee, “Empirical analysis of SIFT, Gabor and fused feature classification using SVM for multispectral satellite image retrieval,” in Proceedings of 2017 4th International Conference on Image Information Processing (ICIIP), Shimla, India, 2017, pp. 1-6.
[8] N. Jain and S. S. Salankar, “Content based image retrieval using improved Gabor wavelet transform and linear discriminant analysis,” in Proceedings of 2018 3rd International Conference for Convergence in Technology (I2CT), Pune, India, 2018, pp. 1-4.
[9] Z. C. Huang, P. P. Chan, W. W. Ng, and D. S. Yeung, “Content-based image retrieval using color moment and Gabor texture feature,” in Proceedings of 2010 International Conference on Machine Learning and Cybernetics, Qingdao, China, 2010, pp. 719-724.
[10] J. Han and K. K. Ma, “Rotation-invariant and scale-invariant Gabor features for texture image retrieval,” Image and Vision Computing, vol. 25, no. 9, pp. 1474-1481, 2007.
[11] M. H. Rahman, M. R. Pickering, M. R. Frater, and D. Kerr, “Texture feature extraction method for scale and rotation invariant image retrieval,” Electronics Letters, vol. 48, no. 11, pp. 626-627, 2012.
[12] G. Chen, N. Chen, and X. Lin, “The image retrieval based on scale and rotation-invariant texture features of Gabor wavelet transform,” in Proceedings of 2013 4th World Congress on Software Engineering, Hong Kong, China, 2013, pp. 340-344.
[13] J. Huang, S. R. Kumar, M. Mitra, W. J. Zhu, and R. Zabih, “Image indexing using color correlograms,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, 1997, pp. 762-768.
[14] K. T. Ahmed, S. Ummesafi, and A. Iqbal, “Content based image retrieval using image features information fusion,” Information Fusion, vol. 51, pp. 76-99, 2019.
[15] A. Shakarami and H. Tarrah, “An efficient image descriptor for image classification and CBIR,” Optik, vol. 214, article no. 164833, 2020. https://doi.org/10.1016/j.ijleo.2020.164833
[16] S. Kan, Y. Cen, Z. He, Z. Zhang, L. Zhang, and Y. Wang, “Supervised deep feature embedding with handcrafted feature,” IEEE Transactions on Image Processing, vol. 28, no. 12, pp. 5809-5823, 2019.
[17] L. Wang, X. Qian, Y. Zhang, J. Shen, and X. Cao, “Enhancing sketch-based image retrieval by CNN semantic re-ranking,” IEEE Transactions on Cybernetics, vol. 50, no. 7, pp. 3330-3342, 2020.
[18] M. J. J. Ghrabat, G. Ma, Z. A. Abduljabbar, M. A. Al Sibahee, and S. J. Jassim, “Greedy learning of Deep Boltzmann Machine (GDBM)’s variance and search algorithm for efficient image retrieval,” IEEE Access, vol. 7, pp. 169142-169159, 2019.
[19] H. H. Bu, N. C. Kim, B. H. Lee, and S. H. Kim, “Content-based image retrieval using texture features extracted from local energy and local correlation of Gabor transformed images,” Journal of Information Processing Systems, vol. 13, no. 5, pp. 1372-1381, 2017.
[20] Z. Guo, L. Zhang, and D. Zhang, “A completed modeling of local binary pattern operator for texture classification,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1657-1663, 2010.
[21] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.
[22] Y. D. Chun, N. C. Kim, and I. H. Jang, “Content-based image retrieval using multiresolution color and texture features,” IEEE Transactions on Multimedia, vol. 10, no. 6, pp. 1073-1084, 2008.
[23] R. Picard, C. Graczyk, S. Mann, J. Wachman, L. Picard, and L. Campbell, “Vision Texture,” Massachusetts Institute of Technology, Cambridge, MA, 1995 [Online]. Available: https://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html.
[24] J. Z. Wang, J. Li, and G. Wiederhold, “SIMPLIcity: semantics-sensitive integrated matching for picture libraries,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 9, pp. 947-963, 2001.
[25] W. Y. Ma and B. S. Manjunath, “A comparison of wavelet transform features for texture image annotation,” in Proceedings of International Conference on Image Processing, Washington, DC, 1995, pp. 256-259.
[26] D. Comaniciu, P. Meer, K. Xu, and D. Tyler, “Retrieval performance improvement through low rank corrections,” in Proceedings IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL), Fort Collins, CO, 1999, pp. 50-54.
[27] S. R. Dubey, S. K. Singh, and R. K. Singh, “Multichannel decoded local binary patterns for content-based image retrieval,” IEEE Transactions on Image Processing, vol. 25, no. 9, pp. 4018-4032, 2016.

About this article
Cite this article

Hee-Hyung Bu, Nam-Chul Kim, and Sung-Ho Kim, "Content-Based Image Retrieval Using a Combination of Texture and Color Features," Human-centric Computing and Information Sciences, vol. 11, article no. 23, 2021. https://doi.org/10.22967/HCIS.2021.11.023

  • Received: 29 March 2020
  • Accepted: 22 December 2020
  • Published: 30 May 2021