This thesis addresses the challenges of semantic image retrieval and labeled data scarcity
in Content-Based Image Retrieval (CBIR) by introducing SO-DRCNN, a novel Self-Optimizing
DeepRec Convolutional Neural Network framework. SO-DRCNN adopts a hybrid
approach that combines handcrafted features (the Ternion Paradigm: HOG, ICH, SERC)
with deep learned features. A pre-trained ResNet-50 backbone, enhanced with
Recurrent Patching (Bi-LSTM), Spatial Pyramid Pooling (SPP/ASPP), and Attention
mechanisms, extracts high-level semantic features. A key innovation is the Siamese-Driven Feature Fusion, where a Siamese network, trained with a contrastive loss, learns
to adaptively combine handcrafted and deep features, optimizing the fused representation
for similarity. This self-supervised training strategy (Auto-Embedder) eliminates the
need for manual image labels. Experiments on benchmark datasets demonstrate that SO-DRCNN
achieves state-of-the-art retrieval accuracy, outperforming traditional methods
and confirming the effectiveness of the learned fusion strategy. The system is also
integrated with Elasticsearch for scalable retrieval. This work contributes a robust,
efficient, and interpretable solution for semantic CBIR.
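For illustration, the contrastive loss that trains the Siamese fusion network can be sketched as follows. This is a minimal NumPy sketch of the standard pairwise contrastive loss, not the thesis implementation; the function name, margin value, and array shapes are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """Pairwise contrastive loss over a batch of embedding pairs.

    emb_a, emb_b: (N, D) arrays of paired embeddings.
    same: (N,) array with 1 for similar pairs, 0 for dissimilar pairs.
    Similar pairs are pulled together; dissimilar pairs are pushed
    apart until their distance reaches at least `margin`.
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)            # Euclidean distance per pair
    pos = same * d ** 2                                  # penalize distance between similar pairs
    neg = (1 - same) * np.maximum(0.0, margin - d) ** 2  # penalize closeness of dissimilar pairs
    return float(np.mean(pos + neg))
```

Minimizing this objective needs only pair labels (similar / dissimilar), which can be generated automatically, which is why such a training strategy does not require manually assigned image labels.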