Volume 11, Issue 4 (2-2025)


Zavar M, Ghaffari H, Tabatabaee H. Enhancing HIFU Lesion Area Detection through Supervised and Contrastive Self-Supervised Learning with Wavelet-Based Feature Extraction and Hard Negatives (HWCSSL). jhbmi 2025; 11 (4)
URL: http://jhbmi.ir/article-1-897-en.html
Author affiliation: Associate Professor, PhD in Computer Engineering, Department of Computer Engineering, Ferdows Branch, Islamic Azad University, Ferdows, Iran
Abstract:
Introduction: Artificial intelligence (AI) has transformed classification, detection, and prediction across many domains through machine learning algorithms aimed at improving quality of life and service delivery. Conventional approaches that rely on manually engineered features often fall short in complex tasks such as medical diagnosis, chiefly because of suboptimal feature extraction and sensitivity to noise. Deep neural networks, with their capacity for automated feature extraction, have reshaped contemporary data analysis. This study applies that advance to detecting lesions generated by High-Intensity Focused Ultrasound (HIFU), a therapeutic technique developed for oncological treatment and hemorrhage management. The proposed methodology integrates supervised and self-supervised learning paradigms to improve detection accuracy by exploiting both labeled and unlabeled clinical datasets.
Method: A key challenge for traditional methods lies in optimal feature extraction and hyperparameter tuning. This research develops a framework that combines supervised and contrastive self-supervised learning. The model accepts both RF signals and B-mode images as inputs and handles labeled and unlabeled data simultaneously. Data augmentation techniques, including wavelet transforms and hard negative sampling, are employed (a minimal sketch of these two ingredients follows below). Model optimization relies on advanced optimization algorithms and hyperparameter fine-tuning to sustain performance in complex scenarios. This approach enables concurrent analysis of multimodal data, improving diagnostic accuracy.
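The abstract gives no implementation details, so the following is only a minimal, illustrative sketch (not the authors' code) of the two ingredients named above: wavelet-domain augmentation of RF signals and an InfoNCE-style contrastive loss restricted to mined hard negatives. All names (wavelet_augment, info_nce_hard) and parameter values are hypothetical, and the sketch assumes 1-D RF lines of length 1024.

import numpy as np
import pywt                      # PyWavelets, for the wavelet transforms
import torch
import torch.nn.functional as F

def wavelet_augment(rf_signal, wavelet="db4", level=3, noise_scale=0.05):
    # Decompose the RF line, jitter only the detail bands, and reconstruct;
    # the approximation band (coeffs[0]) is kept intact so the augmented
    # view preserves the signal's coarse structure.
    coeffs = pywt.wavedec(rf_signal, wavelet, level=level)
    aug = [coeffs[0]] + [c + noise_scale * np.std(c) * np.random.randn(*c.shape)
                         for c in coeffs[1:]]
    return pywt.waverec(aug, wavelet)[: len(rf_signal)]

def info_nce_hard(anchor, positive, negatives, temperature=0.1, top_k=16):
    # InfoNCE loss in which only the top_k most anchor-similar ("hardest")
    # negatives enter the denominator.
    anchor = F.normalize(anchor, dim=-1)        # (B, D)
    positive = F.normalize(positive, dim=-1)    # (B, D)
    negatives = F.normalize(negatives, dim=-1)  # (N, D)
    pos_sim = (anchor * positive).sum(-1, keepdim=True) / temperature  # (B, 1)
    neg_sim = anchor @ negatives.T / temperature                        # (B, N)
    hard_neg_sim, _ = neg_sim.topk(min(top_k, neg_sim.size(1)), dim=1)  # (B, k)
    logits = torch.cat([pos_sim, hard_neg_sim], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)

# Toy usage: two wavelet-augmented views of one RF line form a positive pair;
# a random pool stands in for embeddings of other samples.
rf = np.random.randn(1024)
view_a, view_b = wavelet_augment(rf), wavelet_augment(rf)
encoder = torch.nn.Sequential(torch.nn.Linear(1024, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, 128))
z_a = encoder(torch.tensor(view_a, dtype=torch.float32).unsqueeze(0))
z_b = encoder(torch.tensor(view_b, dtype=torch.float32).unsqueeze(0))
loss = info_nce_hard(z_a, z_b, torch.randn(64, 128))

In the paper's setting, the negative pool would presumably be built from embeddings of other RF lines or B-mode patches, with hardness measured by similarity to the anchor; the random pool above is a stand-in only.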
Results: The proposed model substantially improves classification of HIFU-induced lesions. Quantitative metrics, including accuracy, precision, recall, and F1-score (standard definitions are given below), consistently confirm the model's ability to distinguish healthy from pathological tissue regions. Multimodal integration of the signal-processing and image-analysis components markedly improves overall system performance. Coupling self-supervised learning with wavelet transform techniques strengthens the model's feature extraction, yielding higher diagnostic accuracy than conventional approaches.
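For reference, the reported metrics follow their standard definitions in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):

Accuracy  = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall    = TP / (TP + FN)
F1-score  = 2 · Precision · Recall / (Precision + Recall)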
Conclusion: This investigation presents a robust and precise framework for HIFU lesion detection that makes effective use of self-supervised learning. The system extracts clinically relevant information from unlabeled datasets while significantly improving diagnostic reliability. By addressing critical challenges in medical imaging, particularly for non-invasive therapeutic interventions, the approach shows considerable potential for enhancing clinical workflows and establishes a foundation for the future development of advanced diagnostic tools in therapeutic ultrasound.
     
Type of Study: Original Article | Subject: Artificial Intelligence in Healthcare
Received: 2024/11/2 | Accepted: 2024/12/25

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.