1st International Conference on Emerging Technologies for Dependable Internet of Things, ICETI 2024, Sana'a, Yemen, 25-26 November 2024, (Full Text Paper)
Imaging in complex scattering media, such as fog or turbid environments, presents significant challenges due to random light scattering, which distorts images and obscures object details. Traditional techniques have struggled to produce high-quality reconstructions, especially when dealing with optically thick or dynamic scattering media. In this paper, we present a deep-learning-based framework that includes an optical system designed to generate severely degraded speckle patterns, and we fine-tune a U-Net architecture to recover the original images from these highly distorted patterns. Our approach combines multi-scale feature extraction with a Mean Squared Error (MSE) loss to improve structural detail and overall image quality. Comprehensive evaluations were performed using a variety of metrics, including the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), demonstrating that the proposed model consistently outperforms existing methods, including GAN-based approaches, in reconstructing heavily scattered images. Quantitatively, our model achieves an SSIM of 0.91 and a PSNR of 23.67 dB, substantially higher than those produced by competing models. Qualitative comparisons confirm improved reconstruction of fine details and sharper edges.
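To make the training objective and evaluation protocol concrete, the sketch below illustrates, under stated assumptions, how an MSE-trained encoder-decoder of the kind described above could be set up and scored with PSNR/SSIM. This is not the authors' implementation: the TinyUNet placeholder, its layer sizes, and the train_step/evaluate helpers are hypothetical stand-ins for the fine-tuned U-Net and the paper's data pipeline; only standard PyTorch and scikit-image calls are used.

```python
import torch
import torch.nn as nn
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


class TinyUNet(nn.Module):
    """Illustrative two-level encoder-decoder with a residual skip path
    (a minimal stand-in for the full multi-scale U-Net in the paper)."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual connection: the network predicts a correction to the speckle input.
        return self.dec(self.enc(x)) + x


model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
mse = nn.MSELoss()  # pixel-wise reconstruction loss, as named in the abstract


def train_step(speckle, target):
    """One optimisation step: map a speckle pattern to its clean target image.
    Both tensors are assumed to have shape (batch, 1, H, W) with even H and W."""
    optimizer.zero_grad()
    recon = model(speckle)
    loss = mse(recon, target)
    loss.backward()
    optimizer.step()
    return loss.item()


def evaluate(recon, target):
    """PSNR (in dB) and SSIM for a single reconstructed grayscale image,
    given 2-D numpy arrays scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(target, recon, data_range=1.0)
    ssim = structural_similarity(target, recon, data_range=1.0)
    return psnr, ssim
```

In this kind of setup, the MSE loss drives pixel-level fidelity during training, while PSNR and SSIM are computed only at evaluation time to compare reconstructions against ground-truth images, matching the metrics reported above.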