Guiding visually impaired people to find an object by using image to speech over the smart phone cameras


Denizgez T. M., Kamiloglu O., Kul S., Sayar A.

2021 International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2021, Kocaeli, Türkiye, 25 - 27 August 2021

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI Number: 10.1109/inista52262.2021.9548122
  • Published City: Kocaeli
  • Published Country: Türkiye
  • Keywords: Audio processing, Mobile Application, MobileNet-SSD, Object Detection, Tensorflow-Lite, Text-to-Speech, Visual Impaired, Voice Recognition
  • Kocaeli University Affiliated: Yes

Abstract

© 2021 IEEE. Visual impairment is one of the most common health conditions in the world. According to the World Health Organization, one in four people has a visual impairment, and this figure keeps rising, partly due to the growing use of technological devices. Many solutions for visually impaired people exist today, but most are expensive or impractical. In this paper, we propose a novel system that helps visually impaired people find an object. The system guides the user to an object with an image-to-speech technique over the smartphone camera, giving voice directions to the user. The novelty is using the user's hand as a reference object. The system detects the user's hand falling in the camera view, recognizes the target objects within the camera's field of view, and then guides the visually impaired person to the location of the target object through image-to-speech. The approach calculates the target object's position relative to the user's hand; the positions are converted into directions using deep learning and image processing techniques, and the outcome is spoken to the user. The system uses a Convolutional Neural Network (CNN) for object detection, based on the Single Shot MultiBox Detector (SSD) approach. For object detection on smartphones, SSD offers higher accuracy than You Only Look Once (YOLO) and a higher frame rate (fps) than Fast R-CNN, Faster R-CNN, or R-CNN. The TensorFlow-Lite model we use is based on SSD and was trained on the Common Objects in Context (COCO) dataset, which has ninety-one object classes.
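As a rough illustration of the detection step described above, the sketch below loads a MobileNet-SSD TensorFlow-Lite model and runs it on one RGB frame. The model file name, the use of desktop Python rather than the paper's mobile code, and the output tensor order all follow common TFLite SSD conventions; they are assumptions, not details taken from the paper.

```python
# Minimal sketch of the detection step, assuming a standard post-processed
# TFLite SSD model. The file name is a placeholder, not the paper's artifact.
import numpy as np
from PIL import Image
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet_coco.tflite")  # assumed file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()

def detect(frame_rgb: np.ndarray):
    """Run SSD on one RGB frame; return normalized boxes, class ids, scores."""
    h, w = inp["shape"][1], inp["shape"][2]
    img = np.array(Image.fromarray(frame_rgb).resize((w, h)))  # PIL wants (width, height)
    img = np.expand_dims(img, 0).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], img)
    interpreter.invoke()
    # Common TFLite SSD output order; verify against the actual model.
    boxes = interpreter.get_tensor(out[0]["index"])[0]    # [ymin, xmin, ymax, xmax], normalized
    classes = interpreter.get_tensor(out[1]["index"])[0]  # COCO class ids
    scores = interpreter.get_tensor(out[2]["index"])[0]
    return boxes, classes, scores
```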
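The guidance step can then be sketched by comparing the detected hand box with the target box and speaking the offset as a direction. The tolerance value, the wording of the phrases, and the use of pyttsx3 as a desktop stand-in for the phone's text-to-speech engine are illustrative assumptions; the paper does not spell out its exact rules.

```python
import pyttsx3

def center(box):
    """Center (cx, cy) of a normalized [ymin, xmin, ymax, xmax] box."""
    ymin, xmin, ymax, xmax = box
    return (xmin + xmax) / 2.0, (ymin + ymax) / 2.0

def direction_from_hand(hand_box, target_box, tol=0.05):
    """Phrase the target's offset from the hand as a spoken direction.
    Tolerance and wording are illustrative assumptions, not the paper's rules."""
    hx, hy = center(hand_box)
    tx, ty = center(target_box)
    parts = []
    if tx < hx - tol:
        parts.append("left")
    elif tx > hx + tol:
        parts.append("right")
    if ty < hy - tol:
        parts.append("up")      # image y grows downward, so smaller y means up
    elif ty > hy + tol:
        parts.append("down")
    if not parts:
        return "the object is at your hand"
    return "move your hand " + " and ".join(parts)

# pyttsx3 stands in for the phone's speech engine in this desktop sketch;
# on Android the same phrase would go to the platform TextToSpeech API.
engine = pyttsx3.init()
engine.say(direction_from_hand(hand_box=(0.4, 0.1, 0.7, 0.3),
                               target_box=(0.2, 0.6, 0.5, 0.8)))  # says "move your hand right and up"
engine.runAndWait()
```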