Journal of Real-Time Image Processing, vol. 20, no. 2, 2023 (SCI-Expanded)
Drivable area detection is an important component of all levels of autonomous driving, from advanced driver assistance systems (ADAS) to fully automated vehicles. A drivable area detection system detects the road segment in front of the vehicle so that it can drive freely and safely. Using LIght Detection And Ranging (LIDAR) sensors or cameras, these systems need to identify areas free of vehicles, pedestrians, and other objects that constitute obstacles to the vehicle's movement. Because such areas can vary from asphalt to dirt roads, with or without lane markings and with many obstacle configurations, learning-based approaches trained on large datasets have provided effective algorithms. While accuracy is of high importance, the training and runtime complexity of these methods also matter. In this work, we propose a deep learning-based method that detects the drivable area from a single image, achieving comparable accuracy with improved training and runtime performance. The model splits the given image into thin slices, each of which is processed by a simple convolutional network regressor that models the drivable area with a single parameter. Experiments on benchmark data show accuracy comparable to the literature together with improved runtime performance: the method runs at 237 fps with 92.55% detection performance on a Titan XP GPU, and provides similar detection performance at above 30 fps on a low-cost Jetson Nano module. Our code is available at https://github.com/Acuno41/D3NET.
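To make the slice-and-regress idea concrete, the following is a minimal sketch, not the authors' D3NET implementation: the slice width, the tiny CNN architecture, and the choice of parameter (here a normalized boundary height per vertical slice) are all assumptions introduced only for illustration.

```python
import torch
import torch.nn as nn

class SliceRegressor(nn.Module):
    """Tiny CNN mapping one vertical image slice to a single scalar
    (assumed here: the height of the drivable-area boundary in that slice)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to a 32-dim descriptor
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, slice_width) -> (batch, 1), normalized to [0, 1]
        return self.head(self.features(x))

def predict_boundary(image: torch.Tensor, model: SliceRegressor,
                     slice_width: int = 16) -> torch.Tensor:
    """Split an RGB image (3, H, W) into vertical slices and regress one
    boundary value per slice; returns a tensor of shape (num_slices,)."""
    _, h, w = image.shape
    slices = [image[:, :, i:i + slice_width]
              for i in range(0, w - slice_width + 1, slice_width)]
    batch = torch.stack(slices)               # (num_slices, 3, H, slice_width)
    with torch.no_grad():
        out = model(batch).squeeze(1)         # one scalar per slice
    return out * h                            # scale back to pixel rows

if __name__ == "__main__":
    model = SliceRegressor().eval()
    img = torch.rand(3, 256, 512)             # dummy RGB image
    print(predict_boundary(img, model).shape)  # torch.Size([32])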