RGB-Based ML System for Atmospheric Visibility Estimation
Visibility degradation causes many road accidents and air crashes around the globe. It arises from multiple key factors that prevent an observer from clearly seeing what lies ahead, and it plays a crucial role in aviation safety. Consequently, researchers and engineers have developed many tools to measure visibility or to restore images affected by such degradation. This paper compares image-based deep learning architectures for atmospheric visibility range classification. A Vision Transformer trained from scratch and three pre-trained Convolutional Neural Network (CNN) models were evaluated and contrasted in terms of accuracy, all reaching a validation accuracy above 95%. Our experiments show that models trained on the Federal Aviation Administration (FAA) initial dataset can support an efficient tool to aid flight control in long-range visibility estimation. DenseNet121 was the best-performing model, reaching 99% training accuracy and 98% validation accuracy with early convergence, while the Vision Transformer continued to improve throughout training but remained slightly behind DenseNet121 by the end of the experiment.
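For illustration, the sketch below shows one way a pre-trained DenseNet121 could be adapted for visibility-range classification via transfer learning. This is a minimal, hypothetical setup rather than the authors' code: the number of visibility classes, input resolution, batch size, and learning rate are assumptions, not values reported in the paper.

```python
# Minimal sketch (not the authors' implementation): adapting an ImageNet-
# pre-trained DenseNet121 to a visibility-range classification task.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of visibility-range bins

# Load DenseNet121 with ImageNet weights and swap in a new classifier head.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

# Standard multi-class classification objective and optimizer (assumed values).
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch of RGB images (batch, channels, H, W).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```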