Navigation and Guidance for Autonomous Quadcopter Drones Using Deep Learning on Indoor Corridors
DOI: https://doi.org/10.33795/jartel.v12i4.422

Keywords: Quadcopter, ResNet50V2, CNN, deep learning, Python, TensorFlow

Abstract
Autonomous drones require accurate navigation and localization algorithms to carry out their missions. Outdoors, drones can rely on GPS for navigation and localization; indoors, however, GPS is often unreliable or entirely unavailable. This research therefore develops an autonomous indoor drone navigation model based on a deep learning algorithm to navigate a drone automatically, particularly in indoor corridor areas. Only the Caddx Ratel 2 FPV camera mounted on the drone is used as the input to the deep learning models, which steer the drone forward through the corridor without colliding with the walls. The research produces two deep learning models: a rotational model that corrects the drone's orientation deviation, with a loss of 0.0010 and a mean squared error of 0.0009, and a translation model that corrects the drone's translation deviation, with a loss of 0.0140 and a mean squared error of 0.011. Implementing the two models on the autonomous drone yields an NCR value of 0.2. The results indicate that the mismatch in resolution and FOV between the images captured by the drone's FPV camera and the images used to train the deep learning models causes a discrepancy in the models' output values during deployment, which accounts for the low NCR value.
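The abstract describes two single-output regression CNNs evaluated with mean squared error, and the keywords point to a ResNet50V2 backbone in TensorFlow/Keras. The sketch below shows how such a deviation-regression model could be assembled; the input resolution, head size, and optimizer are illustrative assumptions, not details taken from the paper.

# Minimal sketch of one deviation-regression model (assumptions: 224x224 RGB
# input, one scalar output per model; layer sizes are illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50V2

def build_deviation_model(input_shape=(224, 224, 3)):
    """Maps a corridor image to a single deviation value
    (orientation for the rotational model, offset for the translation model)."""
    backbone = ResNet50V2(include_top=False, weights="imagenet",
                          input_shape=input_shape, pooling="avg")
    x = layers.Dense(64, activation="relu")(backbone.output)
    output = layers.Dense(1, activation="linear")(x)  # deviation estimate
    model = models.Model(inputs=backbone.input, outputs=output)
    # MSE serves as both the training loss and the reported metric.
    model.compile(optimizer="adam", loss="mse", metrics=["mse"])
    return model

# One model per deviation type, as described in the abstract.
rotational_model = build_deviation_model()
translation_model = build_deviation_model()

In deployment, each FPV frame would be resized to the model's training resolution before inference; the resolution and FOV mismatch the authors identify as the cause of the low NCR value would enter exactly at this preprocessing step.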
License
Copyright (c) 2022 Ahmad Wilda Yulianto, Dhandi Yudhit Yuniar, Yoyok Heru Prasetyo

This work is licensed under a Creative Commons Attribution 4.0 International License.