Image-Based Object Detection Approaches to be Used
Russian Journal of Nonlinear Dynamics, Volume 18 (2022), no. 5, pp. 787-802.

See the article record from the Math-Net.Ru source

This paper investigates the problem of object detection for real-time agent navigation on embedded systems. In real-world applications, a compromise between accuracy and speed must be found. We describe the architectures of several object detection algorithms, such as R-CNN and YOLO, and compare them across different embedded platforms using different datasets. The result is a trade-off study of accuracy versus speed for these object detection algorithms, allowing an appropriate one to be chosen for a given application task.
Keywords: robot navigation, object detection, embedded systems, YOLO algorithms, R-CNN algorithms, object semantics.
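The accuracy-versus-speed trade-off discussed in the abstract is typically quantified by measuring inference throughput (frames per second) alongside detection accuracy. A minimal timing-harness sketch is shown below; the `detect` function here is a hypothetical stand-in for any real detector (e.g., a YOLO or Faster R-CNN model), not an API from the paper:

```python
import time

def detect(frame):
    # Hypothetical stand-in for a real detector (e.g., YOLO or Faster R-CNN).
    # A real implementation would run model inference and return a list of
    # (class_id, confidence, bounding_box) detections for the frame.
    return [(0, 0.9, (10, 10, 50, 50))]

def measure_fps(frames, warmup=2):
    # Run a few warm-up inferences first (model/GPU initialization often
    # distorts the first calls), then time the rest to estimate throughput.
    for frame in frames[:warmup]:
        detect(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        detect(frame)
    elapsed = time.perf_counter() - start
    return len(frames[warmup:]) / elapsed

if __name__ == "__main__":
    # Placeholder frames; real code would load images or capture video.
    frames = [None] * 12
    print(f"approx. {measure_fps(frames):.1f} FPS")
```

On an embedded board such as a Jetson-class device, the same harness applied to different detectors (and paired with a standard accuracy metric such as mAP on a validation set) yields the kind of accuracy/speed comparison the paper describes.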
@article{ND_2022_18_5_a3,
     author = {A. Ali Deeb and F. Shahhoud},
     title = {Image-Based {Object} {Detection} {Approaches} to be {Used}},
     journal = {Russian journal of nonlinear dynamics},
     pages = {787--802},
     publisher = {mathdoc},
     volume = {18},
     number = {5},
     year = {2022},
     language = {en},
     url = {http://geodesic.mathdoc.fr/item/ND_2022_18_5_a3/}
}
