Keywords: UAV images, smart agriculture.
@article{VSPUI_2024_20_1_a2,
author = {A. E. Molin and I. S. Blekanov and E. P. Mitrofanov and O. A. Mitrofanova},
title = {Synthetic data generation methods for training neural networks in the task of segmenting the level of crop nitrogen status in images of unmanned aerial vehicles in an agricultural field},
journal = {Vestnik Sankt-Peterburgskogo universiteta. Prikladna\^a matematika, informatika, processy upravleni\^a},
pages = {20--33},
year = {2024},
volume = {20},
number = {1},
language = {ru},
url = {http://geodesic.mathdoc.fr/item/VSPUI_2024_20_1_a2/}
}
TY - JOUR
AU - A. E. Molin
AU - I. S. Blekanov
AU - E. P. Mitrofanov
AU - O. A. Mitrofanova
TI - Synthetic data generation methods for training neural networks in the task of segmenting the level of crop nitrogen status in images of unmanned aerial vehicles in an agricultural field
JO - Vestnik Sankt-Peterburgskogo universiteta. Prikladnaâ matematika, informatika, processy upravleniâ
PY - 2024
SP - 20
EP - 33
VL - 20
IS - 1
UR - http://geodesic.mathdoc.fr/item/VSPUI_2024_20_1_a2/
LA - ru
ID - VSPUI_2024_20_1_a2
ER -
%0 Journal Article
%A A. E. Molin
%A I. S. Blekanov
%A E. P. Mitrofanov
%A O. A. Mitrofanova
%T Synthetic data generation methods for training neural networks in the task of segmenting the level of crop nitrogen status in images of unmanned aerial vehicles in an agricultural field
%J Vestnik Sankt-Peterburgskogo universiteta. Prikladnaâ matematika, informatika, processy upravleniâ
%D 2024
%P 20-33
%V 20
%N 1
%U http://geodesic.mathdoc.fr/item/VSPUI_2024_20_1_a2/
%G ru
%F VSPUI_2024_20_1_a2
A. E. Molin; I. S. Blekanov; E. P. Mitrofanov; O. A. Mitrofanova. Synthetic data generation methods for training neural networks in the task of segmenting the level of crop nitrogen status in images of unmanned aerial vehicles in an agricultural field. Vestnik Sankt-Peterburgskogo universiteta. Prikladnaâ matematika, informatika, processy upravleniâ, Volume 20 (2024) no. 1, pp. 20-33. http://geodesic.mathdoc.fr/item/VSPUI_2024_20_1_a2/
[1] Yang S., Chen Q., Yuan X., Liu X., “Adaptive coherency matrix estimation for polarimetric SAR imagery based on local heterogeneity coefficients”, IEEE Transactions on Geoscience and Remote Sensing, 56 (2016), 6732–6745 | DOI
[2] Kussul N., Lavreniuk M., Skakun S., Shelestov A., “Deep learning classification of land cover and crop types using remote sensing data”, IEEE Geoscience and Remote Sensing Letters, 14 (2017), 778–782 | DOI
[3] Jadhav J. K., Singh R. P., “Automatic semantic segmentation and classification of remote sensing data for agriculture”, Mathematical Models in Engineering, 4 (2018), 112–137 | DOI
[4] Dechesne C., Mallet C., Le Bris A., Gouet-Brunet V., “Semantic segmentation of forest stands of pure species as a global optimization problem”, ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 4 (2017), 141–148 | DOI
[5] Zou K., Chen X., Wang Y., Zhang C., Zhang F., “A modified U-Net with a specific data argumentation method for semantic segmentation of weed images in the field”, Computers and Electronics in Agriculture, 187 (2021), 106242 | DOI | MR
[6] Anand T., Sinha S., Mandal M., Chamola V., Yu F., “AgriSegNet: Deep aerial semantic segmentation framework for IoT-assisted precision agriculture”, IEEE Sensors Journal, 21 (2021), 17581–17590
[7] Singh P., Verma A., Alex J., “Disease and pest infection detection in coconut tree through deep learning techniques”, Computers and Electronics in Agriculture, 182 (2021), 105986 | DOI
[8] Zhao S., Liu J., Bai Z., Hu C., Jin Y., “Crop pest recognition in real agricultural environment using convolutional neural networks by a parallel attention mechanism”, Frontiers in Plant Science, 13 (2022), 1–14
[9] Blekanov I., Molin A., Zhang D., Mitrofanov E., Mitrofanova O., Yin L., “Monitoring of grain crops nitrogen status from UAV multispectral images coupled with deep learning approaches”, Computers and Electronics in Agriculture, 212 (2023), 108047 | DOI
[10] Salas E. A. L., Subburayalu S. K., Slater B., Dave R., Parekh P., Zhao K., Bhattacharya B., “Assessing the effectiveness of ground truth data to capture landscape variability from an agricultural region using Gaussian simulation and geostatistical techniques”, Heliyon, 7:7 (2021), e07439 | DOI
[11] Lynda D., Brahim F., Hamid S., Hamadoun C., “Towards a semantic structure for classifying IoT agriculture sensor datasets: an approach based on machine learning and web semantic technologies”, Journal of King Saud University — Computer and Information Sciences, 35:8 (2023), 101700 | DOI
[12] Wang H., Ding J., He S., Feng C., Zhang C., Fan G., Wu Y., Zhang Y., “MFBP-UNet: A network for pear leaf disease segmentation in natural agricultural environments”, Plants, 12 (2023), 3209 | DOI | MR
[13] Sa I., Popovic M., Khanna R., Chen Z., Lottes P., Liebisch F., Nieto J., Stachniss C., Walter A., Siegwart R., “WeedMap: A large-scale semantic weed mapping framework using aerial multispectral imaging and deep neural network for precision farming”, Remote Sensing, 10 (2018), 1423 | DOI
[14] Nasiri A., Omid M., Taheri-Garavand A., Jafari A., “Deep learning-based precision agriculture through weed recognition in sugar beet fields”, Sustainable Computing: Informatics and Systems, 35 (2022), 100759 | DOI
[15] Takahashi R., Matsubara T., Uehara K., “Data augmentation using random image cropping and patching for deep CNNs”, IEEE Transactions on Circuits and Systems for Video Technology, 30 (2020), 2917–2931 | DOI
[16] Su D., Kong H., Qiao Y., Sukkarieh S., “Data augmentation for deep learning based semantic segmentation and crop-weed classification in agricultural robotics”, Computers and Electronics in Agriculture, 190 (2021), 106418 | DOI
[17] Picon A., San-Emeterio M. G., Bereciartua-Perez A., Klukas C., Eggers T., Navarra-Mestre R., “Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets”, Computers and Electronics in Agriculture, 194 (2022), 106719 | DOI
[18] Venkataramanan A., Faure-Giovagnoli P., Regan C., Heudre D., Figus C., Usseglio-Polatera P., Pradalier C., Laviale M., “Usefulness of synthetic datasets for diatom automatic detection using a deep-learning approach”, Engineering Applications of Artificial Intelligence, 117:B (2023), 105594 | DOI
[19] Yang S., Zheng L., Yang H., Zhang M., Wu T., Sun S., Tomasetto F., Wang M., “A synthetic datasets based instance segmentation network for high-throughput soybean pods phenotype investigation”, Expert Systems with Applications, 192 (2022), 116403 | DOI
[20] Abbas A., Jain S., Gour M., Vankudothu S., “Tomato plant disease detection using transfer learning with C-GAN synthetic images”, Computers and Electronics in Agriculture, 187 (2021), 106279 | DOI
[21] Tempelaere A., Van De Looverbosch T., Kelchtermans K., Verboven P., Tuytelaars T., Nicolai B., “Synthetic data for X-ray CT of healthy and disordered pear fruit using deep learning”, Postharvest Biology and Technology, 200 (2023), 112342 | DOI
[22] Ronneberger O., Fischer P., Brox T., “U-Net: Convolutional networks for biomedical image segmentation”, Medical Image Computing and Computer-Assisted Intervention — MICCAI 2015, Lecture Notes in Computer Science, 9351, 2015, 234–241 | DOI
[23] Oktay O., Schlemper J., Folgoc L., Lee M., Heinrich M., Misawa K., Mori K., McDonagh S., Hammerla N. Y., Kainz B., Glocker B., Rueckert D., Attention U-Net: Learning where to look for the pancreas, 2018, arXiv: 1804.03999
[24] Alom Z., Hasan M., Yakopcic C., Taha T. M., Asari V. K., Recurrent residual convolutional neural network based on U-Net (R2U-Net) for medical image segmentation, 2018, arXiv: 1802.06955
[25] Zhou Z., Siddiquee M. R., Tajbakhsh N., Liang J., UNet++: A nested U-Net architecture for medical image segmentation, 2018, arXiv: 1807.10165
[26] Huang H., Lin L., Tong R., Hu H., Zhang Q., Iwamoto Y., Han X., Chen Y. W., Wu J., UNet 3+: A full-scale connected UNet for medical image segmentation, 2020, arXiv: 2004.08790
[27] Chen J., Lu Y., Yu Q., Luo X., Adeli E., Wang Y., Lu L., Yuille A. L., Zhou Y., TransUNet: Transformers make strong encoders for medical image segmentation, 2021, arXiv: 2102.04306
[28] Hatamizadeh A., Tang Y., Nath V., Yang D., Myronenko A., Landman B., Roth H., Xu D., “UNETR: Transformers for 3D medical image segmentation”, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022, 1748–1758
[29] Breiman L., “Random forests”, Machine Learning, 45 (2001), 5–32 | DOI | MR | Zbl
[30] Chen T., Guestrin C., “XGBoost: A scalable tree boosting system”, Proceedings of the 22$^{\rm nd}$ ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, 785–794