2024 | OriginalPaper | Book Chapter

Plant Data Generation with Generative AI: An Application to Plant Phenotyping

Authors: Swati Bhugra, Siddharth Srivastava, Vinay Kaushik, Prerana Mukherjee, Brejesh Lall

Published in: Applications of Generative AI

Publisher: Springer International Publishing


Abstract

Plant phenotyping is the study of plants' physiological, morphological, and biochemical traits that result from their interaction with the environment. These traits (e.g., leaf area, leaf count, tillering, and wilting) are crucial in current plant research, which focuses on improving plant quality, i.e., disease resistance, drought resistance, and productivity. With advances in sensor technologies, image-based analysis via various computer vision methods (e.g., image classification, segmentation, and object detection) has emerged in plant phenotyping. Specifically, state-of-the-art deep learning models have been employed for high-throughput study of plant traits. However, the application of deep learning models is currently limited by the high variability of plant traits across species and by unstructured plant imaging. Additionally, complex plant traits incur high data collection and annotation costs. In this context, generative artificial intelligence (AI), built on the evolution of generative adversarial networks (GANs) for data synthesis, can relieve the current bottlenecks of data scarcity and the plant-species gap. This chapter reviews the application of state-of-the-art GANs to plant image datasets (e.g., leaf, weed, and disease imagery). It also discusses current challenges and future directions for generative AI in agricultural data synthesis.
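The chapter surveys GAN-based synthesis rather than prescribing a single implementation, but the core adversarial recipe is compact enough to sketch. Below is a minimal DCGAN-style generator/discriminator pair with one training step in PyTorch; the architecture, the 64×64 image size, and all hyperparameters are illustrative assumptions, not the models evaluated in the chapter. Images sampled from a trained generator could then be mixed into a scarce plant dataset (e.g., leaf or disease imagery) to augment a downstream classifier or segmenter.

```python
# Minimal sketch of GAN-based plant image synthesis (assumes PyTorch).
# Architecture and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector to a 64x64 RGB image (DCGAN-style)."""
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),  # -> (B, 3, 64, 64)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """Scores 64x64 RGB images as real vs. synthetic (one logit per image)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 8, 1, 0),  # 8x8 feature map -> single logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).view(-1)

def train_step(gen, disc, real, opt_g, opt_d, z_dim: int = 100):
    """One adversarial update with the non-saturating BCE loss."""
    bce = nn.BCEWithLogitsLoss()
    z = torch.randn(real.size(0), z_dim)
    fake = gen(z)
    # Discriminator step: push real images toward 1, synthetic toward 0.
    opt_d.zero_grad()
    loss_d = (bce(disc(real), torch.ones(real.size(0)))
              + bce(disc(fake.detach()), torch.zeros(real.size(0))))
    loss_d.backward()
    opt_d.step()
    # Generator step: make the discriminator label synthetic images as real.
    opt_g.zero_grad()
    loss_g = bce(disc(fake), torch.ones(real.size(0)))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

In practice the surveyed works replace this vanilla objective with conditional, cycle-consistent, or Wasserstein variants, but the generator/discriminator loop above is the shared foundation.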

Metadata
Title
Plant Data Generation with Generative AI: An Application to Plant Phenotyping
Authors
Swati Bhugra
Siddharth Srivastava
Vinay Kaushik
Prerana Mukherjee
Brejesh Lall
Copyright year
2024
DOI
https://doi.org/10.1007/978-3-031-46238-2_26
