13.05.2024 | Research

A Pruning Method Combined with Resilient Training to Improve the Adversarial Robustness of Automatic Modulation Classification Models

Authors: Chao Han, Linyuan Wang, Dongyang Li, Weijia Cui, Bin Yan

Published in: Mobile Networks and Applications


Abstract

In the rapidly evolving landscape of wireless communication systems, the vulnerability of automatic modulation classification (AMC) models to adversarial attacks poses a significant security challenge. This study introduces a pruning and training methodology tailored to the nuances of signal processing in these systems. A pruning method based on channel activation contributions frees capacity for adversarial training, enhancing the model's ability to gain robustness against attacks. On top of this, the approach constructs a resilient training method from a composite strategy that integrates balanced adversarial training, soft target regularization, and gradient masking. This combination broadens the model's uncertainty space and obfuscates its gradients, strengthening its defenses against a wide spectrum of adversarial tactics. The training regimen is carefully tuned to retain sensitivity to adversarial inputs while preserving accuracy on clean data. Comprehensive evaluations on the RML2016.10A dataset demonstrate the method's effectiveness against both gradient-based and optimization-based attacks in wireless communication settings. This work offers practical approaches to improving the security and performance of AMC models against the complex, evolving threats in modern wireless environments.
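The two core ideas in the abstract can be illustrated in brief. First, a minimal sketch of channel scoring by activation contribution, assuming a PyTorch model and a calibration batch of I/Q samples shaped (N, 2, 128) as in RML2016.10A; the mean-absolute-activation proxy and all names below are illustrative, not the authors' exact contribution metric:

```python
import torch

def channel_activation_scores(model, conv_names, calib_batch):
    """Score each output channel of the named conv layers by its mean
    absolute activation on a calibration batch (an illustrative proxy
    for "channel activation contribution")."""
    scores, hooks = {}, []

    def make_hook(name):
        def hook(_module, _inputs, output):
            # Average |activation| over batch and spatial dims -> (channels,)
            dims = (0,) + tuple(range(2, output.dim()))
            scores[name] = output.detach().abs().mean(dim=dims)
        return hook

    for name, module in model.named_modules():
        if name in conv_names:
            hooks.append(module.register_forward_hook(make_hook(name)))

    model.eval()
    with torch.no_grad():
        model(calib_batch)
    for h in hooks:
        h.remove()
    return scores

def channel_masks(scores, prune_ratio):
    """Keep the highest-scoring (1 - prune_ratio) channels per layer."""
    masks = {}
    for name, s in scores.items():
        keep = torch.topk(s, max(1, int(s.numel() * (1 - prune_ratio)))).indices
        mask = torch.zeros(s.numel(), dtype=torch.bool)
        mask[keep] = True
        masks[name] = mask
    return masks
```

Second, a hedged sketch of one resilient-training step combining balanced adversarial training with soft-target regularization (label smoothing). The single-step FGSM-style inner attack, the mixing weight alpha, and the smoothing factor are assumptions, and the paper's gradient-masking component is not reproduced here:

```python
import torch
import torch.nn.functional as F

def resilient_step(model, optimizer, x, y, eps=0.01, alpha=0.5, smoothing=0.1):
    """One training step: balanced clean/adversarial loss with soft targets."""
    # Craft a single-step adversarial batch; an iterated PGD attack
    # would repeat this inner step.
    x_adv = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    x_adv = (x + eps * grad.sign()).detach()

    optimizer.zero_grad()
    # Soft-target regularization via label smoothing on both branches.
    loss_clean = F.cross_entropy(model(x), y, label_smoothing=smoothing)
    loss_adv = F.cross_entropy(model(x_adv), y, label_smoothing=smoothing)
    # Balanced objective: retain clean-signal accuracy while hardening
    # the model against perturbed inputs.
    loss = alpha * loss_clean + (1 - alpha) * loss_adv
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full pipeline one would presumably alternate pruning (via the masks above) with epochs of this resilient step, but the precise schedule is specific to the paper's method.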

Metadata
Title
A Pruning Method Combined with Resilient Training to Improve the Adversarial Robustness of Automatic Modulation Classification Models
Authors
Chao Han
Linyuan Wang
Dongyang Li
Weijia Cui
Bin Yan
Publication date
13.05.2024
Publisher
Springer US
Published in
Mobile Networks and Applications
Print ISSN: 1383-469X
Electronic ISSN: 1572-8153
DOI
https://doi.org/10.1007/s11036-024-02333-9