The Role of Deepfake, Deception, and Disinformation in Conflict Zones Based on DL for NLP: A Critical AI-Era Perspective
DOI: https://doi.org/10.5281/zenodo.18115680

Keywords: 3D-Sec, Deepfake Detection, Disinformation Warfare, Artificial Intelligence in Conflict, Context-Aware NLP, Multidimensional Knowledge Framework, Deep Learning

Abstract
The fast-paced growth of artificial intelligence (AI) has transformed contemporary information warfare, particularly in unstable, conflict-ridden regions. Among the most significant threats is the "3D-Sec" triad (Deepfake, Deception, and Disinformation), which exploits synthetic media, psychological manipulation, and fabricated narratives to weaken digital trust, disrupt institutions, distort learning, and influence cognitive and socio-psychological behavior. This research rigorously examines the role of 3D-Sec in conflict zones and introduces an AI-driven framework designed to identify and mitigate its repercussions on security, governance, education, cognition, and public perception. As NLP systems and AI-generated content become increasingly lifelike, the boundary between reality and fabrication blurs. Existing detection techniques often lack contextual understanding and fail to address the complex, multi-dimensional characteristics of 3D-Sec threats, including their cognitive and educational impacts. Uncontrolled 3D-Sec campaigns jeopardize privacy, obstruct peace efforts, misinform learners, and contribute to socio-psychological stress and geopolitical instability. Understanding these mechanisms is critical to protecting vulnerable populations, safeguarding education, and ensuring the integrity of digital communication during conflict. We propose a Deep Learning–Natural Language Processing (DL-NLP) approach grounded in the 'Multidimensional Knowledge Framework for Data Analysis (6-W)'. This framework incorporates six analytical dimensions, expressed as Wt = f(Wy, Wr, Wn, Wo, Wh), capturing the contextual, temporal, cognitive, and socio-psychological dependencies essential for detecting 3D-Sec activities. The proposed model enhances detection capabilities by integrating semantic, situational, and cognitive indicators, enabling accurate recognition of AI-induced deceit, misinformation, and educational content manipulation.
A context-sensitive, interdisciplinary defense system such as this is vital for combating AI-enhanced information threats in war-torn regions, protecting cognition, education, and societal resilience against manipulation and misinformation.
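The 6-W aggregation Wt = f(Wy, Wr, Wn, Wo, Wh) can be illustrated with a minimal sketch. The abstract does not publish an implementation, so the dimension names, their mapping to why/where/when/who/how, and the weighted-sum stand-in for the learned function f are all illustrative assumptions, not the authors' method:

```python
# Hypothetical sketch of 6-W score fusion; in the proposed framework,
# f(...) would be a learned DL-NLP model, not a fixed weighted sum.
from dataclasses import dataclass

@dataclass
class SixW:
    wy: float  # why   - inferred intent / motive score (assumed mapping)
    wr: float  # where - situational / geographic context score
    wn: float  # when  - temporal consistency score
    wo: float  # who   - source credibility score
    wh: float  # how   - linguistic / stylistic manipulation score

def wt(features: SixW, weights=(0.25, 0.15, 0.15, 0.25, 0.20)) -> float:
    """Aggregate the five dimension scores into a 3D-Sec risk score Wt
    in [0, 1]; the weights here are placeholders for learned parameters."""
    vals = (features.wy, features.wr, features.wn, features.wo, features.wh)
    return sum(w * v for w, v in zip(weights, vals))

risk = wt(SixW(wy=0.9, wr=0.6, wn=0.7, wo=0.8, wh=0.95))
print(round(risk, 3))  # -> 0.81
```

A real instantiation would replace each scalar with features extracted by the DL-NLP pipeline (semantic, situational, and cognitive indicators) and learn the fusion end to end.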
License
Copyright (c) 2026 Comunicar

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.