Keywords
3D-Sec, Deepfake Detection, Disinformation Warfare, Artificial Intelligence in Conflicts, Context-Aware NLP, Multidimensional Knowledge Framework, Deep Learning
Abstract
The rapid growth of artificial intelligence (AI) has transformed contemporary information warfare, especially in unstable and conflict-affected regions. Among the most significant threats is the "3D-Sec" triad of Deepfake, Deception, and Disinformation, which exploits synthetic media, psychological manipulation, and fabricated narratives to erode digital trust, disrupt institutions, distort learning, and influence cognitive and socio-psychological behavior. This research examines the role of 3D-Sec in conflict zones and introduces an AI-driven framework designed to identify and mitigate its repercussions on security, governance, education, cognition, and public perception. As AI-generated content and NLP systems become increasingly realistic, the boundary between reality and fabrication blurs. Existing detection techniques often lack contextual understanding and fail to address the complex, multidimensional characteristics of 3D-Sec threats, including their cognitive and educational repercussions. Unchecked 3D-Sec campaigns endanger privacy, hinder peace efforts, misinform learners, and contribute to socio-psychological stress and geopolitical instability. Understanding these mechanisms is essential to protecting vulnerable populations, safeguarding education, and ensuring the integrity of digital communication in conflict settings. A Deep Learning–Natural Language Processing (DL–NLP) approach is proposed, based on the "Multidimensional Knowledge Framework for Data Analysis (6-W)". This framework incorporates six analytical dimensions, expressed as Wt = f(Wy, Wr, Wn, Wo, Wh), capturing the contextual, temporal, cognitive, and socio-psychological dependencies essential for detecting 3D-Sec activity. The proposed model improves detection capabilities by integrating semantic, situational, and cognitive indicators, enabling accurate recognition of AI-induced deception, disinformation, and manipulation of educational content. An interdisciplinary, context-aware defense system of this kind is vital for countering AI-powered information threats in war-torn regions, protecting cognition, education, and social resilience against manipulation and disinformation.
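The sketch below is a minimal, illustrative Python/PyTorch rendering of how a 6-W fusion of the form Wt = f(Wy, Wr, Wn, Wo, Wh) might be combined with a DL–NLP text encoder for 3D-Sec content detection. It is not the authors' implementation: the class name, the choice of an LSTM encoder, the five scalar context indicators, and all dimensions are hypothetical assumptions made purely for illustration.

```python
# Illustrative sketch (assumed architecture, not the paper's model): a text
# branch encodes the message, a context branch encodes five W-dimension
# indicators (why, where, when, who, how), and a fusion head scores the
# message as benign vs. 3D-Sec content.
import torch
import torch.nn as nn


class SixWDetector(nn.Module):
    def __init__(self, vocab_size: int = 10_000, embed_dim: int = 128,
                 hidden_dim: int = 64, num_context_features: int = 5):
        super().__init__()
        # Text branch: token embeddings encoded by an LSTM.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Context branch: one indicator per W-dimension (hypothetical encoding).
        self.context_proj = nn.Linear(num_context_features, hidden_dim)
        # Fusion head: realizes Wt = f(Wy, Wr, Wn, Wo, Wh) as a joint
        # classification over the text and context representations.
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # benign vs. 3D-Sec content
        )

    def forward(self, token_ids: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.encoder(embedded)   # hidden: (1, batch, hidden_dim)
        text_repr = hidden.squeeze(0)             # (batch, hidden_dim)
        ctx_repr = torch.relu(self.context_proj(context))
        return self.classifier(torch.cat([text_repr, ctx_repr], dim=-1))


# Example usage with random stand-in data.
model = SixWDetector()
tokens = torch.randint(1, 10_000, (4, 32))   # 4 messages, 32 tokens each
context = torch.rand(4, 5)                   # 5 W-dimension indicators per message
logits = model(tokens, context)
print(logits.shape)  # torch.Size([4, 2])
```

In practice, the text branch could be replaced by any pretrained NLP encoder and the context indicators by richer situational, temporal, and cognitive features; the point of the sketch is only the fusion of textual and 6-W contextual evidence described in the abstract.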
Technical information
Received: 2025-07-25 | Reviewed: 2025-08-18 | Accepted: 2025-08-21 | Online First: 2025-11-25 | Published: 2026-01-04
How to cite
Pascal Muam Mah. (2026). The role of deepfake, deception, and disinformation in conflict zones based on machine learning for NLP: A critical perspective of the AI era. Comunicar, 34(84). https://doi.org/10.5281/zenodo.18115680