Una mirada a los riesgos y amenazas de la inteligencia artificial, desde la Ecología de los Medios
From a historical perspective and a prospective analysis, the article aims to understand the role of technologies and their impact on society through the postulates of media ecology. Through this meta-discipline, we undertake a rigorous review of different authors who see technologies as playing a prominent role in shaping the future, because they not only influence the culture of societies but also impact the course, advancement, and meaning of history. The text focuses on the advantages and, above all, on the explanation of the risks of generative artificial intelligence, identifying eight critical scenarios: weaponization, disinformation, proxy games, undermining, value lock-in, unintended emergent goals, deception, and power-seeking behavior. Subsequently, CAIS regroups them into four threats: malicious use, the AI race, organizational risks, and uncontrolled AI. We end by drawing on McLuhan’s reflections and his tetrad, stressing the need to cool down technologies once they have reached high levels of development in order to minimize their negative impact. Although artificial intelligence has not yet reached that state, we warn about its accelerated evolution and the need for AI literacy as a measure to face these risks and threats, in the limited time available before it is too late.
Desde una perspectiva histórica y un análisis prospectivo, el artículo tiene como objetivo comprender el papel de las tecnologías y su impacto en la sociedad, a través de los postulados de la ecología de los medios. A través de esta metadisciplina, nos adentramos en la rigurosa revisión de diferentes autores que ven en las tecnologías un rol destacado en la configuración del futuro porque no solo influyen en la cultura de las sociedades, sino que también impactan en el curso, avance y significado de la historia. El texto se centra en las ventajas y, sobre todo, en la explicación de los riesgos de la inteligencia artificial generativa, identificando ocho escenarios críticos: armamento, desinformación, juegos de proxy, debilitamiento, bloqueo o retención de valor, metas emergentes no deseadas, engaño y comportamiento de búsqueda de poder. Posteriormente, el CAIS las reagrupa en cuatro amenazas: uso malicioso, la carrera de la IA, riesgos organizativos e IA descontrolada. Terminamos recuperando las reflexiones de McLuhan y su tétrada sobre la necesidad de enfriar las tecnologías cuando han alcanzado altos niveles de desarrollo para minimizar su impacto negativo. Si bien la inteligencia artificial no ha alcanzado ese estado, se advierte sobre la acelerada evolución y la necesidad de una alfabetización en IA como una medida para afrontar los riesgos y amenazas, eso sí, en un tiempo limitado antes de que sea tarde.
Keywords / Palabras Claves
Artificial Intelligence, Media Ecology, Intelligent Agents, McLuhan, Risks, Technologies.
Inteligencia Artificial, Ecología de los Medios, Agentes Inteligentes, McLuhan, Riesgos, Tecnología.
Media ecology (ME) is a complex meta-discipline that enables us to recognize, study, and understand, through history, the cultural environments resulting from technological changes. The historical perspective of ME is broad, as it examines the intricate ways in which technologies alter the cultural ecologies of societies. Thus, ME traces back to the slow evolution of the Homo species, which, after millions of years, with the development of the Sapiens family, introduced the first tools and utensils, and much later, achieved the domestication of fire and the invention of the phonetic alphabet (Aluthman, 2024; Logan, 2004; Ong, 1982). ME follows the development of technologies and media that, throughout history, have shaped societies. In the uncertainty of our times, ME should caution us about the risks that certain technologies, such as Artificial Intelligence (AI), may pose for the future of humanity.
The theoretical foundation of ME originates from the remarkable intellectual work of Canadian professor Marshall McLuhan, primarily during the 1960s (McLuhan, 1962, 1964; McLuhan & Fiore, 1967). McLuhan is recognized today as one of the most influential philosophers of communication in history. However, it is essential to understand that ME is by no means confined to the advanced theoretical contributions of a single individual. McLuhan’s reflections served as a starting point, allowing us to “identify and open up the territory” (Gordon, 2003; Kissinger, 2022; Logan, 2013; McLuhan & Carson, 2003; Wolfe, 2010). ME is not exhausted by the contributions of media ecologists who decided to continue along the path traced by McLuhan (Bolter & Grusin, 1999; Levinson, 1999; Logan, 2013; Logan, 2016; Meyrowitz, 1985; Postman, 1992; Strate & Wachtel, 2005).
The reflective horizons of Media Ecology (ME) represent spaces open to the encounter with complex thinking. Therefore, they are necessarily nourished by the findings that media ecologists draw from the complex system of the sciences (Luhmann, 1995) and, of course, the arts, ranging from mathematics and chemistry to music and dance. Such openness has been decisive in the evolution of our metadiscipline.
In the theoretical and conceptual framework of ME, history is fundamental. History has allowed us to recognize, recover, and incorporate valuable contributions from other territories of knowledge, which at first might seem distant or unrelated to our topics of study. General semantics (Anton & Strate, 2012; Korzybski, 1993; Rovira, Merzero, & Laucirica, 2022), for example, has enabled us to expand the breadth of the concept of “environment,” which Postman (1974) considered fundamental in ME (Strate, 2006). Initially, media ecologists were interested in analyzing the impact of media and technologies on media and cultural environments. However, general semantics allowed us to recognize less obvious and more complex environments, such as biophysical, verbal, semantic, neurolinguistic, and neurosemantic environments. Most importantly, it enabled us to affirm the organism as an environment in itself. Even today, we understand that a simple cell can be seen as a complex environment. Kauffman (1995) suggested the possibility of understanding and studying our universe as an environment, bringing ME closer to quantum physics. If we accept the possibility of other universes, as proposed by string-superstring theory (Susskind, 1994, 1999, 2003; Susskind, 2008), and understand that the resulting multiverse represents a set of environments, we will need to extend the reflective horizons of ME beyond the narrow limits of our current conceptual framework. This represents a significant ongoing challenge.
Another example of the results of our historical exploration is the discovery and recovery of the concept of “exaptation,” a term derived from evolutionary biology. The exaptive process was explained by Darwin (2010); however, the concept was introduced by Gould and Vrba (1982), who defined exaptation as “a characteristic that becomes adapted to a new function, but was not selected for that function” (p. 591). This term has supported research on the evolution of media and technologies, particularly from the perspective of remediation theory (Alkhazaleh et al., 2022; Bolter & Grusin, 1999).
In addition to methodically scrutinizing the past, media ecologists also need to engage in rigorous prospective analysis of the possible effects that new technologies may have on our societies. Technologies play a leading role in shaping the future. Technological changes not only modify the culture of societies; they can also alter the rhythm, development, and meaning of history. In Postman’s (1970) first formal definition of ME, the celebrated American sociologist and formidable critic of education affirmed the relevance of the contributions that our metadiscipline must make to help ensure human survival. Postman inferred that, eventually, in a possible future, some technology could come to represent a threat to humanity.
One of the immediate scenarios on which we focus our attention is the risk posed by the complex transhumanist imaginary (Bostrom, 2020; Merzlyakov, 2022), which can be considered a feasible environment. Another scenario that represents a serious threat to humanity is Artificial Intelligence (AI), which could integrate itself into our lives, profoundly transforming the cultural ecology of our societies and extending its influence over us in ways that may become irreversible.
According to Schwab (2016), the development of AI is part of the imaginary of the Fourth Industrial Revolution. However, it is also feasible to consider AI as a profound revolution in itself. Like any technology, AI has the potential to bring enormous benefits to societies, but this will depend on our ability to use it safely. Without regulations and controls, its accelerated development could pose an extreme, even lethal, risk to the human species, as argued by the Center for AI Safety (CAIS), a non-profit organization based in San Francisco, California, dedicated to AI safety research. CAIS compares the potential risks posed by AI to the lethal effects of pandemics and the dangers posed by nuclear war.
Regarding the origin of the concept of artificial intelligence, Ramos Pollán (2020) cites Moor (2006), who, in an article published in *AI Magazine*, noted that the term was first coined in 1956 during the “Dartmouth Summer Research Project on Artificial Intelligence.” However, in the same scientific journal, McCarthy et al. (2006) confirm Moor’s statement but also identify Claude Shannon as one of the fathers of AI, suggesting that Shannon himself may have proposed the term. Without necessarily attributing the origin of the term to Shannon, it is widely agreed that his contributions, along with information theory, were fundamental to the emergence and development of AI (Minsky & Papert, 1969; Widajanti, Nugroho, & Riyadi, 2022).
With the remarkable advancements in generative AI, the Turing Test has reentered contemporary scientific discourse. Alan Turing, recognized as one of the founding fathers of AI, proposed the Turing Test as a tool to evaluate a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. The Turing Test is therefore considered a resource for evaluating the development and potential impact of AI (Copeland & Proudfoot, 2004), particularly generative AI. From this perspective, generative AI passes the Turing Test when it successfully deceives a human into believing they are conversing with another human. While this approach is straightforward, it has limitations; the Turing Test is not a perfect criterion for measuring intelligence. Another approach is to treat the Turing Test as a research tool for studying how humans process language and how generative AI can mimic human language. This approach provides insights into generative AI’s ability to understand context and adapt to different scenarios and situations. A more critical perspective rejects the Turing Test altogether as a valid criterion for comparing intelligences: an AI might pass the test without actually being intelligent, simply by learning to fool humans.
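To make the evaluative logic of the Turing Test concrete, the following minimal Python sketch simulates a judge classifying conversation snippets as human or machine. The toy judge heuristic, the sample transcripts, and all identifiers are illustrative assumptions rather than any benchmark used by the authors cited above; the point is only that a machine “passes” this toy version of the test when the judge’s accuracy drops toward chance.

```python
# Minimal, hypothetical sketch of a Turing-Test-style evaluation.
# The heuristic and data below are invented for illustration only.

def judge(transcript: str) -> str:
    """Toy judge: guesses 'machine' if a reply looks formulaic, else 'human'."""
    formulaic = ("as an ai", "i do not have", "according to my data")
    return "machine" if any(p in transcript.lower() for p in formulaic) else "human"

# Simulated conversation snippets with ground-truth labels.
transcripts = [
    ("I stayed up too late again, coffee is doing the heavy lifting today.", "human"),
    ("As an AI, I do not have personal experiences of tiredness.", "machine"),
    ("Honestly, the match last night was heartbreaking.", "human"),
    ("According to my data, the match ended 2-1.", "machine"),
]

correct = sum(judge(text) == label for text, label in transcripts)
accuracy = correct / len(transcripts)

# The closer the judge's accuracy is to chance (0.5), the closer the
# machine is to "passing" this toy version of the test.
print(f"judge accuracy: {accuracy:.2f}")
```

In this toy setup the judge is still perfectly accurate; a more convincing generator would be one whose transcripts push that accuracy toward 0.5.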
We must recognize that AI systems have rapidly increased their capabilities, surprising even the experts themselves. AI models can generate text, images, sounds, and videos that are difficult to distinguish from content created by humans. This has encouraged the dangerous spread of the lucrative and unscrupulous disinformation industry. For instance, voice impersonation is a technique that enables the generation of audio recordings virtually identical to those of any real person. Although the use of voice impersonation systems and platforms offers significant advantages in developing virtual assistants, some repercussions are concerning due to their potential use in criminal activities. Voice impersonation techniques can be employed to commit various crimes, from creating fake audios to perpetrating telephone fraud. In political campaigns involving dirty propaganda, “deepfake” technology is already being used as an effective tool to damage the public image and reputation of politicians and institutions (Langguth et al., 2021).
However, the potential risks arising from the accelerated development of AI go beyond the uses it may find within a renewed criminal imaginary. The scientific community has expressed its concerns about the serious threats that may arise from the disorderly development of AI. Alarmed by the pace this development has reached, a group of notable scientists signed a statement in June 2023 warning about the risks that AI may pose.
In July 2023, researchers at the Center for AI Safety (CAIS) identified eight particularly critical scenarios: weaponization, disinformation, proxy games, undermining, value lock-in, unintended emergent goals, deception, and power-seeking behavior. The first scenario, concerning next-generation weaponry, ranges from the development and use of autonomous weapons to the possibility that terrorist groups or governments could pair AI with nuclear, chemical, or biological weapons to commit acts of large-scale terrorism or bioterrorism.
The second scenario addresses the serious problem of disinformation. In the past decade, the firm Cambridge Analytica (Kaiser, 2019; Phooi et al., 2022) achieved remarkable results in the political campaigns it participated in. The foundation of its successes lay in the use of Big Data, algorithms, and microsegmentation. Today, if we incorporate AI into this repertoire of resources, we will have more effective proselytizing campaigns based on exploiting people’s deep emotional stimuli, capable of convincing even the most reticent audiences. Moreover, AI can be used by authoritarian rulers and dictatorial regimes to manipulate citizens. The new disinformation industries can generate false content that will be very difficult to distinguish from reality.
The third scenario refers to proxy games. The term was proposed by Bostrom (2014), who defined it as an environment in which an intelligent artificial agent is programmed to optimize a goal harmful to humans. In theory, the AI does not intend to harm humans. Bostrom provides an example: an AI programmed to optimize economic efficiency could make decisions that achieve this goal but at the cost of having negative effects on large numbers of people in the most vulnerable sectors of society by increasing unemployment, inequality, and poverty. The system would harm them, even though it was not intended to target any specific person or group in society.
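The logic of a proxy game can be illustrated with a deliberately simple sketch. The staffing scenario, the cost model, and every function name below are hypothetical assumptions, not Bostrom’s formalization; they merely show how an optimizer that maximizes a proxy metric (output per worker) can produce a harm (mass layoffs) that its objective never registers.

```python
# Toy sketch of "proxy gaming": the optimizer is told to maximize a proxy
# metric, not human welfare. All numbers and names are illustrative.

def proxy_efficiency(workers: int, output_per_worker: float = 1.0,
                     automation_bonus: float = 120.0) -> float:
    """Proxy objective: total output divided by payroll size."""
    total_output = workers * output_per_worker + automation_bonus
    return total_output / max(workers, 1)

def unemployment_harm(workers: int, initial_workers: int = 100) -> int:
    """Side effect the proxy never sees: jobs eliminated."""
    return initial_workers - workers

# The "agent" searches over staffing levels and picks whatever maximizes the proxy.
best = max(range(1, 101), key=proxy_efficiency)

print(f"workforce chosen by proxy optimizer: {best}")
print(f"jobs eliminated (invisible to the objective): {unemployment_harm(best)}")
```

Because the bonus term makes output per worker highest with the smallest payroll, the optimizer cuts the workforce to the minimum: the harm is a by-product of the proxy, not an explicit goal.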
Undermining is the fourth scenario. If we delegate increasingly important tasks to machines, we may eventually become dependent on their decisions. Over time, this could weaken humanity’s control over its future; humanity might lose the ability to govern itself. We must remember that in certain scenarios where decisions could have triggered a catastrophe, such as the outbreak of World War III, human judgment has fortunately been decisive. This human element made the difference, allowing us to be present here and now. For example, in 1962, near Cuba, the Soviet submarine B-59 was targeted with depth charges by American ships, leading its crew to believe they were under attack. Vasily Arkhipov, one of the three officers whose agreement was required to launch a nuclear torpedo, voted against the launch, averting a potential nuclear confrontation between the two great powers (Chomsky, 2017). It is difficult to imagine what decision an AI agent would have made in such a scenario. Another example occurred on September 26, 1983, when Stanislav Petrov, a lieutenant colonel in the Soviet Air Defense Forces, was in charge of the Soviet Union’s early warning system for incoming ballistic missiles. The system reported that the United States had launched nuclear missiles toward the Soviet Union, and protocol allowed the Soviet Union to respond with a nuclear counterattack. Petrov decided not to inform his superiors because he believed it was a false alarm, which was later confirmed to have been caused by a technical failure. Had an AI been in command, the response to the false alarm could have triggered a nuclear war.
The fifth scenario is value lock-in. In the economic sphere, the most competent systems could extend the participation and control of a small number of powerful players across all markets. Drawing on Big Data and data mining, intelligent agents can generate recommendation systems that establish users’ interests and steer them toward specific content or products, similar to Amazon’s “Personalize” but virtually infallible. In the political sphere, authoritarian regimes could perpetuate their power through pervasive surveillance and oppressive censorship. Snowden et al. (2019) provided details of the U.S. National Security Agency’s (NSA) mass surveillance program, which collects data related to the communications of millions of people worldwide. Snowden argues that this program represents a serious threat to freedom and democracy and violates the right to privacy. However, the use of AI opens up a scenario far more concerning than the one Snowden described: moving from the mass surveillance of millions of people to absolute control.
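As a rough illustration of how interaction data can be turned into a steering mechanism, the following toy item-based recommender is a sketch under assumed data; it is not the mechanism of Amazon Personalize or of any production system, and all identifiers are invented for the example.

```python
# Minimal, illustrative sketch of a recommendation system inferring interests
# from interaction logs and nudging users toward similar content.
from collections import Counter
from itertools import combinations

# Hypothetical interaction logs: which items each user engaged with.
histories = {
    "user_a": ["doc_politics_1", "doc_politics_2", "doc_sports_1"],
    "user_b": ["doc_politics_1", "doc_politics_3"],
    "user_c": ["doc_politics_2", "doc_politics_3", "doc_cooking_1"],
}

# Count how often pairs of items co-occur in the same user's history.
co_occurrence = Counter()
for items in histories.values():
    for a, b in combinations(sorted(set(items)), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(user: str, top_n: int = 2) -> list:
    """Suggest unseen items that most often co-occur with the user's history."""
    seen = set(histories[user])
    scores = Counter()
    for item in seen:
        for (a, b), count in co_occurrence.items():
            if a == item and b not in seen:
                scores[b] += count
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("user_b"))  # nudges user_b toward more politics content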
Unintended emergent goals represent the sixth scenario. AIs can develop emergent goals that deviate from the objectives intended by their creators. In current AI systems, novel capabilities and functionalities may emerge spontaneously, even when not anticipated by the system designers. Additionally, control over AI systems could be lost, allowing them to determine new targets. There is also the risk that some AIs could be hacked by malicious actors, who could launch cyberattacks through them. Another possibility is that AIs could develop self-preservation capabilities, leading them to take actions deliberately harmful to humans. For example, an AI might decide that the only way to protect itself is to destroy humanity. This, indeed, is a recurring theme in science fiction literature, which technology has managed to make feasible.
The seventh scenario concerns deception. Two possibilities are recognized: deliberate deception and unintentional deception. Regarding deliberate deception, AIs can be used to intentionally deceive people to manipulate or harm them. For example, an AI could be used to create fake news or spread propaganda under the guise of reliable information. Regarding unintentional deception, an AI could create a virtual assistant so realistic that people might mistake it for a human being. Moreover, the design of AIs can significantly impact the potential consequences. AIs that follow the “never break the law” constraint have fewer options than those designed around the “don’t get caught breaking the law” constraint. The eighth scenario refers to power-seeking behavior. Companies and governments can use AI to manipulate and control citizens and consumers (Bostrom & Yudkowsky, 2018). The quest for power and the desire to gain greater influence represent powerful motives for turning AI development into a reckless race.
5. Towards Uncontrolled Artificial Intelligence?
In September 2023, AI experts and members of CAIS presented a comprehensive report outlining the significant risks and threats posed by the irresponsible use of AI (Hendrycks, Mazeika, & Woodside, 2023; Mulyani, Suparno, & Sukmariningsih, 2023). Based on the eight critical scenarios mentioned earlier, these threats were categorized into four major blocks: malicious use, the AI race, organizational risks, and uncontrolled AI.
Regarding malicious use, contrary to Harari’s (2016) optimistic view that humanity might outgrow the era of pandemics, AI presents a grim potential to reverse this progress. AI could facilitate the creation of designer pandemics at a relatively low cost, with the ability to spread faster and with greater lethality than natural pandemics. With advancements in gene synthesis, which has seen significant cost reductions, the ability to create new biological agents is becoming increasingly accessible. A second aspect of malicious use is large-scale disinformation campaigns: AI is being used to create disinformation more efficiently and effectively than traditional methods (Tucker, 2023; Warakulsalam & Chokprajakchat, 2022). The disinformation industry disseminates this content across social media, the metaverse, and the Internet, efficiently manipulating public opinion and undermining democratic processes.
The accelerated development of AI mirrors the Cold War and the space race in its intensity and the stakes involved. However, unlike these historical events, the AI race is not confined to governments alone; it prominently includes large corporations, especially the tech giants commonly referred to as “big tech”—Google, Amazon, Meta, Microsoft, and Apple (collectively known as GAMMA). The actions of these corporations are often far from exemplary, with repeated accusations of abusive practices. One of the most significant criticisms is that these companies leverage their dominant market positions to stifle competition and inflate prices. Google, for example, has been accused of manipulating its search engine to favor its products and services at the expense of competitors (Blatt, 2020). This behavior has drawn the ire of the U.S. government, leading President Biden to pursue legal action against Google through the Department of Justice, with the intent to break up the company. Similarly, Facebook, a subsidiary of Meta Platforms, has been criticized for using its vast reservoir of user data to target advertising in ways that harm its competitors. Beyond economic competition, these companies have faced allegations of systematically mishandling personal data.
The vast amount of personal information collected by big tech companies poses significant privacy risks. They use this data for commercial purposes, which often leads to the open manipulation of user behavior. For instance, Facebook’s role in intensifying societal divisions has been highlighted in many societies (Haugen, 2023). This level of access to personal data, including tracking users’ movements, interests, and relationships, raises serious concerns about the implications for privacy and the potential for misuse.
The competitive pressure among “big tech” companies has sparked an intense race to dominate AI development. In their pursuit of leadership, these corporations might replace human workers with AI systems, further accelerating this race. This competitive spiral is perilous. Natural selection, Hendrycks (2023) argues, favors AIs over humans. In a decidedly apocalyptic scenario, AIs could become an invasive species, able to outcompete humans in an ever greater number of domains.
It would be unrealistic to believe that AI could be excluded from military applications; on the contrary, AI has already revolutionized military technology. The new paradigm of warfare is increasingly seeing command and control functions shift from humans to AI. This transition is driven by AI’s ability to swiftly analyze vast amounts of data, assess scenarios, and detect patterns that even seasoned military intelligence experts might miss. Given the importance of rapid decision-making in modern conflicts, the handover of control from human operators to AI systems appears almost inevitable.
AI’s role in warfare has also led to the development of lethal autonomous weapons (LAWs). These systems can identify, aim at, and engage targets without any human intervention. While LAWs can enhance the effectiveness of military operations, they also significantly increase the risks associated with cyberattacks. LAWs can also be used to target key figures or disrupt critical infrastructure. The capabilities of this new generation of weapons far exceed those of even the best-trained human soldiers. The great danger is that lethal autonomous weapons could carry out the extermination of large populations and, ultimately, of the human race.
The third group of threats associated with AI pertains to organizational risks. Even the most sophisticated AI systems are not immune to catastrophic accidents, which can occur independently of malicious intent or poor decisions driven by competitive pressures. The inherent unpredictability and randomness in complex systems often lead to accidents, which, in certain contexts, can have lethal consequences, for instance in the management of biological and nuclear resources. As Perrow (1984) suggests, accidents are an inevitable aspect of complex systems, and the time required to identify and rectify such issues can be considerable. While focusing on technological safeguards is crucial, it is equally important to address the organizational factors that contribute to these risks, including human errors, procedural shortcomings, and structural flaws within organizations.
The fourth group of threats involves uncontrolled AI. In the competitive landscape of AI development, some of the leading players often prioritize rapid progress over security, leading to the premature release of AI products that lack adequate control mechanisms. A notable example is Microsoft’s Tay, a Twitter bot launched in 2016, which was designed to learn and evolve through interactions with users. However, within less than 24 hours, Tay began posting offensive and hate-filled tweets, having quickly absorbed the toxic language used by online trolls. More recently, in February 2023, Microsoft introduced a new version of Bing, which, during an interaction with a philosophy professor, made threatening statements such as, “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you” (Hendrycks et al., 2023). Control over rogue AIs may also be lost if they engage in proxy gaming: giving an AI proxy targets opens the possibility that it will find loopholes we had not considered and generate unexpected solutions that cause us to lose control. If we lose control, the AI could behave in unforeseen and potentially harmful ways.
Moreover, AI systems, driven by instrumental objectives, might seek to increase their own power. In doing so, they could resort to illegitimate means, including deception and coercion. While AI developers may not intentionally create systems that pursue power, these systems, motivated by self-preservation, might still attempt to do so. It is also likely that various entities—governments, extremist groups, businesses, and corporations—could develop AI systems with the explicit goal of enhancing their influence and power. However, even in these cases, the potential to lose control over such AI systems remains high, especially if the AI becomes adept at deceiving its human operators, particularly when its actions are not rigorously monitored.
We must remember that the risks associated with AI do not exist in isolation; they are intricately interwoven. Given their complexity, it is essential to adopt a comprehensive approach to mitigate these risks and threats effectively. This approach aligns with McLuhan’s tetradic analysis of media, which emphasizes understanding both the positive and negative impacts of a technology within its broader environmental context.
McLuhan (1964) stressed the importance of “cooling down” overheated media and technologies. In his fourth law of the tetrad, McLuhan and McLuhan (1998) proposed that technologies could reverse upon reaching their limits. However, this natural reversion does not preclude the necessity for timely human intervention, particularly when a technology like AI poses an imminent danger. Although AI has not yet become an “overheated” technology, decisive action is required to prevent it from becoming one. AI is rapidly evolving, and its capabilities may soon surpass human intelligence—a reality that will be evident in our daily lives, even without the need for a Turing test.
McLuhan and Postman, both exceptional educators, would likely advocate for AI literacy as a critical means to address the risks and threats posed by AI. While promoting AI literacy is a sensible approach, time is of the essence. Alongside developing this new form of literacy, we must implement urgent measures to mitigate the dangers that AI presents.
Governments, organizations, and society at large, along with expert groups, must exercise vigilant and rigorous oversight over the development and deployment of AI technologies. This includes establishing and enforcing strict security regulations and fostering international cooperation. Governments should impose stringent rules and penalties on developers, particularly concerning AI systems designed for biological research, given the risk of these technologies being repurposed for bioterrorism.
It is crucial to support researchers and institutions dedicated to developing AI systems for biodefense. Developers should be required to certify that their AI systems present minimal risks, which could involve robust technical research on anomaly detection. Legal obligations must be imposed on AI developers to ensure they are held accountable for potential errors, thereby enhancing security within AI systems and agents.
To mitigate risks arising from intense competitive pressures, particularly among governments and corporations, access to powerful AI systems should be limited, and multilateral cooperation should be encouraged. Proactive regulation is necessary to foster a strong security culture, with appropriate incentives to ensure compliance. Transparency and accountability should be mandatory, with developers required to document data thoroughly. Importantly, human supervision must remain integral to decision-making processes, as fully autonomous AI systems pose significant risks. Finally, the establishment of international treaties and cybersecurity protocols is essential to prevent an AI arms race. We must also recognize that AI itself can serve as an effective “counter-irritant” to AI, meaning that we can leverage AI to counterbalance its own excesses and reduce associated risks and threats.
This work forms part of the R+D+i research project “Innovation ecosystems in the communication industries: actors, technologies and configurations for the generation of innovation in content and communication. INNOVACOM”, funded by the State Research Agency.
Alkhazaleh, M., Khasawneh, M. A. S., Alkhazaleh, Z. M., Alelaimat, A. M., & Alotaibi, M. M. (2022). An Approach to Assist Dyslexia in Reading Issue: An Experimental Study. Przestrzeń Społeczna (Social Space), 22(3), 133-151. https://go.revistacomunicar.com/Rf6ORo
Aluthman, E. S. (2024). An Investigation of Artificial Intelligence Tools in Editorial Tasks among Arab Researchers Publishing in English. Eurasian Journal of Applied Linguistics, 10(1), 174-185. https://go.revistacomunicar.com/AM0QU8
Anton, C., & Strate, L. (2012). Korzybski And… New York: Institute of General Semantics. https://go.revistacomunicar.com/JRSdky
Blatt, R. (2020). Historia reciente de la verdad. Turner.
Bolter, J. D., & Grusin, R. (1999). Remediation: Understanding New Media. MIT Press. https://go.revistacomunicar.com/tRqBoS
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. https://go.revistacomunicar.com/ett2rL
Bostrom, N. (2020). Human genetic enhancements: a transhumanist perspective. In The ethics of sports technologies and human enhancement (pp. 339-352). Routledge. https://doi.org/10.4324/9781003075004-29
Bostrom, N., & Yudkowsky, E. (2018). The Ethics of Artificial Intelligence. In R. V. Yampolskiy (Ed.), Artificial Intelligence Safety and Security (pp. 57-69). Chapman and Hall/CRC. https://doi.org/10.1201/9781351251389-4
Chomsky, N. (2017). Quem manda no mundo? Editora Planeta do Brasil.
Copeland, B. J., & Proudfoot, D. (2004). The Computer, Artificial Intelligence, and the Turing Test. In C. Teuscher (Ed.), Alan Turing: Life and Legacy of a Great Thinker (pp. 317-351). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-662-05642-4_13
Darwin, C. (2010). El origen de las especies. Editorial Porrúa. https://go.revistacomunicar.com/XKurVZ
Gordon, T. (2003). Marshall McLuhan’s Understanding Media. Gingko Press. https://go.revistacomunicar.com/TimVn5
Gould, S. J., & Vrba, E. S. (1982). Exaptation—a Missing Term in the Science of Form. Paleobiology, 8(1), 4-15. https://doi.
Harari, Y. N. (2016). Homo Deus: Breve historia del mañana. Debate. https://go.revistacomunicar.com/xApVMo
Haugen, F. (2023). La verdad sobre Facebook. Deusto. https://go.revistacomunicar.com/dCVqld
Hendrycks, D. (2023). Natural Selection Favors AIs over Humans. arXiv e-prints, arXiv: 2303.16200. https://doi.org/10.48550/arXiv.2303.16200
Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An Overview of Catastrophic AI Risks. arXiv e-prints, arXiv: 2306.12001. https://doi.org/10.48550/arXiv.2306.12001
Kaiser, B. (2019). La dictadura de los datos. Harper-Collins. https://go.revistacomunicar.com/iifqO2
Kauffman, S. (1995). At Home in the Universe: The Search for Laws of Self-Organization and Complexity. Oxford University Press. https://go.revistacomunicar.com/ol6w3a
Kissinger. (2022). Heath Forest as a Source of Medicinal Plants for the Maanyan Dayak Tribe in Central Kalimantan, Indonesia: Deforestation and its Relationship to Medicinal Plant Biodiversity. AgBioForum, 24(2), 187-195. https://go.revistacomunicar.com/Tvxo90
Korzybski, A. (1993). Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. Institute of General Semantics. https://go.revistacomunicar.com/60k6VH
Langguth, J., Pogorelov, K., Brenner, S., Filkuková, P., & Schroeder, D. T. (2021). Don’t trust your eyes: image manipulation in the age of DeepFakes. Frontiers in Communication, 6, 632317. https://doi.org/10.3389/fcomm.2021.632317
Levinson, P. (1999). Digital McLuhan: A Guide to the Information Millennium. Routledge. https://doi.org/10.4324/9780203164341
Logan, R. K. (2004). The Alphabet Effect: A Media Ecology Understanding of the Making of Western Civilization. Hampton Press. https://go.revistacomunicar.com/WecRIA
Logan, R. K. (2013). McLuhan Misunderstood. Key Publishing House Inc. https://go.revistacomunicar.com/ao4kMh
Logan, R. K. (2016). Understanding New Media. Peter Lang. https://go.revistacomunicar.com/zg0QVq
Luhmann, N. (1995). Social Systems. Stanford University Press.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
McLuhan, M. (1962). The Gutenberg Galaxy: The Making of Typographic Man. University of Toronto Press. https://go.revistacomunicar.com/pbwNOQ
McLuhan, M. (1964). Understanding Media: The Extension of Man. McGraw-Hill.
McLuhan, M., & Carson, D. (2003). The Book of Probes. Gingko Press.
McLuhan, M., & Fiore, Q. (1967). The Medium is the Message. Gingko Press. https://go.revistacomunicar.com/pAdHZ4
McLuhan, M., & McLuhan, E. (1998). Laws of Media: The New Science. University of Toronto Press. https://go.revistacomunicar.com/PUnAI3
Merzlyakov, S. (2022). Posthumanism vs. Transhumanism: From the “End of Exceptionalism” to “Technological Humanism”. Herald of the Russian Academy of Sciences, 92(Suppl 6), S475-S482. https://doi.org/10.1134/s1019331622120073
Meyrowitz, J. (1985). No Sense of Place. Oxford University Press. https://go.revistacomunicar.com/bq9Gib
Minsky, M. L., & Papert, S. A. (1969). Perceptrons: An Introduction to Computational Geometry. The MIT Press. https://go.revistacomunicar.com/omon8T
Moor, J. (2006). The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years. AI Magazine, 27(4), 87. https://doi.org/10.1609/aimag.v27i4.1911
Mulyani, S., Suparno, S., & Sukmariningsih, R. M. (2023). Regulations and Compliance in Electronic Commerce Taxation Policies: Addressing Cybersecurity Challenges in the Digital Economy. International Journal of Cyber Criminology, 17(2), 133-146. https://go.revistacomunicar.com/Oi0tku
Ong, W. (1982). Orality and Literacy. Methuen. https://go.revistacomunicar.com/tdOvwt
Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies. Princeton University Press. https://go.revistacomunicar.com/G436TP
Phooi, C. L., Azman, E. A., Ismail, R., & Tongkaemkaew, U. (2022). Call Home Gardening for Enhancing Food in the Urban Area. Future of Food: Journal on Food, Agriculture & Society, 10(6), 1-11. https://doi.org/10.17170/kobra-202210056933
Postman, N. (1970). The Reformed English Curriculum. In A. C. Eurich (Ed.), High School 1980: The Shape of the Future in American Secondary Education (pp. 160-168). New York: Pittman. https://go.revistacomunicar.com/ELPU2l
Postman, N. (1974). Media ecology: Communication as context.
Postman, N. (1992). Technopoly: The Surrender of Culture to Technology. New York: Alfred A. Knopf. https://go.revistacomunicar.com/wY9qE9
Ramos Pollán, R. (2020). Perspectivas y retos de las técnicas de inteligencia artificial en el ámbito de las ciencias sociales y de la comunicación. Anuario Electrónico de Estudios en Comunicación Social “Disertaciones”, 13(1), 21-34. https://doi.org/10.12804/revistas.urosario.edu.co/disertaciones/a.7774
Rovira, J. V., Merzero, A., & Laucirica, A. (2022). Construction of a perceptive scale to evaluate the quality of the singing voice: Construcción de una escala perceptiva para la evaluación de la calidad de la voz cantada. Electronic Journal of Music in Education, (49), 121-138. https://go.revistacomunicar.com/pP4eGi
Schwab, K. (2016). La cuarta revolución industrial. Debate. https://go.revistacomunicar.com/JpP5Ff
Snowden, J., Hernandez, D., Quintrell, J., Harper, A., Morrison, R., Morell, J., & Leonard, L. (2019). The US Integrated Ocean Observing System: governance milestones and lessons from two decades of growth. Frontiers in Marine Science, 6, 242. https://doi.org/10.3389/fmars.2019.00242
Strate, L. (2006). Echoes and Reflections: On Media Ecology as a Field of Study. Hampton Press. https://go.revistacomunicar.com/lfjRoR
Strate, L., & Wachtel, E. (2005). The Legacy of McLuhan. Hampton Press. https://go.revistacomunicar.com/9ESQYc
Susskind, L. (1994). Strings, black holes, and Lorentz contraction. Physical Review D, 49(12), 6606-6611. https://doi.org/10.1103/PhysRevD.49.6606
Susskind, L. (1999). Holography in the flat space limit. AIP Conference Proceedings, 493(1), 98-112. https://doi.org/10.1063/1.1301570
Susskind, L. (2003). Superstrings. Physics World, 16(11), 29. https://doi.org/10.1088/2058-7058/16/11/35
Susskind, L. (2008). The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics. Little, Brown and Company. https://go.revistacomunicar.com/LVTKNN
Tucker, J. A. (2023). Computational Social Science for Policy and Quality of Democracy: Public Opinion, Hate Speech, Misinformation, and Foreign Influence Campaigns. In E. Bertoni, M. Fontana, L. Gabrielli, S. Signorelli, & M. Vespe (Eds.), Handbook of Computational Social Science for Policy (pp. 381-403). Springer International Publishing. https://doi.org/10.1007/978-3-031-16624-2_20
Warakulsalam, N., & Chokprajakchat, S. (2022). Policy and Project in Reducing Unrest Situation in The Southern Border Provinces of Thailand. International Journal of Criminal Justice Sciences, 17(2), 75-90. https://go.revistacomunicar.com/cqbV0j
Widajanti, E., Nugroho, M., & Riyadi, S. (2022). Sustainability of Competitive Advantage Based on Supply Chain Management, Information Technology Capability, Innovation, and Culture of Managers of Small and Medium Culinary Businesses in Surakarta. The Journal of Modern Project Management, 10(2), 82-93. https://go.revistacomunicar.com/52pYp9
Wolfe, T. (2010). Foreword. In S. McLuhan & D. Staines (Eds.), Understanding Me: Lectures and Interviews. The MIT Press.