Fostering Algorithmic Literacy in Education: Navigating News Ecosystems for Critical Media Understanding
DOI: https://doi.org/10.5281/zenodo.16388403
Keywords: Algorithmic Curation, Media Literacy, Polarization, Filter Bubble, Algorithmic Literacy
Abstract
In an era of algorithmically curated news feeds, the interplay between technology and human behavior is transforming global information consumption. This study systematically reviews literature from 2015 to 2024, examining algorithms’ dual role as enhancers of personalization and drivers of polarization. It investigates how algorithmic bias influences news diversity, how algorithmically driven news exposure affects polarization, and how media literacy can mitigate these impacts. The findings reveal a complex relationship between algorithmic curation, user behavior, and polarization, often exacerbated by system opacity. While algorithms can broaden exposure to diverse perspectives, they frequently reinforce existing beliefs through filter bubbles and echo chambers. Media literacy emerges as a vital tool, equipping individuals to critically engage with content and challenge biases. Addressing a growing research gap, this study explores the intricate dynamics between algorithmic personalization, polarization, and media literacy, proposing an educational framework to prepare learners for AI-driven news environments. The proposed framework interconnects algorithmic curation, news exposure, user agency, media literacy, and polarization, emphasizing their cyclical dynamics. This research calls for algorithmic transparency, cross-cultural media literacy programs, and targeted studies in underrepresented regions, offering actionable pathways to support healthier public discourse through the integration of algorithmic literacy in education.
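To make the filter-bubble mechanism described above more concrete, the sketch below is a minimal, purely illustrative Python simulation, not a model or dataset drawn from the reviewed studies. It contrasts a similarity-ranked "curated" feed with a random feed for a single hypothetical user and reports the spread of viewpoints each exposes; all item stances, parameter values, and function names are invented for illustration.

```python
import random
import statistics

# Illustrative sketch only (not from the reviewed literature):
# news items and a user are placed on a one-dimensional stance axis in [-1, 1].
# A similarity-ranked ("curated") feed is compared with a random feed to show
# how ranking by closeness to prior beliefs narrows exposure diversity.

random.seed(42)
ITEMS = [random.uniform(-1, 1) for _ in range(500)]  # hypothetical item stances
FEED_SIZE = 10
ADAPTATION = 0.1  # how strongly the user's stance drifts toward what is shown


def curated_feed(stance, items, k):
    """Return the k items closest to the user's stance (an engagement proxy)."""
    return sorted(items, key=lambda item: abs(item - stance))[:k]


def random_feed(items, k):
    """Return k items drawn without regard to the user's stance."""
    return random.sample(items, k)


def simulate(stance, pick_feed, rounds=30):
    """Run repeated exposure rounds; return final stance and mean exposure diversity."""
    diversities = []
    for _ in range(rounds):
        feed = pick_feed(stance)
        diversities.append(statistics.pstdev(feed))  # spread of shown stances
        stance += ADAPTATION * (statistics.mean(feed) - stance)
    return stance, statistics.mean(diversities)


final_curated, div_curated = simulate(0.2, lambda s: curated_feed(s, ITEMS, FEED_SIZE))
final_random, div_random = simulate(0.2, lambda s: random_feed(ITEMS, FEED_SIZE))

print(f"curated feed: final stance {final_curated:+.2f}, mean exposure diversity {div_curated:.3f}")
print(f"random feed:  final stance {final_random:+.2f}, mean exposure diversity {div_random:.3f}")
```

Under these assumed parameters the curated feed exposes a much tighter spread of stances than the random feed and anchors the user near their starting position, which is the narrowing of exposure that the filter-bubble and echo-chamber literature describes.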
License
Copyright (c) 2025 Comunicar

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.