Join us on Bluesky for #LTHEchat on Wednesday 4th March at 8pm GMT with guest Dr Olivia Kelly to discuss how algorithmic bias can influence what students see on social media and what our role as educators should be. As students consume increasing amounts of information through algorithm-driven platforms, educators face new challenges: misinformation, echo chambers and AI-curated feeds. This chat explores whether HE institutions and educators should teach students how algorithms influence what they see and ultimately what they believe.
More and more students now have their worldviews shaped not by textbooks or lectures but by endlessly scrolling feeds, carefully tailored by algorithms they rarely think about. Algorithmic systems increasingly mediate how students encounter information, particularly through social media platforms. Recent scholarship highlights the structural impact of these systems on youth information practices. Ahmmad et al.’s (2025) systematic review shows that social media algorithms consistently reinforce ideological homogeneity, limit viewpoint diversity, and intensify polarization among young users. This resonates strongly with concerns in HE, where students often rely on algorithmically curated content as a primary source of news, learning materials and public discourse.
The concept of algorithmic literacy (understanding how algorithms shape what we see) has gained traction as a critical extension of digital literacy. Gagrčin et al. (2024) argue that although awareness of algorithms is rising, the field lacks a unified framework for teaching algorithmic literacy in formal educational settings. Their review emphasizes that algorithms across platforms optimize for engagement, not educational value, making students especially vulnerable to selective exposure. The rise of generative AI also complicates the information landscape. García-López & Trujillo-Liñán (2025) warn that while generative systems enable personalized learning, they introduce risks such as loss of cognitive autonomy and institutional misuse of student data, reinforcing the need for robust digital and algorithmic literacy frameworks in HE. As social platforms increasingly integrate GenAI into feeds and search interfaces, students face not only biased recommendations but AI-fabricated or AI-amplified misinformation.
Finally, public-facing research also shows that users themselves unintentionally reinforce algorithmic bias. Rathee et al. (2025) demonstrate that people often accept and perpetuate biased algorithmic recommendations, highlighting the interplay between system design and human behaviour. For educators, this underscores the importance of teaching students not only how algorithms work but also how their own actions shape algorithmic outputs.
Together, these studies suggest an urgent need for HE to address algorithmic bias through explicit teaching of how social media shapes knowledge, moving digital literacy beyond skills toward critical, reflective understanding. As educators, we can no longer treat social media as peripheral to learning. Algorithmic systems shape how students interpret the world, encounter political ideas, understand scientific claims and engage with global events. When students are nudged toward certain viewpoints, whether subtly or aggressively, our commitment to fostering critical thinking requires us to step in.
Teaching about algorithmic influence is not about shaming students for their media use. Nor is it about demonizing technology. It’s about opening a window into the invisible forces that shape their digital lives, helping them see why certain narratives feel omnipresent and others almost invisible. Importantly, the goal is empowerment. Students who understand how algorithms work can push back, diversify their feeds and seek out credible sources. They become more intentional learners and more reflective digital participants.
This #LTHEchat invites us to imagine what HE could look like if algorithmic awareness were embedded into our teaching practices. Not as an add-on, but as essential literacy for navigating contemporary knowledge environments.
References
Ahmmad, M., Shahzad, K., Iqbal, A., & Latif, M. (2025). Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth. Societies, 15(11), 301. https://doi.org/10.3390/soc15110301
Gagrčin, E., Naab, T. K., & Grub, M. F. (2024). Algorithmic media use and algorithm literacy: An integrative literature review. New Media & Society, 28(1), 423–447. https://doi.org/10.1177/14614448241291137
García-López, I. M., & Trujillo-Liñán, L. (2025). Ethical and regulatory challenges of Generative AI in education: A systematic review. Frontiers in Education, 10, 1565938. https://doi.org/10.3389/feduc.2025.1565938
Rathee, S., Banker, S., Mishra, A., & Mishra, H. (2025). Algorithms are propagating bias – are we complicit? Keller Center for Research, Baylor University. https://kellercenter.hankamer.baylor.edu/news/story/2025/algorithms-are-propagating-bias-are-we-complicit
Speaker Bio
Dr Olivia Kelly is an Associate Lecturer at the Open University, whose work focuses on advancing engaging, high-quality learning experiences in distance learning in higher education. Drawing on deep expertise in teaching practice and student support, she brings a thoughtful, research-informed approach to curriculum design and academic development. Olivia’s research interests centre on the role of social media in HE; her doctoral study examined student community building on Twitter (X). She currently leads The Open University’s Praxis Social Media Scholarship Hub, which focuses on social media research in education, and she recently completed a funded project using Discord with students. Olivia hosts a podcast interviewing researchers on a range of HE-related topics to champion innovative pedagogies that enhance student success. Passionate about widening participation and building inclusive learning environments, she regularly presents at academic events.