LTHEChat 337: Dialogic learning in the Age of Generative AI

Join us on Bluesky on Wednesday 1st October 2025 at 20:00 BST

What does it truly mean to learn with a machine, and are machines capable of engaging in dialogic learning? Generative AI models, capable of producing text, images, or other content in response to prompts, are rapidly reshaping educational discourse by introducing ‘scalable’ forms of personalised learning, while also raising challenges around academic integrity and the need to redefine what critical thinking entails when learning with AI. More recently, features within popular Generative AI models, such as ChatGPT’s “Study and Learn mode” (which guides learners with questions instead of simply giving answers) and Google Gemini’s “Learn Your Way” (which transforms textbooks into interactive, AI-driven study guides), are being marketised on the promise of more conversational, personalised learning experiences fine-tuned for learning on the basis of educational research and principles. Within higher education, the growing presence of these systems demands deeper exploration. Are they genuinely expanding the possibilities for dialogue and feedback, or quietly reshaping the conditions of academic exchange? As practitioners, we must ask not only how these systems work, but also why they are used, and for whom. Do they stimulate enquiry, or do they replace the productive discomfort of genuine dialogue with frictionless interactions that risk remaining superficial (Tang et al., 2024; Wu et al., 2025)?

What is dialogic learning?

Dialogic learning, shaped by educational theorists such as Bakhtin (1986), Freire (1970), and Pask (1976), reminds us that learning happens through dialogue, not one-way transmission. Freire (1970) argued that human nature is dialogic: we continuously create and re-create knowledge through communication and questioning. In Freire’s liberatory pedagogy, tutor and students join in conversation as co-learners, rather than the teacher “depositing” knowledge into passive students. Bakhtin’s dialogism similarly insists that an individual’s understanding cannot exist in isolation: meaning emerges only through interaction with others. For Bakhtin, every voice needs an “other” voice; learning is essentially a chain of responses and reflections that prevents any single viewpoint from being final or absolute. The cybernetician Gordon Pask added a systems perspective with his Conversation Theory, maintaining that all effective learning can be seen as a conversation between tutor and learner, in which each asks questions, gives explanations, and adjusts understanding based on feedback. In other words, the fundamental unit of learning is not a lecture or a textbook but an interactive exchange: a back-and-forth process of asking, answering, challenging, and clarifying. Unlike transmission models of teaching, dialogic learning values plurality, contestation, and the co-construction of meaning over any one authoritative voice (Costa & Murphy, 2025; Tang et al., 2024). The benefits of this approach are widely recognised. It fosters critical thinking by encouraging learners to question assumptions and synthesise diverse perspectives (Corbin et al., 2025). It supports epistemic agency by giving students responsibility for their intellectual choices (Costa & Murphy, 2025).
It contributes to identity formation as learners develop voice, confidence and a sense of belonging within academic communities (Lee & Moore, 2024). It also develops feedback literacy through cycles of exchange and revision, in which students learn to interpret, negotiate and act upon comments (Jensen et al., 2025; Guo et al., 2024).

Can Generative AI models elicit dialogic learning?

Advances in interactive features within Generative AI models such as ChatGPT, Gemini, and Perplexity may appear to support dialogic learning by posing questions, scaffolding reasoning, and personalising feedback, simulating tutor-like experiences through guided prompts.

What might this look like in a seminar?

For instance, in one of my modules during the last academic year, students used (under my instruction) Google’s NotebookLM to convert an assignment brief into a podcast to better grasp the nuanced expectations of their task. Research suggests that Generative AI tools used in such ways can foster metacognitive awareness and iterative improvement in coursework preparation (Lee & Moore, 2024; Wu et al., 2025). Listening to content in audio form, or visualising summary highlights from an article in a mind map produced with the multimodal features of Generative AI tools, may support accessibility and flexible engagement, particularly for students managing competing demands. However, passive consumption of AI-generated content, in whatever form, risks flattening complexity if it is not embedded within reflective and dialogic learning design. Dialogic learning is not reducible to interaction alone. Its value lies in unpredictability, relationality, and openness: qualities that emerge when tutors and students negotiate meaning, clarify ambiguity, or explore, in a seminar for example, what part of learning consolidation an assessment truly aims to assess. Designed to optimise helpfulness and coherence, current Generative AI tools struggle to replicate the conditions for negotiated learning that dialogic learning encapsulates, since they often smooth over disagreement and avoid the intellectual discomfort or contestation that can spark transformative learning (Costa & Murphy, 2025).

Provocations for a critical dialogue

The difference between human and machine-simulated dialogic learning is subtle in the moment but significant over time. In a human dialogue, the pauses matter: a student reformulates an idea; a tutor waits; someone else steps in with a doubt that sends the group back to the text. In an exchange with Generative AI, by contrast, the tempo is brisk, the turns are clean, and the answers are ready, which may replace friction with fluency. While fluency has its place (accessibility, confidence, momentum), it can quietly erode the conditions that help students develop judgement: hesitation, contestation, and the courage to revise a claim in one’s own words.

To help students explore the nature and limitations of Generative AI, without using AI, I recently designed a workshop built around everyday craftwork as metaphorical training data. The session aimed not to demonstrate the outputs of AI, but to enable students to think critically about how Generative AI models are trained, how they respond to prompts, and where bias might reside in seemingly neutral systems. Students were first presented with a set of curated craftworks (patchworks, collages, weaves, and prints) described as the “training data” they would use if they were preparing an AI model. They were then handed a second, unrelated set of crafted items, this time framed as “prompts”, and had to match, infer, or “generate” a response using only the original resources. Of course, the results were often mismatched or superficial. This opened a rich and meaningful dialogue in which students co-constructed knowledge about how a model built on curated examples will struggle with novelty, difference, or contradiction. More importantly, they recognised how bias can be baked into the very foundation of what counts as valid information. The activity never used a single Generative AI tool, yet it illuminated key dynamics: how training data constrains response, how prompts channel expectation, how patterns are privileged over anomalies, and how meaning is not generated but always interpreted. Through dialogue grounded in tactile, visual artefacts, students explored the tensions between creation and curation, automation and authorship, and began to articulate a shared understanding of the human dimension of machine learning. The workshop re-centred human interpretation and judgement: the craft objects were static, but they became catalysts for expansive, situated thinking. Dialogue emerged from human interaction mediated by metaphor and material, not by algorithm, on this occasion. It was a gentle reminder that the deepest conversations are sparked not by convenience but by complexity.

Does this mean Generative AI tools are an ‘illusion’ rather than an extension of dialogic learning?

This is not an argument against the tools. In fact, they can extend dialogic practice when used with intent. Things tend to go awry when mode-switching and feature-swapping within Generative AI tools become novelty rather than purpose. The availability of text, audio, visuals, and quizzes can fragment attention if there is no reason to move between them. The question to keep asking is simple: why this mode, for this idea, at this moment, for this group? When the answer is clear (accessibility, comparison, perspective-taking), multimodality serves dialogue. When it isn’t, it becomes decoration. There is also a collective responsibility here. If we want to design dialogic learning with machines intentionally, we design for encounter: tasks in which students interrogate Generative AI outputs critically and reflectively. We deliberately design learning activities that restore pace and pause, and prompts that invite disagreement rather than tidy agreement. We protect the moments of uncertainty in which students decide what they think, and why.

Towards authentic dialogic learning

Generative AI offers genuine opportunities to scaffold dialogue and expand access to feedback, but its danger lies in replacing the friction that drives authentic learning with seamless interaction. If Generative AI is to support authentic dialogue, higher education must approach its integration critically and deliberately. Designing AI-mediated dialogue to invite critique rather than compliance is essential, ensuring that systems provoke questioning and exploration rather than scripted agreement. Learners must be empowered to direct enquiry and challenge system-driven prompts so that they remain active participants rather than passive recipients. Institutions must also safeguard plurality and equity, ensuring transparency, fair access, and the inclusion of diverse perspectives in AI-mediated dialogue.

Guest Biography:

Nurun Nahar is an Assistant Teaching Professor based at the Greater Manchester Business School, University of Greater Manchester. Nurun’s responsibilities include overseeing and advising on Generative AI and technology-enhanced learning initiatives to enhance pedagogical practices within her department. Nurun is a published scholar and has presented her research widely at international conferences and in invited guest talks on the topics of digital literacy, pedagogical partnerships, and the use of generative AI and technology-enhanced learning in Higher Education. Nurun led a whole-institution collaborative project supporting the design and development of an AI literacy framework and accompanying online tutorials for the University of Greater Manchester, which is embedded within the central academic skills development programme for all students, including pre-arrival students.

Questions and chat

Q1 –  What does dialogic learning mean in your context and practice?

Q2 –  Where do you see the main benefits or limits of using generative AI for dialogic learning?

Q3 – What strategies can help educators and students maintain agency and voice when using Generative AI within human-AI dialogic learning collaboration?

Q4 – How have you used generative AI—personally or professionally—to support your own dialogic learning, and what did you notice?

Q5 – What activities might be used to support authentic dialogic learning with or without generative AI?

Q6 – Which part of a module/unit/delivery approach would you deliberately design for friction (slow thinking for dialogue) instead of fluency (quick completion), and why?
