Led by Gerhard Kristandl (@drkristandl)

If I open this blog post by stating that “generative AI has taken the world by storm”, I’m sure you have heard (or read) that before. A whirlwind of hype, hope, and fear has swept not only through higher education, but society at large. In the AI-related training sessions, workshops, and talks I run for my fellow educators, I often come across the same set of pervasive ‘myths’ – or rather persistent statements – about AI’s impact on teaching and learning, amidst the frenzy and the “fast-paced developments in the realm of education” (to mimic a typical GenAI-generated phrase). In this post, I will briefly examine six of these ‘myths’ and reflect on a more nuanced reality, in the hope of triggering reflection, challenging assumptions, and – hopefully – alleviating concerns. A disclaimer at this point: when I write ‘AI’ in this blog post, I mean ‘generative AI’ (technically speaking, the two terms are not synonymous, but they are often used as if they were).
About the Capabilities and Limitations of Generative AI
One common misconception I come across among my training participants is that ‘AI is the same as ChatGPT’. Of course, OpenAI kicked off the AI wave when it launched ChatGPT in November 2022, so it’s unsurprising that “ChatGPT” is widely equated with “AI” (or rather “generative AI”), similar to “hoover” being used synonymously with “vacuum cleaner” – first-mover advantage and good branding. However, while ChatGPT is currently the best-known generative AI tool, it is far from the only one. A vast ecosystem of AI models with diverse capabilities is rapidly expanding – Microsoft Copilot, Anthropic’s Claude, Perplexity, Midjourney, Adobe Firefly – I could go on and on. Equating all generative AI with ChatGPT ignores this kaleidoscope of AI tools, which vary in capabilities (text, images, audio, etc.), availability (closed and open source), training data, and use cases.
About the Impact of Generative AI on Education
There are some concerns that generative AI will stifle student creativity (Atkinson and Barker, 2023). After all, just ask it to perform a task for you, and it does it, right? No more creativity needed, then? Not quite! At the end of the day, it is a question of ‘how’ it is being used. If GenAI is stifling student creativity, we’re doing it wrong. AI can inspire – not stifle – creativity by exposing learners to diverse ideas and prompting original thinking (Inie et al., 2023). Sure, using it as an essay-spewing machine and accepting its output uncritically won’t achieve this. But use it as an ideation facilitator, a brainstorming tool that supports and triggers creative thinking processes, and it’s a different story. The key is how we use technology – for evil or for good. Teaching students to use AI as a brainstorming tool, not a crutch, is paramount, and it falls to the human educator to be in charge of the AI (Mollick, 2024a).
Closely related to this is the perception that critical thinking skills may no longer be relevant if students and educators can simply ask an AI tool to do the thinking for them. Again, a reminder that it’s ‘how’ – not ‘that’ – AI is used. Of course, it can provide quick answers and seemingly well-crafted arguments, but it is crucial to recognize that these outputs are based on patterns in the AI’s training data, not on genuine understanding or reasoning (Prade, 2016). Responses may not be the finished article, and blindly accepting AI-generated responses without critical evaluation can perpetuate the biases and inaccuracies present in the training data, and foster shallow thinking in students and educators alike.
However, when used as a tool to augment and enhance human critical thinking, generative AI can facilitate sharpening these essential skills. By presenting diverse perspectives and prompting students to interrogate the logic and evidence behind AI-generated arguments, educators can create valuable opportunities for critical analysis and debate (Berg and Plessis, 2023). The key is to teach students to approach AI outputs with a critical lens, asking questions like: What assumptions underlie this argument? What evidence supports or refutes it? What perspectives might be missing? By engaging in this type of critical dialogue with AI, students can develop a deeper understanding of the complexities and nuances of the topics they are studying, ultimately strengthening their own critical thinking abilities.
About Strategies for Engaging with Generative AI
Although it has become less prevalent since early 2023, the belief that banning AI is both advisable and possible is still widely found amongst educators (Xiao et al., 2023), seemingly born out of the hope that ‘this will soon be over’. However, what proponents of ‘engaging’ (rather than ‘embracing’ – thank you, Martin Compton) have repeated time and again rings as true today as it did in early 2023: prohibition is neither practical nor beneficial in the long run (Volante et al., 2023). As these tools become ubiquitous, students need to learn to use them responsibly, and outright banning them would drive many towards the very things a ban aims to prevent – unethical use, cheating, and, on top of that, poor AI literacy. Like it or not, engaging thoughtfully with AI – raising AI literacy through critical exposure rather than imposing futile bans – is the path forward.
Closely linked to this is the misconception (I daresay, hope) that so-called AI detectors can reliably distinguish AI-generated text from human writing. The bad news: there is no app for that. As studies (e.g., Liang et al., 2023; Sadasivan et al., 2024) and thought leaders (Furze, 2023; Mollick, 2024b) have shown, these tools largely overstate their success rates whilst remaining opaque about their approaches and methods. These detectors often produce false positives, disadvantage non-native English speakers, and struggle to keep up with the sheer speed at which AI is developing (Furze, 2023). Not only are these detectors unreliable, but relying on them is outright dangerous and does a disservice to students, unfairly penalizing them.
About the Future of Generative AI in Higher Education
Despite sensational predictions, AI will not render human educators obsolete. Yes, the technology has the potential to enhance learning with personalized feedback and content; AI avatars based on specially trained large language models can already interact with students (Fink et al., 2024). But it cannot replace the nuance, lived experience, empathy, mentorship, and adaptability of skilled teachers (Pila, 2023). After all, we are talking about sophisticated algorithms, not the self-aware AI that, at the time of writing, is still the stuff of science fiction. It may take over tasks and processes it can perform better in the future, but replace human educators altogether? Not anytime soon! The future lies in human-AI collaboration and ‘co-intelligence’ (Mollick, 2024a), not replacement; in enhancement through technology, not elimination.
Now what?
Of course, many more myths and misconceptions need critical discourse and debate. We are all in largely uncharted waters together. Generative AI has moved past the Peak of Inflated Expectations in Gartner’s Hype Cycle for Emerging Technologies and is on the brink of the Trough of Disillusionment, where the hype starts to cool down. It seems that every day brings quantum leaps in what ‘AI’ can do. But as we grapple with the real-world challenges and limitations of the technology, and with its impact on sustainability and the environment, we must steer clear of these often-simplistic myths. The reality is more complex, filled with both challenges and opportunities. Nor must we be wholly for or against AI. By engaging critically, cautiously, but optimistically, teaching responsible use, and leveraging these tools to augment rather than replace human instruction, I hope we can harness GenAI’s potential to enhance learning for all our students – and for ourselves.
References
Atkinson, D., & Barker, D. (2023). AI and the social construction of creativity. Convergence, 29, 1054–1069. https://doi.org/10.1177/13548565231187730.
Berg, G., & Plessis, E. (2023). ChatGPT and Generative AI: Possibilities for Its Contribution to Lesson Planning, Critical Thinking and Openness in Teacher Education. Education Sciences. https://doi.org/10.3390/educsci13100998.
Compton, M. (2024a) Navigating the AI landscape in HE: Six opinions, HEducationist. Available at: https://mcompton.uk/2024/07/06/navigating-the-ai-landscape-in-he-six-opinions/ (Accessed: 06 October 2024).
Fink, M.C., Robinson, S.A., and Ertl, B. (2024). AI-based avatars are changing the way we learn and teach: benefits and challenges. Frontiers in Education. 9. https://doi.org/10.3389/feduc.2024.1416307
Furze, L. (2023) ‘AI Detection in Education is a Dead End’, Leon Furze, 9 April. Available at: https://leonfurze.com/2024/04/09/ai-detection-in-education-is-a-dead-end/comment-page-1/ (Accessed: 1 October 2024).
Inie, N., Falk, J., & Tanimoto, S. (2023). Designing Participatory AI: Creative Professionals’ Worries and Expectations about Generative AI. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3544549.3585657.
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., and Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns. 4:7. https://doi.org/10.1016/j.patter.2023.100779
Mollick, E. (2024a). Co-Intelligence: Living and Working with AI. New York, New York: Penguin Publishing Group
Mollick, E. (2024b) ‘Signs and Portents’, One Useful Thing, 6 January. Available at: https://www.oneusefulthing.org/p/signs-and-portents (Accessed: 06 October 2024).
Pila, A. (2023). Will artificial intelligence overcome teachers that just addresses content?. Concilium. https://doi.org/10.53660/clm-1590-23j20.
Prade, H. (2016). Reasoning with Data – A New Challenge for AI?. In: Schockaert, S., Senellart, P. (eds) Scalable Uncertainty Management. SUM 2016. Lecture Notes in Computer Science(), vol 9858. Springer, Cham. https://doi.org/10.1007/978-3-319-45856-4_19
Sadasivan, V.S., Kumar, A., Balasubramanian, S., Wang, W., and Feizi, S. (2024). Can AI-Generated Text be Reliably Detected? ArXiv, abs/2303.11156. https://doi.org/10.48550/arXiv.2303.11156
Volante, L., DeLuca, C., & Klinger, D. (2023). Leveraging AI to enhance learning. Phi Delta Kappan, 105, 40 – 45. https://doi.org/10.1177/00317217231197475.
Xiao, P., Chen, Y., & Bao, W. (2023). Waiting, Banning, and Embracing: An Empirical Analysis of Adapting Policies for Generative AI in Higher Education. ArXiv, abs/2305.18617. https://doi.org/10.2139/ssrn.4458269.
Author Biography
Dr Gerhard Kristandl is a National Teaching Fellow and an Associate Professor in Accounting and Technology-Enhanced Learning at the University of Greenwich. He has 18 years of experience in higher education across the UK, Canada, and Austria, focusing on learning technologies in HE. He chairs the University’s AI Special Interest Group and has spoken internationally on various aspects of generative AI in HE. He is the university lead for Mentimeter, a Senior Fellow of the Higher Education Academy, and a former management consultant. He blogs about generative AI on LinkedIn and Medium, and runs his own YouTube channel, with recent videos on generative AI and its applications in education. He is passionate about creating engaging and innovative learning experiences for his students and is a strong believer that generative AI makes – and will continue to make – human educators more important than ever before.
LinkedIn profile: https://www.linkedin.com/in/gerhardkristandl/
Medium: https://medium.com/@gerhard.kristandl
YouTube channel: https://www.youtube.com/@drgeekay