Led by Dr. James Bedford, Education Specialist, Artificial Intelligence (AI), UNSW College, @jamesbedford.bsky.social
2025: The Year for Evidence-Based Generative AI in Higher Education
The past two years have seen an explosion of discourse on Generative AI (GenAI) in education—from speculative threads on social media to sweeping claims about the many benefits and dangers of this new technology. I want to take this opportunity, writing a blog post for #LTHEchat, to encourage a shift in the way we talk about GenAI more broadly: one that focuses on evidence-based approaches and pedagogically aligned solutions.
Reflecting on Two Years of GenAI in Education
The conversations that have emerged because of GenAI have been some of the most fascinating of my career to date: from almost frantic exchanges with colleagues about how on earth we respond to the challenges of academic integrity and referencing AI in academic work, to designing a Responsible Use of AI framework for UNSW that attempts to outline effective and responsible uses of AI for over 60,000 students. Collaborative efforts, such as our research on how GenAI can support ESL students, demonstrated the value of the student voice and left plenty of room for further exploration. I’ve also been lucky enough to speak to over 4,000 educators and students across a variety of keynotes and seminars, which have fundamentally informed my understanding of the state of AI in education.
If there is one thing I’ve learned from all this, including the past 24 months of updates and responses to those updates, it’s that there is a lot of hype and hope out there, and a lot less proof of what GenAI can actually do for education.
The problem with much of the current discourse is that it’s often centred on cherry-picked examples of GenAI failures or successes, along with simplistic testing of large language models that leads to hasty conclusions about the capabilities or limitations of these tools. As Rose Luckin recently pointed out, “claims about educational impact need proper time for evaluation in higher education” (2024). We are only two years in, and if the past has taught us anything, it’s that the longer-term effects of technology take a while to manifest. If we are going to make any progress with (or perhaps without) GenAI in education, we need to start having a much deeper, pedagogically informed discussion grounded in robust research and thorough evidence.
Moving Towards Evidence-Based Implementation
Recent studies have attempted to shed light on the tangible impacts of GenAI in educational settings. For instance, Almasri (2024) conducted a systematic review highlighting that AI applications in education are transforming instructional practices, assessment strategies, and administrative processes, actively contributing to the progression of science education. Additionally, Lee and Moore (2024) synthesised empirical studies on GenAI for automated feedback in higher education, indicating significant opportunities and challenges in integrating these tools effectively into learning environments, emphasising the growing demand for timely and personalised feedback. Crompton and Burke (2023) underscored the importance of aligning AI tools with specific educational objectives to enhance learning experiences.
However, while these articles provide excellent coverage of a growing field, there remains a need for further empirical research to fully understand the long-term implications of AI integration in education. This includes addressing challenges such as the limitations of current AI technologies, the necessity for human oversight, and the potential impact on intellectual and emotional development. Additionally, the rapid evolution of AI tools calls for continuous evaluation to ensure they complement traditional teaching methods without undermining the fundamental goals of learning. A cautious and well-researched approach is essential to harness the benefits of AI while mitigating potential risks in educational settings.
Moving Forward: Practical Steps for 2025
And herein lies the challenge. How do we evaluate the pedagogical and societal impact of generative AI—a technology that is not only still emerging, but which so often operates subtly and invisibly within educational practices?
For starters, we need to be asking ourselves some important questions:
- What specific problems in our current educational system can GenAI demonstrably help solve?
- How can we validate the effectiveness of GenAI tools in our educational settings?
- What metrics should we use to measure the impact of GenAI integration?
All of us must be willing to question our assumptions about the necessity of GenAI solutions, and all of us must acknowledge that not every educational challenge requires an AI-powered solution. We should be honest about when traditional approaches might be more effective. I’m all for using GenAI for certain parts of my job; however, I would not think it beneficial to create an educational system in which we become conduits for statistically aligned outputs, infinitely parsing information through systems that effectively minimise human intervention and judgement. In other words (and excuse the sentiment), if we depend too much on GenAI’s handling of everything, we risk losing the personal touch that makes educational experiences so meaningful. Which brings me to the purpose of this blog.
Recommendations for Implementation
While speculation about future GenAI capabilities is crucial, perhaps we should be leaning into an informed understanding of current GenAI tools and their applications. This means:
- Conducting rigorous research on existing GenAI tools and their impact on learning outcomes.
- Developing clear frameworks for evaluating the appropriateness of GenAI integration in different educational contexts.
- Creating evidence-based best practices for GenAI implementation.
- Establishing robust assessment methods to measure the effectiveness of GenAI-enhanced learning.
While the enthusiastic discussions and speculative debates about GenAI in education have served an important purpose in helping us process this technological revolution, 2025 must be the year we anchor ourselves in evidence. The future of GenAI in education is not just about continually anticipating what’s coming next—it’s about understanding and optimising what we have now.
Conclusion
In summary, to make 2025 a turning point in how we approach GenAI in education, educators might consider:
- Prioritising peer-reviewed research on GenAI implementation in educational settings.
- Sharing detailed case studies of both successes and failures in GenAI integration.
- Developing standardised methods for evaluating GenAI tools in educational contexts.
By focusing on evidence-based approaches and present-day applications, we can build a more solid foundation for GenAI solutions in higher education. This doesn’t mean we stop imagining future possibilities; rather, we begin to balance our forward-looking discussions with practical, evidence-based implementations that serve both our own and our students’ needs today. To end on a quote from a now-prophetic article on AIEd published in 2021:
“In the end, the goal of AIEd is not to promote AI, but to support education. In essence, there is only one way to evaluate the impact of AI in Education: through learning outcomes. AIEd for reducing teachers’ workload is a lot more impactful if the reduced workload enables teachers to focus on students’ learning, leading to better learning outcomes” (Chaudhry & Kazim, 2021).
Author Biography
James Bedford is an award-winning writer and educator with over 10 years’ experience in higher education. He earned his PhD in Creative Writing from the University of New South Wales in 2019 and has won multiple teaching and academic awards throughout his career, including a Programs that Enhance Learning Award, an Australian Postgraduate Award, a Research Excellence Award, and a University Medal. A visiting doctoral scholar at the Oxford Centre for Life-Writing at Oxford University, he has published both creative fiction and teaching and learning scholarship. He is a Senior Fellow of the Higher Education Academy (SFHEA) and has been a keynote speaker at multiple events and conferences, regularly sharing his insights on generative AI in higher education. He is also a member of Artificial Intelligence in Education at Oxford University (AIEOU), a research hub dedicated to exploring the potential of AI in education. Currently, he is working at UNSW College as an Education Specialist in Artificial Intelligence.

References
Almasri, F. (2024). Exploring the impact of artificial intelligence in teaching and learning of science: A systematic review of empirical research. Research in Science Education, 54(4), 977–997. https://doi.org/10.1007/s11165-024-10176-3
Bedford, J. (2024). AI in academic research and writing: Potentials, pitfalls, and possibilities. Kirby Institute, UNSW. Retrieved from https://www.kirby.unsw.edu.au/events/ai-academic-research-and-writing-potentials-pitfalls-and-possibilities
Bedford, J., Kim, M., & Qin, J. C. (2024). Confidence enhancer, learning equalizer, and pedagogical ally. In S. Beckingham, J. Lawrence, S. Powell, & P. Hartley (Eds.), Using generative AI effectively in higher education: Sustainable and ethical practices for learning, teaching and assessment (1st ed., pp. 9–18). Routledge. https://doi.org/10.4324/9781003482918-6
Chaudhry, M. A., & Kazim, E. (2021). Artificial intelligence in education (AIEd): A high-level academic and industry note 2021. AI and Ethics, 2(1), 157–165. https://doi.org/10.1007/s43681-021-00074-z
Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(22). https://doi.org/10.1186/s41239-023-00392-8
Lee, S. S., & Moore, R. L. (2024). Harnessing generative AI (GenAI) for automated feedback in higher education: A systematic review. Online Learning, 28(3), 82–104. https://doi.org/10.24059/olj.v28i3.4593
