by Leon Furze
With the startlingly rapid pace of development in Artificial Intelligence over the last few months, it's easy to imagine that education will be left behind. To deal with the implications of these technologies, we need to strike a balance between banning them outright and adopting them uncritically.
The Year of AI
Since the release of OpenAI’s ChatGPT in late 2022, we have seen a rapid increase in the number of Artificial Intelligence apps and services. Although much of the technology behind ChatGPT existed prior to its release, the simple addition of a user-friendly chatbot interface has made AI technology more accessible and thrust AI into the mainstream.
By January 2023, hundreds of thousands of users were logging into ChatGPT, including students and teachers. And the development didn't stop with OpenAI's chatbot. Forbes has already labelled 2023 a defining year for AI and the future of work. In the space of only a few months we have seen Microsoft integrate AI into Bing chat, rumours of Google's powerful LaMDA model, and OpenAI's release of the ChatGPT API to developers. This last development has led to companies from Snapchat to Spotify integrating ChatGPT-based services into their own apps.
It's easy to feel like the AI genie is out of the bottle. In education, we have faced the rapid adoption of digital technologies before, particularly in the last few years. During remote learning, we quickly pivoted to online platforms and Learning Management Systems, and teachers across the world had to adapt materials and teaching methods to online and hybrid spaces. In a way, this latest technological development feels like more of the same: a sudden change that educators will just have to "deal with".
Determinism and solutionism
The idea that technologies like AI have the power to shape and transform society has a name: technological determinism. It’s a problematic idea on a number of levels, not least of all because it suggests that we are powerless in the face of these technologies. The suggestion that teachers will just have to “deal with” AI is deterministic: it disempowers educators and places all of the power in the hands of the organisations who control the technology.
It also disenfranchises our students. Technological determinism would have us believe that our students must learn to use AI tools or risk being denied future careers and the opportunity to contribute to society. Like the genie that cannot be stuffed back into the bottle, technological determinism suggests that AI is inevitable and inescapable.
A closely related idea, and one which plays out regularly in education, is technological solutionism. Coined by Evgeny Morozov in his book To Save Everything, Click Here, this is the idea that, given enough of our data, technology like AI can solve all of our problems. It’s a popular narrative with the organisations behind AI: large organisations like Microsoft and Google as well as smaller start-ups like OpenAI. In education, we’ve seen technological solutionism in wave after wave of edtech. From apps that promise to reduce workload and improve assessment and reporting, to claims that ChatGPT will revolutionise the way we work and communicate in schools, there is a persistent narrative that technology like AI can rid us of some of the most cumbersome and time-consuming aspects of teaching.
We should resist technological determinism and solutionism in education. Ultimately, both are narratives that serve the organisations who develop AI more than teachers and students. One response from education has been to ban or block access to ChatGPT and other AI technologies. Unfortunately, there is no evidence that this will work, particularly as the number of apps and services increases. Instead, schools should find a way to strike a balance.
Finding the middle ground
The decision to ban or allow ChatGPT has already divided states, and has been as contentious in secondary education as in tertiary. It has become clear that there is no one-size-fits-all approach to dealing with AI in education. Instead, schools and educators need to take a nuanced approach that considers the specific needs of their students and the capabilities – and limitations – of the AI technology in question.
One possible approach is to focus on developing critical digital literacy skills in students. This involves teaching students how to evaluate the accuracy and reliability of information they find online, including information generated by AI. By teaching critical digital literacy, we empower students to navigate the complex and ever-changing landscape of AI technology.
Another option is to take a cautious, measured approach to AI without an outright ban. This means carefully evaluating the AI tools and services that are introduced into the classroom, and only using them if they can be shown to be effective and beneficial to student learning. This requires educators – especially school leaders – to be knowledgeable about AI and to work closely with developers to ensure that any AI tools used in the classroom are aligned with educational goals and values. It will also require clear policies and processes, which may augment existing digital technology policies.
The AI genie might be out of the bottle in education, but we need to find a way to deal with the implications of this technology. Rather than simply banning or uncritically adopting AI, we need to find a middle ground that empowers educators and students to navigate the complexities of this contentious and powerful technology. Ultimately, AI will find its way into our classrooms one way or another. We need to take control of the situation and ensure that AI is used in a way that supports our educational goals and values.
Leon Furze is a PhD student, experienced educator, consultant and educational writer. He is author of 'Practical Reading Strategies' and Jacaranda's new 'English' series of textbooks, and a VCE assessor. Leon provides professional learning and strategic planning for curriculum, literacy, and digital technologies.