“We should not let the tail wag the dog”. This is a rather unusual idiom, but one that defined the interview given by Nick Saville at Room for Discussion. As Director of Thought Leadership for Cambridge English, his role is to explore language learning and mastery through technology. By using automated systems for assessment, Saville seeks to spark a broader societal debate centered around the future of artificial intelligence in education. It is a conversation about technological solutions, unpredictable risks and the values we choose to uphold in the proliferation of knowledge.
However, to understand why “we should not let the tail wag the dog”, we must first ask what artificial intelligence means. Saville defines it as “the development of systems that have capabilities and behaviours similar to those of humans”. Both what these systems do and how they interact are thus framed in comparison to humans, with the intent of creating machines that could supplement or assist a person in the future.
In a practical context, artificial intelligence would handle tasks such as applying initial corrections to a text or providing instant feedback. Saville explains it would “take care of the heavy lifting” that teachers currently do. Examples include guiding a student through writing assignments, partially grading those assignments and, more ambitiously, helping to decide a student’s suitability for a selected programme and the feasibility of their study plan. In turn, this would allow for “hybrid teaching”, where machines fulfill basic tasks while educators engage in more complex activities that require subjective assessment.
Now that we understand artificial intelligence as “the tail”, Saville’s idiom becomes more telling. It concerns whether technology should act as a guiding force for future developments in education – the answer being a definitive “no”. Throughout the interview, the emphasis fell on humans as those who both teach and generate knowledge, with machines acting as a helping force rather than a force of takeover. However, for artificial intelligence to be of help, teachers must overcome the difficulties of adopting new technology, whether a lack of understanding or an unwillingness to change. Saville speaks of the issue as “change management”: instead of taking an innovative approach, technologies are often folded into the “old ways and old habits” of teaching. Interactive whiteboards and language laboratories are examples of tools that were ultimately pushed aside as teachers worldwide struggled to adopt them as teaching methods.
When “change management” fails, the question becomes who bears the loss. When asked, Saville pointed to children as those most at risk in these “field studies”, where attempts are made to incorporate technology into education. Nonetheless, he adds that we must think deeply about such topics so that potential benefits are not shut out by an inherent fear of risk. A societal debate must take place, supported by educational research that addresses people’s doubts and fears and validates the positive effects of technology. According to Saville, this would require setting up guidelines for ethical artificial intelligence, where people ask the right questions in advance.
These guidelines would be centered around the well-being of children rather than the pursuit of commercial gains or the simplification of teaching. Doing so would require engaging different stakeholders in the discussion – not only technologists focused on the latest and greatest innovations but also those native to the world of education. Thus, Saville explains that creating guidelines that successfully incorporate educational tasks into artificial intelligence could be both gradual and costly, imposing limits on the ways the machines are trained.
Nevertheless, there are many concerns regarding such an investment, as artificial intelligence could exhibit a lack of impartiality. These issues already occur in ordinary circumstances, when teachers are the sole evaluators in the education process, but Saville believes the topic is even more salient in the case of automated systems. Limiting bias means controlling the inputs from which a machine learns, ensuring that data is collected and used impartially. He believes the only way to do this is through “trial and error”, so long as the “error” element is not too grave. It would require those developing machines for education to continuously learn from past mistakes while keeping the human in the loop. Hence “co-evaluation”: the process in which technology is weighed against established societal norms and behaviours. Without it, the consequences of artificial intelligence cannot be adequately understood, risking a negative technological spillover.
Nonetheless, Saville is convinced that progress should not be halted while a social contract based on “co-evaluation” is developed. Artificial intelligence systems improve when they exist and work agilely, addressing and solving problems. The more data collected in this process, the better it can be analyzed, reinforcing those aspects of a technology that work while discarding those that do not. According to Saville, the only way to achieve this is by preparing people through awareness and understanding, as their fears about artificial intelligence are often closer to science fiction than reality. By contrast, those in the fields of computer science and education are more concerned with creating an ethical framework that can be communicated to children and parents, so that they embrace technology without fear.
When asked how such an ethical framework would be reflected in policy, Saville emphasizes that “context is king”. The specific circumstances of teachers and students must be understood to clarify the issues to be broached. Moreover, policymakers should realize that working with thought leaders in academia and industry is the only way to produce a complete solution. Saville believes the priority should be a policy that “engages, informs and inquires” by reaching out to those currently being educated. Without a good conversation, an answer cannot emerge; and even if one did, implementing it would require the support of the next generation of young minds.
Returning to the idiom of the “tail wagging the dog”, we now understand the place of artificial intelligence in education. It is not a substitute for teaching, nor a way of creating new knowledge – it is a tool like any other. Saville tells us to question it and “co-evaluate”, to establish a social contract and engage not just with the technology itself but with the guidelines and policy it follows. Whether it will improve education remains unclear, but no matter the outcome, it is a technology that should follow teaching, not the other way around – “we should not let the tail wag the dog”.