Don’t ban ChatGPT in schools, but teach with it


Recently, I gave a talk to a group of K-12 teachers and public school administrators in New York. The topic was artificial intelligence (AI), and how schools would need to adapt to prepare students for a future filled with all kinds of capable AI tools.

But it turned out that my audience cared about only one AI tool: ChatGPT, the buzzy chatbot developed by OpenAI that is capable of writing cogent essays, solving science and maths problems and producing working computer code.

ChatGPT is new – it was released in late November – but it has already sent many educators into a panic. Students are using it to write their assignments, passing off AI-generated essays and problem sets as their own. Teachers and school administrators have been scrambling to catch students using the tool to cheat, and they are fretting about the havoc ChatGPT could wreak on their lesson plans. (Some publications have declared, perhaps a bit prematurely, that ChatGPT has killed homework altogether.)

Cheating is the immediate, practical fear, along with the bot’s propensity to spit out wrong or misleading answers. But there are existential worries, too. One high school teacher told me that he used ChatGPT to evaluate a few of his students’ papers, and that the app had provided more detailed and useful feedback on them than he would have, in a tiny fraction of the time.

“Am I even necessary now?” he asked me, only half-joking.

Some schools have responded to ChatGPT by cracking down. New York City public schools, for example, recently blocked ChatGPT access on school computers and networks, citing “concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content”. Schools in other cities, including Seattle, have also restricted access. (Mr Tim Robinson, a spokesman for Seattle Public Schools, told me that ChatGPT was blocked on school devices in December, “along with five other cheating tools”.)

It is easy to understand why educators feel threatened. ChatGPT is a freakishly capable tool that landed in their midst with no warning, and it performs reasonably well across a wide variety of tasks and academic subjects. There are legitimate questions about the ethics of AI-generated writing, and concerns about whether the answers ChatGPT gives are accurate. (Often, they are not.) And I am sympathetic to teachers who feel that they have enough to worry about, without adding AI-generated homework to the mix.

But after talking with dozens of educators over the past few weeks, I have come around to the view that banning ChatGPT from the classroom is the wrong move.

Instead, I believe schools should thoughtfully embrace ChatGPT as a teaching aid – one that could unlock student creativity, offer personalised tutoring and better prepare students to work alongside AI systems as adults. Here is why.

The first reason not to ban ChatGPT in schools is that, to be blunt, it is not going to work.

Sure, a school can block the ChatGPT website on school networks and school-owned devices. But students have phones, laptops and any number of other ways of accessing it outside of class. (Just for kicks, I asked ChatGPT how a student who was intent on using the app might evade a schoolwide ban. It came up with five answers, all totally plausible, including using a virtual private network, or VPN, to disguise the student's web traffic.)

Some teachers have high hopes for tools such as GPTZero, a program built by a Princeton University student that claims to be able to detect AI-generated writing. But these tools are not reliably accurate, and it is relatively easy to fool them by changing a few words, or using a different AI program to paraphrase certain passages.

AI chatbots could be programmed to watermark their outputs in some way, so teachers would have an easier time spotting AI-generated text. But this, too, is a flimsy defence. Right now, ChatGPT is the only free, easy-to-use chatbot of its calibre. But there will be others, and students will soon be able to take their pick, probably including apps with no AI fingerprints.


