How can schools deal with artificial intelligence systems?
When ChatGPT launched last November, many schools felt as if they had been hit by a meteorite. In the middle of the school year, teachers were suddenly forced to confront a strange new technology that allows students to write college-level essays, solve complex problems, and excel on standardized tests.
Confusion and enthusiasm
Some schools responded, perhaps unwisely, by blocking ChatGPT and tools like it. But the bans did not work, because students could simply use the technology on their phones and home computers. As the school year progressed, most of the schools that had banned ChatGPT, Bing, Bard, and other tools retracted their decisions.
Now, as the new school year approaches, we have talked with teachers at all grade levels, administrators, and faculty about what they think of AI, and we found plenty of confusion and awe mixed with a great deal of curiosity and enthusiasm. It also turns out that teachers mostly want to know how they can use this technology to help students learn, rather than worrying about students using it to cheat.
Some basic tips
As tech journalists, we don’t have all the answers, especially regarding the long-term effects of AI on education, but we can offer some basic, useful short-term advice for schools trying to figure out how to handle generative AI this school year.
First, we encourage teachers, particularly at the high school and college levels, to assume that all of their students are using ChatGPT and other generative AI tools on every assignment and in every subject, except when they are supervised at school.
This assumption will not hold for every school, of course: many students will not use artificial intelligence, whether out of ethical concerns, because it is not useful for certain assignments, because they lack access to the tools, or because they are afraid of being caught.
But the assumption that everyone is using AI outside the classroom is likely closer to the truth than educators expect. (“You can’t imagine how much we use ChatGPT,” a Columbia University student wrote in a recent article.) Moreover, the assumption pushes teachers to find ways to adapt their teaching methods. Why would a teacher assign students an essay on a particular novel or writer, knowing that all of them, except perhaps the strictest rule-followers, will use AI to complete it? Why not replace that kind of assignment with other activities done in class, individually or in groups, especially now that ChatGPT has become as widespread among students as Instagram and Snapchat?
Second, schools must stop relying on AI detection software to catch cheaters. Dozens of tools on the market today claim to be able to detect writing generated by artificial intelligence, but none of them are reliable. These tools produce many false positives and are easily fooled by paraphrasing. Do you doubt it? Ask OpenAI, the maker of ChatGPT, why it discontinued its own tool for detecting AI-generated writing this year, and the answer is its “low accuracy.”
In the future, AI companies may watermark their tools’ output to make it easier to spot, or better AI detection tools may emerge. But for now, most AI-generated writing should be considered undetectable, and schools should put their time and budgets elsewhere.
Educational power
The third piece of advice, which will likely earn us plenty of angry emails from teachers, is this: spend less time warning students about the shortcomings of generative AI and more time exploring its strengths.
Many schools last year tried to scare students away from AI by warning them that tools like ChatGPT were unreliable and tended to generate nonsensical answers. Those criticisms may have applied to early AI chatbots, but they are less true of modern models, and savvy students now know how to get the best results by prompting the models with clear and precise instructions.
As a result, many students now understand generative artificial intelligence, and what it can do when used properly, better than their teachers do. Moreover, many of the warnings about the defects of generative AI that circulated last year already sound hollow this year, given that GPT-4 has succeeded in passing Harvard-level exams.
Alex Kotran, CEO of the AI Education Project, a nonprofit that helps schools adopt AI, said teachers should spend some time using generative AI themselves to appreciate its usefulness and how quickly it is improving.
“Most people still look at ChatGPT as a party trick,” Kotran said. “If you do not appreciate the depth of this tool, you will not be able to take all the other steps that are required.”
There are resources to help teachers catch up quickly on the development of artificial intelligence, and some teachers have begun collecting recommendations for their colleagues on dedicated websites that offer practical advice on generative AI for educators.
Personally, we believe there is no substitute for firsthand experience, so we advise teachers to start exploring ChatGPT and other AI tools themselves to sharpen their skills and catch up with many of their students.
For schools still grappling with generative AI, a final piece of advice: treat this year, the first academic year of the ChatGPT era, as a learning experience, and do not expect everything to go smoothly.
The flipped classroom
Artificial intelligence could change the classroom in many ways. Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School, believes the technology will lead more teachers to adopt a “flipped classroom” approach, in which students learn material outside the classroom and apply it inside, an arrangement that is also harder to cheat on. Other educators we contacted talked about exploring the idea of turning generative AI into a classroom partner, or into a way for students to practice their skills at home with the help of an AI-powered tutor.
Some of these experiments will fail, of course, and others will succeed, because we are all still adapting to this strange technology, and stumbles along the way are inevitable.
Students need guidance when it comes to generative AI, but if schools treat it as a passing fad, or as an enemy to be vanquished, they will miss an opportunity to help them.
Finally, Mollick concludes: “Many things are going to change, and for that reason, we must decide what we will do with artificial intelligence instead of resigning ourselves to defeat by it.”
The New York Times Service