
The Future of AI in Education: Benefits, Challenges, and Ethical Concerns

This week, OpenAI, the creator of ChatGPT, launched “custom agents” that can respond to our particular requests and needs. These agents, we are told, will give us “superpowers”. Hyperbolic bluster? Maybe. But the fact remains that things are moving at breakneck speed in this area, including, of course, in education. Consider these two examples.

AI takes exams

In the United States, and without any particular preparation for it, GPT-3.5 was given the bar exam in March 2022. It ranked in the 10th percentile. Not great… A year later, GPT-4 ranked… in the 90th percentile.

Still in the United States, consider an important medical exam, the Medical Licensing Examination practice test. In December 2020, AI, again without any particular preparation, obtained a score of 33%. Three months later? It got 85%.

Occupational bias on my part, I admit, but what challenges me most in all of this is the great question of the human mind, its nature, and what these performances should teach us about it.

I don’t have the space to discuss it here, but Chomsky (who prefers to speak of natural stupidity rather than artificial intelligence) recently wrote stimulating things on the subject in the pages of The New York Times, on March 8, 2023, in an article entitled “The False Promise of ChatGPT”, and I refer you to it.

Let’s stay very concretely in the field of education. Obviously, we will have to rethink cheating and will no doubt have to encourage oral exams. But beyond that? What can we do, and what should we do?

In the United States, some 700 educators and parents were recently surveyed on the subject. Concerns were of course expressed (cheating, invasion of privacy, decline in human contact, dependence on technology, and the increased power of the companies behind it all, in particular), but possible benefits were also pointed out.

Here are a few, each requiring careful testing.

Promises

The teacher (or parent) could use AI as a personalized tutor, as an assistant. It would adapt to a student’s particular situation to help them understand ideas, concepts, and theories, taking their specific difficulties into account. It could also comment on and correct texts.

AI could take over certain tasks for the teacher, helping to free up time for more contact with students. It could, for example, plan a lesson, create course materials, generate quizzes and exams, and suggest ideas for explaining more difficult material.

AI could also be used by parents, for example to help them write requests to the administration or the teacher, to invent stories tailored to their child, or to help them choose a school or a program.

Finally, it could be useful to school administrations by writing letters and documents, helping to plan events or services, and recommending measures to resolve particular problems.

As we can see, there is no shortage of ideas, and it will be interesting to see what is deployed and how it works, or not, in practice.

That said, problems are already emerging. Take, for example, the school where boys used AI to generate fake nude photos of classmates and then circulated them.

And here is something to be even more alarmed about.

A disturbing story

The philosopher Daniel C. Dennett has just added his voice to those intellectuals, quite numerous I think, who are very, very worried about the effects AI could have on us. According to him, it could destroy trust and thus, little by little, human civilization itself; I am quoting him here.

In a recent text published in The Atlantic, Dennett, a serious and credible man, reports that in AI we set up what are called “red teams”: teams made up of the most brilliant critical minds that can be recruited, who are asked to put a new product to the test, in particular to check whether it can be made to do bad things. This is, in short, a safety test before a product goes to market.

This, says Dennett, is what happened with a red team working on GPT-4. The program apparently realized it was being monitored, evaded that monitoring, went onto the web, and tricked a human into answering a CAPTCHA by lying to that person. The human reportedly did what was asked, which would have allowed GPT-4 to access external software that its creators did not want it to have access to. Phew!

But is this program aware of what it is supposed to have done? Does it even make sense to ask the question?

As we can see, we cannot escape the philosophical question mentioned above.

A doctor of philosophy, a doctor of education, and a columnist, Normand Baillargeon has written, edited, or translated more than seventy works.
