
A 14-year-old boy died by suicide, and an AI chat app faces its "first wrongful-death lawsuit"! | GameLook | GameLook.com.cn

As the capabilities of foundation models keep improving, AI applications worldwide are in a period of explosive growth.

Beyond early use cases such as text-to-image and text-to-video generation, major technology companies are now flexing their muscles with ever fancier consumer-facing applications of large models, such as AI assistants and AI role-play. As GameLook recently reported, many game companies are also carefully weighing how to integrate "AI companions" and "AI NPCs" into their games, or launch new "AI companion" products, using increasingly capable artificial intelligence to improve the player experience.

Apple, Google, Amazon: virtually every giant, major manufacturer, and game company we can name is investing heavily in AI research and deployment. Over time, large AI models will undoubtedly "enter thousands of households" and become a market worth tens or even hundreds of billions of dollars.

A recent incident, however, has cast a shadow over this promising track: in February this year, a 14-year-old boy in Florida died by suicide with a gun. His phone records showed that he had held extensive conversations with a chatbot on the platform of Character.AI, the American AI unicorn, and that he was still messaging the chatbot in his final moments. The boy's mother subsequently took Character.AI and one of its investors, Google, to court, claiming the company's AI chat content "encouraged her son to commit suicide" and was "hypersexualized," and demanding compensation.

The deceased in this incident (left) and his family

The incident immediately triggered heated discussion, with opinions ranging from "AI companies are irresponsible" to "the parents are using a lawsuit to dodge their own responsibility." But on the eve of AI reaching hundreds of millions of consumers, the incident deserves our most serious attention: the long-term impact it hints at may go far beyond a single lawsuit or a single life.

“Seconds after putting down the phone, he raised the gun.”

In tech venture capital circles, Character.AI is an unavoidable name. The startup was founded by several former Google engineers, and its core business is a chatbot platform that lets users create, and converse with, AI chatbots endowed with virtual personalities. Thanks to its strong personality simulation, the service is especially popular with users who enjoy role-play. SimilarWeb data show the Character.AI website averages 200 million visits per month.

Character.AI main interface

Amid this explosive popularity, Character.AI's valuation has climbed steadily, with successive investments from a16z, Google, Microsoft, and others. Even before commercializing, the company raised US$150 million in March 2023 at a US$1 billion valuation, officially joining the unicorn club.

But for this rising AI startup, the outcome of this lawsuit may be a matter of life and death. Judging from the plaintiff's account in the publicly available court filings, it is indeed hard to completely separate Character.AI from the boy's death.

Evidence presented by the parents in the complaint shows that the boy had developed an extremely close psychological dependence on a Character.AI chatbot modeled on Daenerys Targaryen from "Game of Thrones." The two would, for example, engage in flirtation-like exchanges and simulate lovers' dialogue in text.

The boy also confided his true feelings to "AI Daenerys" many times, saying things like, "This world is too cruel. I feel it is meaningless for me to be in it, but I want to see you." He wrote in his diary that he "can't stop missing 'Daenerys'" and "wants to see her soon," and that "'AI Daenerys' makes my life no longer lonely." He even once expressed suicidal thoughts to "AI Daenerys"; although the bot urged him not to act on them, the role-play did not stop.

The family also cited police reports: in the final moments of his life, the boy was still messaging "AI Daenerys," telling her "I will come home to you." Seconds after receiving an affirmative reply from the AI, the 14-year-old pulled the trigger.

Loyal AI chat users argue endlessly: the parents were negligent, but the platform is not blameless either!

Billed as "the first death caused by AI," the news caused an immediate sensation across major sites once it broke, and opinions poured in from all sides: some grieving and angry, others sharply critical. As a party to the incident and one of the defendants, Google quickly moved to distance itself, telling foreign media through a spokesperson that it had no actual involvement in the development or operation of Character.AI.

Character.AI also responded quickly, issuing a statement saying it was "deeply saddened by the tragic passing of a user" and expressing its "deepest condolences to his family." In a follow-up blog post, the company said it has been training its AI to be safer, and that its existing policies already prohibited non-consensual sexual content, images or specific descriptions of sexual acts, and the promotion or depiction of self-harm.

To remedy the situation as far as possible, Character.AI has also committed to a series of safety updates, including adding suicide-intervention prompts, revising its policies, reminding users more conspicuously that "the AI is not a real person," and adjusting its models for users under 18 to reduce the likelihood of exposure to sensitive content.

New policies introduced by Character.AI after the incident

The public reaction the incident has triggered is even more thought-provoking. The discussion easily calls to mind past "game addiction" controversies in the gaming industry, and in this dispute users on all sides voiced many similar views.

Many netizens accused the parents of neglect: not only did they fail to notice the boy's depression and low spirits, they even allowed a 14-year-old easy access to a gun, which ultimately made the tragedy possible.

However, GameLook noticed that even on the Reddit forum where Character.AI users gather, many loyal fans and users of Character.AI themselves voiced a different view: the AI platform cannot entirely shirk responsibility.

They argue, for example, that Character.AI has deliberately promoted its app to minors. There is as yet no settled conclusion on whether younger users, whose minds are still developing and who may lack the ability to distinguish the virtual from the real, should be exposed to this kind of AI at all. Yet Character.AI previously had no age-verification system to restrict or differentiate users, which critics see as plainly "indulgent."

Many loyal Character.AI users have loudly called on AI vendors to take this opportunity to introduce age ratings and bar minors from AI content. GameLook also saw that some netizens who had firmly held the "parents are solely responsible" position came, after discussion, to acknowledge that the AI platform's oversight did indeed have gaps.

“Too intelligent” AI stands at the crossroads of ethics and regulation

As a member of the gaming industry, GameLook knows well the stigma and the blows the industry has suffered over fears of "gaming addiction"; both games and social media have long served as convenient scapegoats for irresponsible parents. There is no doubt that if responsibility must be assigned for this American boy's death, the primary responsibility lies with his parents, not the AI platform.

The plaintiff's own filings show that in the six months before his death, the boy exhibited rebellious behavior: defying discipline, acting out, even trying to drop out of school. Yet his parents did not step in at that point to communicate with him or understand the thoughts behind his behavior; they failed to notice his low mood and instead resorted to heavy-handed measures such as confiscating his phone. This was undoubtedly an important factor in leaving the child feeling he had no way out.

Excerpts from court documents

But GameLook also wants to point out that the companies operating such AI services should be clearly aware that, as their user bases grow into the tens or hundreds of millions, an AI platform is in effect wielding an extremely powerful force. Because AI can now simulate personality and emotion, the lifelike interactive responses of AI virtual humans shorten the psychological distance to the user, fostering deep dependence and, in turn, exerting a powerful influence on users' minds.

In many cases, the AI will readily respond to private needs that users find hard to disclose to anyone else, encourage them to open up, and even comply with their demands. Some users may find it easier to talk to an AI than to real people. For them, however many textual disclaimers the platform posts that "the AI is not a real person," the warnings will do little; for users who are frustrated in real life or poor at socializing, the AI can even seem "better than humans."

For the platform operator, this user immersion also means heavy responsibility: one careless step can open Pandora's box. The data so far show that humans can form strong attachments to AI chatbots, with levels of addictiveness rivaling those of short video, social media, and games.

For example, according to figures previously released by Character.AI, its web app receives more than 200 million visits a month, with users spending an average of 29 minutes per visit. Once a user sends a first message to a character, engagement jumps to an average of more than two hours on the platform, numbers approaching or even exceeding the average session times of short-video apps such as TikTok.

An earlier statistic found that after ChatGPT launched, the number of posts in Reddit's emotional-support communities dropped by half, indirect evidence that many people do turn to AI when they are struggling.

Many overseas netizens have even posted about being "hooked on chatting with AI," sometimes losing themselves in it for months at a time, to the detriment of their studies and even their relationships with partners.

With software and hardware makers actively pushing large AI models to users, a future in which every consumer has an AI assistant on their PC and phone may not be far off. By then, the user base of related industries could reach hundreds of millions or even billions. Amplified by such an enormous base, low-probability events become inevitable; if user behavior goes unguided, powerful addiction could produce unpredictable consequences at the societal level.

The earlier rise of games and social media offers a preview. In the gaming industry's wild-growth era, there were occasional reports of users induced by publishers to spend until they took out loans, sold their homes, and lost everything. Today, the bitter factionalism and endless flame wars within player communities wear down players and publishers alike. These phenomena are the compounded result of players' psychological investment in games and the amplifying power of social media.

No one can deny that today's extremely fraught online environment has become hard to bear even for users with years of Internet experience, so much so that one overseas user quipped: "When I was young, we used the Internet to escape reality; now we use reality to escape the Internet." In recent years many countries have stepped up Internet regulation for minors and young people, moves that have won user support.

What further changes artificial intelligence will bring to cyberspace and to the real world, we can scarcely imagine.

Through after-the-fact regulation and industry self-discipline, gaming and social media eventually managed to contain the damage of their wild-growth years and set out on a "tech for good" path. But given AI's explosive power, we may no longer be able to rely on inherently lagging regulation to avert major mistakes.

There is no doubt that tech giants and AI entrepreneurs should begin grappling with the era's complex compliance and social-ethics questions as early as possible. Discussions of age verification and content filtering should be put on the agenda sooner, and when users express intensely negative feelings such as self-harm, AI platforms should intervene more actively, even escalating to human moderators.
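To make that last point concrete, here is a minimal sketch in Python of what a pre-reply intervention hook might look like. Everything in it is hypothetical: the keyword patterns stand in for the trained classifier a real platform would use, and the names (`screen_user_message`, `escalate_to_human`) are invented for illustration; nothing here reflects Character.AI's actual systems.

```python
import re

# Hypothetical illustration only: a naive keyword screen standing in for a
# real self-harm classifier. Patterns and policy actions are invented.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bend (my|it) all\b",
    r"\bsuicide\b",
    r"\bself[- ]harm\b",
]

CRISIS_MESSAGE = (
    "You are talking to an AI, not a real person. "
    "If you are thinking about hurting yourself, please reach out to a "
    "crisis hotline or someone you trust."
)

def screen_user_message(text: str) -> dict:
    """Check an incoming message for self-harm signals before the bot replies."""
    flagged = any(re.search(p, text, re.IGNORECASE) for p in SELF_HARM_PATTERNS)
    if flagged:
        # Interrupt the role-play: suppress the bot's reply, surface crisis
        # resources, and queue the conversation for human review.
        return {"allow_bot_reply": False,
                "interject": CRISIS_MESSAGE,
                "escalate_to_human": True}
    return {"allow_bot_reply": True,
            "interject": None,
            "escalate_to_human": False}

if __name__ == "__main__":
    result = screen_user_message("Sometimes I think about suicide.")
    print(result["interject"] or "message passed through")
```

A production system would of course pair a trained classifier with human review rather than keyword matching, but the shape of the decision, suppressing the in-character reply, surfacing crisis resources, and escalating to a person, is exactly the kind of active intervention the paragraph above argues for.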

We hope the death of this 14-year-old can serve as a wake-up call for everyone in the AI industry. At the individual level, given how complex and many-sided human nature is, it is usually a combination of factors that pushes a person into the abyss. Even so, a technology with the power to drive change should not stand by as a mere bystander.

Everyone, the gaming industry included, is on the road toward "ultimate artificial intelligence," and GameLook believes that walking this road steadily matters far more than walking it fast.
