X starts using user data to train AI. What are the risks?

X (formerly Twitter) has joined the list of social networks that use user data to train artificial intelligence (AI) tools. Recently, users were “invited” to accept changes to the platform’s usage policy. Among the new permissions, the social network asked to analyze content posted on the network in order to train AI models.

On the platform controlled by billionaire Elon Musk, the information collected from users is used to train Grok, the network’s chatbot. According to CNN, the chatbot has already been criticized several times for circulating fake news about the North American election between Kamala Harris and Donald Trump, and even for generating violent and graphic images of politicians.

Brazil’s National Data Protection Authority has already announced that it will notify the platform managed by Elon Musk. X thus joins Meta (owner of Facebook, Instagram and WhatsApp), Google and Microsoft on the road to training AI with user data.

“(X’s decision) is not surprising at all. Stanford’s Transparency Index shows that almost all of the largest companies fall far short of using technology responsibly. X is just one more entering the AI race to see who has the biggest and best models,” Helena Moniz, head of the Ethics Committee for the Responsible Use of AI, tells PÚBLICO.

‘Ideological DNA’

What exactly does this data collection mean? It means that companies can use photos, text, captions, and other content shared by users to train their AI tools. Google has Gemini, Meta has Llama 3, and Apple recently presented the Apple Intelligence package, which brings ChatGPT to the Siri assistant.

“Little is known about the state of AI literacy in Portugal. People often read and accept everything on social media without actually knowing the implications of how their data will be used. People express a lot of who they are, their personality, their emotions and their political views on social media,” warns Helena Moniz.

Although the professor recognizes the benefits that AI can bring to a variety of areas, she knows firsthand some of the risks of applying these systems to social networks.

“I work in a systems and computer engineering lab, and we have PhD students working with data from large language models. It is possible to extract data from speech, text or images and infer people’s personality traits, sexual orientation and beliefs. We can easily profile anyone. You could say that our ‘ideological DNA’ is encoded in this data,” she elaborates.

This is not a recent phenomenon. In 2018, news spread around the world that Spotify could gauge a user’s emotional state with high precision. Data about the songs listened to allowed the application to tell whether the user was sad, happy, or in some other emotional state. This information was especially valuable to marketing companies, which could tailor their ads to the mood of their audience.

EU pays attention to risks

The European Commission recently appointed a team of 13 people to design a code of conduct for AI models. European authorities are concerned about the lack of transparency shown by the tech giants regarding the data used to train these tools.

“There are several key areas, including transparency, risk, and data copyright issues. It is all still very confidential and only recently got under way, but the idea is to build a code of best practices and inform people about it,” explains Helena Moniz.

In August, the European Union’s new AI Act came into force. One of the guidelines of this EU law requires companies to be fully transparent about the user data they use. Anyone who fails to comply with this obligation risks a fine of up to €35 million, or 7% of annual turnover.

In June, Apple decided to postpone the launch of its package of AI features in the European market. At the time, the company said the delay was due to uncertainty around the application of new European Union regulations, namely the Digital Markets Act.
