
OpenAI’s Sora Model Raises Questions About Training Data Source: An Investigation into Mira Murati’s Comments

OpenAI’s breakthroughs in artificial intelligence create both excitement and uncertainty. The recently announced Sora model sits at the center of that uncertainty. The Wall Street Journal’s interview with Mira Murati left significant questions about the source of Sora’s training data unanswered. Murati, OpenAI’s chief technology officer, avoided giving clear answers to questions about Sora’s data sources and glossed over the issue with vague statements.

Mira Murati’s Statements Are Confusing

The ambiguity is worrying, especially for those seeking clarity on where Sora’s training data comes from. Murati’s statement that “we used public data and licensed data” actually raises many questions. What public data was used? From what sources was the licensed data obtained, and what types of data does it cover?

Murati gave vague answers when asked about the data behind Sora, the company’s model that can produce video from text instructions. Asked how the $80 billion company developed its new model, Murati answered: “We used public data and licensed data.”

The Journal’s Joanna Stern asked whether Sora was trained with data from social media platforms such as YouTube, Instagram, or Facebook. “Actually, I’m not sure about that,” Murati replied, adding:

“You know, if they were public – if they were available for public use. But I am not sure. I’m not confident about this.”

The question of whether Sora was trained on data from social media platforms therefore remained unanswered. Murati’s response of “I’m not sure” clearly reflects a stance that is far from transparent. Clarifying the source of the data used, and how that data is handled, is extremely important both for fair use and for society’s security.

Another noteworthy point is OpenAI’s partnership with Shutterstock. Murati’s explanations regarding the source of the data are unfortunately not fully satisfying. Whether the data is publicly available or licensed matters, but it should also be clarified how, and for what purposes, that data is used.

There Are Situations That May Cause Serious Concerns

AI models are built on training data, and the accuracy, reliability, and provenance of that data are among the critical factors that determine a model’s success and social impact. The uncertainty surrounding the training data of a high-profile model such as Sora can therefore raise serious concerns both within the industry and in society.

A greater emphasis by OpenAI on transparency and accountability in innovations such as the Sora model could both increase trust and reduce concerns. In conclusion, while it is important to advance in the world of artificial intelligence, this progress needs to remain compatible with society’s interests and values.

Stepping through OpenAI’s door can feel like opening a door to the future. Lately, however, that path has been blocked by a series of legal problems. Recent developments concerning Murati’s management of some of OpenAI’s key projects have both elevated and called into question the company’s name.

Shaken by the dismissal of its CEO, Sam Altman, in November last year, the company fell into a temporary leadership vacuum. During this critical period, Murati took the helm as interim CEO. The management change attracted attention not only within the company but also in the outside world.

Serious Criticisms Are Coming

OpenAI’s AI models have faced severe criticism over the accuracy and use of their training data. In July 2023, well-known authors including Sarah Silverman, Richard Kadrey, and Christopher Golden filed a lawsuit accusing OpenAI of copyright infringement, claiming that ChatGPT produced summaries based on their copyrighted works without permission.

OpenAI’s legal turmoil was not limited to this case. In December, The New York Times filed a similar lawsuit against Microsoft and OpenAI, accusing the companies of using the newspaper’s content without permission to train AI chatbots. In California, another class action lawsuit was filed against OpenAI, alleging that it collected private user information from the internet without permission to train ChatGPT.

The allegations in these lawsuits show that OpenAI must weigh not only technological progress but also its ethical and legal responsibilities. As a giant at the forefront of the artificial intelligence world, how OpenAI will resolve these challenges is eagerly awaited. The fact remains that, while artificial intelligence shapes the future, progress on that journey seems impossible without the guidance of the law.

What I would say as the writer of this piece is that such negative developments have the potential to seriously affect artificial intelligence coins. Any negative sentiment toward a giant company like OpenAI may spill over onto other AI coins, especially WLD.

Disclaimer: The information contained in this article does not constitute investment advice. Investors should be aware that cryptocurrencies carry high volatility and therefore risk, and should carry out their transactions in line with their own research.

Dr. A.S. Mesut İnan, 17 March 2024, 01:00
