
OpenAI Unveils Sora: A Powerful AI Tool with Implications for Deepfake Videos in Every Industry

OpenAI, the company behind the popular ChatGPT, has unveiled its latest generative artificial intelligence tool called Sora. This groundbreaking AI application has the potential to accelerate the proliferation of deepfake videos and is expected to have implications for virtually every industry. Sora takes written prompts and turns them into original videos, making it a powerful tool that has left AI experts “terrified.”

Oren Etzioni, the founder of TruMedia.org, a nonprofit organization dedicated to fighting AI-based disinformation in political campaigns, expressed his concerns about the rapid evolution of generative AI tools. He believes that the rise of deepfake videos, combined with the influence of social networks, poses a significant threat to democracy. Etzioni particularly emphasized the timing of this development, as it coincides with the upcoming 2024 presidential election.

OpenAI shared a teaser of Sora’s capabilities, showcasing its ability to create sophisticated 60-second-long videos with highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. However, the tool is not yet publicly available. OpenAI has limited its use to “red teamers,” visual artists, designers, and filmmakers who will test the product and provide feedback before its wider release. The company is also committed to ensuring safety and preventing the creation of misinformation and hateful content through Sora.

According to Etzioni, technology has outpaced the establishment of checks and balances for tools like Sora. He highlights the need for regulations and safeguards to guide the responsible use of AI. Etzioni predicts that Sora or a similar technology from OpenAI’s competitors will be released to the public in the coming months.

The implications of Sora’s capabilities extend beyond political campaigns and celebrities. Dr. Andrew Newell, chief scientific officer at identity verification firm iProov, warns that malicious actors could use Sora to generate high-quality video deepfakes for fraudulent purposes. This puts the onus on organizations, especially banks that rely on video-based authentication, to develop their own AI-powered tools to protect consumers against such threats.

Content creators, including filmmakers and media professionals, are expected to be the most immediately affected by Sora’s capabilities. Reece Hayden, a senior analyst at tech intelligence company ABI Research, believes that multimodal models like Sora could be a game changer for marketing and creative professions, delivering significant cost savings for film and television makers. That, in turn, could accelerate the shift toward AI-generated content in place of human actors.

Sora’s accessibility and ease of use also open up possibilities for users to develop their own content based on prompts. Even major players like Netflix could enable end-users to create their own choose-your-own-adventure-style media using this technology.

As the implications of Sora become more apparent, it is crucial for organizations and individuals to stay vigilant and develop strategies to mitigate the risks associated with deepfake videos. The responsible use of AI tools like Sora requires a balance between innovation and safeguarding against potential harm.

About the Author:
Megan Cerullo is a New York-based reporter for CBS MoneyWatch. She covers a wide range of topics, including small business, workplace, healthcare, consumer spending, and personal finance. Megan regularly appears on CBS News Streaming to discuss her reporting.

