Facebook Announces ChatGPT Competitor and Generative AI Team

Faced with the enthusiasm around generative artificial intelligence, Meta is opening access to its LLaMA language model to facilitate research in the field.

Meta on Friday unveiled its own text-generating artificial intelligence (AI) model, similar to ChatGPT, opening it up for now to researchers so they can address the risks posed by these new technologies.

The goal of the new language model, called LLaMA, is “to help researchers advance their work” on the subject, in particular because it does not require very large infrastructure to study, the company says in its announcement.

“We are building a new top-level product group at Meta focused on generative AI to energize our work in this area,” confirmed Mark Zuckerberg in a Facebook post on Monday.

Uses in WhatsApp and Instagram

The Meta chief says the short-term focus will be on creative and expressive tools. Further out, Mark Zuckerberg says he wants to develop artificial intelligence “personalities” that can help people in various ways.

The group also plans to experiment with text and images. These projects could lead to new features in the WhatsApp and Messenger messaging apps, as well as to filters and advertising formats on Instagram.

The launch in November of ChatGPT, the conversational chatbot from the start-up OpenAI, shook up the AI world by letting the general public see how new “language models” can generate, within seconds, text on a given topic or an explanation of a complex subject. But these models also pose risks, whether factual errors, bias, or data protection issues.

A test version of Microsoft’s Bing search engine, developed in partnership with OpenAI, quickly produced erratic responses, with the program notably issuing threats and expressing a desire to steal nuclear launch codes.

Understanding language models

“Further research is needed to address the risks of bias, toxic comments and hallucinations,” says Meta, the parent company of Facebook and Instagram.

But it takes significant resources, especially in terms of computing power, to train and run these language models.

This “limits the ability of researchers to understand how and why these large language models work, hampering efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential to generate false information,” the company notes. This is why Meta has developed several versions of LLaMA, requiring more or fewer resources.

While OpenAI and Microsoft limit access to the technologies that power their AI, Mark Zuckerberg’s company has decided to share how it built LLaMA so researchers can “more easily test new approaches to limit or eliminate” these problems.
