San Francisco /
The Californian start-up OpenAI, which at the end of 2022 successfully launched ChatGPT, an interface capable of generating all kinds of text on demand, presented GPT-4 on Tuesday, a new version of the generative artificial intelligence technology that powers the famous chatbot.
“GPT-4 is a large multimodal model, less capable than humans in many real-life scenarios, but as good as humans in many professional and academic contexts,” the company said in a statement.
“For example, it passed the bar exam with a score in the top 10 percent. The previous version, GPT-3.5, scored at the level of the bottom 10 percent,” it added.
ChatGPT has aroused a lot of enthusiasm but also controversy, since it is freely available and used by millions of people around the world to write essays, lines of code, and advertisements, or simply to test its capabilities.
OpenAI, which has received billions of dollars from Microsoft, has established itself as a leader in generative AI not only with its text generation models but also in image generation with its DALL-E program.
Its CEO, Sam Altman, recently explained that he is now working on so-called “general” artificial intelligence, that is, programs with human cognitive abilities.
“Our mission is to ensure that general AI, AI systems smarter than humans in general, benefits all of humanity,” he wrote on the company’s blog on February 24.
Multimodal capabilities are a step in that direction.
Unlike previous versions, GPT-4 is endowed with vision: it can process not only text but also images. However, it only outputs text.
It will be available in ChatGPT, though for the moment without the ability to submit images.
OpenAI also notes that despite its capabilities, GPT-4 has “similar limitations” to its predecessors: it is “still not completely reliable (it makes up facts and makes reasoning errors).”
The company said it has hired more than 50 experts to assess new risks that could arise, for example in cybersecurity, in addition to those already known (generation of dangerous advice, faulty computer code, and false information).