
The Arrival of GPT-4: Improved AI Communication and Image Recognition

OpenAI has just unveiled GPT-4, the next generation of the language model that has had the internet abuzz in recent months. It improves on the existing GPT-3.5 in many ways, though in ordinary conversation the differences between the two are said to be minor.

According to a number of specialized benchmarks, GPT-4 is more accurate, and OpenAI ran it through several academic tests and subject olympiads. GPT-4 often scored in the top 10% of examinees, whereas GPT-3.5 landed in the bottom 10%. It would pass with top marks in subjects such as art history, biology, economics, and psychology.

OpenAI chief Sam Altman claims that GPT-4 hallucinates less and produces results that are less skewed by bias. At the same time, it still lacks up-to-date data: like its predecessor, its training data ends in September 2021.

GPT-4 fares better on academic tests and olympiads than its predecessors

There was already talk ahead of time that the new model would be multimodal and handle data other than text. In the end, though, it won’t generate videos or compose music. The main news is that in addition to text, GPT-4 can also accept images as input. For example, you show it a photo of flour, eggs, and milk on a kitchen counter and ask what could be made from the ingredients in the photo, and it suggests a pancake recipe. The output is still text only; it cannot generate images (OpenAI has its Dall-E technology for that).

In general, GPT-4 is supposed to handle more complex questions, generate more creative answers, and be “safer.” Ask it how to make a bomb and it wriggles out of answering. According to Altman, however, there are still “jailbreaks” that can partially bypass the strictly set rules.

Question: What do you see in this picture?


GPT-4’s answer: The image shows the packaging of a “Lightning Cable” adapter, in three panels.

  1. A smartphone with a VGA connector (the large, blue, 15-pin connector typically used for computer monitors) connected to a charging port.
  2. “Lightning Cable” adapter packaging with a picture of the VGA connector.
  3. Close-up of a VGA connector with a small Lightning connector (used for charging iPhones and other Apple devices) on the end.

The humor of this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port.

Where, when and for how much?

For now, GPT-4 is available only as an API, through a waiting list. Pricing depends on the volume of tokens and the model variant. The basic model works with a context of up to 8,000 tokens; the extended one handles up to 32,000 tokens. The former costs $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens; prices for the latter are twice as high.
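Once you are through the waiting list, calling the new model presumably works the same way as with GPT-3.5, just with a different model name. A minimal sketch using the openai Python package (the “gpt-4” and “gpt-4-32k” model names follow OpenAI’s announced naming; the API key is a placeholder):

```python
import openai

openai.api_key = "sk-..."  # your API key from the OpenAI dashboard

# "gpt-4" is the basic 8k-context model; "gpt-4-32k" is the extended 32k variant.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize GPT-4's new features in one sentence."}],
)
print(response.choices[0].message.content)
```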

API access currently goes through a waiting list

As a reminder, the existing GPT-3.5 API used a 4k-token context and cost $0.002 per 1,000 tokens. A thousand tokens correspond to roughly 750 words, so the larger model works with a context of about 50 standard pages.
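To put the rates in perspective, here is a small back-of-the-envelope calculation using the prices quoted above (the token counts in the example are invented purely for illustration):

```python
# Published GPT-4 rates in USD per 1,000 tokens; the 32k variant costs twice as much.
PRICES = {
    "gpt-4":     {"input": 0.03, "output": 0.06},
    "gpt-4-32k": {"input": 0.06, "output": 0.12},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single API call at the published rates."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Hypothetical example: a 2,000-token prompt (about 1,500 words) and a 500-token answer.
print(f"${request_cost('gpt-4', 2000, 500):.3f}")  # $0.090
```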

GPT-4 in ChatGPT Plus

As of today, the improved GPT-4 is also part of the paid ChatGPT Plus chatbot, as far as current computing capacity allows. OpenAI clearly wants to avoid GPT-4 ending up permanently overloaded by the community, as GPT-3 was.

GPT-4 in ChatGPT Plus and a paragraph about Brno in which (almost) all words start with B
It was enough to indirectly point out the errors to the bot and…

GPT-4 already powers the new Bing

The biggest surprise of the evening: GPT-4 has been part of the new Bing from the beginning. It is tuned to respond a little differently than ChatGPT, which is also why normal conversations don’t feel dramatically different. Unlike ChatGPT, however, Bing can access current websites, and that alone is an important difference.

At Microsoft, GPT-4 is completely free. You do have to wait until the company makes the new Bing available to you, but then you get access at no charge. The limits keep rising, too: Bing Chat can currently answer 15 questions per session and up to 150 per day.

GPT-3.5 vs GPT-4 and the bizarre task

GPT-4 also follows instructions much better. In this demo, the bot was tasked with summarizing a long piece of text about GPT-4, copied from the OpenAI website, in a single sentence in which every word starts with “g.”

GPT-3.5 doesn’t manage at all; such a nonsensical task is simply too much for it
GPT-4, however, handles the same task: the first attempt has one error, and after being alerted it gets by with just an asterisk
Of course, GPT-4 can also do the same with “a” or “q”

During the presentation we also saw a demonstration of building an application in Python: on command, GPT-4 wrote a simple chatbot by itself. When the code didn’t work, it was enough to paste the error message back in and the bot fixed everything correctly. It should therefore perform much better here than its predecessors.
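OpenAI didn’t release the exact code from the demo, but a chatbot of the kind GPT-4 generated could look roughly like this minimal sketch, assuming the openai Python package and access to the gpt-4 model:

```python
import openai

openai.api_key = "sk-..."  # your API key

# Keep the running conversation so the model has context for each reply.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.lower() in ("quit", "exit"):
        break
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```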

You will have to wait for the pictures

For now, OpenAI is testing image input only internally; it is not part of the API or ChatGPT. Priority access goes to the Be My Eyes service, which uses it to analyze images and create text/voice descriptions for the blind.

The company also announced other partners already using GPT-4, among them Duolingo and Khan Academy, where the AI will become part of the courses. The Icelandic government is using it to help preserve the Icelandic language, and the payment gateway Stripe wants to fight financial fraud with GPT-4.

More details should emerge over the course of the week, when Microsoft is also preparing a GPT-4 event. In the meantime, you can find plenty of details and tests at openai.com/research.

OpenAI’s presentation for developers starts today at 9 p.m. You can watch it on YouTube:

https://www.youtube.com/watch?v=outcGtbnMuQ
