How Meta has just surpassed Google and Apple to integrate powerful AI into smartphones!

Meta is bringing its Llama models to smartphones, and the result is formidable: the AI remains powerful without draining RAM or monopolizing the processor. How did Meta pull off this feat, one that clearly leaves Apple and Google in the lurch? Explanations.

Making an AI work directly on low-power smartphones is the dream of the companies that develop these models. Meta has now achieved it, running its Llama 3.2 1B and 3B models up to four times faster on a phone. Better yet, the models use less than half the memory of previous versions.

How did Meta achieve this result? With a compression technique based on quantization, which simplifies the mathematical calculations that power AI models. This is, in essence, the principle explained to Futura by Mehdi Bennis, a researcher working on this type of AI at the University of Oulu in Finland. To preserve response accuracy despite a considerably smaller model, Meta combined two methods: QLoRA adapters and another technique called SpinQuant.
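To see what quantization means in practice, here is a minimal sketch (not Meta's actual implementation) of symmetric 8-bit quantization: each float32 weight is mapped to a small integer plus a shared scale factor, cutting storage by 4x at the cost of a small rounding error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a scale factor (symmetric quantization)."""
    scale = float(np.abs(weights).max()) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale

# Toy weight matrix: int8 storage uses a quarter of the memory of float32,
# and the reconstruction error is bounded by half the scale step.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)
```

Real schemes like SpinQuant are far more sophisticated (per-channel scales, rotations to tame outliers), but the memory-versus-precision trade-off is the same.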

The first is based on quantization-aware training with LoRA adapters: the pre-trained model's weights are frozen, and small trainable matrices are added on top. The number of trainable parameters shrinks dramatically, making the adaptation process far more efficient. The second, SpinQuant, is dedicated to portability. Ultimately, with this combination, colossal computing power is no longer needed to get results. Tested on Android OnePlus 12 phones, the models were 56% smaller and used 41% less memory while processing text more than twice as fast. The only limitation is a context limited to 8,000 tokens.
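The LoRA idea described above can be sketched in a few lines (a toy illustration, not Meta's code; the dimensions and names are hypothetical): the large frozen weight matrix W is left untouched, and only two small low-rank matrices A and B are trained, so the trainable parameter count drops from d*d to 2*d*r.

```python
import numpy as np

def lora_forward(x, W_frozen, A, B, alpha=1.0):
    """Output of the frozen layer plus a trainable low-rank correction: x @ (W + alpha*A@B)."""
    return x @ W_frozen + alpha * (x @ A) @ B

d, r = 64, 4  # hypothetical model dimension and low rank (r << d)
W = np.random.randn(d, d).astype(np.float32)        # frozen base weights (could be quantized)
A = np.random.randn(d, r).astype(np.float32) * 0.01  # trainable down-projection
B = np.zeros((r, d), dtype=np.float32)               # trainable up-projection, zero-initialized

x = np.random.randn(2, d).astype(np.float32)
y = lora_forward(x, W, A, B)
# With B initialized to zero, the adapter starts as a no-op: y equals x @ W exactly.
```

Training then updates only A and B, which is why the adaptation is so much cheaper than retraining the full model.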

Run AI directly on the smartphone

But that’s not all. While Google and Apple are taking an approach to mobile AI that is woven into their operating systems, Meta is making its compressed models open source and partnering with chipmakers Qualcomm and MediaTek. With open source, developers no longer need to wait for Android or iOS updates to build AI applications. And by working with Qualcomm and MediaTek, Meta is optimizing its models for these widely used processors, ensuring its AI will run effectively on phones across price ranges, not just high-end devices.

What Meta is doing can be seen as similar to what happened in computing: before the advent of the PC, processing power lived in mainframe computers; it eventually moved onto personal computers.

On the same principle, AI is now following the same path, moving from servers to running directly on mobile devices. This still requires phones powerful enough to handle it, but the benefit is enhanced privacy: data stays on the device rather than passing through the cloud for processing. An approach that runs counter to Apple and Google’s vision of AI on smartphones.
