According to Apple research, your iPhone may soon include some amazing AI technology

Two recently released research papers indicate that Apple is delving deeply into artificial intelligence. Together, they show the company developing on-device AI technology: a novel way to run large language models (LLMs) on an iPhone or iPad, and a fast method for creating animatable avatars.

Aptly titled “LLM in a flash,” Apple’s research describes how to run LLMs efficiently on devices with limited memory, allowing complex AI applications to run smoothly on an iPhone or iPad. It could also enable an on-device, generative-AI-powered version of Siri that assists with multiple tasks at once, generates text, and offers stronger natural language understanding.
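Although the article doesn’t spell out the mechanism, the paper’s title points at its core idea: keep the model’s weights in flash storage and pull into RAM only the parameters a given computation actually needs. The Python sketch below illustrates that general idea with a memory-mapped weight file standing in for flash and a made-up sparsity predictor; the file name, layer sizes, and 5% activity figure are all hypothetical, and this is an illustration of weight streaming, not Apple’s implementation.

```python
import numpy as np

# Scaled-down toy feed-forward layer; real LLM layers are much larger.
D_IN, D_OUT = 1024, 4096

def build_demo_weights(path="ffn_weights.npy"):
    # Stand-in for model weights that live on flash storage.
    np.save(path, np.random.randn(D_OUT, D_IN).astype(np.float32))
    return path

def sparse_ffn_forward(x, weight_path, active_rows):
    # mmap_mode="r" memory-maps the file, so the full matrix stays on
    # disk (our stand-in for flash); fancy indexing reads only the
    # selected rows into RAM before the matrix-vector product.
    w = np.load(weight_path, mmap_mode="r")
    return np.asarray(w[active_rows]) @ x  # partial output, active rows only

if __name__ == "__main__":
    path = build_demo_weights()
    x = np.random.randn(D_IN).astype(np.float32)
    # Pretend a small predictor told us only ~5% of neurons fire for x.
    active = np.sort(np.random.choice(D_OUT, size=D_OUT // 20, replace=False))
    y = sparse_ffn_forward(x, path, active)
    print(y.shape)  # (204,) -- only the predicted-active rows were touched
```

The payoff is that peak RAM use scales with the number of active neurons rather than the full model size, which is what makes inference feasible on a memory-constrained phone.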

HUGS, short for Human Gaussian Splats, can turn a brief video clip taken with an iPhone into a fully animated avatar in as little as 30 minutes. It is a neural rendering framework that can be trained on just a few seconds of video to produce a detailed avatar the user can then animate.

What this implies for the Vision Pro and iPhone

There have been rumors that Apple is developing an internal AI chatbot known as “Apple GPT.” The recent research shows the company making progress toward running LLMs on smaller, less powerful devices such as the iPhone by utilizing flash memory. That could bring advanced generative AI tools to the device itself, including the possibility of a generative-AI-powered Siri.

Beyond a much-needed upgrade to Siri, an efficient LLM inference strategy like the one outlined in “LLM in a flash” could make generative AI tools more widely accessible, advance mobile technology, and improve performance across a broad range of applications on everyday devices.

HUGS, arguably the greater advancement of the two, can generate animatable digital avatars from a mere few seconds (roughly 50-100 frames) of monocular video. Because the method uses a disentangled representation of the person and the scene, the resulting human avatars can be animated and placed in different scenes, as the sketch below illustrates.
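To make “disentangled representation” concrete, here is a toy Python sketch of the two ideas in that paragraph: the person and the scene are stored as separate sets of 3D Gaussians, and the human Gaussians are rigged to a skeleton so they can be re-posed and dropped into any scene. The point counts, joint count, and skinning weights are invented for the demo; this shows the flavor of a Gaussian-splat avatar, not the HUGS code itself.

```python
import numpy as np

N_HUMAN, N_SCENE, N_JOINTS = 500, 2000, 24  # 24 joints, as in SMPL-style rigs

rng = np.random.default_rng(0)
human_means = rng.normal(size=(N_HUMAN, 3))        # Gaussian centers on the body
scene_means = rng.normal(size=(N_SCENE, 3)) * 5.0  # Gaussian centers in the scene
skin_weights = rng.random((N_HUMAN, N_JOINTS))     # per-Gaussian joint weights
skin_weights /= skin_weights.sum(axis=1, keepdims=True)

def pose_human(means, weights, joint_transforms):
    """Linear blend skinning: move each Gaussian center by the weighted
    blend of its joints' rigid transforms (rotation R, translation t)."""
    posed = np.zeros_like(means)
    for j, (R, t) in enumerate(joint_transforms):
        posed += weights[:, j:j+1] * (means @ R.T + t)
    return posed

# Identity pose for every joint except a translated root joint.
transforms = [(np.eye(3), np.zeros(3)) for _ in range(N_JOINTS)]
transforms[0] = (np.eye(3), np.array([0.0, 0.0, 1.0]))

posed_human = pose_human(human_means, skin_weights, transforms)
# Because person and scene are disentangled, composing them is concatenation:
composite = np.concatenate([posed_human, scene_means], axis=0)
print(composite.shape)  # (2500, 3)
```

Because the human and scene Gaussians never share parameters, swapping the backdrop or re-posing the person are independent operations, which is what lets one captured avatar be reused across many scenes.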

Apple claims that HUGS performs better than rival techniques in animating human avatars, achieving rendering speeds 100 times faster than prior approaches and requiring a much shorter training time of only 30 minutes.

Using the iPhone’s camera and processing power to create an avatar could give users a new level of realism and personalization in social media, gaming, educational, and augmented reality (AR) applications.

HUGS also has the potential to significantly lessen the creep factor of the Apple Vision Pro’s digital Persona, which was unveiled at the company’s most recent Worldwide Developers Conference (WWDC) in June. Using HUGS, Vision Pro users could produce a far more lifelike avatar with smooth motion, rendered at 60 frames per second.

HUGS’s speed would also enable real-time rendering of realistic, user-controlled avatars, which is essential for a seamless augmented reality experience and could improve applications for social media, gaming, and business.

Apple tends to avoid buzzwords like “AI” when describing its product features, preferring to talk about machine learning instead. These studies, however, point to a deep engagement with emerging AI technology. Even so, Apple has yet to formally confirm the “Apple GPT” project and has not openly acknowledged plans to incorporate generative AI into its products.
