Apple Launches Eight Small AI Language Models for On-Device Use

Within the field of artificial intelligence, “small language models” have gained significant traction lately because they can run locally on a device rather than requiring data-center-grade computers in the cloud. On Wednesday, Apple unveiled OpenELM, a collection of small AI language models that are available as open source and compact enough to run on a smartphone. For now, they are primarily proof-of-concept research models, but they could serve as the foundation for Apple’s on-device AI products in the future.

Apple’s new AI models, collectively named OpenELM for “Open-source Efficient Language Models,” are currently available on the Hugging Face Hub under an Apple Sample Code License. Because the license includes some restrictions, it may not meet the commonly accepted definition of “open source,” but the source code for OpenELM is available.
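For those who want to experiment, the checkpoints can be pulled from the Hub directly. The sketch below is a minimal, hedged example: the model and tokenizer IDs are assumptions drawn from Apple’s model cards at the time of writing (OpenELM reportedly reuses the Llama 2 tokenizer, which is gated behind Meta’s license), so verify both before relying on this.

```python
# Minimal sketch of loading an OpenELM checkpoint from the Hugging Face Hub.
# The model ID and tokenizer ID below are assumptions taken from Apple's model
# cards; verify them (and the license terms) before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"    # smallest instruction-tuned variant (assumed ID)
tokenizer_id = "meta-llama/Llama-2-7b-hf"   # OpenELM reportedly reuses the Llama 2 tokenizer (gated)

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Explain what a small language model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```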

OpenELM shares a goal with Microsoft’s Phi-3 models, which we covered on Tuesday: small, locally executable AI models that can comprehend and process language to a reasonable degree. Apple’s OpenELM models range in size from 270 million to 3 billion parameters across eight variants, while Phi-3-mini has 3.8 billion parameters.

By contrast, OpenAI’s GPT-3 from 2020 shipped with 175 billion parameters, and the largest model Meta has released to date in its Llama 3 family has 70 billion parameters (a 400 billion-parameter version is on the way). Although parameter count is a useful indicator of an AI model’s complexity and capability, recent work has concentrated on making smaller AI language models as capable as larger ones were only a few years ago.

The eight OpenELM models come in two flavors: four that are “pretrained,” essentially raw, next-token-prediction versions of the models, and four that are “instruction-tuned,” meaning they are optimized for following instructions, which makes them better suited to powering chatbots and AI assistants.

The maximum context window in OpenELM is 2,048 tokens. The models were trained on publicly available datasets, including RefinedWeb, a subset of RedPajama, a deduplicated version of PILE, and a subset of Dolma v1.6, which together total roughly 1.8 trillion tokens of data, according to Apple. AI language models process data as tokens, which are fragmented representations of that data.
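To make the token bookkeeping concrete, here is a small, hedged sketch that counts tokens in a prompt and truncates it to a 2,048-token context window; the tokenizer ID is an assumption, and any Hugging Face tokenizer would illustrate the same idea.

```python
# Illustrative sketch: counting tokens and enforcing a 2,048-token context window.
# The tokenizer ID is an assumption; any Hugging Face tokenizer shows the same idea.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 2048  # OpenELM's maximum context length, per Apple

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
text = "Small language models trade raw capability for the ability to run on-device."
token_ids = tokenizer.encode(text, truncation=True, max_length=CONTEXT_WINDOW)

print(f"{len(token_ids)} tokens: {token_ids[:8]} ...")  # token count and a peek at the IDs
```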

According to Apple, part of its OpenELM approach is a “layer-wise scaling strategy” that distributes parameters among layers more effectively, reportedly saving computational resources and improving the model’s performance even with fewer training tokens. This approach allowed OpenELM to achieve a 2.36 percent accuracy gain over Allen AI’s OLMo 1B (another small language model) while requiring half as many pre-training tokens, according to Apple’s published white paper.
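As a rough illustration of the idea (not Apple’s published configuration), the sketch below shows how a layer-wise schedule might give deeper transformer layers more attention heads, and therefore more parameters, instead of making every layer the same width; all numbers are hypothetical.

```python
# Rough illustration of layer-wise scaling: allocate width (here, attention heads)
# unevenly across transformer layers instead of uniformly. The schedule and numbers
# are hypothetical, for intuition only, and are not Apple's published configuration.
def layerwise_head_schedule(num_layers=16, min_heads=4, max_heads=16, head_dim=64):
    schedule = []
    for i in range(num_layers):
        frac = i / max(num_layers - 1, 1)      # 0.0 at the first layer, 1.0 at the last
        heads = round(min_heads + frac * (max_heads - min_heads))
        schedule.append({"layer": i, "heads": heads, "width": heads * head_dim})
    return schedule

schedule = layerwise_head_schedule()
for layer in schedule[:3] + schedule[-2:]:     # early layers are narrow, late layers wide
    print(layer)
```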

In addition, Apple released the code for CoreNet, the library it used to train OpenELM. Notably, this code includes reproducible training recipes that make it possible to replicate the weights (the neural network files), something a major tech company has not offered before. Apple says transparency is a key goal for the company: “The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks.”

By releasing the source code, model weights, and training materials, Apple says it aims to “empower and enrich the open research community.” However, it also cautions that since the models were trained on publicly sourced datasets, “there exists the possibility of these models producing outputs that are biased, or objectionable in response to user prompts.”

Apple has not yet integrated this new wave of AI language model capabilities into its consumer devices, though the company may partner with Google or OpenAI to handle more complex, off-device AI processing and give Siri a much-needed boost. The upcoming iOS 18 update, expected to be revealed at WWDC in June, is anticipated to include new AI features that rely on on-device processing to protect user privacy.

Google Experiments With Android Tablets’ Desktop Windowing

Google is testing a new feature for Android tablets that lets you easily rearrange and resize apps on your screen, making multitasking easier. The “desktop windowing” functionality is now available in developer preview, and you can even run multiple instances of the same app simultaneously if it supports that.

At the moment, Android tablet apps always open in full-screen mode. When the new mode is enabled, each app appears in a window with controls that let you move, maximize, or close it. A taskbar at the bottom of the screen also lists your open apps.

It sounds a lot like Stage Manager on the iPad, which lets you do the same with the windows on your screen, or like almost any desktop operating system. Samsung has also offered its DeX experience for years, which gives Android apps on Galaxy phones and tablets desktop-like window management.

When the functionality becomes available to all users, you will be able to activate it by tapping and holding the window handle at the top of an app’s screen. If a keyboard is connected, you can also enter desktop mode with the shortcut meta key (Windows, Command, or Search) + Ctrl + Down. To dismiss the mode, you can drag a window to the top of your screen or close all of your open apps.

Apps that are locked to portrait orientation can still be resized, according to Google, which could produce odd visual effects in apps that aren’t optimized for it. Google intends to fix this in a later release by scaling non-resizable apps’ user interfaces without changing their aspect ratios.

For the time being, users with the most recent Android 15 QPR1 Beta 2 for Pixel Tablets can access the developer preview.

Sony Faces Backlash for Pricing PlayStation 5 Pro Well Above Xbox

Sony Group Corp. has set the price of its new, faster PlayStation 5 Pro at $700, significantly higher than Microsoft’s Xbox Series X, which costs $600. The PlayStation 5 Pro, launching on November 7, comes at a $200 premium over the original PS5, suggesting Sony is targeting a loyal audience willing to pay extra for enhanced performance.

This pricing positions both Sony and Microsoft at the high end of the gaming console market. Four years into their product life cycles, the two most popular home consoles are moving towards premium models. Analysts are split on whether Sony’s pricing strategy will drive sales, especially as it seeks to grow its entertainment portfolio across gaming, anime, and film.

Industry analyst Serkan Toto described the PlayStation 5 Pro as a niche device aimed at hardcore PlayStation users rather than a mass-market offering. “It’s about Sony skimming the absolute top end of the market,” he said, as much of the gaming world questions Sony’s high pricing.

Others speculate that Sony’s pricing strategy is aimed at boosting margins, particularly after recent price hikes in Japan due to rising component costs like chips. The new console will allow for higher resolution and faster frame rates without requiring users to switch between performance modes, delivering 45% faster rendering than the standard PS5, according to lead architect Mark Cerny.

Despite the steep price, some analysts believe Sony could benefit. Citi analyst Kota Ezawa pointed out that no previous game console successor has been priced significantly higher than the original model, and that the PS5 Pro’s improved components may not justify such a big price jump. Nevertheless, the higher price could enhance Sony’s gross margins.

The PlayStation 5, which has sold over 59 million units since its 2020 release, has slightly lagged behind the PlayStation 4. The increased cost of the PS5 Pro may narrow its appeal, as the price edges closer to that of a gaming PC—one of the console market’s biggest competitors.

Reviewers also highlighted the lack of a disc drive in the new model, reflecting a broader industry shift from physical media to digital content. A disc drive will be available separately for purchase.

In a blog post, Sony announced that the PS5 Pro would enhance the performance of older titles, with several popular games such as Hogwarts Legacy, Final Fantasy VII Rebirth, and Spider-Man 2 receiving free updates to take advantage of the console’s new features.

Apple’s iPhone 16 Launch: A Crucial Test for Consumer AI

Apple is set to unveil its highly anticipated iPhone 16 lineup on Monday, Sept. 9, during its annual event at its Cupertino headquarters. The keynote, led by CEO Tim Cook, is expected to introduce not only the new iPhones but also the 10th anniversary Apple Watch and updated AirPods.

While the hardware lineup is impressive, Wall Street’s focus is elsewhere—on Apple’s generative AI platform, Apple Intelligence. This AI initiative, designed for iPhones, iPads, and Macs, represents Apple’s major push into the consumer AI space. Initially, investors were concerned about the company’s delay in launching AI compared to Microsoft and Google. However, after the platform was revealed at Apple’s WWDC conference in June, the company’s stock surged by 15%, outperforming tech giants like Microsoft, Amazon, and Google.

Apple Intelligence is now positioned as a key feature of the new iPhones, and it runs only on the iPhone 15 Pro and newer models. Analysts believe this exclusivity will drive iPhone sales, with Morgan Stanley’s Erik Woodring predicting AI will be a major factor in boosting the iPhone replacement cycle.

However, Apple Intelligence might be more than just a sales driver—it could shape consumer perceptions of generative AI itself.

Apple’s AI Ambitions

Apple’s upcoming event makes it clear that AI is front and center. From the tagline “It’s Glowtime” to the colorful logo reminiscent of Siri’s new look, the company is signaling a major AI focus.

The AI features Apple is integrating into its ecosystem are extensive. Users can expect tools that summarize text conversations, prioritize emails, enhance Siri’s capabilities, and offer access to OpenAI’s ChatGPT. Additional features like AI-powered proofreading and email optimization will also be part of the package, along with new apps developed to leverage AI through Apple’s hardware.

Wedbush analyst Dan Ives forecasts that Apple’s AI integration could bring in an extra $10 billion in annual services revenue, potentially boosting the company’s market cap to $4 trillion.

Though competitors like Samsung and Google have also introduced AI in their devices, Apple’s approach seems more compelling. Its June event showcased how seamlessly AI integrates into its ecosystem, making the technology feel more personal and essential compared to the offerings from Samsung’s Galaxy AI and Google’s Gemini platform.

The AI Risk

However, Apple faces challenges in ensuring Apple Intelligence’s success. The AI needs to avoid errors like those seen in Google’s AI tools, which have been criticized for providing bizarre recommendations. More importantly, Apple must prove that its AI is something consumers will genuinely want to use, rather than just a rushed feature aimed at appeasing investors.

As Apple ventures deeper into AI, its success or failure could shape the future of generative AI for everyday consumers.
