Technology

Kuo: Apple to Announce New MacBook Air with Mini-LED Display in Mid-2022

Analyst Ming-Chi Kuo previously reported that Apple was working on a redesigned MacBook Air for 2022, though he wasn't specific about the timeline. Now Kuo claims that this rumored laptop will be officially introduced sometime in mid-2022, which could mean an April release, much like the 2021 iMac, or even a debut at WWDC in June.

The analyst also reiterates his earlier note about a Mini-LED display coming to the next-generation MacBook Air, but this time Kuo says it will feature a 13.3-inch screen. This suggests that despite the new technology, the display will remain the same size as in the current generation. Apple is rumored to be adopting a 14-inch display for the new MacBook Pro, but it seems the company will reserve it for its more expensive laptops.

For those unfamiliar, Mini-LED relies on thousands of tiny LEDs in the backlight, which results in higher contrast ratios and deeper blacks, similar to OLED.
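To make the contrast benefit concrete, here is a toy simulation (an illustration only, with made-up zone counts and frame content, not Apple's panel design): the backlight is split into local dimming zones, each zone dims to the brightest pixel it covers, and far less light leaks through the black areas of the image.

```python
import numpy as np

def zone_backlight(image, zones_per_side):
    """Dim each backlight zone to the brightest pixel it covers (values 0..1)."""
    h, w = image.shape
    zh, zw = h // zones_per_side, w // zones_per_side
    backlight = np.ones_like(image)
    for i in range(zones_per_side):
        for j in range(zones_per_side):
            block = image[i*zh:(i+1)*zh, j*zw:(j+1)*zw]
            backlight[i*zh:(i+1)*zh, j*zw:(j+1)*zw] = block.max()
    return backlight

# A mostly black frame with one small bright highlight.
frame = np.zeros((240, 240))
frame[8:16, 8:16] = 1.0

for zones in (1, 40):  # 1 zone ~ a uniform backlight; 40x40 ~ Mini-LED style
    leak = zone_backlight(frame, zones)[frame == 0].mean()
    print(f"{zones * zones:5d} zones -> mean black-level light leakage {leak:.4f}")
```

With a single zone, the whole backlight stays on for one highlight; with 1,600 zones, only the handful of zones under the highlight light up, which is the mechanism behind Mini-LED's deeper blacks.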

According to the report, the new MacBook Air will also feature an upgraded Apple Silicon chip. Recently, a leaker revealed that the new MacBook Air will be the first Mac with an M2 chip, while the MacBook Pro to be introduced later this year will come with the M1X, an updated version of the M1 with better graphics.

Kuo writes: “We expect Apple to release a new MacBook Air around the middle of 2022 with a 13.3-inch mini LED display. If the component shortage continues to improve in 2022, it will benefit from the new MacBook Air and Apple Silicon upgrades.”

Rumors also suggest that the MacBook Air will get a significant update next year, along with the next-generation MacBook Pro later this year. The MacBook Air lineup, in any case, is expected to be available in multiple colors.

Technology

Apple Researchers Are Developing MM1, a Family of Multimodal AI Models with up to 30 Billion Parameters

In a pre-print paper, Apple researchers presented their work on developing a multimodal large language model (LLM) for artificial intelligence (AI). The paper, published on an online portal on March 14, describes how the researchers trained the foundation model on both text-only data and images to achieve advanced multimodal capabilities. The Cupertino-based tech giant's new advances in AI follow CEO Tim Cook's statement during the company's earnings call that AI features might be released later this year.

ArXiv, an open-access online repository for scholarly papers, hosts the pre-print version of the research paper. Papers posted there, however, are not peer-reviewed. Although the paper makes no mention of Apple itself, the project is thought to be connected to the company because the majority of the researchers listed are affiliated with Apple's machine learning (ML) division.

The project the researchers are working on is MM1, a family of multimodal models with up to 30 billion parameters. The paper's authors describe it as a “performant multimodal LLM (MLLM)” and note that building an AI model that can comprehend both text and image-based inputs required careful decisions about image encoders, the vision-language connector, and other architecture elements and data choices.

The paper states, for example: “We demonstrate that achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results, requires a careful mix of image-caption, interleaved image-text, and text-only data for large-scale multimodal pre-training.”
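To illustrate what such a data mix means in practice, here is a minimal sketch of a pre-training source sampler; the ratios below are invented for illustration, not the values the MM1 paper arrives at through its ablations.

```python
import random

# Hypothetical mixing ratios for illustration; the MM1 paper tunes these
# carefully, and the real values come from its ablation studies.
MIX = {"image_caption": 0.45, "interleaved_image_text": 0.45, "text_only": 0.10}

def sample_source(rng):
    """Pick a pre-training data source according to the mixing weights."""
    r, cumulative = rng.random(), 0.0
    for source, weight in MIX.items():
        cumulative += weight
        if r < cumulative:
            return source
    return source  # guard against floating-point rounding

rng = random.Random(0)
print([sample_source(rng) for _ in range(10)])
```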

To put it simply, the AI model is still in the pre-training phase and has not yet received enough training to produce the intended results. This phase involves designing the model's workflow and how data will eventually be processed through the algorithm and AI architecture. The Apple researchers were able to incorporate computer vision into the model by means of image encoders and a vision-language connector. In tests using a combination of image-only, image-text, and text-only data sets, the team found that the outcomes were comparable to those of other models at the same stage.
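A loose sketch of that pipeline follows; every module and dimension here is a toy stand-in (MM1 uses large pre-trained image encoders and an LLM, not single linear layers), but it shows how image patches are encoded, projected through a vision-language connector, and consumed alongside text tokens by one transformer.

```python
import torch
import torch.nn as nn

class TinyMultimodalLM(nn.Module):
    """Toy MLLM: image patches -> encoder -> connector -> shared transformer."""
    def __init__(self, vocab=1000, d_model=64, patch_dim=48):
        super().__init__()
        self.image_encoder = nn.Linear(patch_dim, d_model)  # stand-in encoder
        self.connector = nn.Linear(d_model, d_model)        # vision-language connector
        self.text_embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, patches, text_ids):
        img_tokens = self.connector(self.image_encoder(patches))
        txt_tokens = self.text_embed(text_ids)
        seq = torch.cat([img_tokens, txt_tokens], dim=1)  # image tokens first
        return self.head(self.lm(seq))

model = TinyMultimodalLM()
patches = torch.randn(2, 16, 48)           # batch of 2 images, 16 patches each
text_ids = torch.randint(0, 1000, (2, 8))  # 8 text tokens each
print(model(patches, text_ids).shape)      # torch.Size([2, 24, 1000])
```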

Although this is a significant breakthrough, there is insufficient evidence in this research paper to conclude that Apple will integrate a multimodal AI chatbot into its operating system. It is also difficult to say at this point whether the model is multimodal only in accepting inputs or also in producing output (i.e., whether it can generate AI images). However, if the results are verified to be consistent following peer review, it can be said that the tech giant has made significant progress toward developing a native generative AI foundation model.

Technology

Apple May Use Google Gemini to Power Certain AI Features on the iPhone

In addition to leveraging Gemini to power features within its services and apps, Google makes its LLM available to outside developers. It has been reported that Apple and Google are in negotiations to license Gemini for the iPhone.

Bloomberg reports “active negotiations to let Apple license Gemini, Google’s set of generative AI models, to power some new features coming to the iPhone software this year.” OpenAI, which powers Microsoft’s AI capabilities, has also had conversations with Apple.

As stated in today’s report, Gemini would be used for text and image generation, indicating that Apple is specifically seeking a partner for cloud-based generative AI. With the impending release of iOS 18, Apple is working to provide its own on-device AI models and capabilities.

The talks are still at an early stage, so it’s unclear how the AI features would be branded. A deal would greatly expand the current partnership between the two companies, under which Google is the default search engine on Apple devices.

Taking a look at the rest of the market, Google and Samsung announced a partnership in February that brings Gemini-powered summarization features to the Galaxy S24’s voice recording and note-taking apps. Additionally, Samsung’s photo gallery app uses Imagen 2 text-to-image diffusion for generative editing. Those features require server-side processing, though Samsung is also utilizing an on-device version of Gemini.

Google provides Gemini in three sizes, and the majority of first- and third-party apps use Pro (a minimal developer sketch follows the list below). Gemini 1.0 Pro powers the free version of gemini.google.com, while 1.0 Ultra powers the premium Gemini Advanced tier.

While Google previewed Gemini 1.5 in mid-February, which features a significantly larger context window that lets the model take in more information, Gemini 1.0 is still the stable release. The larger window, in turn, can make the model’s “output more consistent, relevant, and useful.”

  • Gemini Ultra: The largest and most capable model, for highly complex tasks
  • Gemini Pro: The best model for scaling across a wide range of tasks
  • Gemini Nano: The most efficient model for on-device tasks
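For third-party developers, calling the mid-size Pro model looks roughly like the sketch below, using Google's google-generativeai Python SDK; the API key and prompt are placeholders.

```python
# pip install google-generativeai   (Google's Python SDK for the Gemini API)
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use a real key from AI Studio

# "gemini-pro" targets the mid-size 1.0 Pro model that most apps use.
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarize this voice memo transcript: ...")
print(response.text)
```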

Although multiple providers, including OpenAI, are mentioned as possibilities in today’s report, Bloomberg predicts that a deal will not be announced until WWDC in June.

Technology

Elon Musk’s xAI Open-Sources AI Chatbot Grok for Researchers and Developers

On March 17, Elon Musk’s AI company, xAI, released its large language model (LLM) Grok-1 as open source. The billionaire had declared last week, on his social media platform X (formerly known as Twitter), that the AI chatbot would be open-sourced, and it is now accessible to developers and researchers. Notably, the xAI developers said that only the pre-trained LLM has been made available to the general public. This means that although the release does not include Grok’s training data, you can still build on the model using its weights and network architecture.

“We are releasing the base model weights and network architecture of Grok-1, our large language model,” xAI wrote in a blog post announcing the open release. Grok-1 is a Mixture-of-Experts model with 314 billion parameters that xAI trained from scratch. The AI company also said that the LLM is being provided as open-source software under the Apache 2.0 license, and interested parties can obtain the model on GitHub.

VentureBeat reported that the Apache 2.0 license permits commercial use as well as modification and redistribution. This means that programmers can enhance the LLM, customize it for particular uses, and sell it. The license does not grant trademark rights, though, and developers will need to document any modifications they make to the original code.

The version of Grok-1 that has been made available dates from October 2023, before the model was fine-tuned on data from X, the step that gives the Grok chatbot its distinct personality. So even though a commercial license is being offered, obtaining training data will be the responsibility of any researchers or developers wishing to use the model.

The publicly available Grok-1, according to xAI, is a Mixture-of-Experts (MoE) model with 314 billion parameters. That parameter count is substantially larger than other LLMs in the public domain, such as Mixtral 8x7B or Meta’s LLaMA 2. In general, a larger parameter count lets a model capture more nuance, which can make its responses more contextually accurate and thorough.
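The Mixture-of-Experts design itself can be sketched compactly. The toy layer below is an illustration, not Grok-1's implementation: a gating network scores a set of expert networks for each token and only the top-k experts run, so total parameters can be enormous while per-token compute stays modest.

```python
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Toy Mixture-of-Experts layer: route each token to its top-k experts."""
    def __init__(self, d_model=32, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))
        self.gate = nn.Linear(d_model, n_experts)  # gating/router network
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.gate(x)                  # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):            # run only the selected experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

tokens = torch.randn(5, 32)
print(ToyMoE()(tokens).shape)  # torch.Size([5, 32])
```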

Grok, a chatbot created to rival the industry leaders, was introduced on November 3, 2023. Access was initially limited to users who bought the X Premium+ subscription. Musk also announced at the time that Grok would get a stand-alone app.
