
Netflix rolling out an external subscription button for iOS users


Recently, Apple began allowing “reader” apps to offer external links so users can sign in and pay for a subscription outside the App Store. Netflix is now rolling out an option in its iOS app that takes users to its website to complete a new Netflix subscription.

The Netflix app now uses the new iOS API for reader apps, which sends users to an external website before they start a subscription. It’s unclear exactly when Netflix began rolling out this option to iPhone and iPad users, but based on reports, the rollout appears to be worldwide.

When you tap the subscribe button, a message says that you’re about to leave the app and go to an external website. The app also notes that the transaction will no longer be Apple’s responsibility and that all subscription management must be handled through Netflix’s platform.

Tapping the Continue button takes you to the Netflix website, where you can enter your personal details, choose a payment method, and sign up for a Netflix plan. This, of course, lets Netflix avoid the 30% commission Apple takes on every subscription made inside an iOS app, a rate that drops to 15% for recurring subscriptions after the first year.
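
To put the commission difference in rough numbers, here is a minimal sketch, assuming a hypothetical $15.49 monthly plan, of how much of each payment would reach Netflix under the in-app rates versus an external signup; the figures are illustrative only and ignore Netflix’s own payment-processing costs.

```python
# Rough illustration with hypothetical numbers: revenue left to Netflix from one
# monthly payment under Apple's in-app commission tiers versus a web signup.
# The $15.49 price is an assumed example, not a confirmed figure.

monthly_price = 15.49

def net_to_netflix(price: float, commission_rate: float) -> float:
    """Amount remaining after the given commission is deducted."""
    return round(price * (1 - commission_rate), 2)

print("In-app, first year (30% commission):    ", net_to_netflix(monthly_price, 0.30))
print("In-app, after one year (15% commission):", net_to_netflix(monthly_price, 0.15))
print("External website signup (no commission):", net_to_netflix(monthly_price, 0.00))
```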

It’s worth noting that Apple considers reader apps to be those whose main functionality is offering digital content such as magazines, newspapers, books, audio, music, or video.

Netflix had previously dropped in-app subscriptions

Despite this update, Netflix dropped in-app subscriptions long ago. Back in 2018, the company released an update to its iOS app that removed the option for users to subscribe to Netflix directly from the official iPhone and iPad app. Apparently, Apple tried to prevent Netflix from redirecting iOS users to Safari to sign up for a plan.

Following recent antitrust investigations, Apple has finally allowed certain kinds of apps to offer alternative subscription methods outside the App Store without paying a commission to the company. More recently, Apple has also been forced to let App Store apps offer third-party payment methods in certain countries, such as the Netherlands and South Korea.


Apple Researchers are Developing MM1, a Family of Multimodal AI Models with up to 30 Billion Parameters


In a pre-print paper published online on March 14, Apple researchers presented their work on developing a multimodal large language model (LLM). The paper describes how the team trained a foundation model on both text-only data and images to achieve advanced multimodal capabilities. The work follows CEO Tim Cook’s remark on the company’s earnings call that AI features might be released later this year.

The pre-print version of the paper is hosted on arXiv, an open-access online repository for scholarly papers; papers posted there are not peer reviewed. Although the paper does not mention Apple by name, most of the listed researchers are affiliated with Apple’s machine learning (ML) division, which is why the project is believed to be connected to the company.

The project, known as MM1, is a family of multimodal models with up to 30 billion parameters. The paper’s authors describe it as a “performant multimodal LLM (MLLM)” and note that careful choices around image encoders, the vision-language connector, other architecture elements, and training data were needed to build a model that can understand both text and image-based inputs.

The paper offers an example, stating: “We demonstrate that achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results, requires a careful mix of image-caption, interleaved image-text, and text-only data for large-scale multimodal pre-training.”

Put simply, the model is still in the pre-training phase and has not yet been trained far enough to produce its intended results; this phase involves designing the model’s workflow and how it processes data through the algorithm and AI architecture. Apple’s researchers incorporated computer vision into the model by means of a vision-language connector and image encoders. In tests using a combination of image-caption, interleaved image-text, and text-only datasets, the team found the results comparable to those of other models at the same stage.
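
The paper describes these components at a high level; as a generic sketch (not MM1’s actual architecture), a vision-language connector simply projects the image encoder’s features into the same embedding space as the LLM’s text tokens so that a single transformer can attend over both. The dimensions below are assumed toy values.

```python
# Generic illustration of a vision-language connector (not MM1's real design):
# a learned projection maps image-encoder features into the LLM's token
# embedding space, so image "tokens" and text tokens form one input sequence.
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: 256 image patches, 1024-d vision features, 4096-d LLM embeddings.
num_patches, vision_dim, llm_dim = 256, 1024, 4096

image_features = rng.normal(size=(num_patches, vision_dim))  # output of an image encoder
connector = rng.normal(size=(vision_dim, llm_dim)) * 0.02    # projection (learned in practice)

image_tokens = image_features @ connector                    # (256, 4096) pseudo-tokens
text_tokens = rng.normal(size=(32, llm_dim))                 # embedded text prompt

# The multimodal LLM then attends over the concatenated sequence.
multimodal_input = np.concatenate([image_tokens, text_tokens], axis=0)
print(multimodal_input.shape)  # (288, 4096)
```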

Although this is a significant step, the research paper alone is not enough to conclude that Apple will integrate a multimodal AI chatbot into its operating system. At this point, it’s hard to say whether the model is multimodal only in the inputs it accepts or also in its outputs (i.e., whether it can generate AI images). If the results hold up under peer review, however, the tech giant will have made significant progress toward a native generative AI foundation model.


Apple may use Google Gemini to Power Certain AI Features on the iPhone


In addition to using Gemini to power features in its own services and apps, Google makes the LLM available to outside developers. Apple and Google are reportedly in negotiations to license Gemini for the iPhone.

Bloomberg reports “active negotiations to let Apple license Gemini, Google’s set of generative AI models, to power some new features coming to the iPhone software this year.” Apple has also held conversations with OpenAI, whose models power Microsoft’s AI capabilities.

According to the report, Gemini would be used for text and image generation, indicating that Apple is specifically seeking partners for cloud-based generative AI. For the upcoming release of iOS 18, Apple is working to provide its own on-device AI models and capabilities.

The talks are still at an early stage, so it’s unclear how the AI features would be branded. A deal would greatly expand the two companies’ current partnership, under which Google is the default search engine on Apple devices.

Looking at the rest of the market, Google and Samsung announced a partnership in February to bring Gemini-powered summarization features to the Galaxy S24’s voice recording and note-taking apps, and Samsung’s photo gallery app uses Imagen 2 text-to-image diffusion for generative editing. Samsung also uses an on-device version of Gemini, while the features above require server-side processing.

Google offers Gemini in three sizes, and the majority of first- and third-party apps use Pro. Gemini 1.0 Pro powers the free version of gemini.google.com, while 1.0 Ultra powers the premium Gemini Advanced tier.

Gemini 1.0 is still the stable release, while Google previewed Gemini 1.5 in mid-February with a significantly larger context window that lets the model take in more information, which can make its output “more consistent, relevant, and useful.”

  • Gemini Ultra: The largest and most capable model, for highly complex tasks
  • Gemini Pro: The best model for scaling across a wide range of tasks
  • Gemini Nano: The most efficient model for on-device tasks
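
In practice, an app chooses one of these models by name when calling Google’s SDK. The snippet below is a minimal sketch using the google-generativeai Python package and the “gemini-pro” model; the API key is a placeholder, and model names and SDK details may change as Gemini 1.5 rolls out.

```python
# Minimal sketch of calling Gemini Pro via Google's Python SDK
# (pip install google-generativeai). Replace the placeholder API key with your own.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

# "gemini-pro" is the mid-size model most first- and third-party apps use.
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("Summarize this voice memo transcript: ...")
print(response.text)
```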

Although today’s report mentions multiple possible providers, including OpenAI, Bloomberg does not expect a deal to be announced before WWDC in June.


Elon Musk’s xAI Open-Sources AI Chatbot Grok for Researchers and Developers


On March 17, Elon Musk’s AI company xAI released its large language model (LLM) Grok-1 as open source. The billionaire had announced last week on his social media platform X, formerly known as Twitter, that the AI chatbot would be open-sourced, and it is now accessible to developers and researchers. Notably, the xAI developers said that only the pre-trained LLM has been made public. This means Grok ships without its training data, but you can still build on it using the released weights and network architecture.

“We are releasing the base model weights and network architecture of Grok-1, our large language model,” xAI wrote in a blog post announcing the release. Grok-1 is a Mixture-of-Experts model with 314 billion parameters that xAI trained from scratch. The company also said the LLM is being released as open source under the Apache 2.0 license, and interested parties can obtain the model from GitHub.

VentureBeat reported that the Apache 2.0 license permits commercial use as well as modification and redistribution. This means developers can improve the LLM, customize it for particular use cases, and sell their versions. The model cannot be trademarked, though, and developers must credit any modifications they make to the original code.

The released version of Grok-1 dates from October 2023, before the model was trained on data from X, which is what gives the Grok chatbot its distinct personality. So even though a commercial license is on offer, any researchers or developers who want to use the model will be responsible for obtaining training data themselves.

The publicly available Grok-1, according to xAI, is a Mixture-of-Experts (MoE) model with 314 billion parameters. That parameter count is substantially larger than other LLMs in the public domain, such as Mistral’s Mixtral 8x7B or Meta’s Llama 2, and the larger size is meant to let the model respond with greater contextual accuracy and thoroughness.
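
For readers unfamiliar with the term, a Mixture-of-Experts model routes each token to a small subset of expert sub-networks, so only a fraction of the 314 billion parameters is active for any given input. The following is a toy, generic illustration of that routing idea, not xAI’s actual implementation.

```python
# Toy illustration of Mixture-of-Experts routing (not Grok-1's real code):
# a gating network scores the experts for each token and only the top-k run.
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, num_experts, top_k = 64, 8, 2

token = rng.normal(size=hidden_dim)                               # one token's hidden state
gate = rng.normal(size=(hidden_dim, num_experts))                 # router weights (learned in practice)
experts = rng.normal(size=(num_experts, hidden_dim, hidden_dim))  # simplified expert layers

scores = token @ gate                                             # one score per expert
chosen = np.argsort(scores)[-top_k:]                              # indices of the top-k experts
weights = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()   # softmax over chosen experts

# Output is the weighted sum of only the selected experts' outputs;
# the other experts' parameters are never touched for this token.
output = sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))
print(chosen, output.shape)
```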

Grok, a chatbot built to rival the industry leaders, was introduced on November 3, 2023, and was initially available only to users who purchased the X Premium+ subscription. Musk also said at the time that Grok would get a stand-alone app.
