An Executive Says That Meta is Developing a Massive AI Model to Power its “Entire Video Ecosystem”

According to a company executive speaking on Wednesday, one of Meta’s biggest investments in AI is the creation of an AI system intended to power Facebook’s entire video recommendation engine across all of Meta’s platforms.

Tom Alison, the head of Facebook, said that building an AI recommendation model that can power both the company’s TikTok-like Reels short-video service and its more traditional, longer-form videos is part of Meta’s “technology roadmap that goes to 2026.”

Speaking on stage at Morgan Stanley’s tech conference in San Francisco, Alison said that Meta has until now used separate recommendation models for each of its products, including Reels, Groups, and the main Facebook Feed.

Meta has been investing billions of dollars in Nvidia graphics processing units, or GPUs, as part of its grand push into artificial intelligence. GPUs have become the hardware of choice for AI researchers training the kinds of massive language models that underpin generative AI products, including OpenAI’s well-known ChatGPT chatbot.

According to Alison, “phase 1” of Meta’s technology roadmap involved migrating the company’s existing recommendation systems from conventional computer chips to GPUs in order to improve overall product performance.
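
To make that shift concrete, here is a minimal, hypothetical sketch of what serving recommendation scoring on a GPU can look like, using PyTorch and a toy two-tower model. The model, sizes, and names are illustrative assumptions; Meta’s production system is not public.

```python
# Hypothetical sketch: serving a toy two-tower recommendation model on a GPU.
import torch
import torch.nn as nn

class TwoTowerRecommender(nn.Module):
    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)

    def forward(self, user_ids, item_ids):
        # Score each (user, item) pair with a dot product of their embeddings.
        return (self.user_emb(user_ids) * self.item_emb(item_ids)).sum(dim=-1)

# The same code path runs on CPU or GPU; on a GPU the embedding lookups and
# dot products for the whole batch execute in parallel.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = TwoTowerRecommender(num_users=10_000, num_items=50_000).to(device).eval()

user_ids = torch.randint(0, 10_000, (4096,), device=device)
item_ids = torch.randint(0, 50_000, (4096,), device=device)
with torch.no_grad():
    scores = model(user_ids, item_ids)   # shape: (4096,)
```

Scoring thousands of candidates in one batched call, rather than one at a time, is the kind of parallelism a GPU migration buys at production scale.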

When interest in LLMs surged last year, Meta executives were astounded by how these large AI models could “handle lots of data and all kinds of very general-purpose types of activities like chatting,” according to Alison. Seeing an opportunity to build a giant recommendation model that could be applied across many products, Meta developed “this kind of new model architecture” last year, Alison said, adding that the company tested it on Reels.

This new “model architecture” drove “an 8% to 10% gain in Reels watch time” on the main Facebook app, which Alison said showed how much better the model was “learning from the data than the previous generation.”

“We’ve really focused on kind of investing more in making sure that we can scale these models up with the right kind of hardware,” he said.

As part of “phase 3” of its system re-architecture, Meta is now working to validate the technology and roll it out across a number of products.

“Instead of just powering Reels, we’re working on a project to power our entire video ecosystem with this single model, and then can we add our Feed recommendation product to also be served by this model,” Alison said. “If we get this right, not only will the recommendations be kind of more engaging and more relevant, but we think the responsiveness of them can improve as well.”

Alison explained how it would work if the effort succeeds: “If you see something that you’re into in Reels, and then you go back to the Feed, we can kind of show you more similar content.”
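
As a rough illustration of that cross-surface idea, the hypothetical sketch below uses one shared item-embedding table so that engagement on one surface (Reels) can inform ranking on another (Feed). The embeddings and the cosine-similarity lookup are illustrative assumptions, not Meta’s actual design.

```python
# Hypothetical sketch: one shared embedding space covering Reels and Feed items,
# so engagement on one surface can inform recommendations on the other.
import numpy as np

rng = np.random.default_rng(0)

# A single item-embedding table for all video content, L2-normalized so that
# a dot product equals cosine similarity.
item_embeddings = rng.normal(size=(1_000, 64)).astype(np.float32)
item_embeddings /= np.linalg.norm(item_embeddings, axis=1, keepdims=True)

def similar_items(engaged_item_id, k=5):
    """Return the k items most similar to the one the user engaged with."""
    sims = item_embeddings @ item_embeddings[engaged_item_id]
    sims[engaged_item_id] = -np.inf  # exclude the item itself
    return np.argsort(-sims)[:k]

# The user watched Reel #42; Feed ranking can now boost these candidates.
feed_candidates = similar_items(42)
```

Because both surfaces would read from the same model, a signal captured in Reels is immediately usable in Feed, which is one way to read the responsiveness improvement Alison describes.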

According to Alison, Meta has amassed an enormous GPU stockpile that will also support the company’s broader generative AI initiatives, such as the development of digital assistants.

One generative AI project Meta is considering would add more advanced chat features to its main feed. Someone who sees a “recommended post about Taylor Swift” could then “easily just click a button and say, ‘Hey Meta AI, tell me more about what I’m seeing with Taylor Swift right now.’”

Meta is also experimenting with integrating its AI chat feature into Facebook Groups, so that someone in a baking group on Facebook could ask a question about desserts and receive a response from an AI assistant.

“I believe there is a chance to integrate generative AI into a multiplayer consumer setting,” stated Alison.

Biden, Kishida Secure Support from Amazon and Nvidia for $50 Million Joint AI Research Program

As the two countries seek to enhance cooperation around the rapidly advancing technology, President Joe Biden and Japanese Prime Minister Fumio Kishida have enlisted Amazon.com Inc. and Nvidia Corp. to fund a new joint artificial intelligence research program.

A senior US official, briefing reporters ahead of Wednesday’s official visit to the White House, said the $50 million project will be a collaboration between Tsukuba University, outside Tokyo, and the University of Washington in Seattle. The two countries are also planning a separate joint AI research program between Carnegie Mellon University in Pittsburgh and Tokyo’s Keio University.

The push for greater research into artificial intelligence comes as the Biden administration is weighing a series of new regulations designed to minimize the risks of AI technology, which has developed as a key focus for tech companies. The White House announced late last month that federal agencies have until the end of the year to determine how they will assess, test, and monitor the impact of government use of AI technology.

In addition to the university-led projects, Microsoft Corp. announced on Tuesday that it would invest $2.9 billion to expand its cloud computing and artificial intelligence infrastructure in Japan. Brad Smith, the president of Microsoft, met with Kishida on Tuesday. The company released a statement announcing its intention to establish a new AI and robotics lab in Japan.

Kishida, who leads Asia’s second-largest economy, on Tuesday urged American business executives to invest more in Japan’s developing technologies.

“Your investments will enable Japan’s economic growth — which will also be capital for more investments from Japan to the US,” Kishida said at a roundtable with business leaders in Washington.

OnePlus and OPPO Collaborate with Google to Introduce Gemini Models for Enhanced Smartphone AI

As anticipated, original equipment manufacturers, or OEMs, are integrating AI deeply into their products. Google is working with OnePlus, OPPO, and other companies to bring its Gemini models to their smartphones. As announced at the Google Cloud Next ’24 event, OnePlus and OPPO intend to introduce Gemini models on their phones later in 2024, becoming the first OEMs to do so. The models are designed to give users an enhanced artificial intelligence (AI) experience on their devices.

Thanks to OnePlus and OPPO’s own generative AI models, customers in China can already create AI content on the go with devices such as the OnePlus 12 and OPPO Find X7.

The AI Eraser tool was recently made available to all OnePlus customers worldwide. This AI-powered tool lets users remove unwanted objects from their photos. For OnePlus and OPPO, AI Eraser is only the beginning.

In the future, the companies hope to add more AI-powered features, such as generating original social media content and summarizing news stories and audio.

AI Eraser is powered by AndesGPT, OnePlus and OPPO’s own LLM. Although the Samsung Galaxy S24 and Google Pixel 8 series already offer this kind of feature, it is still encouraging to see OnePlus and OPPO taking the initiative to build AI capabilities into their products.

With the release of the Gemini models, OnePlus and OPPO devices will be able to offer customers a more comprehensive and sophisticated AI experience. It is worth remembering that OnePlus and OPPO phones already run the Trinity Engine, which keeps day-to-day use remarkably smooth, and use AI and computational techniques to enhance mobile photography.

More original equipment manufacturers are expected to ship AI capabilities on their products in 2024. That will likely benefit Google, because OEMs will use Gemini as the foundation on which to build their features.

Meta Explores AI-Enabled Search Bar on Instagram

Meta is pushing to expand the user base for its generative AI-powered products. In addition to testing its Meta AI chatbot with WhatsApp users in countries such as India, the company is experimenting with placing Meta AI in the Instagram search bar for both AI chat and content discovery.

When you type a query into the search bar, Meta AI initiates a direct message (DM) exchange in which you can ask questions or respond to pre-programmed prompts. Aravind Srinivas, CEO of Perplexity AI, pointed out that the prompt screen’s design is similar to the startup’s search screen.

Plus, it might make it easier for you to find fresh Instagram content. As demonstrated in a user-posted video on Threads, you can search for Reels related to a particular topic by tapping on a prompt such as “Beautiful Maui sunset Reels.”

Additionally, TechCrunch spoke with a few users who were able to ask Meta AI to search for Reels recommendations.

By using generative AI to surface new content from networks like Instagram, Meta hopes to go beyond text generation.

Meta confirmed the Instagram AI experiment to TechCrunch, but the company didn’t say whether the search function itself uses generative AI technology.

A Meta representative told TechCrunch, “We’re testing a range of our generative AI-powered experiences publicly in a limited capacity. They are under development in varying phases.”

There is no shortage of posts discussing Instagram’s search quality, so it is not surprising that Meta would want to improve search with generative AI.

Meta also wants Instagram content to be as easy to discover as TikTok’s. Google unveiled a new Perspectives feature last year to display results from Reddit and TikTok in search. And according to reverse engineer Alessandro Paluzzi, who posted the discovery on X earlier this week, Instagram is developing a feature called “Visibility off Instagram” that could allow posts to appear in search engine results.
