

These Indications Point to the Real Purpose of OpenAI’s Dubious Q* Project




After CEO Sam Altman was temporarily removed from his position and returned to OpenAI last week, there were two reports claiming that a top-secret project at the company had alarmed some researchers there with its potential to solve intractable problems in a powerful new way.

“Given vast computing resources, the new model was able to solve certain mathematical problems,” Reuters reported, citing a single unnamed source. “Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success.” The Information said that Q* was seen as a breakthrough that would lead to “far more powerful artificial intelligence models,” adding that “the pace of development alarmed some researchers focused on AI safety,” citing a single unnamed source.

What might Q* be? Combining a close reading of the initial reports with consideration of the hottest problems in AI right now suggests it may be related to a project that OpenAI announced in May, claiming powerful new results from a technique called "process supervision."

The project involved Ilya Sutskever, OpenAI's chief scientist and cofounder, who voted to remove Altman but later recanted; The Information says he led work on Q*. The work from May focused on reducing the logical mistakes made by large language models (LLMs). Process supervision, which involves training an AI model to break down the steps required to solve a problem, can improve an algorithm's chances of getting the right answer. The project showed how this could help LLMs, which often make simple errors on elementary math questions, tackle such problems more effectively.
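To make the distinction concrete, here is a toy illustration of the idea behind process supervision, not OpenAI's actual method: instead of rewarding only the final answer (outcome supervision), every intermediate step of a worked solution is checked. The step format and the `check_steps` helper are invented for illustration; real systems use a learned reward model rather than a hard-coded checker.

```python
def check_steps(steps):
    """Grade each intermediate step of a worked arithmetic solution.
    Each step is a (expression, claimed_value) pair. Outcome supervision
    would only look at the last claimed value; process supervision
    gives feedback on every step."""
    feedback = []
    for expr, claimed in steps:
        actual = eval(expr)  # toy checker; a real system would use a reward model
        feedback.append((expr, claimed, actual == claimed))
    return feedback

# A worked solution for 3 * (4 + 5) with an error in the first step:
solution = [("4 + 5", 8), ("3 * 8", 24)]
for expr, claimed, ok in check_steps(solution):
    print(f"{expr} = {claimed}: {'correct' if ok else 'WRONG'}")
```

An outcome-only check would see a final answer of 24 and could only say the whole solution failed; the step-level check pinpoints that the first step (4 + 5 is 9, not 8) is where the reasoning went wrong.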

Andrew Ng, a Stanford University professor who led AI labs at both Google and Baidu and who introduced many people to machine learning through his classes on Coursera, says that improving large language models is the next logical step toward making them more useful. "LLMs are not that great at math, but neither are humans," Ng says. "However, if you give me a pen and paper, I'm much better at multiplication, and I think it's actually not that hard to fine-tune an LLM with memory to be able to go through the algorithm for multiplication."

There are other clues to what Q* could be. The name may be an allusion to Q-learning, a form of reinforcement learning in which an algorithm learns to solve a problem through positive or negative feedback; it has been used to create game-playing bots and to tune ChatGPT to be more helpful. Some have suggested that the name may also be related to the A* search algorithm, widely used to have a program find the optimal path to a goal.
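For readers unfamiliar with Q-learning, a minimal tabular example shows the feedback loop described above. The environment here is a hypothetical toy corridor invented for illustration, not anything from OpenAI: an agent starts at one end, can step left or right, and earns a reward of +1 only when it reaches the far end.

```python
import random

def train_q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy corridor: states 0..n_states-1,
    actions 0 (step left) and 1 (step right); reaching the last state pays +1."""
    q_table = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        state = 0
        while state != n_states - 1:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
            if random.random() < epsilon:
                action = random.randint(0, 1)
            else:
                action = 0 if q_table[state][0] > q_table[state][1] else 1
            next_state = max(0, state - 1) if action == 0 else state + 1
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Core update: nudge Q(s, a) toward reward + discounted best future value
            best_next = max(q_table[next_state])
            q_table[state][action] += alpha * (
                reward + gamma * best_next - q_table[state][action]
            )
            state = next_state
    return q_table

q_table = train_q_learning()
# After training, the greedy action in every non-terminal state should be 1 (right)
policy = [0 if q[0] > q[1] else 1 for q in q_table[:-1]]
print(policy)
```

The "positive or negative feedback" the article mentions is the update line: each Q-value is nudged toward the reward received plus the discounted value of the best follow-up action, so good moves accumulate credit even when the reward only arrives at the end.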

The Information throws another clue into the mix: "Sutskever's breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models," its story says. "The research involved using computer-generated [data], rather than real-world data like text or images pulled from the web, to train new models." That appears to be a reference to training algorithms with so-called synthetic training data, which has emerged as a way to train more powerful AI models.
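As a concrete sense of what "computer-generated data" can mean in this context, here is a hypothetical sketch of a synthetic-data generator for arithmetic fine-tuning. The function name and the question template are invented for illustration; the point is simply that question-answer pairs can be produced programmatically, with guaranteed-correct labels, instead of being scraped from the web.

```python
import random

def make_synthetic_arithmetic(n, seed=0):
    """Generate n synthetic (question, answer) pairs for fine-tuning.
    Because the data is constructed, every label is correct by design."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        a, b = rng.randint(0, 99), rng.randint(0, 99)
        data.append((f"What is {a} + {b}?", str(a + b)))
    return data

for question, answer in make_synthetic_arithmetic(3):
    print(question, "->", answer)
```

Unlike web text, such data can be produced in unlimited quantity and paired with reinforcement-learning-style feedback, which is the combination Kambhampati speculates about below.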

Subbarao Kambhampati, a professor at Arizona State University who is exploring the reasoning limitations of LLMs, thinks that Q* may involve using huge amounts of synthetic data, combined with reinforcement learning, to train LLMs on specific tasks such as simple arithmetic. Kambhampati notes that there is no guarantee the approach will generalize into something that can figure out how to solve any possible math problem.

For more speculation on what Q* might be, read this post by a machine-learning researcher who pulls together the context and clues in impressive and logical detail. The TLDR version is that Q* could be an effort to use reinforcement learning and a few other techniques to improve a large language model's ability to solve tasks by reasoning through steps along the way. Although that might make ChatGPT better at math problems, it's unclear whether it would automatically suggest that AI systems could evade human control.


Figure Technology Solutions Announces Launch of AI-Powered Chatbot



Figure Technology Solutions, a provider of a disruptive, scaled technology platform designed to improve efficiency and transparency in financial services, today announced the launch of its AI-powered chatbot, built with the most recent large language models. The strategic launch underscores Figure's commitment to using AI and machine learning to power its lending ecosystem solutions and sustain a highly stable loan portfolio. With existing AI/ML processes ranging from advanced prospect targeting to streamlined operations, Figure is demonstrating its ability to integrate AI into daily operations, improving efficiency and effectiveness in servicing and targeting customers.

The goal of Figure’s AI chatbot is to improve and expedite the HELOC application and origination process, as well as the platform’s overall customer service experience. The chatbot is available both during and outside the hours that Figure’s Customer Support Specialists operate. During business hours, it supplies those specialists with sample answers to frequently asked questions about HELOC products and application procedures, speeding up customer response times and freeing the specialists to handle more intricate queries.

Figure’s AI chatbot is intended to answer basic questions after hours, making the application process easier for clients. This AI chatbot acts as round-the-clock support, enhancing the usability and effectiveness of Figure’s loan origination platform by assisting users with their initial inquiries and offering crucial information and help. As a result, Figure’s lending technology solutions platform will function more efficiently and its customer service experience will be enhanced.

The AI chatbot demonstrates Figure’s ongoing efforts to create a lending technology platform that is among the best in the industry and that streamlines and expedites the loan origination and purchase processes. With the use of AI chatbot technology, Figure has been able to handle a nearly 30% increase in monthly chat volume while still offering HELOC customers a constant, round-the-clock channel of communication and more accurate responses.

Chief Data Officer at Figure Technology Solutions Ruben Padron stated, “The mortgage lending space is still highly manual, and there remains a pressing need for automation within the industry.” “We think Figure is putting itself at the forefront of the tech revolution in the mortgage space with the creation of extremely effective customer solutions like the AI chatbot.” “Our aim is to enhance the efficiency of the mortgage and lending industry by leveraging our generative AI portfolio to support our in-house tech-enabled platform. This will allow us to optimize value for our partners and customers.”

In the future, Figure plans to keep improving the AI chatbot to better assist users. Some of the improvements will be in the areas of context saving, customer verification, and chat history carry-forward.



Adobe Unveils AI-Enhanced Mobile App for Content Creation



Adobe has released a new mobile app called Adobe Express, which leverages generative artificial intelligence (GenAI) from Adobe Firefly to make content creation easier.

The company said in a press release on Thursday, April 18, that users would be able to create and distribute social media posts, videos, flyers, logos, and other types of content with the new mobile app.

According to the release, the new app “brings the magic of Firefly generative AI directly into web and mobile content creation services,” said Govind Balakrishnan, senior vice president of Adobe Express and Digital Media Services.

Per the release, the new mobile app is an all-in-one content editor that incorporates the photo, design, video, and GenAI tools from Adobe.

Users of any skill level can easily complete complex tasks with straightforward text prompts thanks to the app’s integration of the company’s Firefly GenAI, according to the release.

According to the release, you can use Text to Image to create images, Text Effects to generate text stylings, Generative Fill to add or remove objects from photos, and Text to Template to create editable templates.

According to the release, this is the first time these Firefly-powered features have been made available on mobile devices.

Balakrishnan stated in the release, “We’re excited to see a record number of customers turning to Adobe Express to promote their ideas, passions, and businesses through digital content and on TikTok, Instagram, X, Facebook, and other social platforms.”

A quarterly earnings call in March saw executives from Adobe announce that the company has been implementing GenAI features across its product lines for digital media, digital experience, publishing, and advertising.

All client segments have demonstrated a high level of demand for these features, according to the company. Since its launch in 2023, Firefly, for instance, has assisted users in creating over 6.5 billion images, vectors, designs, and text effects.

The content supply chain for businesses is set to be revolutionized by Adobe’s latest product launch, GenStudio and Firefly, which it announced in March along with additional GenAI capabilities.

These additions include new features in asset management, creation and production, delivery and activation, workflow and planning, and insights and reporting. Their purpose is to give organizations a cohesive, seamless content supply chain.



Meta Announces Llama 3, Along With a Dedicated AI Web Portal



On April 18, Meta made the announcement that Llama 3, its most recent large language model (LLM), had launched. It was hailed as a “major leap over Llama 2.”

According to the company, it has already released the first two models of the current version, which have 8B and 70B parameters. 400B parameters will be featured in future models.

A “large, high-quality training dataset” with over 15 trillion tokens—7 times larger and 4 times more code than Llama 2—was used to train Llama 3, as highlighted by Meta. To maintain the quality of the data, Llama 3 also includes filtering methods, such as NSFW filters.

In over half of the 12 use cases, Llama 3 performs better than Llama 2 and rival models such as Anthropic's Claude Sonnet, Mistral Medium, and OpenAI's GPT-3.5.

The initial releases of Llama 3 are text-based models, but multilingual and multimodal releases are on the way. Meta says they will exhibit "core LLM capabilities," along with a longer context window and improved reasoning and coding performance.

According to the company's plans, Llama 3 will be hosted by all major cloud providers, model API providers, and other services; Meta says the model will be released "everywhere."

Greater User Accessibility

Developers are the target audience for Llama 3, but Meta has also introduced new channels for end users in the US and over 12 other countries to access AI services.

A recent inclusion is a specialized website called Meta AI, where users can get homework help, trivia games, simulated job interviews, and writing help powered by AI.

Facebook, Instagram, WhatsApp, Messenger, and other products from Meta are all integrated with Meta AI. Additionally, the service is available in the US through Ray-Ban Meta smart glasses, and the company has plans to expand it to include its Meta Quest VR headset.

The announcement of Meta’s expanded AI product line follows updates to rival services: the competition among consumer-focused AI offerings intensified when ChatGPT upgraded to GPT-4 Turbo on April 11 and Microsoft Copilot began upgrading to GPT-4 Turbo in March.


