How the retail revolution is being fueled by AI

The way we shop is about to change thanks to technology that enables cutting-edge shopping experiences such as augmented reality displays, real-time price adjustments, personalized in-store advertisements, and chatbots. The application of artificial intelligence (AI) is one of the main factors propelling the revolution in the retail sector.

According to McKinsey, consumers are prepared to pay more for personalized experiences. Customers are also more responsive to shelf-level advertising when it is done in an educational and engaging way. To live up to these high expectations, physical merchants must gain a deeper understanding of their customers. This is where AI enters the picture, giving retailers the insight they need to realize these benefits and offer a variety of experiences.

Using AI technologies in-store, such as autonomous shopping, retailers can open up a whole new universe of customer experiences. By evaluating in-store data, retail stores gain the knowledge they need to offer dynamic pricing and in-store promotions. AI also helps answer important questions about what consumers are looking at, what they are purchasing, and what they plan to purchase next.

By removing barriers between various channels to adopt an omnichannel strategy, AI also helps retailers better understand their customers. This enables them to implement technologies like conversational AI to enhance both in-store and online experiences.

AI is essential not just on the shop floor but also behind the scenes, improving the supply chain from forecasting to routing optimization. By implementing a “smart warehouse,” retailers can increase business efficiency, monitor inventory, and connect stock levels to current buying trends.

Using AI to empower retailers

The technology that will ignite this transformation is now accessible. It is already present in retail store cameras, and when combined with edge computing and AI capabilities, it moves data processing closer to the source for faster results. Retailers with powerful edge AI servers installed in their stores are already at the forefront of this technology and are paving the way for the self-checkout of the future. High-definition cameras in the store are linked to the servers, and an AI application running on the real-time edge servers monitors customers while they check out.

The system responds instantly when the edge servers detect an issue, which might be anything from non-scans to “product switching,” where customers swap stickers to scan pricey items as cheaper ones. Customers who cause an error receive a real-time “nudge” in the form of a five-second video played on the point-of-sale terminal. If they don’t respond, a store assistant is alerted.
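
The nudge-then-escalate flow described above can be sketched as a small routing function. This is a minimal illustration, not the retailer’s actual system; the event names, the resolved flag, and the callback interface are all assumptions made for the example.

```python
NUDGE_VIDEO_SECONDS = 5  # length of the point-of-sale "nudge" video

def handle_scan_event(event, play_nudge, alert_assistant):
    """Route a detected checkout issue to a nudge or a staff alert.

    event is a dict with a "type" key ("ok", "non_scan", or
    "product_switch"; hypothetical names) and an optional "resolved"
    flag set when the customer corrects the issue after the nudge.
    """
    if event["type"] == "ok":
        return "no_action"
    # First response: play a real-time five-second video on the POS terminal.
    play_nudge(seconds=NUDGE_VIDEO_SECONDS)
    if event.get("resolved"):
        return "nudged"
    # The customer did not respond, so a store assistant is alerted.
    alert_assistant(event["type"])
    return "escalated"
```

In a real deployment the events would come from the AI application watching the camera feeds; here they are plain dictionaries.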

Servers that can process input from 20 cameras at once in real time make this possible. In addition to monitoring inventory, the cameras help stores combat theft. The technology has enormous potential at both the front and back ends of retail businesses. By connecting edge AI to in-store cameras, retailers may soon be able to verify that deliveries match exactly what was ordered, and AI cameras might ensure that customers who “click and buy” leave the store with the correct items. Such technologies also facilitate inventory management and demand analysis.

The promise of edge computing and AI goes well beyond addressing issues like theft. It can deliver customer insights with the potential to completely change the industry, giving retailers the know-how they need to better lay out their stores and arrange their inventory to boost sales.

Productivity driven by data

Almost every store already has cameras installed, but the footage is rarely used for anything other than evidence in the event of an incident. With an edge system and artificial intelligence, this footage can be quickly converted into real business value. Simply by plugging a video feed into an edge server GPU, the system can provide the data needed for insightful analytics on consumer behavior. This enables shops to offer dynamic pricing, real-time promotions, and instant targeted advertising, all of which can boost sales and generate income.
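
As a toy illustration of how such analytics could feed dynamic pricing, the rule below adjusts a price from two signals an edge system might produce. The thresholds and multipliers are invented for the example, not taken from any real deployment.

```python
def dynamic_price(base_price, shoppers_in_aisle, stock_remaining):
    """Nudge a price up under high real-time demand, down to clear excess stock."""
    price = base_price
    if shoppers_in_aisle > 20:  # busy aisle: demand supports a small premium
        price *= 1.05
    if stock_remaining > 100:   # overstocked: promote to move inventory
        price *= 0.90
    return round(price, 2)
```

A production system would fold in many more signals (time of day, competitor prices, margin floors); the point here is only that camera-derived counts become a pricing input.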

AI has the potential to be a very effective tool for retailers looking to “join the dots” and develop an omnichannel strategy. By fusing retail and e-commerce data, it contributes to a “Customer 360” perspective that allows for better experiences. AI chatbots will play a bigger role in customer support both in-store and online, helping dismantle the barriers between brick-and-mortar and online retail. These interactions will continue to build a complete picture of the customer.

Business executives can increase productivity and improve staff schedules by analyzing employee behavior. Cameras can also help safeguard personnel from danger and ensure that stores aren’t overcrowded. Here, edge computing and AI operate in concert, giving retailers a means of processing this data quickly and efficiently at the moment of engagement.

Examining the supply chain

Applications for this potent blend of edge computing and AI extend far beyond the shop floor. By using analytics from warehouses and stock rooms, retailers can purchase products, replenish shelves, and arrange logistics more effectively.

This data increases in value the more it is shared across company divisions. Combined with shop floor data, real-time stock levels can provide valuable insights into purchase patterns, leading to a more streamlined and efficient operation. Retailers, for example, might use the data to determine the typical number of consumers visiting the store at different times of the year and stock their floors to meet that demand. For merchants, this is a critical differentiator in the run-up to peak seasons like Christmas.
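
The seasonal-stocking idea can be sketched in a few lines, assuming historical (period, visitor count) records and an invented units-per-visitor conversion rate:

```python
from collections import defaultdict
from statistics import mean

def typical_footfall(visits):
    """Average visitor count per period from (period, count) records."""
    by_period = defaultdict(list)
    for period, count in visits:
        by_period[period].append(count)
    return {period: mean(counts) for period, counts in by_period.items()}

def stock_target(avg_visitors, units_per_visitor=0.5, buffer=0.2):
    """Units to stock for a period: expected demand plus a safety buffer."""
    return round(avg_visitors * units_per_visitor * (1 + buffer))
```

The conversion rate and buffer are placeholders; in practice both would themselves be estimated from sales data.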

Artificial intelligence (AI) has applications across warehousing and the supply chain, from inventory and warehouse management to routing and cold chain optimization. AI analytics can ensure that goods arrive faster and fresher, give business leaders the ability to foresee issues, and cut down on waste.

The revolution in retail AI

Access to real-time data and analytics will determine the winners and losers in the retail industry. Innovative companies will use edge technology and AI to harness data and provide the personalized experiences customers want. In addition to enhancing daily operations and productivity, data can uncover insights that open up completely new sources of income.


Timescale Introduces Advanced AI Vector Database Extensions for PostgreSQL



Timescale, a PostgreSQL cloud database provider, recently announced the availability of two brand-new open-source extensions that greatly improve the scalability and usability of vector data retrieval in PostgreSQL for artificial intelligence applications.

The new extensions, pgvectorscale and pgai, make it possible to use PostgreSQL, an open-source relational database, for vector data retrieval, which is essential for developing AI applications and specialized contextual search.

Vector databases let AI programmers store data as high-dimensional arrays, connected according to their contextual relationships with each other. In contrast to typical relational databases, vector databases store data by contextualized meaning, where the “nearest neighbor” can be used to relate items. For example, a cat and a dog, as family pets, are closer in meaning to each other than either is to an apple. This speeds up the process of finding information when an AI searches semantic data such as keywords, documents, photos, and other media.
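
The nearest-neighbor idea can be shown with a toy example. The three-dimensional embeddings below are made up for illustration; real vector databases store model-generated vectors with hundreds or thousands of dimensions.

```python
import math

# Made-up embeddings: dimensions loosely mean (pet-like, animal-like, food-like).
EMBEDDINGS = {
    "cat":   [0.90, 0.80, 0.10],
    "dog":   [0.85, 0.90, 0.05],
    "apple": [0.05, 0.10, 0.95],
}

def cosine_similarity(a, b):
    """Similarity of direction between two vectors (1.0 means identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest_neighbor(query):
    """Return the stored item whose embedding is most similar to the query."""
    return max(EMBEDDINGS, key=lambda name: cosine_similarity(EMBEDDINGS[name], query))
```

Here “cat” and “dog” score far more similar to each other than either does to “apple,” which is exactly the property a vector search exploits.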

Timescale’s AI product lead, Avthar Sewrathan, told SiliconANGLE in an interview that while most of this data is kept in very popular, high-performance vector databases, not all of the data used by services lives there. As a result, the same application context sometimes spans several data sources.

“AI is being incorporated into every organization in the world, in some form or another, whether through the development of new apps that capitalize on the power of large language models or through the redesign of current ones,” stated Sewrathan. When figuring out how to use AI, CTOs and technical teams must therefore decide whether to employ a distinct vector database or a database they are already familiar with. Making Postgres a better database for AI is the driving force behind these enhancements.

The first, pgvectorscale, builds on the open-source foundation of the original pgvector extension and enables developers to create more scalable artificial intelligence (AI) applications with improved search performance at a reduced cost.

According to Sewrathan, it incorporates two innovations: Statistical Binary Quantization, an enhancement of standard binary quantization that helps reduce memory use, and DiskANN, which can offload half of its search indexes to disk with very little impact on performance, saving a significant amount of money.
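
The memory-saving idea behind binary quantization, which Statistical Binary Quantization refines, can be sketched in a few lines: compress each float vector to one bit per dimension, then compare vectors with cheap bit operations instead of floating-point math. This is the textbook variant, not Timescale’s exact algorithm.

```python
def binarize(vector, threshold=0.0):
    """Pack a float vector into an int, one bit per dimension (1 if above threshold)."""
    bits = 0
    for value in vector:
        bits = (bits << 1) | (1 if value > threshold else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits: a cheap proxy for distance between quantized vectors."""
    return bin(a ^ b).count("1")
```

A 768-dimension float32 vector (3,072 bytes) shrinks to 96 bytes of bits, at the cost of some recall.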

In Timescale’s benchmarks of pgvectorscale, PostgreSQL attained 28x lower p95 latency and 16x greater query throughput than the widely used Pinecone vector database for approximate nearest neighbor queries at 99% recall. And because pgvectorscale is written in Rust instead of C, PostgreSQL developers will have more options when building for vector support.

The second extension, pgai, is intended to facilitate the development of retrieval-augmented generation (RAG) solutions for search and retrieval in AI applications. RAG blends the advantages of vector databases with the capabilities of LLMs, giving models access to current, reliable information in real time to reduce the frequency of hallucinations, which occur when an AI confidently makes erroneous statements.
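
The retrieval half of RAG can be sketched without any model at all. This toy version ranks documents by word overlap instead of vector similarity and stops short of the actual LLM call; the point is the pipeline shape: retrieve relevant context, then ground the prompt in it.

```python
DOCUMENTS = [
    "pgai brings OpenAI chat completions directly into PostgreSQL.",
    "pgvectorscale improves search performance for vector workloads.",
    "DestinE is an AI-driven simulator of the Earth's climate.",
]

def retrieve(query, documents, k=1):
    """Rank documents by words shared with the query; return the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query, documents):
    """Ground the (omitted) LLM call in retrieved context to curb hallucinations."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a real RAG system the retrieval step would query a vector index (such as one built with pgvectorscale) and the prompt would be sent to an LLM.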

Understanding this technique is key to building precise and dependable AI systems. The first release of pgai brings OpenAI chat completions from models like GPT-4o directly into PostgreSQL and makes it quick to create OpenAI embeddings.

GPT-4o, the most recent flagship model from OpenAI, offers strong multimodal capabilities such as video comprehension and real-time speech interaction.

According to Sewrathan, PostgreSQL’s vector functionality builds a strong “ease of use” bridge for developers. This is significant because many firms currently use PostgreSQL or other relational databases.

Adding vector storage and other features via an extension is much easier because it streamlines your data architecture, according to Sewrathan. “One database is all you have.” It can store several kinds of data simultaneously. That has been extremely beneficial, because without it there would be a great deal of complexity, data synchronization, and data deduplication.

Apple is Updating Siri and Giving It New Generative AI Capabilities



Apple’s yearly Worldwide Developers Conference (WWDC) 2024 kicked off yesterday with the release of iOS 18, macOS updates, and other significant announcements. Most notable among these was the launch of the eagerly awaited new iteration of Apple’s voice assistant, Siri. The revised Siri gains stronger generative AI capabilities through a brand-new system dubbed “Apple Intelligence.”

With these enhanced artificial intelligence capabilities, Siri performs better, becoming more contextually aware, more natural, and more deeply integrated into the Apple ecosystem. The incorporation of ChatGPT promises more intelligent responses and new AI-powered functionality. The updated Siri, according to Apple, is “more natural, more contextually relevant, and more personal,” and it can speed up and streamline routine activities. Let’s examine each of the newly added features of Apple’s sophisticated voice assistant in depth.

New style

The redesigned Siri features many changes, including a bright light that encircles the edges of the screen when it is activated. This visual makeover aims to increase user engagement. Beyond aesthetics, Apple has added onscreen awareness to Siri, allowing the assistant to take actions based on what’s on the screen. Users can now ask Siri to locate and act upon book recommendations received via Messages or Mail, or to add a new address from a text message straight to a contact card.

Enhanced language understanding

Apple’s Siri now features richer language-understanding capabilities, allowing it to process and respond to user commands more naturally. This improvement ensures Siri can maintain context across multiple interactions, even if users stumble over their words. Additionally, users can now type to Siri and switch seamlessly between text and voice inputs, offering more flexible ways to interact with the assistant.

Siri’s integration with third-party apps

Thanks to the new App Intents API, one of the most notable aspects of the new Siri is its ability to perform actions across a variety of apps, both those developed by Apple and those by third-party developers. This means developers can give Siri specific commands to execute within their apps. For example, users may ask Siri to “send the photos from the barbecue on Saturday to Malia” in a messaging app, or to “make this photo pop” in a photo-editing app. This added capability makes interactions between various apps and services much easier.

Apple and OpenAI collaborate to power Siri

Notably, Apple and OpenAI have teamed up to enhance Siri’s generative AI capabilities by integrating ChatGPT technology. With this integration, Siri can respond with greater sophistication and handle more complex tasks. Users of Apple’s Mac and iPhone operating systems will gain access to ChatGPT through updates, improving features like text and content generation. This partnership is part of Apple’s plan to integrate cutting-edge AI technologies and maintain its competitiveness in the tech industry.

Apple says the sophisticated new Siri protects user privacy

Apple reassures users that Siri and the new AI capabilities in its devices will respect the company’s strict privacy policies. Certain AI functions will process data directly on the device, while more power-intensive operations will rely on the cloud without storing user data there. This strategy aligns with Apple’s goal of striking a balance between improved usefulness and consumer privacy.

The new Siri will only be available on select Apple devices

The newest iPhones, iPads, and Macs will be the only devices that can run this sophisticated Siri experience. Most of Siri’s new features, which are powered by Apple Intelligence, will only be available on the iPhone 15 Pro, iPhone 15 Pro Max, and iPads and Macs with M1 chips or later.

EU Introduces an AI-Driven “Digital Twin” of the Planet



Today, the European Commission unveiled the initial iteration of Destination Earth (DestinE), an AI-driven simulator designed to increase the precision of climate projections.

The initial edition of DestinE includes two models: one for extreme weather events and another for adapting to climate change. These models will be used to closely observe, predict, and simulate the Earth’s climate.

According to EU antitrust chief Margrethe Vestager, “DestinE means that we can observe environmental challenges which can help us predict future scenarios – like we have never done before.”

DestinE is powered by EuroHPC high-performance computers, including the LUMI supercomputer located in Finland. To accelerate data processing, the developers have integrated this computing power with AI.

Vestager stated, “This first phase shows how much we can achieve when Europe puts together its massive supercomputing power and its scientific excellence.”

The main model will probably change over time, however, and by the end of this decade a full digital replica of the Earth should be finished.

Digital Twin of the Earth

Want to test how a heatwave will impact food security? Or if a storm will flood a certain city? Or the best places to position your wind farm? All of that could be possible using the digital twin of the Earth.

The digital twin uses a sizable data lake to fuel its simulations and forecasts. This data comes from satellites such as those used in the EU’s Copernicus program, as well as from vast amounts of public data and IoT devices situated on the ground.

Future iterations of the digital twin will incorporate data from forests, cities, and oceans, covering pretty much anywhere on Earth that scientists can gather data.

The EU first launched DestinE in 2022. The digital twin will be constructed with funding exceeding €300 million.

With today’s launch, the first phase comes to a conclusion and the second phase begins, with a combined funding commitment of over €150 million for both.

Funding for the third phase will depend on approval of the final Digital Europe program for 2025–2027, which is presently being prepared.

The EU is not the only organization working on this kind of technology. Nvidia introduced its Earth-2 digital replica in March. According to the chip-manufacturing powerhouse, the Taiwanese government is already using the model to more accurately forecast when typhoons will make landfall.
