Bringing Machine Learning Projects to Reality from Concept to Finish

The greatest innovation humanity has ever made is stalling out of the gate. Machine learning projects have the potential to help us navigate the biggest hazards we face, such as child abuse, pandemics, wildfires, and climate change. The technology can improve healthcare, increase sales, reduce expenses, stop fraud, and streamline manufacturing.

However, ML projects frequently fall short of expectations or never launch at all, incurring heavy losses when they stall before deployment. One of the main problems is that businesses frequently concentrate more on the technology than on how best to use it. This is akin to being more enthusiastic about a rocket’s development than its eventual launch.

Shifting a Misplaced Focus from Technology to Deployment

The issue with ML is, paradoxically, its popularity. Despite all the excitement surrounding the underlying technology, the specifics of how its deployment improves business operations are often overlooked. In this sense, ML is currently too hot for its own good. After decades of consulting and organizing ML conferences, the lesson has finally sunk in for me.

Today’s ML enthusiasm is overblown because it perpetuates the ML fallacy, a widespread misunderstanding. It runs as follows: because ML algorithms can successfully produce models that hold up for new, unforeseen cases (which is both amazing and true), those models must be intrinsically valuable (which is not necessarily true). In fact, ML becomes valuable only when it generates organizational change, that is, when a model it produces is actively used to improve operations. A model won’t deploy itself and won’t resolve any business issue on its own; it has no real value until it changes the way your company operates. Only if you use ML to cause disruption will it truly be the disruptive technology it promises to be.

Regrettably, companies frequently fail to bridge the “culture gap” between data scientists and business stakeholders, which leaves models shelved and prevents deployment. Data scientists, who carry out the model-creation step, generally don’t want to be bothered with “mundane” managerial tasks and become completely fixated on the data science itself. They frequently take deployment for granted, skipping the rigorous business process that would involve stakeholders in cooperatively planning the model’s adoption.

Meanwhile, a lot of businesspeople, particularly those already inclined to disregard the specifics as “too technical,” have been persuaded that this amazing technology is a magic bullet that will fix all of their problems. When it comes to project specifics, they defer to the data scientists. They are hard to win over, though, when they eventually have to face the operational disruption that a deployed model would cause. Caught off guard, the stakeholder hesitates before changing operations that are essential to the business’s profitability.

The hose and the faucet never connect because no one takes proactive responsibility. Far too frequently, the operational team drops the ball: the data scientist presents a workable model, and nobody is prepared for it. Although there are amazing exceptions and spectacular achievements, the generally dismal performance of ML that we currently see portends widespread disillusionment and possibly even the dreaded AI winter.

The Solution: Business Machine Learning

The solution is to meticulously plan for deployment right from the start of every machine learning project. Laying the foundation for the operational change that deployment will bring takes more preaching, mingling, cross-disciplinary cooperation, and change-management panache than many, including myself, first thought.

To do this, a skilled team needs to collaborate on an end-to-end procedure that plans backward from deployment. The six steps that make up this practice, which I refer to as bizML, are as follows.

Step 1: Establish the deployment goal

Define the business value proposition: how machine learning (ML) will affect operations in order to improve them (i.e., its operationalization or implementation).

Example: To plan a more efficient delivery process, UPS predicts which destination addresses will receive package deliveries.

Step 2: Establish the prediction goal

Define what the ML model will predict for each individual case. In business, every little detail counts.

Example: How many shipments across how many stops will be needed tomorrow for each destination? For instance, by 8:30 a.m., a collection of three office buildings at 123 Main St. with 24 business suites will need two stops, each with three packages.

Step 3: Establish the evaluation metrics

Establish the key benchmarks to monitor during model training and deployment, as well as the performance threshold that must be met for the project to be deemed successful.

Examples include miles traveled, gallons of fuel used, carbon emissions in tons, and stops per mile (the more stops per mile a route has, the more value is gained from each mile of driving).
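
To make these metrics concrete, here is a minimal sketch of how a team might compute them. All route figures are hypothetical, and the metric definitions are illustrative assumptions, not UPS’s actual formulas.

```python
# Minimal sketch: tracking delivery-routing metrics.
# All figures are hypothetical; definitions are illustrative assumptions.

def stops_per_mile(stops: int, miles: float) -> float:
    """More stops per mile means more value gained per mile driven."""
    return stops / miles

# Hypothetical baseline route vs. a route planned with model predictions.
baseline = {"stops": 120, "miles": 95.0, "gallons": 11.8}
ml_route = {"stops": 120, "miles": 82.0, "gallons": 10.1}

print(f"baseline: {stops_per_mile(baseline['stops'], baseline['miles']):.2f} stops/mile")
print(f"ML route: {stops_per_mile(ml_route['stops'], ml_route['miles']):.2f} stops/mile")
print(f"fuel saved: {baseline['gallons'] - ml_route['gallons']:.1f} gallons")
```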

Step 4: Prepare the data

Establish the content and format requirements for the training data.

Example: Gather a wealth of both positive and negative instances to learn from. Include destinations that did receive deliveries on particular days as well as those that did not.
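
As a minimal sketch of this step, assuming a hypothetical delivery-history file with one row per address per day (the file and column names are invented for illustration), the labeled training data might be assembled like this:

```python
import pandas as pd

# Hypothetical source file with columns: address_id, date, packages.
history = pd.read_csv("delivery_history.csv")

# Positive examples: address/day pairs that actually received a delivery.
history["received_delivery"] = (history["packages"] > 0).astype(int)

# Example features: day of week, plus each address's recent delivery volume.
history["date"] = pd.to_datetime(history["date"])
history = history.sort_values(["address_id", "date"])
history["day_of_week"] = history["date"].dt.dayofweek
history["recent_volume"] = (
    history.groupby("address_id")["packages"]
    # shift(1) so today's label never leaks into its own features
    .transform(lambda s: s.shift(1).rolling(28, min_periods=1).mean())
)

features = history[["day_of_week", "recent_volume"]]
labels = history["received_delivery"]
```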

Step 5: Train the model

Use the data to generate a predictive model. The model is the thing that has been “learned.”

Neural networks, decision trees, logistic regression, and ensemble models are a few examples.
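
Continuing the hypothetical data-preparation sketch above, a minimal training step might look like the following; scikit-learn’s gradient boosting stands in for the ensemble option, but any of the listed model families could be swapped in.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Continuing the hypothetical `features` and `labels` from the previous sketch.
# A held-out split checks that the model generalizes to unseen cases.
X_train, X_test, y_train, y_test = train_test_split(
    features.fillna(0), labels, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier()  # one of the ensemble options
model.fit(X_train, y_train)

# Evaluate on unseen cases before any talk of deployment.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```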

Step 6: Deploy the model

Apply what was learned to new cases: use the model to generate predictive scores (probabilities), then act on those scores to improve business operations.

Example: UPS enhanced its system for allocating packages to delivery trucks at shipping centers by taking into account both known and anticipated packages. This system saves an estimated 18.5 million miles, $35 million, 800,000 gallons of fuel, and 18,500 metric tons of emissions annually.
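
As a minimal sketch of this final step, continuing the hypothetical model from the training sketch: the deployment loop scores tomorrow’s cases and acts on them. The 0.7 threshold and the routing action are illustrative assumptions, not UPS’s actual logic.

```python
# Minimal sketch: score tomorrow's addresses and act on the predictions.
# The threshold and the load-planning action are illustrative assumptions.
tomorrow = features.tail(100)  # hypothetical stand-in for tomorrow's cases

scores = model.predict_proba(tomorrow)[:, 1]  # predicted delivery probability

for address, p in zip(tomorrow.index, scores):
    if p >= 0.7:
        # e.g., pre-allocate the address to a truck's planned route
        print(f"address {address}: expect delivery (p={p:.2f}), add to route plan")
```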

These six steps outline a business procedure that provides a clear route to ML implementation. Everyone who wants to participate in machine learning projects, whether in a technical or business capacity, needs to know them.

Step 6 culminates in deployment, but that doesn’t mean you’re done; it marks the start of something new. bizML kicks off a continuous process, a new stage of managing enhanced operations and maintaining the model’s performance. Once launched, a model needs to be maintained, which includes regular monitoring and refreshing.
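
As a minimal sketch of that maintenance loop (the launch benchmark and alert threshold below are illustrative assumptions), a periodic check might compare live performance against the level recorded at deployment:

```python
from sklearn.metrics import roc_auc_score

# Minimal monitoring sketch: compare live performance against the score
# recorded at launch. Both constants are illustrative assumptions.
LAUNCH_AUC = 0.82  # hypothetical AUC recorded at deployment
ALERT_DROP = 0.05  # hypothetical tolerated degradation

def needs_refresh(model, X_recent, y_recent) -> bool:
    """Return True if performance has drifted enough to warrant retraining."""
    live_auc = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])
    return (LAUNCH_AUC - live_auc) > ALERT_DROP
```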

Completing these six steps in this order is practically a given. To understand why, let’s begin at the end. The two primary ML steps, model training and deployment, are the last two, steps 5 and 6; bizML drives the project to that successful conclusion.

Step 4, prepare the data, is a well-known prerequisite that comes right before those two; it is always completed before model training. For machine learning software to function, the data you feed it must be in the correct format. That step has been a crucial component of modeling initiatives since corporations began using linear regression in the 1960s.

Before the technical magic, you have to do the business magic. That is the purpose of the first three steps. They constitute a crucial “preproduction” stage of pitching, mingling, and collaborating to reach consensus on how machine learning will be implemented and how its effectiveness will be assessed. Crucially, these preliminary steps encompass much more than just deciding on the project’s business goal. They push data scientists to step outside their comfort zone and collaborate closely with business-side staff, and they ask businesspeople to delve into the specifics of how predictions will change operations.

Including Business Partners in the Process

Following all six steps of the bizML practice is not unheard of, even if it is not yet common. Machine learning projects that do so can succeed spectacularly, even if such successes remain rare. Many seasoned data scientists are familiar with the concepts at the core of the bizML framework, though it has taken some time for a well-known, established framework to emerge.

Business executives and other stakeholders are the ones who probably need it most, yet they are also the least likely to know about it. In fact, the general business community remains unaware that specialized business practices are needed in the first place. This makes sense, because the popular story misleads them: AI is frequently overhyped as a mysterious yet fascinating panacea. Meanwhile, many data scientists would much rather crunch numbers than take the time to explain.

Timescale Introduces Advanced AI Vector Database Extensions for PostgreSQL

Timescale, a PostgreSQL cloud database provider, recently announced the availability of two new open-source extensions that greatly improve the scalability and usability of PostgreSQL for vector data retrieval in artificial intelligence applications.

The new extensions, pgvectorscale and pgai, make it possible to use PostgreSQL, an open-source relational database, for vector data retrieval. This is essential for developing AI applications and specialized contextual search.

Vector databases let AI developers store data as high-dimensional arrays, connecting items based on their contextual relationships with one another. In contrast to typical relational databases, vector databases store data by contextualized meaning, where the “nearest neighbors” in the vector space are the most closely related items. For example, a cat and a dog are closer in meaning, as family pets, than either is to an apple. This speeds up the information-finding process when an AI searches semantic data, including keywords, documents, photos, and other media.
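
As a minimal sketch of the nearest-neighbor idea (the toy four-dimensional vectors below are invented for illustration; real embeddings have hundreds or thousands of model-generated dimensions):

```python
import numpy as np

# Toy embeddings: invented 4-dimensional vectors for illustration only.
embeddings = {
    "cat":   np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.9, 0.2, 0.1]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of direction; 1.0 means identical meaning in this model."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embeddings["cat"]
for word, vec in embeddings.items():
    print(word, round(cosine_similarity(query, vec), 3))
# "dog" scores closest to "cat"; "apple" is the distant neighbor.
```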

Timescale’s AI product lead, Avthar Sewrathan, told SiliconANGLE in an interview that while most of this kind of data is kept in popular, high-performance vector databases, not all of the data a service uses lives there. As a result, there are occasionally several data sources serving the same context.

“AI is being incorporated into every organization in the world, in some form or another, whether through the development of new apps that capitalize on the power of large language models or through the redesign of current ones,” stated Sewrathan. Therefore, when figuring out how to use AI, CTOs and technical teams must decide whether to employ a distinct vector database or a database they already know. Making Postgres a better database for AI is the driving force behind these extensions.

The first extension, pgvectorscale, builds on the open-source foundation of pgvector, the original vector extension, and enables developers to create more scalable artificial intelligence (AI) applications with improved search performance at a reduced cost.

According to Sewrathan, it incorporates two innovations: Statistical Binary Quantization, an enhancement of standard binary quantization that helps reduce memory use, and DiskANN, which can offload half of its search indexes to disk with very little impact on performance, saving a significant amount of money.
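
As a rough sketch of what this looks like in practice, assuming a hypothetical `documents` table with an `embedding` column (consult Timescale’s pgvectorscale documentation for the authoritative syntax), enabling the extension and building a DiskANN-backed index from Python might look like this:

```python
import psycopg2

# Minimal sketch; connection string, table, and column are hypothetical.
conn = psycopg2.connect("dbname=appdb user=postgres")
with conn, conn.cursor() as cur:
    # pgvectorscale builds on pgvector; CASCADE pulls in the dependency.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vectorscale CASCADE;")
    # A DiskANN-style index on an embedding column (assumed syntax).
    cur.execute("""
        CREATE INDEX IF NOT EXISTS docs_embedding_idx
        ON documents USING diskann (embedding vector_cosine_ops);
    """)
```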

In Timescale’s benchmarks of pgvectorscale, PostgreSQL attained 28x lower p95 latency and 16x greater query throughput than the widely used Pinecone vector database for approximate nearest neighbor queries at 99% recall. And since pgvectorscale is written in Rust instead of C, PostgreSQL developers will have more options when developing for vector support.

The second extension, pgai, is intended to facilitate the development of retrieval-augmented generation (RAG) solutions for search and retrieval in AI applications. RAG combines the strengths of vector databases with the capabilities of LLMs by giving the models access to current, reliable information in real time, which reduces the frequency of hallucinations (instances where an AI confidently makes erroneous statements).

Understanding this technique is key to building precise and dependable AI systems. The first release of pgai builds OpenAI chat completions from models like GPT-4o directly into PostgreSQL and makes it fast to create OpenAI embeddings.
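
As a minimal sketch of the RAG pattern pgai is designed to support, here is the flow written in plain Python against a hypothetical `documents(body, embedding)` table, using pgvector’s cosine-distance operator and OpenAI’s client; pgai itself exposes comparable steps as in-database SQL functions.

```python
import psycopg2
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
conn = psycopg2.connect("dbname=appdb user=postgres")  # hypothetical DSN

question = "What did the quarterly report say about fuel costs?"  # example query

# 1. Embed the question.
emb = client.embeddings.create(
    model="text-embedding-3-small", input=question
).data[0].embedding

# 2. Retrieve the nearest stored chunks (<=> is pgvector's cosine distance).
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT body FROM documents ORDER BY embedding <=> %s::vector LIMIT 3",
        (str(emb),),
    )
    context = "\n".join(row[0] for row in cur.fetchall())

# 3. Ground the model's answer in the retrieved context to curb hallucination.
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```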

GPT-4o, OpenAI’s most recent flagship model, offers strong multimodal capabilities like video comprehension and real-time speech interaction.

According to Sewrathan, PostgreSQL’s vector functionality builds a strong “ease of use” bridge for developers. This is significant because many firms currently use PostgreSQL or other relational databases.

Adding vector storage and other features via an extension is much easier because it streamlines your data architecture, according to Sewrathan. “One database is all you have.” It can store several data types simultaneously. That has been extremely beneficial, because without it there would be a great deal of complexity, data synchronization, and data deduplication.

Apple is Updating Siri and Giving it new Generative AI Capabilities

Apple’s yearly Worldwide Developers Conference (WWDC) 2024 kicked off yesterday with the release of iOS 18, macOS updates, and other significant announcements. The most notable of these was the launch of the eagerly awaited new iteration of Apple’s voice assistant, Siri. Through a brand-new system dubbed “Apple Intelligence,” the revised Siri is equipped with stronger generative AI capabilities.

With these enhanced artificial intelligence capabilities, Siri performs better: it is more contextually aware, more natural, and more deeply ingrained in the Apple ecosystem. The incorporation of ChatGPT into this overhaul promises more intelligent responses and new AI-powered functionality. The updated Siri, according to Apple, is “more natural, more contextually relevant, and more personal,” and it can speed up and streamline routine activities. Let’s examine each of the newly added features of Apple’s upgraded voice assistant in depth.

A new look

A glowing light that encircles the screen edges when Siri is active is just one feature of the redesign; the goal of this visual makeover is greater user engagement. Beyond aesthetics, Apple has added onscreen awareness to Siri, allowing the assistant to take actions based on what’s on the screen. Customers can now ask Siri to locate and act upon book recommendations received via Messages or Mail, or to add a new address from a text message straight to a contact card.

Richer language understanding

Apple’s Siri now features richer language-understanding capabilities, allowing it to process and respond to user commands more naturally. This improvement ensures Siri can maintain context across multiple interactions, even if users stumble over their words. Additionally, users can now type to Siri and switch seamlessly between text and voice inputs, offering more flexible ways to interact with the assistant.

Siri’s compatibility with third-party apps

Thanks to the new App Intents API, one of the most notable aspects of the new Siri is its ability to perform actions in a variety of apps, both Apple’s own and those from third-party developers. This means developers can expose specific in-app actions for Siri to execute. For example, users may ask Siri to “send the photos from the barbecue on Saturday to Malia” in a messaging app, or to “make this photo pop” in a photo-editing app. This added capability makes interactions between various apps and services much easier.

Apple and OpenAI collaborate to power Siri

Notably, Apple has teamed up with OpenAI to enhance Siri’s generative AI capabilities by integrating ChatGPT technology. With this integration, Siri can respond with greater sophistication and handle more complicated tasks. Users of Apple’s Mac and iPhone operating systems will gain access to ChatGPT through updates, which will improve features like text and content generation. This partnership is part of Apple’s plan to integrate cutting-edge AI technologies and maintain its competitiveness in the tech industry.

The upgraded Siri protects user privacy

Apple reassures users that Siri and the new AI capabilities in its devices will respect its strict privacy policies. Certain AI functions will process data directly on the device, while for more power-intensive operations the company will rely on the cloud without storing user data there. This strategy aligns with Apple’s goal of striking a balance between improved usefulness and consumer privacy.

The new Siri will only be available on select Apple devices

Only the newest iPhones, iPads, and Macs will be able to run this sophisticated Siri experience. Most of Siri’s new features, powered by Apple Intelligence, will only be available on the iPhone 15 Pro, iPhone 15 Pro Max, and iPads and Macs with M1 chips or later.

EU Introduces an AI-Driven “Digital Twin” of the Planet

Today, the European Commission unveiled the initial iteration of Destination Earth (DestinE), an AI-driven simulator designed to increase the precision of climate projections.

The initial edition of DestinE includes two models: one for extreme weather events and another for adapting to climate change. These models will be used to closely observe, predict, and simulate the Earth’s climate.

According to EU antitrust chief Margrethe Vestager, “DestinE means that we can observe environmental challenges which can help us predict future scenarios – like we have never done before.”

DestinE is powered by EuroHPC high-performance computers, including the LUMI supercomputer in Finland. To accelerate data processing, the developers have integrated this computing power with AI.

Vestager stated, “This first phase shows how much we can achieve when Europe puts together its massive supercomputing power and its scientific excellence.”

The main model will likely evolve over time, however, and a complete digital twin of the Earth should be finished by the end of this decade.

Digital Twin of the Earth

Want to test how a heatwave will impact food security? Or if a storm will flood a certain city? Or the best places to position your wind farm? All of that could be possible using the digital twin of the Earth.

The digital twin fuels its simulations and forecasts with a sizable data lake. This data comes from satellites such as those in the EU’s Copernicus program; it will also come from vast amounts of public data and from IoT devices on the ground.

Future iterations of the digital twin will incorporate data from forests, cities, and oceans, pretty much any place on Earth from which scientists can gather data.

The EU first launched DestinE in 2022. The digital twin will be constructed with funding exceeding €300 million.

With today’s launch, the first phase comes to a conclusion and the second phase begins, with a combined funding commitment of over €150 million for both.

As the Digital Europe programme for 2025–2027 is presently being prepared, its approval will determine the funding for the third stage.

The EU is not the only organization working on this kind of technology. Nvidia introduced its Earth-2 digital twin in March. According to the chipmaking powerhouse, the model is already being used by the Taiwanese government to more accurately forecast when typhoons will make landfall.
