

Bringing Machine Learning Projects to Reality from Concept to Finish


The greatest innovation ever made by humanity is stalling out of the gate. Machine learning projects have the potential to help us navigate the biggest hazards we face, such as child abuse, pandemics, wildfires, and climate change. ML can improve healthcare, increase sales, reduce expenses, stop fraud, and streamline manufacturing.

However, ML projects frequently fall short of expectations or fail to launch at all, incurring heavy losses when they stall before deployment. One of the main problems is that businesses frequently concentrate more on the technology than on how best to use it. This is akin to being more enthusiastic about a rocket's development than its eventual launch.

Shifting a Misplaced Focus from Technology to Deployment

The issue with ML is its popularity: despite all the excitement surrounding the underlying technology, the specifics of how deploying it actually improves business operations are often overlooked. In this sense, ML is currently too hot for its own good. After decades of consulting and organizing ML conferences, that lesson has finally sunk in for me.

Today's ML enthusiasm is overblown because it perpetuates the ML fallacy, a widespread misunderstanding. It goes like this: because ML algorithms can successfully produce models that hold up for new, unseen situations (which is both amazing and true), those models are assumed to be intrinsically valuable (which is not always true). In fact, ML becomes valuable only when it generates organizational change, that is, when a model it produces is actively used to improve operations. A model has no real value until it is actively employed to change the way your company operates. A model won't deploy itself and won't resolve any business issues on its own. Only if you use ML to cause disruption will it truly be the disruptive technology it promises to be.

Regrettably, companies frequently fall short in bridging the "culture gap" between data scientists and business stakeholders, which leaves models hoarded and undeployed. Data scientists, who carry out the model creation step, generally don't want to be bothered with "mundane" managerial tasks and become fixated on the data science itself. They frequently take deployment for granted, overlooking the rigorous business process that would involve stakeholders in cooperatively planning the model's adoption.

Meanwhile, many businesspeople, particularly those already inclined to dismiss the specifics as "too technical," have been persuaded that this amazing technology is a magic bullet that will fix all their problems. When it comes to project specifics, they defer to the data scientists. They are hard to win over, though, once they eventually have to face the operational disruption a deployed model would cause. Caught off guard, the stakeholder hesitates before changing operations that are essential to the business's profitability.

The hose and the faucet never connect because no one takes proactive responsibility. Far too often, the operational team drops the ball when the data scientist presents a workable model that they are not prepared for. Although there are amazing exceptions and spectacular successes, the generally dismal track record of ML that we currently see portends widespread disillusionment and possibly even the dreaded AI winter.

The Solution: Business Machine Learning

The solution is to plan meticulously for deployment from the very start of every machine learning project. Laying the groundwork for the operational change that deployment will bring takes more preaching, mingling, cross-disciplinary cooperation, and change-management panache than many, including myself, first thought.

To do this, a skilled team needs to work together to follow an end-to-end procedure that starts by planning backward from deployment. The six steps that make up this practice, which I refer to as bizML, are as follows.

Step 1: Determine the deployment goal

Define the business value proposition: how ML will be operationalized (i.e., implemented) in order to improve operations.

Example: UPS predicts which destination addresses will receive package deliveries in order to plan a more efficient delivery process.

Step 2: Determine the prediction goal

Define exactly what the ML model will predict for each individual case. In business, every detail counts.

Example: How many shipments across how many stops will be needed tomorrow for each destination? For instance, by 8:30 a.m., a collection of three office buildings at 123 Main St. with 24 business suites will need two stops, each with three packages.

Step 3: Determine the evaluation metrics

Establish the key benchmarks to monitor during model training and deployment, as well as the performance threshold that must be met for the project to be deemed a success.

Examples include miles traveled, gasoline gallons used, carbon emissions in tons, and stops per mile (the more stops per mile a route has, the more value is gained from each mile of driving).
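To make this concrete, here is a minimal Python sketch of how such route metrics might be tracked. The field names, example numbers, and conversion factor below are illustrative assumptions for a hypothetical delivery route, not figures from UPS or this article.

```python
# A minimal sketch of step 3: computing evaluation metrics for delivery routes.
# All field names, values, and the CO2 conversion factor are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RouteDay:
    miles_driven: float   # total miles for the route that day
    stops: int            # number of delivery stops completed
    gallons_used: float   # fuel consumed

def stops_per_mile(route: RouteDay) -> float:
    """More stops per mile means more value gained from each mile of driving."""
    return route.stops / route.miles_driven if route.miles_driven else 0.0

def co2_tons(route: RouteDay, kg_per_gallon: float = 8.9) -> float:
    """Rough CO2 estimate; roughly 8.9 kg per gallon of gasoline is a common figure."""
    return route.gallons_used * kg_per_gallon / 1000.0

baseline = RouteDay(miles_driven=120.0, stops=140, gallons_used=14.0)
optimized = RouteDay(miles_driven=104.0, stops=140, gallons_used=12.1)

print(f"stops/mile: {stops_per_mile(baseline):.2f} -> {stops_per_mile(optimized):.2f}")
print(f"CO2 (tons): {co2_tons(baseline):.3f} -> {co2_tons(optimized):.3f}")
```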

Step 4: Prepare the data

Define the form and content requirements of the training data.

Example: Gather a wealth of both positive and negative instances to learn from. Include addresses that did receive deliveries on particular days as well as those that did not.
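Below is a hypothetical sketch, in Python with pandas, of what assembling such labeled training data might look like. The column names and values are invented for illustration and are not UPS's actual schema.

```python
# A hypothetical sketch of step 4: assembling labeled training data with both
# positive examples (address received a delivery that day) and negative ones.
# Column names and values are illustrative assumptions only.

import pandas as pd

history = pd.DataFrame({
    "address_id":         [101, 101, 102, 103, 103, 104],
    "date":               pd.to_datetime(
        ["2024-03-01", "2024-03-02", "2024-03-01",
         "2024-03-01", "2024-03-02", "2024-03-02"]),
    "deliveries_past_7d": [5, 6, 0, 2, 2, 1],   # recent delivery volume
    "is_business":        [1, 1, 0, 1, 1, 0],   # simple address attribute
    "received_delivery":  [1, 1, 0, 1, 0, 0],   # label: 1 = delivered that day
})

# Separate features (model inputs) from the label the model will learn to predict.
feature_cols = ["deliveries_past_7d", "is_business"]
X = history[feature_cols]
y = history["received_delivery"]

print(y.value_counts())  # confirm both positive and negative examples are present
```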

Step 5: Train the model

Use the data to generate a predictive model. The model is the thing that has been "learned."

Neural networks, decision trees, logistic regression, and ensemble models are a few examples.
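As a rough illustration of this step, here is a minimal scikit-learn sketch that trains a logistic regression on synthetic data. Logistic regression is chosen only because it is one of the options listed above, and every value in the example is made up.

```python
# A minimal sketch of step 5: training a predictive model on prepared data.
# The data below is synthetic and purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 2))   # two illustrative features per address-day
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 500) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # the "learned" object is the model
print("held-out accuracy:", model.score(X_test, y_test))
```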

Step 6: Deploy the model

Apply the knowledge gained to new cases by using the model to provide predicted scores, or probabilities, and then take appropriate action based on those scores to enhance business operations.

Example: UPS enhanced its system for assigning packages to delivery trucks at shipping centers by taking into account both known and anticipated packages. An estimated 18.5 million miles, $35 million, 800,000 gallons of fuel, and 18,500 metric tons of emissions are saved annually thanks to this capability.
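To show what acting on predicted scores can look like in code, here is a hypothetical sketch that scores new cases with a trained classifier and applies a simple decision threshold. The features, the 0.7 cutoff, and the suggested actions are all illustrative assumptions, not UPS's actual logic.

```python
# A hypothetical sketch of step 6: scoring new cases and acting on the scores.
# The training data, features, and the 0.7 threshold are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_train = rng.random((500, 2))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0.8).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# New, unseen cases: tomorrow's candidate destination addresses.
new_cases = np.array([[0.9, 0.8], [0.2, 0.1], [0.6, 0.4]])
probabilities = model.predict_proba(new_cases)[:, 1]   # P(address receives a delivery)

for features, p in zip(new_cases, probabilities):
    action = "pre-assign to a planned route" if p >= 0.7 else "handle ad hoc"
    print(f"features={features}, p={p:.2f} -> {action}")
```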

These six steps outline a business process that provides a clear path to ML deployment. Everyone who wants to participate in machine learning projects, whether in a technical or business capacity, needs to know them.

Step 6 culminates in deployment, but that does not mean you are done; rather, it is on to something new. BizML marks only the beginning of an ongoing process, a new stage of managing enhanced operations and maintaining performance. Once a model is launched, it must be maintained through regular monitoring and refreshing.

Completing these six steps in this order is practically a given. To understand why, let's begin at the end. The two core ML steps, model training and deployment, come last as the culminating steps 5 and 6; bizML drives the project toward that successful conclusion.

Step 4, preparing the data, is the familiar prerequisite that comes just before those two; it is always completed before model training. For machine learning software to function, the data you feed it must be in the correct format. That stage has been a crucial component of modeling initiatives since corporations began using linear regression in the 1960s.

Before the technical magic, though, you have to work the business magic. That is the purpose of the first three steps. They launch a crucial "preproduction" stage of pitching, mingling, and collaborating to reach a consensus on how machine learning will be deployed and how its effectiveness will be assessed. Crucially, these preliminary steps encompass much more than just deciding on the project's business goal. They push data scientists outside their comfort zone to collaborate closely with business-side staff, and they ask businesspeople to delve into the specifics of how predictions will change operations.

Including Business Partners in the Process

Following all six steps of the bizML practice is not unheard of, just infrequent. The machine learning projects that do follow them, though rare, tend to be quite successful. Although it has taken time for a well-known, established framework to emerge, many seasoned data scientists are familiar with the concepts at the core of bizML.

Business executives and other stakeholders are the ones who probably need it most, but they are also the ones least likely to know about it. In fact, the general business community is still largely unaware that a specialized business practice is needed in the first place. This makes sense, because the popular narrative misleads them: AI is frequently overhyped as a mysterious yet fascinating panacea. Meanwhile, many data scientists would much rather crunch numbers than take the time to explain.


LG Introduces Smarter Features in 2024 OLED and QNED AI TVs for India


LG Electronics India today unveiled its much-awaited 2024 portfolio of OLED evo AI and QNED AI TVs. First showcased at CES 2024 earlier this year, these televisions are poised to transform home entertainment with advanced AI capabilities and improved audiovisual experiences.

AI-Powered Performance: The Television of the Future

The most notable feature of the 2024 lineup is LG's cutting-edge Alpha 9 Gen 6 AI processor, a powerhouse that delivers up to four times the AI performance of earlier versions. AI Picture Pro with AI Super Upscaling produces striking visuals, while AI Sound Pro creates an immersive audio experience with simulated 9.1.2 surround sound.

A Wide Variety of Choices to Meet Every Need

LG's 2024 range includes OLED evo G4, C4, and B4 series models alongside QNED MiniLED (QNED90T), QNED88T, and QNED82T options. With screen sizes ranging from a compact 42 inches to an expansive 97 inches, the lineup caters to a broad spectrum of consumer preferences.

Gaming and Entertainment Features to Enhance the Experience

The new TVs promise an exciting gaming experience with an array of capabilities, including a 4K 144Hz refresh rate, extensive HDMI 2.1 support, and Game Optimizer, which makes it simple to switch between display presets for different genres. The TVs also feature AMD FreeSync and NVIDIA G-SYNC Compatible technologies for fluid gameplay.

Cinephiles will value the TVs' dynamic tone mapping of HDR content, which ensures the best possible picture quality in any viewing conditions. Filmmaker Mode further enhances the cinematic experience by presenting films as the director intended.

Intelligent and Sophisticated WebOS

Featuring an intuitive UI and enhanced functions, LG’s latest WebOS platform powers the 2024 collection. LG has launched the WebOS Re:New program, which promises to upgrade users’ operating systems for the next five years. This ensures that consumers will continue to benefit from the newest features and advancements for many years to come.

Pricing and Availability

Pricing for the 2024 LG OLED evo AI and QNED AI TVs starts at INR 119,990. The TVs are available for purchase through LG's wide network of retail partners in India.

The Future of Home Entertainment

With its 2024 portfolio, LG Electronics India has once again demonstrated its dedication to innovation and to pushing the limits of home entertainment. With striking visuals, immersive audio, and smart capabilities that adapt to changing consumer demands, the new OLED evo AI and QNED AI TVs promise an unmatched viewing experience.



Anomalo Expands Availability of AI-Powered Data Quality Platform on Google Cloud Marketplace


Anomalo announced that it has broadened its collaboration with Google Cloud and made its platform available on the Google Cloud Marketplace, enabling customers to use their allotted Google Cloud spend to purchase Anomalo right away. Without requiring them to write code, define thresholds, or configure rules, Anomalo gives businesses a way to monitor the quality of data processed or stored in Google Cloud's BigQuery, AlloyDB, and Dataplex.

Modern data-powered enterprises are building and operationalizing GenAI and machine learning (ML) models at scale while using their centralized data for real-time, predictive analytics. But dashboards and production models are only as good as the data that drives them. A prevalent issue faced by many data-driven organizations is that a significant portion of their data is missing, outdated, corrupted, or prone to unanticipated and unwanted modifications. Instead of realizing their data's full potential, businesses end up spending more time fixing problems with it.

Keller Williams, BuzzFeed, and Aritzia are among the joint Anomalo and Google Cloud customers. "Anomalo with Google Cloud's BigQuery gives us more confidence and trust in our data so we can make decisions faster and mature BuzzFeed Inc.'s data operation," said Gilad Lotan, head of data science and analytics at BuzzFeed. "We can identify problems before stakeholders and data users throughout the organization even realize they exist thanks to Anomalo's automatic detection of data quality and availability." With the combined capabilities of BigQuery and Anomalo, data teams are well placed to transition from reactive to proactive operations.

"Our goal of helping businesses gain confidence in the data they rely on to run their operations is closely aligned with Google Cloud's," said Elliot Shmukler, co-founder and CEO of Anomalo. "With data volumes skyrocketing, our clients are using BigQuery and Dataplex to manage, track, and build data-driven applications. Bringing our AI-powered data quality monitoring to Google Cloud Marketplace was a no-brainer as a next step in this partnership, and a massive win."

According to Dai Vu, Managing Director, Marketplace & ISV GTM Programs at Google Cloud, “bringing Anomalo to Google Cloud Marketplace will help customers quickly deploy, manage, and grow the data quality platform on Google Cloud’s trusted, global infrastructure.” “Anomalo can now support customers on their digital transformation journeys and scale in a secure manner.”



Soket AI Labs Unveils Pragna-1B AI Model in Partnership with Google Cloud


The open-source multilingual foundation model "Pragna-1B" was released on Wednesday by Indian artificial intelligence (AI) research company Soket AI Labs in association with Google Cloud.

In addition to English, the model will offer AI services in Indian vernacular languages such as Hindi, Bengali, and Gujarati.

"Our collaboration with Google Cloud was a key factor in the pre-training of the Pragna-1B model. Using Google Cloud's AI infrastructure made our development of Pragna-1B both efficient and economical," said Soket AI Labs founder Abhishek Upperwal, adding that Pragna-1B delivers performance and efficacy in language processing tasks comparable to models in its category despite having been trained on fewer parameters.

Pragna-1B, he continued, “is specifically designed for vernacular languages. It provides balanced language representation and facilitates faster and more efficient tokenization, making it ideal for organizations looking to optimize operations and enhance functionality.”

Soket AI Labs and Google Cloud will soon expand their partnership further by adding Soket's AI developer platform to the Google Cloud Marketplace and the Pragna model series to the Google Vertex AI model repository.

This integration will give developers a robust, efficient experience for fine-tuning models. According to the company, combining the high-performance resources of Vertex AI and TPUs with the user-friendly interface of Soket's AI Developer Platform will provide optimal efficiency and scalability for AI projects.

According to the firm, this partnership would also make it possible for technical teams to collaborate on the fundamental tasks involved in creating high-quality datasets and training massive models for Indian languages.

"We are delighted to collaborate with Soket AI Labs to democratize AI innovation in India," said Bikram Singh Bedi, Vice President and Country Managing Director, Google Cloud India, adding that Pragna-1B, developed on Google Cloud, represents a groundbreaking advancement in Indian language technology and offers businesses improved scalability and efficiency.

Since its founding in 2019, Soket has shifted its focus from building a decentralized data exchange for smart cities to operating as an artificial intelligence research company.


