Technology

Apple Launches Eight Small AI Language Models for On-Device Use

In the field of artificial intelligence, “small language models” have gained significant traction lately because they can run locally on a device rather than requiring data-center-grade computers in the cloud. On Wednesday, Apple unveiled OpenELM, a collection of small AI language models that are available as open source and compact enough to run on a smartphone. For now, they are primarily proof-of-concept research models, but they might serve as the foundation for Apple’s on-device AI products in the future.

Apple’s new AI models, collectively named OpenELM for “Open-source Efficient Language Models,” are currently available on Hugging Face under an Apple Sample Code License. Because the license carries some restrictions, it may not fit the commonly accepted definition of “open source,” but the source code for OpenELM is available.

Microsoft’s Phi-3 models, which we covered on Tuesday, pursue a similar goal: small, locally executable AI models that can comprehend and process language reasonably well. Apple’s eight OpenELM models range in size from 270 million to 3 billion parameters, while Phi-3-mini has 3.8 billion parameters.

By contrast, OpenAI’s GPT-3 from 2020 shipped with 175 billion parameters, and Meta’s largest model to date, in the Llama 3 family, has 70 billion parameters (a 400 billion-parameter version is on the way). Parameter count is a useful indicator of a model’s complexity and capability, but recent work has concentrated on making smaller AI language models as capable as larger ones were a few years ago.
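To see why parameter count matters for on-device use, here is a back-of-the-envelope sketch of memory footprints, assuming 2 bytes per parameter for 16-bit weights (real deployments often quantize further, so actual sizes vary):

```python
def model_size_gb(params, bytes_per_param=2):
    # Approximate in-memory weight size: parameter count times bytes per parameter.
    return params * bytes_per_param / 1e9

for name, params in [
    ("OpenELM-270M", 270e6),
    ("OpenELM-3B", 3e9),
    ("Phi-3-mini (3.8B)", 3.8e9),
    ("GPT-3 (175B)", 175e9),
]:
    print(f"{name}: ~{model_size_gb(params):.1f} GB")
```

Under this rough estimate, the smallest OpenELM model needs about half a gigabyte of memory while GPT-3-class models need hundreds, which is why the former can fit on a phone and the latter cannot.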

The eight OpenELM models come in two flavors: four that are “pretrained,” essentially raw, next-token-prediction versions of the model, and four that are “instruction-tuned,” optimized for following instructions, which makes them better suited to building chatbots and AI assistants.

OpenELM’s maximum context window is 2,048 tokens. The models were trained on publicly available datasets, including RefinedWeb, a subset of RedPajama, a deduplicated version of The Pile, and a subset of Dolma v1.6, which together contain roughly 1.8 trillion tokens of data, according to Apple. Tokens are the fragmented chunks of data that AI language models use to process information.
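To make the token and context-window ideas concrete, here is a minimal, illustrative sketch. It is not OpenELM’s actual tokenizer, which operates on subword units rather than whitespace-separated words:

```python
MAX_CONTEXT = 2048  # OpenELM's maximum context window, in tokens

def naive_tokenize(text):
    # Stand-in tokenizer: real models split text into subword tokens,
    # but whitespace splitting is enough to illustrate the idea.
    return text.split()

def truncate_to_context(tokens, max_context=MAX_CONTEXT):
    # A prompt longer than the context window must be shortened before
    # the model can attend to it; here we keep the most recent tokens.
    return tokens[-max_context:]

tokens = naive_tokenize("Apple released eight small open-source language models")
print(len(truncate_to_context(tokens)))  # well under the 2048-token limit
```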

According to Apple, part of its OpenELM approach is a “layer-wise scaling strategy” that allocates parameters across the model’s layers more efficiently, reportedly saving computational resources and improving performance even with fewer training tokens. According to Apple’s published white paper, this approach let OpenELM achieve a 2.36 percent accuracy improvement over Allen AI’s OLMo 1B (another small language model) while requiring half as many pre-training tokens.
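Based on the paper’s description, the idea can be sketched as a per-layer width multiplier interpolated across the depth of the network. The function names and the alpha values below are illustrative, not the paper’s exact configuration:

```python
def layerwise_scale_factors(num_layers, alpha_min=0.5, alpha_max=1.0):
    # Linearly interpolate a width multiplier from alpha_min (first layer)
    # to alpha_max (last layer), so parameters are distributed unevenly
    # across layers instead of every layer having identical width.
    step = (alpha_max - alpha_min) / (num_layers - 1)
    return [alpha_min + i * step for i in range(num_layers)]

def scaled_ffn_dims(d_model, num_layers, **kwargs):
    # Apply the multipliers to a base feed-forward width of 4 * d_model.
    return [int(d_model * 4 * a)
            for a in layerwise_scale_factors(num_layers, **kwargs)]

# Example: a 4-layer model with model dimension 512 gets feed-forward
# widths that grow with depth rather than a uniform width everywhere.
print(scaled_ffn_dims(512, 4))
```

The point of the uneven allocation is that, for a fixed parameter budget, some layers benefit from extra width more than others, so a depth-dependent schedule can outperform uniform sizing.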

In addition, Apple made the code for CoreNet, the library it used to train OpenELM, publicly available. Notably, this code includes reproducible training recipes that make it possible to duplicate the weights, or neural network files—something we have not seen from a major tech company before. Apple says transparency is a key objective for the company: “The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks.”

By releasing the source code, model weights, and training materials, Apple says it aims to “empower and enrich the open research community.” However, it also cautions that since the models were trained on publicly sourced datasets, “there exists the possibility of these models producing outputs that are biased, or objectionable in response to user prompts.”

Apple has not yet integrated this new wave of AI language model capabilities into its consumer devices, though the company may enlist Google or OpenAI to handle more complex, off-device AI processing to give Siri a much-needed boost. The upcoming iOS 18 update, expected to be revealed at WWDC in June, is anticipated to include new AI features that rely on on-device processing to protect user privacy.

Tech Mahindra and IBM Watsonx Announce a New Era of Reliable AI

Tech Mahindra, a global leader in digital solutions and technology consulting, is partnering with IBM to help organizations around the world accelerate the sustainable adoption of generative AI.

This partnership combines IBM’s Watsonx AI and data platform, including its AI assistants, with TechM amplifAI0->∞, Tech Mahindra’s array of AI products.

By combining Tech Mahindra’s AI engineering and consulting expertise with IBM Watsonx’s capabilities, customers can now access a range of new generative AI services, frameworks, and solution architectures. This makes it possible to build AI applications that let businesses automate operations using their trusted data. It also gives companies a foundation for building reliable AI models, encourages explainability to help manage bias and risk, and enables scalable AI deployment across on-premises and hybrid cloud environments.

Kunal Purohit, Chief Digital Services Officer at Tech Mahindra, says that to revitalize their businesses, organizations should prioritize responsible AI practices and the integration of generative AI technology.

“Our partnership with IBM can facilitate digital transformation for businesses, the uptake of GenAI, modernization, and ultimately business expansion for our international clientele,” Purohit continued.

To further strengthen its AI capabilities, Tech Mahindra has established an operational virtual Watsonx Center of Excellence (CoE). This CoE serves as a co-innovation hub, with a dedicated team tasked with maximizing synergies between the two organizations and using their combined competencies to produce unique offerings and solutions.

The collaborative offerings and solutions developed through this partnership could help enterprises build machine learning models using open-source frameworks while also enabling them to scale and accelerate the impact of generative AI. These AI-driven solutions have the potential to help organizations enhance efficiency and productivity responsibly.

IBM Ecosystem General Manager Kate Woolley emphasized the partnership’s potential, adding that when generative AI is built on a foundation of explainability, openness, and trust, it can act as a catalyst for innovation and open up new market opportunities.

“Our partnership with Tech Mahindra is anticipated to broaden Watsonx’s user base and enable even more clients to develop reliable AI as we strive to integrate our know-how and technology to support enterprise use cases like digital labor, code modernization, and customer support,” stated Woolley.

This partnership is in line with Tech Mahindra’s ongoing efforts to revolutionize businesses through cutting-edge AI-led products and services. Some of their most recent offerings include Evangelize Pair Programming, Imaging amplifAIer, Operations amplifAIer, Email amplifAIer, Enterprise Knowledge Search, and Generative AI Studio.

Notably, the two businesses have worked together before. Earlier this year, Tech Mahindra announced that it would open a Synergy Lounge at its Singapore site in partnership with IBM. The lounge aims to expedite digital adoption for APAC organizations, helping them effectively implement and use technologies such as artificial intelligence (AI), intelligent automation, edge computing, 5G, hybrid cloud, and cybersecurity.

Beyond Tech Mahindra, IBM Watsonx has been used in other partnerships to expedite the application of generative AI. Early in the year, the GSMA and IBM announced a new collaboration to develop the GSMA Foundry Generative AI program and GSMA Advance’s AI Training program, both intended to boost the use and capabilities of generative AI in the telecom industry.

The program, also available digitally, covers the technical underpinnings of generative AI as well as its business strategy. It uses IBM Watsonx to deliver hands-on training for architects and developers seeking in-depth, practical expertise in generative AI.

OpenAI Enhances ChatGPT with Google Drive Integration, Streamlined File Access, and Advanced Analytics

OpenAI has released a major update to ChatGPT that lets users analyze data directly from OneDrive and Google Drive without downloading and re-uploading files. The new feature, available only to paid ChatGPT subscribers, will roll out gradually over the next few weeks, with the goal of streamlining data analysis and saving users time and hassle.

According to a blog post by OpenAI, “ChatGPT is now more connected to your data than ever before.” “With the integration of Google Drive and OneDrive, you can directly access and analyse your files – from Excel spreadsheets to PowerPoint presentations – within the chatbot.”

According to OpenAI, this direct access, available to ChatGPT Plus, Enterprise, and Team users, lets ChatGPT analyze files “more quickly.” However, the additional data analytics tools are currently only accessible through GPT-4o, the improved version of GPT-4 that powers ChatGPT’s premium tiers.

OpenAI has also enhanced ChatGPT’s ability to understand and manipulate data, going beyond simple file access. Users can now carry out a variety of data-related operations with natural language commands, such as:

  • Executing analytics-related Python code
  • Combining and streamlining datasets
  • Producing graphs with data from files
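The “combining datasets” operation above is essentially a join. As an illustrative sketch of the kind of Python code ChatGPT might generate for such a request (the function and field names here are hypothetical, not taken from OpenAI’s implementation):

```python
def inner_join(left, right, key):
    # Merge two lists of records (dicts) on a shared key,
    # keeping only rows that appear in both datasets.
    index = {row[key]: row for row in right}
    return [{**row, **index[row[key]]} for row in left if row[key] in index]

sales = [{"region": "EMEA", "revenue": 120}, {"region": "APAC", "revenue": 95}]
targets = [{"region": "EMEA", "target": 110}]

print(inner_join(sales, targets, "region"))
# → [{'region': 'EMEA', 'revenue': 120, 'target': 110}]
```

In practice, ChatGPT’s data analysis feature executes code like this in a sandbox against the user’s uploaded or drive-linked files, so the user only sees the merged result.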

Additionally, ChatGPT’s charting capabilities have improved significantly. Users can now expand their views, interact with generated tables and charts, and personalize visualizations by changing colors, asking questions about particular cells, and more. The chatbot can now create interactive versions of bar, line, pie, and scatter plot charts; other chart types are rendered as static images.

OpenAI also emphasized the security of user data. Data from ChatGPT Team and Enterprise users will not be used to train AI models, and ChatGPT Plus members have the option to opt out of having their data used for training.

India Leads Asia Pacific in Generative AI Adoption

India’s use of generative AI (GenAI) is detailed in a Deloitte report titled Generative AI in Asia Pacific: Young Employees Lead as Employers Play Catch-Up. According to a survey of 11,900 people across Asia Pacific, India ranks first among 13 nations in the use and adoption of GenAI, with a striking 83% of Indian workers and 93% of students actively using the technology.

India’s high GenAI adoption rate is driven by young, tech-savvy workers dubbed “Generation AI.” These employees are using GenAI to increase productivity, learn new skills, manage workloads, and save time, a shift that presents employers with new opportunities and challenges.

The study estimates that daily GenAI usage will rise by 182% over the next five years. This growth reflects the belief that GenAI can increase the Asia Pacific region’s contribution to the global economy: 83% of Indians think it improves social outcomes, and about 75% see economic benefits.

Key Findings:

  • Workers and students are driving the GenAI revolution, though only 50% of them in Asia Pacific believe their employers are aware of their usage.
  • GenAI could affect 17 percent of Asia Pacific’s working hours, or around 1.1 billion hours a year.
  • Developing economies are adopting GenAI about 30 percent faster than developed economies.
  • GenAI users in Asia Pacific save around 6.3 hours a week, while Indian users save 7.85 hours.
  • 41% of users who save time with GenAI say their work-life balance has improved.
  • According to their employees, 75% of businesses have not yet adopted GenAI.

Chris Lewin, Deloitte Asia Pacific’s AI and Data Capability Leader, stated, “One of the most exciting things about working with GenAI is that it is happening to everything, everywhere, all at once, across the globe. Over the past twelve months, we have observed that teams in Italy and Ireland can very immediately relate to the issues that our clients in Indonesia or India are facing.” A crucial insight is that while the swift integration of AI won’t cause immediate job losses, companies that fail to adapt will bear the consequences: their employees, especially fresh talent, will be drawn to competitors offering AI solutions that could completely change the nature of modern work.
