Technology

Intel Unveils New Initiative for AI Hardware and Software Providers

Intel Corporation today announced two new artificial intelligence (AI) initiatives under its AI PC Acceleration Program: the AI PC Developer Program and the inclusion of independent hardware vendors. Both are milestones in Intel's mission to empower the hardware and software ecosystem to optimize and maximize AI on more than 100 million Intel-based AI PCs by 2025.

What the New Programs Do: The AI PC Developer Program was created expressly for software developers and independent software vendors (ISVs), to provide a seamless developer experience and facilitate large-scale adoption of new AI technologies. It offers access to developer kits built around the newest Intel hardware, including the Intel® Core™ Ultra processor, along with tools, workflows, and frameworks for AI deployment.
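
The announcement does not name specific toolkits, but OpenVINO is Intel's flagship toolkit for running AI workloads across its CPUs, GPUs, and NPUs. As a minimal sketch (assuming OpenVINO is installed and that "model.xml" is a hypothetical model already converted to OpenVINO's IR format), targeting the Core Ultra NPU looks roughly like this:

```python
# Minimal sketch: running inference on an Intel AI PC with OpenVINO.
# Assumes "pip install openvino" and a hypothetical IR-format model file.
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on a Core Ultra system

model = core.read_model("model.xml")         # hypothetical converted model
compiled = core.compile_model(model, "NPU")  # target the neural processing unit

# Inference would then be: result = compiled(input_tensor)[compiled.output(0)]
```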

Developers now have easy access to AI PC and client-focused toolkits, documentation, and training through the updated developer resource pages. These curated resources are meant to help developers fully exploit Intel Core Ultra processor technologies, optimizing AI and machine learning (ML) application performance and accelerating new use cases.

Developers can also join the AI PC Acceleration Program as part of Intel's global partner network, which aims to optimize AI performance across the PC industry.

Through the AI PC Acceleration Program, independent hardware vendors (IHVs) can now prepare, optimize, and enable their hardware for Intel AI PCs. Qualified partners gain access to Intel's Open Labs, where they receive early-stage technical and co-engineering support for their hardware solutions and platforms. Intel also supplies qualified IHV partners with reference hardware through the program so they can test and optimize their technology and minimize launch-day glitches.

The AI PC Acceleration Program has already onboarded 150 hardware vendors worldwide, according to Matt King, senior director of Intel's Client Hardware Ecosystem. "We can't wait to expand our cutting-edge software and hardware solutions and share this momentum with our large, open developer community," King said.

The AI PC Acceleration Program is now open to both IHVs and developers. Intel is collaborating with its hardware partners to innovate and improve the AI PC experience, and it invites them to join in accelerating that innovation.

Why It Matters: Artificial intelligence will radically alter how people create, learn, work, and interact. With an AI PC built on Intel's platform of central processing units, neural processing units, and graphics processing units, together with optimized software and hardware, anyone can take advantage of AI. Intel works with a wide range of partners in an open ecosystem to give end users improved performance, productivity, innovation, and creativity. By empowering ISVs and IHVs, Intel is spearheading advancements in the AI PC era.

Intel provides additional value to developers through its programs, which include:

  • Enhancement of Compatibility: Access to the most recent Intel Core Ultra developer kits, optimizations, and software tools lets developers guarantee that their apps and software operate seamlessly on the newest Intel processors, improving compatibility and the overall end-user experience.
  • Performance Optimization: Optimizing software for particular hardware architectures early in the development cycle makes it more efficient and performant, laying the groundwork for better performance once AI PCs are broadly available.
  • Increased Market Prospects and Worldwide Reach: Working with Intel and its extensive network of open, AI-enabled partners offers chances to grow within the network, penetrate new markets, and succeed in a variety of industries.

Through 2024, Intel plans to enable more than 300 AI-accelerated features on Intel Core Ultra processors across 230 designs from 12 international original equipment manufacturers. Intel also offers a wide range of toolkits for AI developers.

Regarding the AI PC Acceleration Program: Since its announcement in October 2023, the program has sought to link independent software and hardware providers with Intel resources, such as training, co-engineering, software optimization, hardware, design resources, technical know-how, co-marketing, and sales opportunities.

Technology

Google Expands the Availability of AI Support with Gemini AI to Android 10 and 11

Google's Gemini AI, previously limited to Android 12 and above, is now compatible with Android 10 and 11. As noted by 9to5Google, this change greatly expands the pool of users who can take advantage of AI-powered assistance on their smartphones and tablets.

With a recent app update, Google has lowered Gemini's minimum requirement, making its advanced AI features accessible to a wider range of users. Previously, Gemini required Android 12 or later to function. Thanks to the updated Gemini app, version v1.0.626720042, available from the Google Play Store, the AI assistant can now be installed and used on Android 10 devices.

The expansion, which reflects Google's goal of making its AI technology more inclusive, was first spotted by Sumanta Das on X and later highlighted by Artem Russakovskii. When Gemini first launched earlier this year, only the most recent versions of Android were supported; the latest update demonstrates Google's dedication to broadening the user base for its AI technology.

Testers on Android 10 devices report that Gemini is fully operational after updating the Google app and Play Services. Tests on an Android 10 Google Pixel showed Gemini functioning seamlessly, with a user experience akin to that on more recent devices.

The wider compatibility has important implications for users with older Android devices, who will now have access to the same AI capabilities as those on newer models. Expanding Gemini's support further demonstrates Google's dedication to making advanced AI accessible to a larger segment of the Android user base.

Users of Android 10 and 11 can now access Gemini, and they can anticipate regular updates and new features. This action marks a significant turning point in Google’s AI development and opens the door for future functional and accessibility enhancements, improving everyone’s Android experience.

Technology

OpenAI Releases new Features to Encourage Businesses to Develop Artificial Intelligence (AI) Solutions

Although OpenAI's consumer-facing products, such as ChatGPT and DALL-E, receive most of the attention, a significant portion of the company's business is focused on helping enterprise customers develop AI products. Those customers are now getting new tools.

Corporate clients that power their AI tools with OpenAI's application programming interface (API) will receive improved security features, the company announced in a blog post, including the option to use single sign-on and multi-factor authentication by default. OpenAI has also implemented 256-bit AES encryption during data transfers, reducing the chance of data leaking onto the public internet.

Additionally, OpenAI has introduced a new Projects feature that makes it easier for businesses to manage who has access to various AI tools. The new cost-saving features should also help companies stick to their budgets, according to OpenAI; one such feature, the Batch API, can reduce spending by up to 50%.
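
As a rough sketch of how the Batch API discount is used in practice, assuming the official openai Python SDK, an OPENAI_API_KEY in the environment, and a hypothetical "requests.jsonl" file of newline-delimited request objects:

```python
# Sketch: submitting asynchronous work through the OpenAI Batch API,
# which trades immediacy (a 24-hour completion window) for lower cost.
from openai import OpenAI

client = OpenAI()

# Upload the JSONL file of requests, then queue it as a batch job.
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)  # poll later with client.batches.retrieve(batch.id)
```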

Although this week's OpenAI announcement isn't as exciting as a new GPT-4 version or text-to-video generation, it is still significant. Businesses all over the world are developing a wide range of AI tools, for both internal and external use, with OpenAI's toolset. Without certain essential security and cost-saving improvements, those businesses might look elsewhere or, worse, decide against pursuing AI projects altogether.

Security improvements may be especially important to companies and their employees, as well as to the eventual customers using their AI tools: stronger security features keep both company and user data safer.

OpenAI stated that its new features address not only security and cost savings but also specific customer requests. Businesses can now ingest up to 10,000 files into their AI tools, up from just 20 previously. The company also says new file management features and the ability to control usage on the go should make OpenAI's platform cheaper to run and easier to use.
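
For context, file ingestion at that scale runs through vector stores in the OpenAI SDK. The sketch below assumes an SDK version where vector stores live under client.beta; the file paths are hypothetical.

```python
# Sketch: bulk-ingesting files for retrieval, illustrating the raised
# limit of 10,000 files per assistant (up from 20).
from openai import OpenAI

client = OpenAI()
store = client.beta.vector_stores.create(name="company-docs")

paths = ["handbook.pdf", "faq.md"]  # hypothetical; up to 10,000 files allowed
batch = client.beta.vector_stores.file_batches.upload_and_poll(
    vector_store_id=store.id,
    files=[open(p, "rb") for p in paths],
)
print(batch.status, batch.file_counts)
```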

All of OpenAI's new API features are available now, and the company intends to keep enhancing its platform with cost-saving and security features in the future.

Technology

Apple Launches Eight Small AI Language Models for On-Device Use

"Small language models" have gained significant traction in artificial intelligence lately because they can run locally on a device rather than requiring data center-grade cloud computers. On Wednesday, Apple unveiled OpenELM, a collection of tiny, open-source AI language models small enough to run on a smartphone. For now they are primarily proof-of-concept research models, but they might serve as the foundation for Apple's on-device AI products in the future.

Apple's new models, collectively named OpenELM for "Open-source Efficient Language Models," are available on Hugging Face under an Apple Sample Code License. Because the license carries some restrictions, it may not fit the commonly accepted definition of "open source," but the OpenELM source code is available.

OpenELM pursues a goal similar to Microsoft's Phi-3 models, which we covered on Tuesday: small, locally executable AI models that can comprehend and process language to a reasonable degree. Phi-3-mini has 3.8 billion parameters, while Apple's OpenELM models range from 270 million to 3 billion parameters across eight variants.

By contrast, OpenAI's GPT-3 from 2020 shipped with 175 billion parameters, and Meta's largest model to date, in the Llama 3 family, has 70 billion parameters (a 400 billion-parameter version is on the way). Parameter count is a useful rough indicator of an AI model's complexity and capability, but recent work has concentrated on making smaller language models as capable as larger ones were a few years ago.
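
To make those parameter counts concrete, here is a back-of-the-envelope sketch (our arithmetic, not from any vendor) of the memory needed just to hold each model's weights, which is the main reason small models fit on phones:

```python
# Approximate weight-storage footprint at 16-bit (2 bytes/parameter)
# and 4-bit quantized (0.5 bytes/parameter) precision.
models = {
    "OpenELM-270M": 270e6,
    "OpenELM-3B": 3e9,
    "Phi-3-mini": 3.8e9,
    "Llama 3 70B": 70e9,
    "GPT-3 175B": 175e9,
}
for name, params in models.items():
    gb16 = params * 2 / 1e9   # fp16/bf16
    gb4 = params * 0.5 / 1e9  # 4-bit quantization
    print(f"{name:>13}: ~{gb16:7.1f} GB at fp16, ~{gb4:6.1f} GB at 4-bit")
```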

The eight OpenELM models come in two flavors: four that are "pretrained," essentially raw next-token-prediction versions of the model, and four that are "instruction-tuned," optimized for instruction following and therefore better suited to building chatbots and AI assistants.

OpenELM's maximum context window is 2,048 tokens. The models were trained on publicly available datasets: RefinedWeb, a subset of RedPajama, a deduplicated version of The Pile, and a subset of Dolma v1.6, which together, according to Apple, contain roughly 1.8 trillion tokens of data. (Tokens are the fragmented chunks of data that AI language models process.)
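
Trying a checkpoint yourself is straightforward with the transformers library. This sketch follows Apple's Hugging Face model card, which notes that OpenELM ships custom modeling code (hence trust_remote_code=True) and reuses the Llama 2 tokenizer, a gated model that requires accepting Meta's license:

```python
# Sketch: loading an OpenELM checkpoint from Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M-Instruct", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)  # within the 2,048-token context
print(tokenizer.decode(outputs[0]))
```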

According to Apple, part of its OpenELM approach is a "layer-wise scaling strategy" that allocates parameters across layers more efficiently, reportedly saving computational resources and improving the model's performance even with fewer training tokens. Per Apple's published white paper, this approach let OpenELM achieve a 2.36 percent accuracy gain over Allen AI's OLMo 1B (another small language model) while requiring half as many pre-training tokens.
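
Setting the white paper's exact parameterization aside, the idea behind layer-wise scaling can be shown with a toy sketch: rather than giving every transformer layer identical width, attention heads and feed-forward size grow linearly with depth (all bounds below are hypothetical):

```python
# Illustrative sketch of layer-wise scaling, not Apple's exact scheme:
# widths interpolate linearly from the first layer to the last.
num_layers = 16
min_heads, max_heads = 4, 12  # hypothetical head counts
min_ffn, max_ffn = 2.0, 4.0   # hypothetical FFN width multipliers

for i in range(num_layers):
    t = i / (num_layers - 1)  # 0.0 at the first layer, 1.0 at the last
    heads = round(min_heads + t * (max_heads - min_heads))
    ffn_mult = min_ffn + t * (max_ffn - min_ffn)
    print(f"layer {i:2d}: {heads:2d} heads, FFN multiplier {ffn_mult:.2f}")
```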

In addition, Apple released the code for CoreNet, the library it used to train OpenELM. Notably, the release includes reproducible training recipes that make it possible to duplicate the model weights (the neural network files), something rarely seen from a major tech company. Apple says transparency is a key objective: "The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks."

By releasing the source code, model weights, and training materials, Apple says it aims to “empower and enrich the open research community.” However, it also cautions that since the models were trained on publicly sourced datasets, “there exists the possibility of these models producing outputs that are biased, or objectionable in response to user prompts.”

Apple has not yet integrated this new wave of AI language model capabilities into its consumer devices, though the company may enlist Google or OpenAI to handle more complex, off-device AI processing to give Siri a much-needed boost. The upcoming iOS 18 update, expected to be revealed at WWDC in June, is anticipated to include new AI features that use on-device processing to ensure user privacy.
