
Google will allow file manager apps to request “All Files Access” on Android 11 next month


Google has begun sending emails to developers whose apps request broad access to device storage. The email tells developers that, beginning May 5th, they must explain to Google why their app needs broad storage access or they will not be allowed to distribute updates that target Android 11.

Before Android 11, apps could request broad access to a device’s storage by declaring the READ_EXTERNAL_STORAGE permission in their Manifest and asking the user to grant it. Many apps that had no legitimate need to read all of the files on the device’s storage were requesting this permission, prompting Google to narrow storage access permissions with Android 11’s “Scoped Storage” changes. However, for apps that genuinely need broader storage access, such as file managers, Google encouraged them to keep targeting Android 10 (API level 29) and to request “legacy” storage access by declaring requestLegacyExternalStorage=true in their Manifest.
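
To make the legacy model concrete, here is a minimal Kotlin sketch of that pre-Android 11 flow. It assumes READ_EXTERNAL_STORAGE (and requestLegacyExternalStorage="true" on the application tag) is already declared in the app’s Manifest; the request code is an arbitrary app-defined value.

import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val STORAGE_REQUEST_CODE = 42 // arbitrary app-defined value

fun requestLegacyStorageAccess(activity: Activity) {
    val permission = Manifest.permission.READ_EXTERNAL_STORAGE
    if (ContextCompat.checkSelfPermission(activity, permission) !=
        PackageManager.PERMISSION_GRANTED
    ) {
        // Shows the system permission dialog; the result is delivered to the
        // activity's onRequestPermissionsResult() callback.
        ActivityCompat.requestPermissions(activity, arrayOf(permission), STORAGE_REQUEST_CODE)
    }
}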

Legacy access lets apps have broad access to the device’s storage without being subject to Scoped Storage restrictions. However, all apps that target Android 11 (API level 30) or higher are subject to Scoped Storage limitations and can’t request legacy access to device storage. Instead, they must request a new permission called MANAGE_EXTERNAL_STORAGE (shown to the user as “All Files Access”) to be granted broad storage access (excluding a handful of directories like /Android/data or /Android/obb).
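
By way of illustration, a minimal Kotlin sketch of the new flow might look like the following. It assumes MANAGE_EXTERNAL_STORAGE is declared in the app’s Manifest; unlike a normal runtime permission, there is no system dialog, so the user is sent to the “All Files Access” settings screen to grant it.

import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.os.Environment
import android.provider.Settings

fun hasAllFilesAccess(): Boolean =
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.R &&
        Environment.isExternalStorageManager()

fun requestAllFilesAccess(activity: Activity) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R && !hasAllFilesAccess()) {
        // Opens the system settings page where the user can grant
        // "All Files Access" to this specific app.
        val intent = Intent(
            Settings.ACTION_MANAGE_APP_ALL_FILES_ACCESS_PERMISSION,
            Uri.parse("package:${activity.packageName}")
        )
        activity.startActivity(intent)
    }
}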

Beginning in November 2021, all apps and app updates submitted to Google Play must target Android 11, meaning that file manager apps and other applications that need broader storage access must eventually switch to the Scoped Storage model and request the All Files Access permission. The only problem is that Google currently doesn’t let developers request the “All Files Access” permission. Google previously said it will require developers to sign a Declaration Form before such an app is allowed on Google Play. This Declaration Form is intended to let Google weed out apps that have no need for “All Files Access”, much like how Google restricts access to the SMS, Call Log, and QUERY_ALL_PACKAGES permissions.

Although Google announced its intention to make developers sign a Declaration Form back in November of 2019, it still hasn’t actually made those Declaration Forms available. The company cited workforce challenges stemming from the COVID-19 pandemic when explaining why it was delaying allowing apps that target Android 11 and request “All Files Access” to be uploaded to Google Play. Google set the vague date of “early 2021” for when it would open up the Declaration Form.

Now, finally, Google has begun to notify developers of when apps can actually request the “All Files Access” permission. The email sent to developers is confusingly worded, but a newly published help page adds some clarity. According to the help page, apps that target Android 11 and request “All Files Access” can finally be uploaded to Google Play beginning in May 2021, which is presumably when the Declaration Form goes live. For a list of permitted uses, exceptions, and invalid uses of “All Files Access”, as well as recommended alternative APIs, visit Google’s support page.


Nvidia Unveils NIM for Seamless Deployment of AI Models in Production


At its GTC conference today, Nvidia unveiled Nvidia NIM, a new software platform intended to speed up the deployment of custom and pre-trained AI models into production environments. NIM takes the software work Nvidia has done around inferencing and optimizing models and makes it easily accessible by combining a given model with an optimized inferencing engine and packaging it into a container that can be accessed as a microservice.
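
As a rough illustration of the microservice model, the sketch below calls a hypothetical locally deployed NIM container over plain HTTP from Kotlin. The port, endpoint path, and model name here are illustrative assumptions, not values confirmed by the announcement.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Hypothetical chat-completion request to a container running locally;
    // the URL and model name are placeholders, not documented NIM values.
    val body = """{"model": "example-llm", "messages": [{"role": "user", "content": "Hello"}]}"""
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8000/v1/chat/completions"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body()) // JSON completion returned by the microservice
}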

According to Nvidia, it would normally take developers weeks, if not months, to ship similar containers, and that’s assuming the company has any in-house AI talent at all. With NIM, Nvidia clearly aims to build an ecosystem of AI-ready containers for businesses looking to accelerate their AI roadmaps, using its hardware as the base layer and these curated microservices as the core software layer.

Currently, NIM supports open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, Stability AI, AI21, Adept, Cohere, Getty Images, and Shutterstock, in addition to models from NVIDIA. Nvidia is already collaborating with Amazon, Google, and Microsoft to make these NIM microservices available on SageMaker, Kubernetes Engine, and Azure AI, respectively. They’ll also be integrated into frameworks like LlamaIndex, LangChain, and Deepset.

In a press conference held prior to today’s announcements, Manuvir Das, Nvidia’s head of enterprise computing, stated, “We believe that the Nvidia GPU is the best place to run inference of these models on […] and we believe that NVIDIA NIM is the best software package, the best runtime, for developers to build on top of so that they can focus on the enterprise applications — and just let Nvidia do the work to produce these models for them in the most efficient, enterprise-grade manner, so that they can just do the rest of their work.”

TensorRT, TensorRT-LLM, and Triton Inference Server will be the inference engines used by Nvidia. Nvidia microservices that will be made available via NIM include the Earth-2 model for weather and climate simulations, cuOpt for routing optimizations, and Riva for customizing speech and translation models.

The Nvidia RAG LLM operator, for instance, will soon be available as a NIM, a move that the company hopes will make it much easier to build generative AI chatbots that can pull in custom data.

It wouldn’t be a developer conference without a few announcements from partners and customers. NIM’s current users include companies like Box, Cloudera, Cohesity, Datastax, Dropbox, and NetApp.

NVIDIA founder and CEO Jensen Huang stated, “Established enterprise platforms are sitting on a goldmine of data that can be transformed into generative AI copilots.” “These containerized AI microservices, developed with our partner ecosystem, are the building blocks for enterprises in every industry to become AI companies.”


AWS and Nvidia Collaborate on AI Advancement Infrastructure


Amazon Web Services (AWS) and Nvidia are extending their 13-year partnership to advance generative artificial intelligence (GenAI).

The firms stated in a press release on Monday, March 18, that the partnership aims to bring the new Nvidia Blackwell GPU platform to AWS, providing customers with state-of-the-art, secure infrastructure, software, and services.

According to the release, the Nvidia Blackwell platform includes the GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs. The platform lets customers build and run multi-trillion-parameter large language models (LLMs) faster, securely, and at massive scale by combining AWS’s Elastic Fabric Adapter networking with the hyper-scale clustering of Amazon EC2 UltraClusters and the advanced virtualization and security features of the AWS Nitro System.

According to the release, AWS intends to provide EC2 instances with the new B100 GPUs installed in EC2 UltraClusters to accelerate large-scale generative AI training and inference.

Nvidia founder and CEO Jensen Huang stated in the press release that “our partnership with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what’s possible.”

“We currently offer the widest range of Nvidia GPU solutions for customers,” said Adam Selipsky, CEO of AWS, “and the deep collaboration between our two organizations goes back more than 13 years, when together we launched the world’s first GPU cloud instance on AWS.”

This partnership places a high priority on security, the release states. The AWS Nitro System, AWS Key Management Service (AWS KMS), encrypted Elastic Fabric Adapter (EFA), and Blackwell encryption are integrated to prevent unauthorized access to model weights and to encrypt data in transit.

According to the release, the collaboration goes beyond hardware and infrastructure. AWS and Nvidia are also working together to accelerate the development of GenAI applications across a range of sectors. They provide generative AI inference through the integration of Nvidia NIM inference microservices with Amazon SageMaker.

In the healthcare and life sciences sector, AWS and Nvidia are expanding computer-aided drug discovery with new Nvidia BioNeMo FMs for generative chemistry, protein structure prediction, and understanding how drug molecules interact with targets, per the release. These models will be available on AWS HealthOmics, a service purpose-built for healthcare and life sciences organizations.

The partnership’s extension comes at a time when interest in artificial intelligence has caused Nvidia’s valuation to soar from $1 trillion to over $2 trillion in just nine months. The company dominates the high-end AI chip market with an 80% market share.

Concurrently, AWS has been releasing GenAI-powered tools for various industries.


NVIDIA Releases 6G Research Cloud Platform to Use AI to Improve Wireless Communications


Today, NVIDIA unveiled a 6G research platform that gives academics a cutting-edge method to create the next wave of wireless technology.

The open, adaptable, and interconnected NVIDIA 6G Research Cloud platform provides researchers with a full suite of tools to advance artificial intelligence (AI) for radio access network (RAN) technology. With the help of this platform, organizations can accelerate the development of 6G technologies, which will connect trillions of devices to cloud infrastructures and lay the groundwork for a hyperintelligent world augmented by driverless cars, smart spaces, a wealth of immersive education experiences, extended reality, and collaborative robots.

Its early adopters and ecosystem partners include Ansys, Arm, ETH Zurich, Fujitsu, Keysight, Nokia, Northeastern University, Rohde & Schwarz, Samsung, SoftBank Corp., and Viavi.

According to NVIDIA senior vice president of telecom Ronnie Vasishta, “the massive increase in connected devices and host of new applications in 6G will require a vast leap in wireless spectral efficiency in radio communications.” “The application of AI, a software-defined, full-RAN reference stack, and next-generation digital twin technology will be critical to accomplishing this.”

There are three core components to the NVIDIA 6G Research Cloud platform:

NVIDIA Aerial Omniverse Digital Twin for 6G: This reference application and developer sample enables physically accurate simulations of entire 6G systems, from a single tower to a whole city. It combines software-defined radio access networks (RANs) and user-equipment simulators with realistic terrain and object properties. Using the Aerial Omniverse Digital Twin, researchers will be able to simulate and build base-station algorithms based on site-specific data and train models in real time to improve transmission efficiency.

NVIDIA Aerial CUDA-Accelerated RAN: A software-defined, full-RAN stack that provides researchers with a great deal of flexibility in terms of real-time customization, programming, and testing of 6G networks.

NVIDIA Sionna Neural Radio Framework: This framework uses NVIDIA GPUs to generate and capture data and to train AI and machine learning models at scale, and it integrates seamlessly with popular frameworks like PyTorch and TensorFlow. It also includes NVIDIA Sionna, the leading link-level research tool for AI/ML-based wireless simulations.

Top researchers in the field can use all of the 6G Research Cloud platform’s components to advance their work.

Charlie Zang, senior vice president of Samsung Research America, stated that the future convergence of 6G and AI holds the potential to create a revolutionary technological landscape, ushering in “an era of unmatched innovation and connectivity” that redefines our interactions with the digital world through seamless connectivity and intelligent systems.

In order to develop the next generation of wireless technology, simulation and testing will be crucial. Prominent vendors in this domain are collaborating with NVIDIA to address the novel demands of artificial intelligence utilizing 6G.

According to Shawn Carpenter, program director of Ansys’ 5G/6G and space division, “Ansys is committed to advancing the mission of the 6G Research Cloud by seamlessly integrating the cutting-edge Ansys Perceive EM solver into the Omniverse ecosystem. Digital twin creation for 6G systems is revolutionized by Perceive EM.” He added that the combination of Ansys and NVIDIA technologies will, without a doubt, open the door for 6G communication systems with AI capabilities.

According to Keysight Communications Solutions Group president and general manager Kailash Narayanan, “access to wireless-specific design tools is limited yet needed to build robust AI.” “Keysight is excited to contribute its expertise in wireless networks to support the next wave of innovation in 6G communications networks.”

Telcos can now fully utilize 6G and prepare for the next wave of wireless technology thanks to the NVIDIA 6G Research Cloud platform, which combines these potent foundational tools. Registering for the NVIDIA 6G Developer Program gives researchers access to the platform.
