
Technology

Apple releases Leather Sleeve with MagSafe for all iPhone 12 models


Apple on Friday quietly released the long-awaited Leather Sleeve with MagSafe support for all new iPhone 12 models, completing the rollout of the line of MagSafe accessories announced in October.

Apple unveiled the Leather Sleeve during last month’s iPhone event and promised to ship the accessory before the end of the year.

Priced at $129 for all models, the case is made from soft, “specially tanned and finished European leather” that develops a natural patina over time. The sleeve is designed to fit snugly over iPhone 12, iPhone 12 mini, iPhone 12 Pro and iPhone 12 Pro Max, offering protection from scratches and drops. Inside is a slot for cards, and the accompanying strap is removable.

When the phone is inserted, iPhone 12 displays the time through a cutout in the case, with the digital clock’s font color matching that of the sleeve. It’s a nice touch enabled by MagSafe’s NFC capabilities.

As a MagSafe accessory, Apple’s Leather Sleeve enables precise positioning of the MagSafe charging puck for efficient wireless charging. As with the company’s other leather case designs, regular use of MagSafe accessories will leave “slight imprints,” Apple says.

The Leather Sleeve with MagSafe is offered in four colors: Baltic Blue, Pink Citrus, Saddle Brown and (PRODUCT)RED. It is available to order from the online Apple Store, with current ship times of Dec. 16 to Dec. 23 with free delivery, or as early as Dec. 15 with express shipping.

Technology

Nvidia Unveils NIM for Seamless Deployment of AI Models in Production


At its GTC conference today, Nvidia unveiled Nvidia NIM, a new software platform designed to speed the deployment of custom and pre-trained AI models into production environments. NIM takes the software work Nvidia has done around inferencing and model optimization and makes it easily accessible by combining a model with an optimized inference engine and packing the two into a container that can be accessed as a microservice.
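To make the microservice idea concrete, here is a minimal client-side sketch, assuming a locally deployed NIM container exposing an OpenAI-style chat-completions interface (a common convention for LLM microservices). The endpoint URL and model name below are illustrative placeholders, not details from the article:

```python
import json

# Assumed local deployment address; a real deployment would differ.
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt, model="meta/llama-2-7b-chat", max_tokens=64):
    """Build the JSON payload a client would POST to the microservice.

    The model identifier is a hypothetical example; an actual NIM
    container advertises the model it serves.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Serialize the request body that would be sent with an HTTP POST.
payload = json.dumps(build_chat_request("Summarize our Q3 report."))
```

The appeal of this packaging is that the application code above stays the same regardless of which model or inference engine runs inside the container.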

According to Nvidia, shipping similar containers would normally take developers weeks, if not months, and that is assuming the company has any in-house AI talent at all. With NIM, Nvidia clearly aims to build an ecosystem of AI-ready containers for businesses looking to accelerate their AI roadmaps, with its hardware as the base layer and these curated microservices as the core software layer.

Currently, NIM supports open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, Stability AI, AI21, Adept, Cohere, Getty Images, and Shutterstock, in addition to models from Nvidia itself. Nvidia is already collaborating with Amazon, Google, and Microsoft to make these NIM microservices available on SageMaker, Google Kubernetes Engine, and Azure AI, respectively. They will also be integrated into frameworks such as LangChain, LlamaIndex, and Deepset.

In a press conference held ahead of today’s announcements, Manuvir Das, Nvidia’s head of enterprise computing, stated, “We believe that the Nvidia GPU is the best place to run inference of these models on […] and we believe that NVIDIA NIM is the best software package, the best runtime, for developers to build on top of so that they can focus on the enterprise applications — and just let Nvidia do the work to produce these models for them in the most efficient, enterprise-grade manner, so that they can just do the rest of their work.”

TensorRT, TensorRT-LLM, and Triton Inference Server will be the inference engines used by Nvidia. Nvidia microservices that will be made available via NIM include the Earth-2 model for weather and climate simulations, cuOpt for routing optimizations, and Riva for customizing speech and translation models.

The Nvidia RAG LLM operator, for instance, will soon be available as a NIM, a move the company hopes will simplify building generative AI chatbots that can pull in custom data.

It wouldn’t be a developer conference without a few announcements from partners and customers. NIM’s current users include Box, Cloudera, Cohesity, Datastax, Dropbox, and NetApp.

“Established enterprise platforms are sitting on a goldmine of data that can be transformed into generative AI copilots,” stated NVIDIA founder and CEO Jensen Huang. “These containerized AI microservices, developed with our partner ecosystem, are the building blocks for enterprises in every industry to become AI companies.”


Technology

AWS and Nvidia Collaborate on AI Advancement Infrastructure


Amazon Web Services (AWS) and Nvidia are extending their 13-year partnership to advance generative artificial intelligence (GenAI).

In a press release on Monday, March 18, the firms said the expanded partnership will bring the new Nvidia Blackwell GPU platform to AWS, providing customers with cutting-edge, secure infrastructure, software, and services.

According to the release, the GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs are part of the Nvidia Blackwell platform. This platform allows customers to build and run multitrillion parameter large language models (LLMs) faster, at a massive scale, and securely. It does this by combining AWS’s Elastic Fabric Adapter Networking with the hyper-scale clustering of Amazon EC2 UltraClusters and the advanced virtualization and security features of the Nitro system.

According to the release, AWS intends to provide EC2 instances with the new B100 GPUs installed in EC2 UltraClusters to accelerate large-scale generative AI training and inference.

Nvidia founder and CEO Jensen Huang stated in the press release that “our partnership with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what’s possible.”

“We currently offer the widest range of Nvidia GPU solutions for customers,” said Adam Selipsky, CEO of AWS, “and the deep collaboration between our two organizations goes back more than 13 years, when together we launched the world’s first GPU cloud instance on AWS.”

This partnership places a high priority on security, the release states. To prevent unauthorized access to model weights and encrypt data transfer, the AWS Nitro System, AWS Key Management Service (AWS KMS), encrypted Elastic Fabric Adapter (EFA), and Blackwell encryption are integrated.

According to the release, the collaboration goes beyond hardware and infrastructure: AWS and Nvidia are also working together to accelerate the development of GenAI applications across a range of sectors, offering generative AI inference through the integration of Nvidia NIM inference microservices with Amazon SageMaker.

In the healthcare and life sciences sector, AWS and Nvidia are expanding computer-aided drug discovery with new Nvidia BioNeMo FMs for generative chemistry, protein structure prediction, and understanding how drug molecules interact with targets, per the release. These models will be available on AWS HealthOmics, a service purpose-built for healthcare and life sciences organizations.

The partnership’s extension occurs at a time when interest in artificial intelligence has caused Nvidia’s valuation to soar in just nine months, from $1 trillion to over $2 trillion. With an 80% market share, the company dominates the high-end AI chip market.

AWS has been releasing GenAI-powered tools for various industries concurrently.


Technology

NVIDIA Releases 6G Research Cloud Platform to Use AI to Improve Wireless Communications


Today, NVIDIA unveiled a 6G research platform that gives researchers a cutting-edge way to develop the next wave of wireless technology.

The open, flexible, and interconnected NVIDIA 6G Research Cloud platform provides researchers with a full suite of tools to advance artificial intelligence (AI) for radio access network (RAN) technology. The platform lets organizations accelerate the development of 6G technologies, which will connect trillions of devices to cloud infrastructures and lay the groundwork for a hyperintelligent world augmented by autonomous vehicles, smart spaces, a wealth of immersive education experiences, extended reality, and collaborative robots.

Its early adopters and ecosystem partners include Ansys, Arm, ETH Zurich, Fujitsu, Keysight, Nokia, Northeastern University, Rohde & Schwarz, Samsung, SoftBank Corp., and Viavi.

According to NVIDIA senior vice president of telecom Ronnie Vasishta, “the massive increase in connected devices and host of new applications in 6G will require a vast leap in wireless spectral efficiency in radio communications.” “The application of AI, a software-defined, full-RAN reference stack, and next-generation digital twin technology will be critical to accomplishing this.”

There are three core components to the NVIDIA 6G Research Cloud platform:

NVIDIA Aerial Omniverse Digital Twin for 6G: This reference application and developer sample enables physically accurate simulations of entire 6G systems, from a single tower to a city. It combines software-defined radio access networks (RANs) and user-equipment simulators with realistic terrain and object properties. Using the Aerial Omniverse Digital Twin, researchers will be able to simulate and develop base-station algorithms based on site-specific data and train models in real time to improve transmission efficiency.

NVIDIA Aerial CUDA-Accelerated RAN: A software-defined, full-RAN stack that provides researchers with a great deal of flexibility in terms of real-time customization, programming, and testing of 6G networks.

NVIDIA Sionna Neural Radio Framework: This framework uses NVIDIA GPUs to generate and capture data and to train AI and machine learning models at scale, and it integrates seamlessly with popular frameworks such as PyTorch and TensorFlow. It also includes NVIDIA Sionna, the leading link-level research tool for AI/ML-based wireless simulations.

The 6G development research cloud platform’s components can all be used by top researchers in the field to further their work.
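For readers unfamiliar with link-level simulation, the kind of experiment such tools run can be illustrated with a deliberately tiny, stdlib-only sketch (this is not Sionna, just a toy analogue): a Monte Carlo bit-error-rate estimate for BPSK transmission over an additive white Gaussian noise channel.

```python
import math
import random

def simulate_bpsk_awgn(num_bits=100_000, ebno_db=4.0, seed=0):
    """Estimate the bit-error rate of BPSK over an AWGN channel.

    A link-level simulator sends random bits through a modeled channel
    and counts how many are decoded incorrectly at the receiver.
    """
    rng = random.Random(seed)
    ebno = 10 ** (ebno_db / 10)          # Eb/N0 from dB to linear scale
    sigma = math.sqrt(1 / (2 * ebno))    # noise std dev for unit-energy symbols
    errors = 0
    for _ in range(num_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit == 0 else -1.0        # BPSK mapping: 0 -> +1, 1 -> -1
        received = symbol + rng.gauss(0.0, sigma)  # add channel noise
        decided = 0 if received > 0 else 1         # hard-decision detection
        errors += (decided != bit)
    return errors / num_bits

ber = simulate_bpsk_awgn()
```

Frameworks like Sionna run this same loop at vastly larger scale on GPUs, replacing the hard-decision receiver with trainable neural components.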

Charlie Zang, senior vice president of Samsung Research America, said the future convergence of 6G and AI holds the potential to create a revolutionary technological landscape, ushering in “an era of unmatched innovation and connectivity” that redefines our interactions with the digital world through seamless connectivity and intelligent systems.

Simulation and testing will be crucial to developing the next generation of wireless technology, and prominent vendors in this domain are collaborating with NVIDIA to address the novel demands of applying AI to 6G.

According to Shawn Carpenter, program director of Ansys’ 5G/6G and space division, “Ansys is committed to advancing the mission of the 6G Research Cloud by seamlessly integrating the cutting-edge Ansys Perceive EM solver into the Omniverse ecosystem. Perceive EM revolutionizes digital twin creation for 6G systems.” Without a doubt, the combination of Ansys and NVIDIA technologies will open the door for 6G communication systems with AI capabilities.

According to Keysight Communications Solutions Group president and general manager Kailash Narayanan, “access to wireless-specific design tools is limited yet needed to build robust AI.” “Keysight is excited to contribute its expertise in wireless networks to support the next wave of innovation in 6G communications networks.”

By combining these powerful foundational tools, the NVIDIA 6G Research Cloud platform enables telcos to prepare for the next wave of wireless technology. Researchers can access the platform by registering for the NVIDIA 6G Developer Program.
