Technology

And Then a Pixel Watch Rumor Killed the Excitement

There were plenty of jokes to be had after Google announced the Pixel Watch at I/O last week, mostly because rumors of such a watch have been floating around for years. We genuinely laughed a bit when it became real, because we almost couldn’t believe it was actually official. It is official, by the way.

Not long after the jokes, we couldn’t help but find excitement in the unveiling. Google had finally done it: they were preparing to give us a Pixel Watch, the one Wear OS watch we feel has been missing from the ecosystem all along. The design looks right on track. Google is tying in Fitbit for health tracking. It appears to be the ideal size. It will even run a new version of Wear OS that sounds like it brings significant improvements. Everything lined up out of the gate, even if we don’t yet know the small details like specs or price.

And then, just before the weekend hit, the first rumor surrounding the actual Pixel Watch showed up to kill all of that excitement. The team at 9to5Google heard from sources suggesting that the 2022 Pixel Watch will run a 2018 chipset from Samsung. Bro, what? Noooo.

According to the report, Google is using the Exynos 9110, a dual-core chipset first used by Samsung in the Galaxy Watch that debuted in 2018. The chip was significant enough in the Samsung world that it also found its way into the Galaxy Watch Active 2 a year later and then the Galaxy Watch 3 another year after that.

The Exynos 9110 was a more than capable chip, that’s for sure. A 10nm chip, it powered Tizen and delivered one of the better smartwatch experiences available. For the Galaxy Watch 3, likely thanks to the bump in RAM from Samsung, I noted in my review that the watch ran very well and easily handled every task I threw at it. So what’s the problem?

It’s a chip from 2018, man. The biggest problem in the Wear OS world for most of the past six years has been that every device ran old Qualcomm technology and couldn’t keep up with the times, with competitors, or with advances in tech. We thought we were finally moving on from that storyline with the launch of Samsung’s W920 chip in the Galaxy Watch 4 line last year, and yet here we are.

Google is reportedly using this chip because the Pixel Watch has been in development for a long time, and there’s a chance that trying to switch to a newer chip would have set it back even further. Or maybe Samsung isn’t even willing to let anyone else use the 5nm W920 yet. Since Google clearly doesn’t want Qualcomm chips in its devices anymore, the 12nm Wear 4100+ was likely out of the question.

The hope, at least for now, is that Google has spent a lot of time (like, many years) figuring out how to get everything and then some out of this chip. Since I can’t recall seeing a Wear OS watch run the 9110, maybe we will all be in for a surprise. Google is quite good at optimizing its devices around chipsets that aren’t exactly top-tier (think Pixel 5… Pixel 6 as well), so we could see that again with the Pixel Watch.

Still, I’m worried about general performance. Google has already said that Wear OS 3 brings big changes, and it issued warnings about older watches being able to run it, even those with Qualcomm’s Wear 4100 and 4100+ chips. Google made it clear that the upgrade from Wear OS 2 to Wear OS 3 on devices running those chips could leave the experience impacted. The Exynos 9110 is, technically, a more efficient chip than those.

My other concern, in terms of perception or the Pixel Watch’s storyline, is that it won’t matter how good Google makes it if they use the Exynos 9110. Google shipping a four-year-old chipset is the kind of thing that writes its own headlines, and not in a positive way. We’re already seeing them, and the Pixel Watch is five months from launch.

Technology

AWS and Nvidia Collaborate on AI Advancement Infrastructure

To advance generative artificial intelligence (GenAI), Amazon Web Services (AWS) and Nvidia are extending their 13-year partnership.

The firms stated in a press release on Monday, March 18, that the partnership aims to bring the new Nvidia Blackwell GPU platform to AWS, providing customers with state-of-the-art, secure infrastructure, software, and services.

According to the release, the Nvidia Blackwell platform includes the GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs. The platform allows customers to build and run multitrillion-parameter large language models (LLMs) faster, at massive scale, and securely. It does this by combining AWS’s Elastic Fabric Adapter (EFA) networking with the hyper-scale clustering of Amazon EC2 UltraClusters and the advanced virtualization and security features of the AWS Nitro System.
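
As a rough, hypothetical illustration of the networking piece, the sketch below uses boto3 to launch an EC2 instance with an Elastic Fabric Adapter attached inside a cluster placement group, which is the kind of low-latency topology UltraCluster-style training relies on. The AMI ID, subnet, security group, and instance type are placeholders (Blackwell-based instance types had not been named at the time of the release).

```python
# Hypothetical sketch: launching an EFA-enabled EC2 instance into a cluster
# placement group with boto3. All IDs and the instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups keep instances physically close, which is what
# low-latency EFA traffic between GPU nodes depends on.
ec2.create_placement_group(GroupName="llm-training-pg", Strategy="cluster")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder deep learning AMI
    InstanceType="p5.48xlarge",        # placeholder GPU instance type
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "llm-training-pg"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",        # attach an Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
)
print(response["Instances"][0]["InstanceId"])
```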

According to the release, AWS intends to provide EC2 instances with the new B100 GPUs installed in EC2 UltraClusters to accelerate large-scale generative AI training and inference.

Nvidia founder and CEO Jensen Huang stated in the press release that “our partnership with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what’s possible.”

“We currently offer the widest range of Nvidia GPU solutions for customers,” said Adam Selipsky, CEO of AWS, “and the deep collaboration between our two organizations goes back more than 13 years, when together we launched the world’s first GPU cloud instance on AWS.”

This partnership places a high priority on security, the release states: the AWS Nitro System, AWS Key Management Service (AWS KMS), encrypted Elastic Fabric Adapter (EFA), and Blackwell encryption are integrated to encrypt data in transit and prevent unauthorized access to model weights.
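
As a concrete, hedged illustration of the key-management side, here is a minimal envelope-encryption sketch that protects a weights file with AWS KMS via boto3 and the cryptography package. The key alias and file names are placeholders, and this is the standard KMS data-key pattern rather than the specific Nitro/Blackwell mechanism described in the release.

```python
# Minimal envelope-encryption sketch for a model-weights file using AWS KMS.
# The KMS key alias and file paths are placeholders for illustration only.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms", region_name="us-east-1")

# Ask KMS for a fresh data key: the plaintext key is used locally and then
# discarded; the encrypted copy is stored alongside the ciphertext.
data_key = kms.generate_data_key(
    KeyId="alias/model-weights-key",   # placeholder key alias
    KeySpec="AES_256",
)

with open("model_weights.bin", "rb") as f:
    weights = f.read()

# Encrypt the weights locally with AES-GCM using the plaintext data key.
nonce = os.urandom(12)
ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, weights, None)

with open("model_weights.bin.enc", "wb") as f:
    f.write(nonce + ciphertext)
with open("model_weights.key.enc", "wb") as f:
    f.write(data_key["CiphertextBlob"])   # recover later with kms.decrypt()
```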

According to the release, the cooperation goes beyond hardware and infrastructure. AWS and Nvidia are also collaborating to accelerate the development of GenAI applications across a range of sectors, and they provide generative AI inference through the integration of Nvidia NIM inference microservices with Amazon SageMaker.
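
To illustrate what that integration looks like from an application’s point of view, here is a hedged boto3 sketch that calls a SageMaker real-time endpoint hosting a generative model. The endpoint name and the request/response schema are assumptions, since both depend on the specific container (for example, a NIM image) deployed behind the endpoint.

```python
# Hypothetical sketch: invoking a SageMaker real-time inference endpoint.
# The endpoint name and payload schema are assumptions; the actual format
# depends on the model container serving the endpoint.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

payload = {
    "inputs": "Summarize the AWS and Nvidia Blackwell announcement.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.2},
}

response = runtime.invoke_endpoint(
    EndpointName="genai-llm-endpoint",   # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

print(json.loads(response["Body"].read()))
```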

In the healthcare and life sciences sector, AWS and Nvidia are expanding computer-aided drug discovery with new Nvidia BioNeMo FMs for generative chemistry, protein structure prediction, and understanding how drug molecules interact with targets, per the release. These models will be available on AWS HealthOmics, a service purpose-built for healthcare and life sciences organizations.

The partnership’s extension occurs at a time when interest in artificial intelligence has caused Nvidia’s valuation to soar in just nine months, from $1 trillion to over $2 trillion. With an 80% market share, the company dominates the high-end AI chip market.

AWS has been releasing GenAI-powered tools for various industries concurrently.

Technology

NVIDIA Releases 6G Research Cloud Platform to Use AI to Improve Wireless Communications

Today, NVIDIA unveiled a 6G research platform that gives researchers a cutting-edge way to develop the next wave of wireless technology.

The open, adaptable, and interconnected NVIDIA 6G Research Cloud platform provides researchers with a full suite of tools to advance artificial intelligence (AI) for radio access network (RAN) technology. With the help of this platform, organizations can accelerate the development of 6G technologies, which will connect trillions of devices to cloud infrastructures and lay the groundwork for a hyperintelligent world supported by autonomous vehicles, smart spaces, a wealth of immersive education experiences, extended reality, and collaborative robots.

Its early adopters and ecosystem partners include Ansys, Arm, ETH Zurich, Fujitsu, Keysight, Nokia, Northeastern University, Rohde & Schwarz, Samsung, SoftBank Corp., and Viavi.

According to NVIDIA senior vice president of telecom Ronnie Vasishta, “the massive increase in connected devices and host of new applications in 6G will require a vast leap in wireless spectral efficiency in radio communications.” “The application of AI, a software-defined, full-RAN reference stack, and next-generation digital twin technology will be critical to accomplishing this.”

There are three core components to the NVIDIA 6G Research Cloud platform:

NVIDIA Aerial Omniverse Digital Twin for 6G: This reference application and developer sample makes physically realistic simulations of entire 6G systems possible, from a single tower to a city. It combines software-defined radio access networks (RANs) and user-equipment simulators with realistic terrain and object properties. Using the Aerial Omniverse Digital Twin, researchers will be able to simulate and build base-station algorithms based on site-specific data and train models in real time to increase transmission efficiency.

NVIDIA Aerial CUDA-Accelerated RAN: A software-defined, full-RAN stack that provides researchers with a great deal of flexibility in terms of real-time customization, programming, and testing of 6G networks.

NVIDIA Sionna Neural Radio Framework: This framework uses NVIDIA GPUs to generate and capture data and to train AI and machine learning models at scale, and it integrates seamlessly with well-known frameworks like PyTorch and TensorFlow. It also includes NVIDIA Sionna, the leading link-level research tool for AI/ML-based wireless simulations; a minimal usage sketch follows this list.
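
The sketch below is a minimal, uncoded 16-QAM over AWGN bit-error-rate simulation in the spirit of Sionna’s link-level tooling. It assumes Sionna’s documented BinarySource, Mapper, Demapper, and AWGN utilities; module paths follow the 0.x releases and may differ in newer versions.

```python
# Minimal link-level sketch: uncoded 16-QAM over an AWGN channel with Sionna.
# Module paths follow Sionna 0.x and may differ in later releases.
import tensorflow as tf
from sionna.utils import BinarySource, ebnodb2no, compute_ber
from sionna.mapping import Mapper, Demapper
from sionna.channel import AWGN

batch_size = 1024
num_bits_per_symbol = 4            # 16-QAM
num_symbols = 256
ebno_db = 8.0

binary_source = BinarySource()
mapper = Mapper("qam", num_bits_per_symbol)
demapper = Demapper("app", "qam", num_bits_per_symbol)
awgn = AWGN()

# Generate random bits, map them to QAM symbols, pass them through AWGN,
# then demap to log-likelihood ratios and take hard decisions.
bits = binary_source([batch_size, num_symbols * num_bits_per_symbol])
x = mapper(bits)
no = ebnodb2no(ebno_db, num_bits_per_symbol, coderate=1.0)
y = awgn([x, no])
llr = demapper([y, no])
bits_hat = tf.cast(llr > 0, tf.float32)

print("BER:", compute_ber(bits, bits_hat).numpy())
```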

Leading researchers in the field can use all of the 6G Research Cloud platform’s components to advance their work.

Charlie Zang, senior vice president of Samsung Research America, stated that the future convergence of 6G and AI holds the potential to create a revolutionary technological landscape, ushering in “an era of unmatched innovation and connectivity” that redefines our interactions with the digital world through seamless connectivity and intelligent systems.

Simulation and testing will be crucial to developing the next generation of wireless technology, and prominent vendors in this domain are collaborating with NVIDIA to address the novel demands of applying artificial intelligence to 6G.

According to Shawn Carpenter, program director of Ansys’ 5G/6G and space division, “Ansys is committed to advancing the mission of the 6G Research Cloud by seamlessly integrating the cutting-edge Ansys Perceive EM solver into the Omniverse ecosystem.” “Digital twin creation for 6G systems is revolutionized by Perceive EM.” Without a doubt, the combination of Ansys and NVIDIA technologies will open the door for 6G communication systems with AI capabilities.

According to Keysight Communications Solutions Group president and general manager Kailash Narayanan, “access to wireless-specific design tools is limited yet needed to build robust AI.” “Keysight is excited to contribute its expertise in wireless networks to support the next wave of innovation in 6G communications networks.”

Telcos can now make full use of 6G research and prepare for the next wave of wireless technology thanks to the NVIDIA 6G Research Cloud platform, which combines these powerful foundational tools. Researchers gain access to the platform by registering for the NVIDIA 6G Developer Program.

Technology

MM1, a Family of Multimodal AI Models With Up to 30 Billion Parameters, Is Being Developed by Apple Researchers

In a pre-print paper, Apple researchers presented their work on developing a multimodal large language model (LLM) for artificial intelligence (AI). The paper, published on an online portal on March 14, describes how the team achieved advanced multimodal capabilities by training the foundation model on both text-only data and images. The Cupertino-based tech giant’s new AI advances follow CEO Tim Cook’s remark during the company’s earnings call that AI features might be released later this year.

ArXiv, an open-access online repository for scholarly papers, hosts the pre-print version of the research paper; papers posted there are not peer reviewed, however. Although the paper makes no mention of the company, the project is believed to be connected to Apple because most of the listed researchers are affiliated with Apple’s machine learning (ML) division.

The project the researchers are working on is MM1, a family of multimodal models with up to 30 billion parameters. The paper’s authors describe it as a “performant multimodal LLM (MLLM)” and note that, to build an AI model capable of understanding both text and image inputs, careful decisions were made about image encoders, the vision-language connector, and other architecture elements and training data.
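
The paper itself does not include code, but as a generic illustration of what a vision-language connector does, the hypothetical PyTorch sketch below projects image-encoder patch features into a language model’s embedding space so they can be interleaved with text tokens. The dimensions and module design are invented for the example and are not Apple’s MM1 implementation.

```python
# Hypothetical sketch of a vision-language connector: it maps image-encoder
# patch features into the LLM's token-embedding space so image "tokens" can
# be interleaved with text tokens. Dimensions are invented for illustration.
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # A simple two-layer MLP projector, one common connector design.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: [batch, num_patches, vision_dim]
        # returns:        [batch, num_patches, llm_dim]
        return self.proj(image_features)

# Example: 576 patch embeddings from an image encoder become 576 pseudo-token
# embeddings that can be concatenated with text-token embeddings.
connector = VisionLanguageConnector()
patches = torch.randn(2, 576, 1024)
image_tokens = connector(patches)
print(image_tokens.shape)  # torch.Size([2, 576, 4096])
```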

As an example, the paper states: “We demonstrate that achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results, requires a careful mix of image-caption, interleaved image-text, and text-only data for large-scale multimodal pre-training.”
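
To make the idea of a “careful mix” concrete, here is a hypothetical sampler that draws training examples from image-caption, interleaved image-text, and text-only sources according to fixed weights. The weights and the toy datasets are placeholders for illustration, not the ratios reported in the paper.

```python
# Hypothetical mixture sampler over three pre-training data sources.
# The weights and toy datasets are placeholders, not MM1's actual recipe.
import random

def mixture_sampler(datasets, weights, num_samples):
    """Yield (source_name, example) pairs drawn according to `weights`."""
    names = list(datasets.keys())
    probs = [weights[n] for n in names]
    for _ in range(num_samples):
        name = random.choices(names, weights=probs)[0]
        yield name, next(datasets[name])

# Toy iterators standing in for real data loaders.
datasets = {
    "image_caption": iter({"image": f"img_{i}", "caption": f"cap_{i}"} for i in range(100)),
    "interleaved":   iter({"doc": f"interleaved_doc_{i}"} for i in range(100)),
    "text_only":     iter({"text": f"text_doc_{i}"} for i in range(100)),
}
weights = {"image_caption": 0.45, "interleaved": 0.45, "text_only": 0.10}

for source, example in mixture_sampler(datasets, weights, num_samples=5):
    print(source, example)
```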

To put it simply, the model is still in the pre-training phase and has not yet received the full training needed to produce the intended results; this phase is where the model’s workflow and data processing are designed around the chosen algorithm and AI architecture. Apple’s researchers were able to incorporate computer vision into the model by means of a vision-language connector and image encoders. In tests using a combination of image-only, image-text, and text-only data sets, the team found that the results were comparable to those of other models at the same stage.

Although this is a significant breakthrough, there is insufficient evidence in this research paper to conclude that Apple will integrate a multimodal AI chatbot into its operating system. It’s difficult to even say at this point whether the AI model is multimodal in terms of receiving inputs or producing output (i.e., whether it can produce AI images or not). However, it can be said that the tech giant has made significant progress toward developing a native generative AI foundation model if the results are verified to be consistent following peer review.
