Technology

Apple’s next iPhone may be more costly


Apple could make the upcoming iPhone 14 more expensive than the iPhone 13, according to Apple analyst Ming-Chi Kuo. Kuo believes the average selling price (ASP) of the combined iPhone 14 lineup may increase by about 15 percent compared with the iPhone 13 lineup.

For reference, the standard iPhone 13 starts at $799 (with carrier discounts), while the Pro and Pro Max models bump that price up to $999 and $1,099, respectively. While Kuo doesn’t offer price predictions for individual devices, he thinks the ASP of the iPhone 14 lineup (Pro models included) could hover around $1,000 to $1,050. Kuo attributes the ASP increase to potentially more expensive iPhone 14 Pro and Pro Max models, as well as a “higher shipment proportion” of those Pro models.

In June, Wedbush Securities analyst Dan Ives told The Sun that he expects the iPhone 14 to cost $100 more than the iPhone 13 because of cost increases affecting the global supply chain. Meanwhile, a rumor from Korean leaker Lanzuk suggests that Apple will only raise the price of the Pro models, not the base iPhone 14.

While the base iPhone 14 is expected to come with an improved 48-megapixel rear-facing camera and a selfie camera with autofocus, the iPhone 14 Pro and Pro Max models are rumored to get most of the upgrades. The Pro and Pro Max may ditch the notch that houses the front-facing camera in favor of a pill-shaped hole-punch cutout, come equipped with the new A16 chip, and support an always-on display.

Technology

NVIDIA Releases 6G Research Cloud Platform to Use AI to Improve Wireless Communications


Today, NVIDIA unveiled a 6G research platform that gives academics a cutting-edge method to create the next wave of wireless technology.

The open, adaptable, and linked NVIDIA 6G Research Cloud platform provides researchers with a full suite of tools to enhance artificial intelligence (AI) for radio access network (RAN) technology. With the help of this platform, businesses can expedite the development of 6G technologies, which will link trillions of devices to cloud infrastructures and create the groundwork for a hyperintelligent world augmented by driverless cars, smart spaces, a plethora of immersive education experiences, extended reality, and cooperative robots.

Its early adopters and ecosystem partners include Ansys, Arm, ETH Zurich, Fujitsu, Keysight, Nokia, Northeastern University, Rohde & Schwarz, Samsung, SoftBank Corp., and Viavi.

According to NVIDIA senior vice president of telecom Ronnie Vasishta, “the massive increase in connected devices and host of new applications in 6G will require a vast leap in wireless spectral efficiency in radio communications.” “The application of AI, a software-defined, full-RAN reference stack, and next-generation digital twin technology will be critical to accomplishing this.”

There are three core components to the NVIDIA 6G Research Cloud platform:

NVIDIA Aerial Omniverse Digital Twin for 6G: This reference application and developer sample enables physically realistic simulations of entire 6G systems, from a single tower to a city. It combines software-defined radio access networks (RANs) and user-equipment simulators with realistic terrain and object properties. Using the Aerial Omniverse Digital Twin, researchers will be able to simulate and develop base-station algorithms based on site-specific data and train models in real time to improve transmission efficiency.

NVIDIA Aerial CUDA-Accelerated RAN: A software-defined, full-RAN stack that provides researchers with a great deal of flexibility in terms of real-time customization, programming, and testing of 6G networks.

NVIDIA Sionna Neural Radio Framework: This framework uses NVIDIA GPUs to generate and capture data and to train AI and machine learning models at scale, and it integrates seamlessly with well-known frameworks like PyTorch and TensorFlow. It also includes NVIDIA Sionna, the leading link-level research tool for AI/ML-based wireless simulations.
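To give a flavor of what a link-level wireless simulation computes, the toy sketch below estimates the bit-error rate of BPSK symbols sent over an additive white Gaussian noise channel. This is deliberately not the Sionna API; it is a minimal, self-contained illustration of the kind of experiment such a tool automates at far larger scale on GPUs.

```python
import math
import random


def simulate_bpsk_awgn(num_bits, snr_db, seed=0):
    """Toy link-level simulation: BPSK over an AWGN channel.

    Returns the measured bit-error rate (BER). Illustrative only;
    real tools like Sionna batch this on GPUs with full channel models.
    """
    rng = random.Random(seed)
    snr_linear = 10 ** (snr_db / 10)
    noise_std = math.sqrt(1 / (2 * snr_linear))  # noise per real dimension
    errors = 0
    for _ in range(num_bits):
        bit = rng.randint(0, 1)
        symbol = 1.0 if bit else -1.0                 # BPSK mapping
        received = symbol + rng.gauss(0, noise_std)   # AWGN channel
        decoded = 1 if received > 0 else 0            # hard decision
        errors += decoded != bit
    return errors / num_bits


# Higher SNR should yield a lower bit-error rate.
ber_low = simulate_bpsk_awgn(20000, 0)   # 0 dB: noisy link
ber_high = simulate_bpsk_awgn(20000, 8)  # 8 dB: cleaner link
```

Sweeping `snr_db` and plotting the resulting BER curve is the classic first experiment in any link-level simulator.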

Top researchers in the field can use all of the 6G Research Cloud platform’s components to further their work.

Charlie Zang, senior vice president of Samsung Research America, said that the future convergence of 6G and AI holds the potential to create a revolutionary technological landscape. It will usher in “an era of unmatched innovation and connectivity,” redefining our interactions with the digital world through seamless connectivity and intelligent systems.

Simulation and testing will be crucial to developing the next generation of wireless technology. Prominent vendors in this domain are collaborating with NVIDIA to address the novel demands of applying AI to 6G.

According to Shawn Carpenter, program director of Ansys’ 5G/6G and space division, “Ansys is committed to advancing the mission of the 6G Research Cloud by seamlessly integrating the cutting-edge Ansys Perceive EM solver into the Omniverse ecosystem.” Perceive EM, he said, revolutionizes digital twin creation for 6G systems, and the combination of Ansys and NVIDIA technologies will open the door for AI-capable 6G communication systems.

According to Keysight Communications Solutions Group president and general manager Kailash Narayanan, “access to wireless-specific design tools is limited yet needed to build robust AI.” “Keysight is excited to contribute its expertise in wireless networks to support the next wave of innovation in 6G communications networks.”

The NVIDIA 6G Research Cloud platform combines these potent foundational tools so that telcos can begin unlocking 6G and preparing for the next wave of wireless technology. Researchers can access the platform by registering for the NVIDIA 6G Developer Program.

Technology

MM1, a Family of Multimodal AI Models With up to 30 Billion Parameters, Is Being Developed by Apple Researchers


In a pre-print paper published on an online portal on March 14, Apple researchers presented their work on developing a multimodal large language model (LLM) for artificial intelligence (AI). The paper describes how they achieved advanced multimodal capabilities by training a foundation model on both text-only data and images. The Cupertino-based tech giant’s new advances in AI follow CEO Tim Cook’s statement during the company’s earnings call that AI features might be released later this year.

The pre-print version of the research paper has been published on arXiv, an open-access online repository for scholarly papers. Papers posted there, however, are not peer reviewed. Although the paper itself makes no mention of Apple, most of the researchers listed are affiliated with Apple’s machine learning (ML) division, so the project is believed to be connected to the company.

The researchers are working on MM1, a family of multimodal models with up to 30 billion parameters. The paper’s authors describe it as a “performant multimodal LLM (MLLM)” and note that image encoders, the vision-language connector, and other architecture and data decisions were made in order to build an AI model that can comprehend both text and image-based inputs.

As an example, the paper states: “We demonstrate that achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results, requires a careful mix of image-caption, interleaved image-text, and text-only data for large-scale multimodal pre-training.”

To put it simply, the AI model is presently in the pre-training phase and has not yet been trained to produce its intended results. This phase involves designing the model’s workflow and data processing before the algorithm and AI architecture are eventually put to use. The Apple researchers incorporated computer vision into the model by means of image encoders and a vision-language connector. In tests using a combination of image-only, image-text, and text-only data sets, the team found that the outcomes were comparable to those of other models at the same stage.
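The “careful mix” of data sources the paper emphasizes can be pictured as weighted sampling during pre-training. The sketch below is purely illustrative: the mixing ratios are hypothetical, not the proportions Apple reports, and the sampler is a toy stand-in for a real data-loading pipeline.

```python
import random

# Hypothetical mixing ratios -- the paper says a careful blend of
# image-caption, interleaved image-text, and text-only data matters;
# these exact numbers are invented for illustration.
MIX = {"image_caption": 0.45, "interleaved": 0.45, "text_only": 0.10}


def sample_batch(batch_size, mix, seed=0):
    """Draw a pre-training batch whose source labels follow the mix ratios."""
    rng = random.Random(seed)
    sources = list(mix)
    weights = [mix[s] for s in sources]
    return [rng.choices(sources, weights)[0] for _ in range(batch_size)]


batch = sample_batch(1000, MIX)
counts = {s: batch.count(s) for s in MIX}  # roughly 450 / 450 / 100
```

In a real pipeline each label would correspond to a dataset shard, and the ratios themselves would be tuned as a hyperparameter, which is exactly the kind of ablation the paper describes.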

Although this is a significant breakthrough, there is insufficient evidence in this research paper to conclude that Apple will integrate a multimodal AI chatbot into its operating system. It’s difficult to even say at this point whether the AI model is multimodal in terms of receiving inputs or producing output (i.e., whether it can produce AI images or not). However, it can be said that the tech giant has made significant progress toward developing a native generative AI foundation model if the results are verified to be consistent following peer review.

Technology

Google Gemini may be used by Apple to Power Certain AI Features on the iPhone


In addition to leveraging Gemini to power features within its services and apps, Google makes its LLM available to outside developers. It has been reported that Apple and Google are in negotiations to license Gemini for the iPhone.

Bloomberg reports “active negotiations to let Apple license Gemini, Google’s set of generative AI models, to power some new features coming to the iPhone software this year.” Apple has also held conversations with OpenAI, the company whose models power Microsoft’s AI capabilities.

As stated in today’s report, Gemini would be used for text and image generation, indicating that Apple is specifically seeking partners for cloud-based generative AI. For the impending release of iOS 18, Apple is working to provide its own on-device AI models and capabilities.

It’s still early in the talks, so it’s unclear what the AI features would be called. A deal would greatly expand the current partnership between the two companies, under which Google is the default search engine on Apple devices.

Looking at the rest of the market, Google and Samsung announced a partnership in February that gives the Galaxy S24’s voice recording and note-taking apps access to Gemini-powered summarization features. Additionally, Samsung’s photo gallery app uses Imagen 2 text-to-image diffusion for generative editing. While Samsung is also utilizing an on-device version of Gemini, all of those features necessitate server-side processing.
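The split between on-device and server-side processing mentioned above can be sketched as a simple routing decision. Everything here is hypothetical: the task names, token limit, and function are invented for illustration and do not reflect Google’s or Samsung’s actual APIs.

```python
# Hypothetical dispatcher: route a request to an on-device model
# (in the spirit of Gemini Nano) or a cloud model (like Gemini Pro).
# Task names and the token threshold are invented for illustration.
ON_DEVICE_TASKS = {"summarize_note", "smart_reply"}


def route_request(task, prompt_tokens, on_device_limit=1024):
    """Return which tier should handle the request."""
    if task in ON_DEVICE_TASKS and prompt_tokens <= on_device_limit:
        return "on-device"  # private, works offline, low latency
    return "cloud"          # larger model, requires server-side processing
```

The design trade-off is the one the article hints at: on-device models keep data local and work offline, while heavier generative features fall back to the server.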

Google provides Gemini in three sizes; the majority of first- and third-party apps use Pro. Gemini 1.0 Pro powers the free version of gemini.google.com, while 1.0 Ultra powers the premium Gemini Advanced tier.

Google previewed Gemini 1.5 in mid-February, which features a significantly larger context window that lets the model absorb more information, but Gemini 1.0 remains the stable release. The larger window may make output “more consistent, relevant, and useful.”

  • Gemini Ultra: The largest and most capable model, for highly complex tasks
  • Gemini Pro: The best model for scaling across a wide range of tasks
  • Gemini Nano: The most efficient model for on-device tasks

Although multiple providers, including OpenAI, are mentioned as possibilities in today’s report, Bloomberg predicts that a deal will not be announced until WWDC in June.
