Technology

Google’s Pixel phones can read your heart rate with their cameras

Google is adding heart rate and respiratory rate monitors to the Fit app on Pixel phones this month, and it plans to bring them to other Android phones later. Both features rely on the smartphone camera: the app measures respiratory rate by tracking the rise and fall of a user’s chest, and heart rate by tracking the color change in a fingertip as blood pulses through it.

The features are intended only to let users track overall wellness and cannot evaluate or diagnose medical conditions, the company said.

To measure respiratory rate (the number of breaths someone takes per minute) using the app, users point the phone’s front-facing camera at their head and chest. To measure heart rate, they place a finger over the rear-facing camera.

A doctor checks a patient’s respiratory rate by watching their chest rise and fall, and the Google feature mirrors that approach, said Jack Po, a product manager at Google Health, in a press briefing. “The machine learning technique that we leverage basically tries to emulate that,” he said.
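Neither the briefing nor the app exposes the underlying signal processing, but the general idea behind both measurements is well established: extract a one-dimensional, roughly periodic signal from the camera frames (average fingertip color for the pulse, chest displacement for breathing) and find its dominant frequency. The sketch below is a hypothetical illustration of that idea in Python, not Google’s implementation; the function name, inputs, and frequency bands are assumptions made for the example.

```python
import numpy as np

def dominant_rate_per_minute(signal, fps, min_hz, max_hz):
    """Estimate a periodic rate (beats or breaths per minute) from a 1-D signal.

    signal: per-frame measurement, e.g. mean red-channel intensity of a
            fingertip video (heart rate) or vertical chest position from the
            front-facing camera (respiratory rate). Hypothetical inputs.
    fps:    camera frame rate in frames per second.
    min_hz, max_hz: plausible physiological band to search, e.g. 0.7-3.0 Hz
            for heart rate (42-180 bpm) or roughly 0.1-0.7 Hz for breathing.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))     # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= min_hz) & (freqs <= max_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                 # convert Hz to a per-minute rate

# Example: 30 seconds of synthetic fingertip data at 30 fps with a 72 bpm pulse.
fps = 30
t = np.arange(0, 30, 1.0 / fps)
fake_ppg = 0.5 * np.sin(2 * np.pi * (72 / 60.0) * t) + np.random.normal(0, 0.2, t.size)
print(dominant_rate_per_minute(fake_ppg, fps, 0.7, 3.0))  # prints roughly 72
```

A production feature would add bandpass filtering, motion-artifact rejection, and the machine learning models Po describes; the sketch only shows the core frequency estimate.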

Google’s heart rate monitor is similar to a feature Samsung included on several older Galaxy smartphones, including the Galaxy S10. The company removed the feature from the S10E, the S20, and later phones.

Heart rate data from Google’s app will be less thorough than the kinds of readings someone could get from a wearable device, which can continuously monitor things like heart rate as someone goes about their daily life. But an at-home feature that can check these metrics on demand is still a useful tool, Po said in the briefing. Anything that increases the number of measurements someone has of their heart or breathing rate is valuable; doctors, for instance, typically only get a measurement every so often, when someone comes into the office, he said.

“If users were to take their heart rate once a week, they would actually get a lot of value,” Po said. “They’ll get a lot of value in tracking whether their heart rate might be improving, if exercise is paying off.”

Google chose to build these capabilities into the smartphone to make them accessible to the widest range of people, Po said. “A lot of people, especially in disadvantaged economic classes right now, don’t have things like wearables, but would still really benefit from the ability to be able to track their breathing rate, heart rate, et cetera.”

Internal studies on Pixel phones showed that the respiratory rate feature was accurate to within one breath per minute for people both with and without medical conditions, said Jiening Zhan, a technical lead at Google Health, during the press briefing. The heart rate feature was accurate to within 2 percent. That feature was tested on people with a range of skin tones and had similar accuracy for light and dark skin, she said. The team plans to publish a scientific paper with the data from its evaluations.

The team will study how well the features work on other phones before making them available beyond the Pixel. “We want to make sure that you know, the rigorous testing is done before it’s released to other devices,” Zhan said.

For now, the features are described as tools for general wellness. Google is not claiming that they can perform a medical function, which is why it does not need clearance from the Food and Drug Administration (FDA) to add them to the app.

Eventually, the company may take the app in that direction, Po indicated. The testing done on the features shows that they are consistent with clinical products, he said, so it is a possibility in the future. “Frankly, we haven’t done enough testing and validation to say that it can definitely work for those use cases yet, but it’s definitely something we’re exploring,” Po said.

Technology

Samsung Debuts AI-Driven Neo QLED Smart TVs in India, Starting from Rs 1,39,990

Samsung is the most recent company to introduce electronics and home appliances with AI capabilities to the Indian consumer market.

The South Korean tech giant unveiled its newest range of state-of-the-art smart TVs, including its flagship OLED 4K and Neo QLED 8K models, the latter powered by the NQ8 AI Gen3 processor, which offers on-device AI capabilities for audio enhancement and image upscaling.

With these cutting-edge TVs, first shown at the Unbox and Discover 2024 launch event in Bengaluru, Samsung is making an early push into on-device generative AI technology and into the emerging trend of AI-powered home entertainment devices.

Powered by TizenOS, the Neo QLED 8K series is a significant advancement from Samsung, featuring cloud gaming, an educational hub, and smart yoga features that can be accessed by connecting the TV to an AI-enabled yoga mat.

With two variants (QN900D and QN800D) and sizes (65 to 85 inches), the Neo QLED 8K series accommodates a wide range of consumer preferences. The entry-level 65-inch variant is priced at Rs 3,19,990.

Samsung is sweetening the pot for early adopters: customers who pre-order the Neo QLED 8K, Neo QLED 4K, or glare-free OLED models before April 30, 2024 receive a free soundbar or an alternative such as the Freestyle or Music Frame.

JB Park, President and CEO of Samsung Southwest Asia, emphasized the company’s dedication to enhancing the home entertainment experience through AI integration. “We’ve incorporated artificial intelligence (AI) into home entertainment to provide our customers with outstanding viewing experiences,” he stated. “With the power of AI, our 2024 collection of Neo QLED 8K, Neo QLED 4K, and OLED TVs redefines the home entertainment experience and offers innovations across accessibility, sustainability, and enhanced security.”

The Neo QLED 8K, with its NQ8 AI Gen3 processor and 512 neural networks, introduces new AI technologies such as AI Picture Technology, AI Upscaling Pro, and AI Sound Technology.

Meanwhile, the Neo QLED 4K and OLED TVs, equipped with the previous-generation NQ4 AI Gen2 processor, offer a rich feature set designed to satisfy a variety of customer needs.

The Neo QLED 4K series has five sizes ranging from 55 to 98 inches and starts at Rs 1,39,990. It is available in models QN85D and QN90D.

Samsung’s OLED TVs, which start at Rs 1,64,990, come in sizes ranging from 55 to 83 inches and feature glare-free technology. They are available in the S95D and S90D models.

Technology

Windracers and Purdue University Unveil AI Aviation Center Collaboration

Windracers, a pioneer in autonomous air travel, has established the world’s first research center devoted to the development of artificial intelligence (AI) in aviation in collaboration with Purdue University. Today’s opening of the Center on AI for Digital, Autonomous and Augmented Aviation (AIDA3) represents a significant advancement in the research and use of unmanned aerial vehicles (UAVs) and related technologies.

Since its founding in 2017, Windracers has used cutting-edge design and manufacturing techniques to help lower the cost of delivering humanitarian aid. Thanks to its in-house Masterless autopilot system, the company’s ULTRA UAV, renowned for its adaptability and ease of maintenance, flies without the need for a remote pilot. Designed to operate in harsh environments, the UAV has demonstrated its dependability through long-duration autonomous flights with prominent organizations including the British Antarctic Survey and the Royal Navy.

Windracers’ founder and executive chairman, Stephen Wright, expressed his excitement for the new project: “We are incredibly excited to launch this groundbreaking AI center with Purdue University, a highly respected academic institution whose alumni include the first and most recent humans to step on the moon. We want to make the aviation industry completely automated and low-cost by taking another enormous step forward,” he said.

AIDA3 will investigate applications of artificial intelligence (AI) and machine learning (ML) in autonomous systems to improve capabilities ranging from demand analytics to real-time weather prediction. The center aims to increase the scalability and efficiency of self-flying aircraft operations, with the potential to make a major impact on commercial logistics and other industries.

Sabine Brunswicker, AIDA3 director and Purdue professor, highlighted the current challenges and the center’s mission. Existing AI/ML models, she explained, are not reliable enough to close the loop from data to actions in the real world in a way that is safe, trustworthy, and scalable. “Currently, it can take 10 people to operate one UAV. It is time for one operator to be able to coordinate 100 UAVs at the same time. Our mission is to go beyond current AI/ML models where the potential benefits of smarter UAVs can be fully realized globally.”

As part of the partnership, Windracers will receive support for research and development in the US market. The company will provide two of its ULTRA UAVs for continuous testing at Purdue University Airport, which is well known as Amelia Earhart’s research base. These UAVs, named Armstrong and Earhart, will support the development of autonomous flight technologies.

As a component of Purdue University’s Institute for Physical Artificial Intelligence (IPAI), AIDA3 aims to develop novel solutions by fusing artificial intelligence with the physical world.

Technology

Linux Foundation Introduces Industry Drive to Enhance Generative AI for Enterprises

The Linux Foundation has announced a new project to advance generative AI for enterprises.

The Open Platform for Enterprise AI (OPEA) is the latest project undertaken by the LF AI & Data Foundation. It is positioned as a Sandbox Project, acting as a proving ground for the Foundation’s newest concepts and innovations.

Significantly, it is supported by top organizations in the field of AI development and application, such as Anyscale, Cloudera, DataStax, Domino Data Lab, Hugging Face, Intel, KX, MariaDB Foundation, Minio, Qdrant, Red Hat, SAS, VMware, Yellowbrick Data, Zilliz, and more.

Industry Leaders Pledge to Improve GenAI

In a press release, the organization stated that the main goal of the LF AI & Data Foundation is to foster an open community for AI and data. By providing greater opportunities for collaboration, it hopes to “drive open source innovation in the AI and data domains.”

Ibrahim Haddad, Executive Director for LF AI & Data, expressed excitement about the project’s potential to speed up AI integration among enterprises, with a focus on open model development and hardened, optimized support for multiple compilers and toolchains: “We’re thrilled to welcome OPEA to LF AI & Data with the promise of open source, standardized, modular, and heterogeneous Retrieval-Augmented Generation (RAG) pipelines for enterprises.”

In addition, Haddad stated, “OPEA will open up new possibilities in AI by developing a comprehensive, modular framework that leads technology stacks.”
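The announcement does not detail what an OPEA pipeline will contain, but the Retrieval-Augmented Generation pattern Haddad refers to is simple to outline: retrieve the documents most relevant to a query, then pass them to a language model along with the question so the answer is grounded in that material. Below is a minimal, hypothetical Python sketch of the pattern; the toy document list, the keyword scoring, and the generate() placeholder are stand-ins invented for illustration, not OPEA components.

```python
from collections import Counter

# A toy document store; a real pipeline would use a vector database
# (several OPEA backers, such as Qdrant and Zilliz, build these).
DOCUMENTS = [
    "OPEA is an LF AI & Data sandbox project for enterprise GenAI.",
    "Retrieval-Augmented Generation grounds model answers in retrieved text.",
    "Vector databases store embeddings for semantic search.",
]

def score(query, doc):
    """Crude keyword-overlap score; real systems use embedding similarity."""
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def retrieve(query, k=2):
    """Return the k documents most relevant to the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def generate(prompt):
    """Placeholder for a call to a language model behind an inference server."""
    return f"[model response to a prompt of {len(prompt)} characters]"

def rag_answer(query):
    """Retrieve context, then ask the model to answer using only that context."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

print(rag_answer("What is Retrieval-Augmented Generation?"))
```

A real enterprise pipeline would swap the keyword scoring for embedding search against a vector database and call an actual model inside generate(); the component boundaries shown here (retriever, prompt builder, generator) are the kind of modular structure the OPEA announcement says it wants to standardize.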

According to the Linux Foundation, the initiative’s launch is timely: rapid advances in GenAI technology in recent years have led to a fragmentation of tools, techniques, and solutions that needs to be addressed.

In the face of global pressure to advance democratized AI, the Foundation is confident that it can create equal opportunities through standardizing elements such as reference solutions, architecture blueprints, and frameworks.
