Technology

TiVo to launch Apple TV application later this year with live TV support and DVR access

At CES 2019 this week, TiVo showed off new applications for the Apple TV, Fire TV, and Roku. The apps will let existing TiVo customers watch live and recorded content on multiple TVs using their current hardware, rather than paying for additional TiVo Mini boxes.

Dave Zatz was first to spot TiVo’s demonstration at CES this week. Zatz says TiVo plans to launch the applications during Q2 and Q3 of this year. The Fire TV app is said to come first, followed by Roku and eventually Apple TV.

Right now, TiVo offers the TiVo Mini as a way to bring live and recorded TiVo content to additional TVs. The TiVo Mini VOX, which includes a voice-activated remote, currently retails for $179.99 – making it an expensive way to extend TiVo throughout your whole home.

The new application for Apple TV and the other boxes will let customers access their recorded and live content at no additional cost. The company says it doesn’t plan to charge an upfront fee for the app, nor is it planning an additional monthly charge.

The TiVo application for Apple TV, Amazon Fire TV, and Roku will let existing TiVo subscribers access live and recorded content from anywhere. You will miss out on a few features, however, including SkipMode for blowing through commercials and 5.1 audio.

TiVo has been revising its strategy over the last several years to accommodate cord cutters. Last year, it launched the Bolt OTA for use with an over-the-air antenna instead of a traditional cable box. The Bolt OTA offers recording capabilities as well as access to other streaming services.

Mark David is a writer best known for his science fiction, but over the course of his life he has published more than sixty books of fiction and non-fiction, including children's books, poetry, short stories, essays, and young-adult fiction. He publishes science news on apstersmedia.com.

Technology

Qualcomm Broadens Snapdragon X Series for AI-Powered PCs

Qualcomm, one of the world’s best-known chip manufacturers, has announced plans to expand its Snapdragon X Series lineup. According to the details, the expansion introduces a new platform intended to bring better performance, longer battery life, and on-device AI capabilities to more Windows PCs.

Beginning in mid-2024, original equipment manufacturers (OEMs) plan to release PCs built on both Snapdragon X Plus and Snapdragon X Elite.

The company released a statement describing the Snapdragon X Plus’s “Qualcomm Oryon CPU,” a specially integrated processor that offers up to 37% faster performance than rival models while using up to 54% less power.

“With radical new AI experiences emerging in this period of rapid development and deployment, Snapdragon X Plus will power AI-supercharged PCs that enable even more users to excel,” stated Kedar Kondap, SVP and general manager of Qualcomm Technologies’ compute and gaming division.

Kondap continued, “We are once again pushing the boundaries of what is possible in mobile computing by delivering leading CPU performance, AI capabilities, and power efficiency.”

The chip is powered by the Qualcomm Hexagon NPU, which can process 45 TOPS (tera operations per second) and is said to be the fastest laptop NPU in the world. It is designed to meet the demands of on-device AI-driven applications.
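To give a feel for what 45 TOPS means in practice, here is a minimal back-of-the-envelope sketch. The layer size and the two-ops-per-weight cost model are illustrative assumptions, not figures from Qualcomm:

```python
# Theoretical best-case latency for one dense layer on a 45-TOPS NPU.
# Illustrative only: real workloads are limited by memory bandwidth,
# quantization, and achievable utilization, so actual times are higher.

NPU_TOPS = 45  # tera-operations per second (10**12 ops/s)

def layer_time_us(in_features: int, out_features: int) -> float:
    """Lower-bound latency in microseconds for one dense layer.
    Each output element costs roughly 2 * in_features ops (multiply + add)."""
    ops = 2 * in_features * out_features
    return ops / (NPU_TOPS * 1e12) * 1e6

# Example: a hypothetical 4096 x 4096 layer finishes in under a microsecond
print(f"~{layer_time_us(4096, 4096):.2f} microseconds")
```

Even at a fraction of peak throughput, numbers like these are why on-device assistants can respond interactively without a round trip to the cloud.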

Qualcomm has also released the Snapdragon 7+ Gen 3 chipset, the newest member of the Snapdragon 7-series lineup. With major improvements to both performance and features, it is positioned as the company’s most sophisticated mid-range chipset to date. One noteworthy improvement is its compatibility with on-device generative AI models, including Baichuan-7B, Llama 2, and Gemini Nano. With this chipset, CPU performance improves by up to 15% and GPU performance by an impressive 45% over its predecessors.

Photography enthusiasts will find the Snapdragon 7+ Gen 3 ideal: one of its notable features is support for capturing 200-megapixel high-resolution photos. To ensure faster and more dependable wireless connectivity, it also includes advanced options such as Wi-Fi 7, and users can enjoy lightning-fast 5G speeds thanks to compatibility with both Sub-6 and mmWave 5G networks.
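A 200-megapixel capture is a substantial amount of data. A rough sketch of what that implies for storage, where the bytes-per-pixel figures are illustrative assumptions rather than Qualcomm specs:

```python
# Rough storage estimates for a single 200-megapixel capture.
# Bytes-per-pixel densities are illustrative assumptions; real cameras
# compress heavily, so shipped file sizes are much smaller.

MEGAPIXELS = 200

def image_mib(bytes_per_pixel: float) -> float:
    """Approximate image size in MiB for a given encoding density."""
    return MEGAPIXELS * 1e6 * bytes_per_pixel / 2**20

print(f"10-bit raw sensor data:  ~{image_mib(1.25):.0f} MiB")
print(f"Uncompressed 24-bit RGB: ~{image_mib(3.0):.0f} MiB")
```

Hundreds of MiB per uncompressed frame is part of why such sensors lean on hardware ISPs and pixel binning in everyday shooting.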

Technology

Neura AI Blockchain Opens Public Testnet for Mainnet Development

The “Road to Mainnet” campaign by Neura AI Blockchain lays out a complex roadmap that is expected to propel the mainnet to success. With its smooth integration of AI, Web3, and Cloud computing, this much anticipated Layer-1 blockchain offers state-of-the-art Web3 solutions.

Neura has started a new collection on Galxe to commemorate this accomplishment and give users the chance to win a unique Neura NFT.

Neura’s strategic plan outlines how to put the Neura Network in front of development teams eager to explore the potential of blockchain technology. Neura AI Blockchain addresses issues faced by many Web3 startups with features like an Initial Model Offering (IMO) framework and a decentralized GPU marketplace.

Web3 developers are invited to participate in the AI Innovators campaign, which Neura has launched to demonstrate its capabilities, in exchange for tempting prizes.

This developer competition aims to showcase Neura Blockchain’s AI and platform capabilities and support its ecosystem on the Road to Mainnet, rather than being merely a competitive event.

Neura Blockchain is at the forefront of combining blockchain and artificial intelligence in a world where both technologies are rapidly developing. With custom features built to unlock the best AI capabilities in the Web3 space, its 2024 launch is something to look forward to.

The Road to Mainnet public testnet competition, according to Neura, will highlight important Web3 features like improving the effectiveness of deploying and running AI models, encouraging user participation, and creating a positive network effect among these overlapping technologies.

Technology

Microsoft Introduces Phi-3 Mini, its Tiniest AI Model to date

The Phi-3 Mini, the first of three lightweight models from Microsoft, is the company’s smallest AI model to date.

Microsoft is exploring models trained on smaller-than-usual datasets as an increasing number of AI models enter the market. According to The Verge, Phi-3 Mini is now available on Hugging Face, Ollama, and Azure. It has 3.8 billion parameters – the learned weights that determine how a model responds. Two more models are planned for release: Phi-3 Small and Phi-3 Medium measure seven billion and 14 billion parameters, respectively. To put things into perspective, GPT-4 is estimated to contain more than a trillion parameters.
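These parameter counts translate directly into memory requirements. A minimal sketch of the storage needed just to hold the weights, using the sizes from the article; the precision choices (fp16 and 4-bit quantization) are illustrative assumptions:

```python
# Back-of-the-envelope memory needed to hold model weights:
# parameters * bits per parameter / 8 bytes. Parameter counts come from
# the article; the precisions shown are common illustrative choices.

def weights_gib(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight storage in GiB at a given precision."""
    return params_billion * 1e9 * bits_per_param / 8 / 2**30

models = {"Phi-3 Mini": 3.8, "Phi-3 Small": 7.0, "Phi-3 Medium": 14.0}

for name, size in models.items():
    print(f"{name}: ~{weights_gib(size, 16):.1f} GiB at fp16, "
          f"~{weights_gib(size, 4):.1f} GiB at 4-bit")
```

At 4-bit precision, Phi-3 Mini’s weights fit in about 2 GiB, which is what makes running such a model on a phone plausible at all.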

Released in December 2023, Microsoft’s Phi-2 model has 2.7 billion parameters and can achieve performance comparable to some larger models. According to the company, Phi-3 now performs better than its predecessor, providing responses comparable to those of models ten times its size.

Benefits of the Phi-3 Mini

Generally speaking, smaller AI models are less expensive to develop and operate. Because of their compact design, they work well on personal computers and phones, which facilitates their adaptation and mass market introduction.

Microsoft has a group devoted to creating more manageable AI models, each with a specific focus. For instance, as its name implies, Orca-Math is primarily concerned with solving math problems.

Other companies are focusing on this field as well. For example, Google has Gemma 2B and 7B, which are focused on language and chatbots; Anthropic has Claude 3 Haiku, which is meant to read and summarize long research papers (much like Microsoft’s Copilot); and Meta has Llama 3 8B, which is designed to help with coding.

Although smaller AI models are best suited for personal use, businesses may also find uses for them. Since internal business datasets are typically smaller, these models are ideal for internal deployment: they can be set up more quickly, cost less, and are easier to work with.
