

Apple reportedly planning 13-inch MacBook Air and iPad Pros with OLED displays



It appears “increasingly likely” that Apple will launch a new 13-inch MacBook with an OLED display in 2024, according to display industry analyst Ross Young. In a tweet shared with his Super Followers today, Young said the notebook is expected to be a new MacBook Air, but he noted it is possible it will carry other branding.

Young, who has accurately revealed a range of display-related details for the iPhone 13 Pro, iPad mini, MacBook Pro, and other devices, also expects Apple to release new 11-inch and 12.9-inch iPad Pro models with OLED displays in 2024.

In another tweet shared with his Super Followers, Young said the OLED displays in all three new devices will adopt a two-stack tandem structure, in which there are two red, green, and blue emission layers, allowing for increased brightness and lower power consumption. OLED displays also require no backlighting, for additional power efficiency.

Young said all of the devices will adopt LTPO display technology for a variable refresh rate between 1Hz and 120Hz, a feature that Apple calls ProMotion. All iPad Pro models released since 2017 already feature ProMotion, although their refresh rate can drop only as low as 24Hz, while ProMotion would be entirely new to the MacBook Air.

Apple is currently focused on transitioning its Mac and iPad lines to LCD displays with mini-LED backlighting, and OLED displays would be the next step. Unlike mini-LED displays, OLED panels use self-emitting pixels and do not require backlighting, which could improve contrast ratio and contribute to longer battery life.


A Microsoft security executive calls generative AI a “super power” for the industry




Vasu Jakkal, a security executive at Microsoft, stated that generative artificial intelligence is crucial to the company’s cybersecurity business in an interview with Jim Cramer on Monday.

“We have the super power of generative AI, which is helping us defend at machine speed and scale, especially given the cybersecurity talent shortage,” she said. “We also have to make sure that we leverage AI for real good, because it has this power to elevate the human potential, and it’s going to help us solve the most serious of challenges.”

According to Jakkal, the threat landscape is “unprecedented,” with cybercriminals evolving into more skilled operators. She stated, for instance, that Microsoft receives 4,000 password attacks every second. She identified two categories of cybersecurity threats: financial cybercrime and geopolitical espionage. According to her, Microsoft can use data to train its AI models to recognize these threats.

Jakkal said fighting cybercriminals also requires cooperation across the entire cybersecurity ecosystem. Microsoft, she said, has partnerships with 15,000 businesses and organizations, and 300 security vendors are building on the company’s platforms.

“We need deep collaboration and deep partnerships because the bad actors work together,” Jakkal said. “No one company can do this without others.”

Microsoft’s fast-growing security business is currently valued at over $20 billion. On Monday, the company’s stock closed at a record high of $378.61.



Gen AI without the dangers




It’s understandable that generative AI tools like ChatGPT, Stable Diffusion, and DreamStudio are making headlines. The results are striking and improving geometrically. Intelligent assistants are already revolutionizing search and data analysis, along with code creation, network security, and article writing.

Gen AI will play a critical role in how businesses run and provide IT services, as well as how business users complete their tasks. The opportunities are countless, but so are the risks. Successful AI development and implementation can be a costly and risky process. Furthermore, the workloads associated with Gen AI and the large language models (LLMs) that drive it are extremely compute- and energy-intensive. Dr. Sajjad Moazeni of the University of Washington estimates that training an LLM with 175 billion or more parameters takes as much energy as 1,000 US households use in a year, though exact figures are unknown. Answering the more than 100 million generative AI queries posed each day consumes about one gigawatt-hour of electricity, roughly the daily energy use of 33,000 US households.
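The household comparison above can be sanity-checked with simple arithmetic. A minimal sketch, assuming an average US household draws about 30 kWh per day (that per-household figure is an assumption, not from the article):

```python
# Sanity check of the energy comparison above.
# Assumption: an average US household uses about 30 kWh per day.
GWH_PER_DAY = 1.0                     # reported usage for ~100M queries/day
KWH_PER_HOUSEHOLD_PER_DAY = 30.0      # assumed average US household draw

total_kwh = GWH_PER_DAY * 1_000_000   # 1 GWh = 1,000,000 kWh
households = total_kwh / KWH_PER_HOUSEHOLD_PER_DAY
print(round(households))              # on the order of 33,000 households
```

The result lands close to the article’s 33,000-household figure, which suggests that is the rough assumption behind the comparison.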

How even hyperscalers can afford that much electricity is beyond me. It’s certainly too expensive for the typical business. How can CIOs deliver reliable, accurate AI without incurring the energy costs and environmental impact of a small city?

Six pointers for implementing Gen AI economically and with less risk

Retraining generative AI to perform specific tasks is essential to its applicability in business settings. Expert models produced by retraining are smaller and more accurate, and they require less processing power. So does every business need a specialized AI development team and a supercomputer to train its own AI models? Not at all.

Here are six strategies to create and implement AI without spending a lot of money on expensive hardware or highly skilled personnel.

Start with a foundation model rather than reinventing the wheel

A company might spend money creating custom models for its own use cases. But the expenditure on data scientists, HPC specialists, and supercomputing infrastructure is out of reach for all but the biggest government organizations, businesses, and hyperscalers.

Rather, begin with a foundation model that features a robust application portfolio and an active developer ecosystem. You could use an open-source model like Meta’s Llama 2, or a proprietary model like OpenAI’s ChatGPT. Hugging Face and other communities provide a vast array of open-source models and applications.

Align the model with the intended use

Models can be broadly applicable and computationally demanding, such as GPT, or more narrowly focused, like Med-BERT (an open-source LLM for medical literature). Choosing the appropriate model early in the project can shorten the time to a viable prototype and avoid months of training.

However, exercise caution. Any model may inherit biases from its training data, and generative AI models can fabricate responses outright. For optimal trustworthiness, seek models trained on clean, transparent data with well-defined governance and explainable decision-making.

Retrain to produce more accurate, smaller models

Retraining foundation models on specific datasets offers several advantages. As the model becomes more accurate in a narrower domain, it sheds parameters it doesn’t need for the application. Retraining an LLM on financial data, for example, trades a general skill like songwriting for the ability to help a customer with a mortgage application.

With its more compact design, the new banking assistant would still provide superb, highly accurate service while running on standard (existing) hardware.
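As a toy illustration of the retraining idea, the sketch below fine-tunes a one-parameter linear model, standing in for an LLM, from a “general” starting weight toward invented domain-specific data. Everything here (the data, weight, and learning rate) is made up for illustration; real LLM fine-tuning works on billions of parameters, but the specialize-by-retraining loop has the same shape.

```python
# Toy illustration: "fine-tuning" a one-parameter model y = w * x
# on a narrow domain. The pretrained weight and data are invented.

def mse(w, data):
    """Mean squared error of y = w * x over (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Domain-specific data generated by y = 2x (the "narrow task").
domain_data = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.5    # "pretrained" general-purpose weight
lr = 0.01  # learning rate

loss_before = mse(w, domain_data)
for _ in range(200):  # a short retraining run
    grad = sum(2 * (w * x - y) * x for x, y in domain_data) / len(domain_data)
    w -= lr * grad
loss_after = mse(w, domain_data)

print(loss_after < loss_before)  # retraining reduces domain error
```

After 200 gradient steps the weight converges to the domain’s true value of 2.0, and the domain error drops essentially to zero.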

Make use of your current infrastructure

A supercomputer with 10,000 GPUs is too big for most businesses to set up. Fortunately, most practical AI training, retraining, and inference can be done without large GPU arrays.

  • Training models up to 10 billion parameters: contemporary CPUs with integrated AI acceleration can handle training loads in this range at competitive price/performance points. For better performance and lower costs, train overnight during periods of low data-center demand.
  • Retraining models up to 10 billion parameters: modern CPUs can retrain models of this size in minutes, with no GPU needed.
  • Inference for models from millions to under 20 billion parameters: smaller models can run on standalone edge devices with integrated CPUs. For models with fewer than 20 billion parameters, such as Llama 2, CPUs can respond as quickly and accurately as GPUs.
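A quick back-of-the-envelope shows why sub-20-billion-parameter models are plausible on CPU-class hardware: weight memory scales linearly with parameter count and bytes per weight. The byte sizes below are standard for FP32 and INT8; activations, KV cache, and runtime overhead are deliberately ignored, so treat these as floor estimates.

```python
# Rough weight-memory estimate for an LLM at different precisions.
# Activations, KV cache, and runtime overhead are ignored here.

def weight_gib(params_billion, bytes_per_weight):
    """Approximate model weight size in GiB."""
    return params_billion * 1e9 * bytes_per_weight / 2**30

for params in (7, 13, 20):
    fp32 = weight_gib(params, 4)  # 32-bit floats
    int8 = weight_gib(params, 1)  # 8-bit integers
    print(f"{params}B params: {fp32:.1f} GiB (FP32) vs {int8:.1f} GiB (INT8)")
```

At INT8, even a 13B-parameter model’s weights fit in roughly 12 GiB, which is within reach of commodity server (and some desktop) RAM.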

Execute inference with consideration for hardware

Applications for inference can be fine-tuned and optimized for improved performance on particular hardware configurations and features. Similar to training a model, optimizing one for a given application means striking a balance between processing efficiency, model size, and accuracy.

One way to quadruple inference speed while maintaining accuracy is to round a 32-bit floating-point model down to the nearest 8-bit fixed integers (INT8). Tools such as the Intel® Distribution of OpenVINO™ toolkit manage the optimization and build hardware-aware inference engines that exploit host accelerators such as integrated GPUs, Intel® Advanced Matrix Extensions (Intel® AMX), and Intel® Advanced Vector Extensions 512 (Intel® AVX-512).
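The rounding-to-INT8 idea can be illustrated without any toolkit: scale floats into the signed 8-bit range, round, then dequantize and measure the error. This is a bare-bones sketch of symmetric post-training quantization with invented example weights, not the OpenVINO implementation, which adds calibration and per-channel scaling on top of the same principle.

```python
# Minimal symmetric INT8 quantization of a float32-style weight list.

weights = [0.82, -1.5, 0.03, 2.4, -0.7]      # example FP32 weights

scale = max(abs(w) for w in weights) / 127   # map largest magnitude to 127

def quantize(w):
    """Round a float weight to the nearest value in the INT8 range."""
    return max(-128, min(127, round(w / scale)))

q = [quantize(w) for w in weights]           # INT8 representation
dequant = [v * scale for v in q]             # reconstruct approximate floats

max_err = max(abs(a - b) for a, b in zip(weights, dequant))
print(q)
print(f"max round-trip error: {max_err:.4f}")  # bounded by scale / 2
```

Each weight now occupies one byte instead of four, and the worst-case reconstruction error stays within half a quantization step, which is why accuracy typically survives the conversion.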

Monitor cloud utilization

Offering AI services through cloud-based AI applications and APIs is a quick, dependable, and scalable route. Customers and business users alike benefit from always-on AI from a service provider, but costs can rise suddenly: if your AI service is popular, everyone will use it.

Many businesses that began their AI journeys entirely in the cloud are bringing workloads back to on-premises and co-located infrastructure where they can run well. For cloud-native enterprises with little or no on-premises infrastructure, pay-as-you-go infrastructure-as-a-service is becoming a competitive alternative to rising cloud costs.

You have choices when it comes to Gen AI. The hype and mystery surrounding generative AI give the impression of a cutting-edge technology accessible only to the wealthiest companies. In reality, hundreds of high-performance models, including LLMs for generative AI, run accurately and performantly on typical CPU-based data centers or cloud instances. Enterprise-grade tools for generative AI experimentation, prototyping, and deployment are maturing rapidly in both open-source and proprietary communities.

By using all of the resources available to them, astute CIOs can deploy business-transforming AI without the costs and risks of in-house development.



Senior Google executive says AI legal framework must foster innovation




As the European Union rushes to agree on AI rules next month, Google’s chief legal officer Kent Walker said on Tuesday that regulations governing the use of AI should foster innovation. Walker’s remarks echoed those of a broad range of businesses and tech groups.

In an effort to reach a consensus by December 6, EU nations and legislators are currently ironing out the last details of a draft proposal by the European Commission.

A major sticking point is foundation models, like the one behind OpenAI’s ChatGPT: AI systems trained on massive amounts of data and able to learn from fresh data to accomplish a range of tasks.

Walker said that rather than aiming for the first AI regulations, Europe should pursue the best ones.

“Technological leadership requires a balance between innovation and regulation. Not micromanaging progress, but holding actors responsible when they violate public trust,” he said in the text of a speech to be delivered at a European Business Summit.

“We’ve long said that AI is too important not to regulate, and too important not to regulate well. The race should be for the best AI regulations, not the first AI regulations.”

He called for proportionate, risk-based rules that build on current regulations, give businesses the confidence to continue investing in AI innovation, and make hard trade-offs between security and openness, data access and privacy, and explainability and accuracy.

Last week, the business group DigitalEurope and 32 European digital associations warned the EU not to overregulate foundation models.


