What The Strict AI Rule in The EU Means for ChatGPT and Research


The nations of the European Union are about to enact the world's first comprehensive set of regulations governing artificial intelligence (AI). The EU AI Act imposes the strictest rules on the riskiest AI models, with the aim of ensuring that AI systems are safe, uphold fundamental rights, and adhere to EU values.

Rishi Bommasani, who studies the societal impact of artificial intelligence at Stanford University in California, says the act "is enormously consequential, in terms of shaping how we think about AI regulation and setting a precedent."

The law arrives as AI advances rapidly. New iterations of generative AI models, such as GPT, which powers ChatGPT and was developed by OpenAI in San Francisco, California, are expected to be released this year. Meanwhile, existing systems are already being exploited for scams and the spread of misinformation. China already governs commercial uses of AI through a patchwork of rules, and US regulation is in the works: in October last year, President Joe Biden signed the first US executive order on AI, requiring federal agencies to take steps to manage the technology's risks.

The legislation, which member-state governments approved on 2 February, must now be formally endorsed by the European Parliament, one of the EU's three legislative branches; this is expected to happen in April. If the text remains unchanged, as policy watchers anticipate, the law will take effect in 2026.

Some scientists applaud the act for its potential to promote open science, whereas others worry that it could stifle innovation. Nature examines how the law will affect research.

How is The EU Going About This?

The European Union (EU) has opted to regulate AI models according to their potential risk: stricter rules apply to riskier applications, and separate regulations cover general-purpose AI models, such as GPT, that have a wide range of unforeseen possible uses.

The law bans AI systems that pose "unacceptable risk", for example those that infer sensitive characteristics from biometric data. High-risk applications, such as the use of AI in hiring and law enforcement, must fulfil certain obligations; for instance, developers must show that their models are safe, transparent, and explainable to users, and that they respect privacy regulations and do not discriminate. Developers of lower-risk AI tools will still need to inform users when they are interacting with AI-generated content. The law applies to models operating in the EU, and any firm that violates the rules risks a fine of up to 7% of its annual global turnover.

“I think it’s a good approach,” says Dirk Hovy, a computer scientist at Bocconi University in Milan, Italy. AI has quickly become powerful and ubiquitous, he says. “Putting a framework up to guide its use and development makes absolute sense.”

Others think the law does not go far enough, leaving "gaping" exemptions for national-security and military purposes, as well as loopholes for the use of AI in law enforcement and migration, says Kilian Vieth-Ditlmann, a political scientist at AlgorithmWatch, a non-profit organization in Berlin that monitors how automation affects society.

To What Extent Will Researchers Be Impacted?

Very little, in principle. Last year, the European Parliament added a clause to the draft act exempting AI models developed purely for research, development, or prototyping. The EU has worked hard to ensure that the act does not harm research, says Joanna Bryson, who studies AI and its regulation at the Hertie School in Berlin. "They truly don't want to stop innovation, so I'd be surprised if there are any issues."

Even so, the act is likely to have an impact, says Hovy, because it will force academics to think about transparency, model reporting, and potential biases. "It will filter down and foster good practice," he says.

Robert Kaczmarczyk, a physician at the Technical University of Munich, Germany, and co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organization dedicated to democratizing machine learning, worries that the law could hamper the small companies that drive research, which might need to establish internal processes to comply with the rules. "It is very difficult for a small business to adapt," he says.

What Does It Signify For Strong Models Like GPT?

After contentious debate, legislators chose to regulate powerful general-purpose models, including generative models that create code, images, and video, in their own two-tier category.

The first tier covers all general-purpose models except those used solely for research or released under an open-source licence. These will have to meet transparency requirements, including disclosing their training processes and energy consumption, and must show that they respect copyright law.

The second, much stricter tier covers general-purpose models deemed to have "high-impact capabilities" and pose greater "systemic risk". These models will face "some pretty significant obligations", says Bommasani, including rigorous safety and cybersecurity checks. Developers will be required to disclose details of their architecture and data sources.

For the EU, "big" effectively equates to "dangerous": a model counts as high impact if it used more than 10²⁵ FLOPs (floating-point operations) of computing power in training. Training a model with that much computing power costs between US$50 million and $100 million, so it is a high bar, says Bommasani. It should capture models such as GPT-4, OpenAI's current model, and could include future versions of LLaMA, Meta's open-source rival. Open-source models in this tier are subject to regulation, although research-only ones remain exempt.
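
To make the threshold concrete, here is a rough, illustrative calculation using the common approximation that training a dense transformer takes about 6 × parameters × tokens FLOPs. The model sizes and token counts below are hypothetical assumptions for illustration, not figures from the act or from any disclosed training run.

```python
# Back-of-the-envelope check against the EU AI Act's 10^25-FLOP threshold.
# Uses the common ~6 * N * D approximation for dense-transformer training
# compute (N = parameter count, D = training tokens). Illustrative only.

EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the act's high-impact criterion

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical examples (not disclosed figures for any real model):
for name, n, d in [
    ("70B params, 2T tokens", 70e9, 2e12),     # ~8.4e23 FLOPs -> below threshold
    ("500B params, 10T tokens", 500e9, 10e12), # ~3.0e25 FLOPs -> above threshold
]:
    flops = training_flops(n, d)
    tier = "systemic risk" if flops > EU_SYSTEMIC_RISK_THRESHOLD else "first tier"
    print(f"{name}: {flops:.1e} FLOPs -> {tier}")
```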

Some scientists would rather focus on how AI models are used than on regulating the models themselves. "Smarter and more capable does not mean more harm," says Jenia Jitsev, an AI researcher at the Jülich Supercomputing Centre in Germany and another co-founder of LAION. There is no scientific basis for tying regulation to any measure of capability, Jitsev argues, likening it to declaring dangerous any chemistry whose development took more than a certain number of person-hours. "This is how unproductive it is."

Will This Support AI That is Open-source?

Open-source advocates and EU policymakers hope so. The act promotes making AI materials available, replicable, and transparent, which is almost like "reading off the manifesto of the open-source movement", says Hovy. Some models are more open than others, and it remains unclear how the act's language will be interpreted, says Bommasani. But he thinks legislators intend general-purpose models, such as LLaMA-2 and those from the Paris-based start-up Mistral AI, to be exempt.

According to Bommasani, the EU’s plan for promoting open-source AI differs significantly from the US approach. “The EU argues that in order for the EU to compete with the US and China, open source will be essential.”

How Will The Act Be Put Into Effect?

The European Commission plans to create an AI Office to oversee general-purpose models, advised by independent experts. The office will develop ways to evaluate the capabilities of these models and monitor related risks. But even if companies such as OpenAI comply with the rules and submit, for example, their enormous data sets, Jitsev questions whether a public body will have the resources to scrutinize submissions adequately. "The demand to be transparent is very important," they say, but little thought was given to how these procedures have to be carried out.


AI Features of the Google Pixel 8a Leaked before the Device’s Planned Release

Google is expected to unveil a new smartphone at its I/O conference on 14–15 May. The upcoming device, dubbed the Pixel 8a, will be a toned-down version of the Pixel 8. Although the phone has been spotted online frequently, the company has not yet officially announced it. Weeks before the much-anticipated release, a leaked promotional video is showcasing the Pixel 8a's AI features, and online leaks have also revealed details of its software support and exclusive features.

Tipster Steve Hemmerstoffer obtained a promotional video for the Pixel 8a, shared via MySmartPrice. The video demonstrates some of the Pixel-exclusive features the phone is expected to include. According to the video, the Pixel 8a will support Google's Best Take feature, which combines faces from multiple group or burst photos to "replace" faces with closed eyes or unwanted expressions.

The Pixel 8a will also support Circle to Search, a feature currently available on some Pixel and Samsung Galaxy smartphones. The leaked video further suggests that the phone will ship with Google's Audio Magic Eraser, an artificial intelligence (AI) tool for removing unwanted background noise from recorded videos, and that it will support live translation during voice calls.

According to the leaked teasers, the phone will offer "seven years of security updates" and run on the Tensor G3 chip. It is unclear, though, whether it will receive as many Android OS updates as the pricier Pixel 8 series phones that use the same processor. The company is expected to reveal more details about the device in the days before its planned 14 May launch.


Apple Unveils a new Artificial Intelligence Model Compatible with Laptops and Phones

All of the major tech companies except Apple have released generative AI models for commercial use. The company is nevertheless active in the area: on Wednesday, its researchers released OpenELM (Open-source Efficient Language Models), a family of four very compact language models, on the Hugging Face model hub. According to the company, OpenELM performs well at text-related tasks such as composing emails. The models are ready for development, and Apple has released them as open source.
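
As a rough sketch of what working with these releases might look like, the snippet below loads the smallest checkpoint with the Hugging Face transformers library. The model id, the pairing with a Llama-2 tokenizer, and the trust_remote_code flag reflect Apple's Hugging Face release as I understand it, so treat the details as assumptions rather than official usage.

```python
# Minimal sketch: run one of the small OpenELM checkpoints locally with
# Hugging Face transformers. Model and tokenizer ids are assumptions based
# on Apple's Hugging Face releases; OpenELM ships custom modelling code,
# hence trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M"            # smallest of the four models
tokenizer_id = "meta-llama/Llama-2-7b-hf"  # Llama-2 tokenizer, per Apple's examples

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# A small text task of the kind the release notes mention, e.g. email drafting.
inputs = tokenizer("Draft a short email about a schedule change:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```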

As noted, the models are extremely small compared with those of other tech giants such as Microsoft and Google: Apple's new models contain 270 million, 450 million, 1.1 billion, and 3 billion parameters, whereas Google's Gemma model has 2 billion and Microsoft's Phi-3 has 3.8 billion. The smallest versions can run on phones and laptops and need less power to operate.

In February, Apple CEO Tim Cook hinted at the impending arrival of generative AI features on Apple products, saying the company had been working on the project for a long time. No further details about those features are available, however.

Meanwhile, Apple has announced that it will hold an event to introduce a few new products this month. Media invitations to the "special Apple Event" on 7 May at 7 AM PT (7:30 PM IST) have already gone out. The invitation's image, which shows an Apple Pencil, suggests that the event will focus primarily on iPads.

Apple appears set to host the event entirely online, following in the footsteps of October's "Scary Fast" event: every invitation the company has sent out indicates that viewers will be able to watch online, and no invitations to an in-person event have been distributed.

This is not Apple's first AI model. The company previously released MGIE, an image-editing model that lets users edit photos using prompts.


Google Expands the Availability of AI Support with Gemini AI to Android 10 and 11

Google's Gemini AI, previously limited to Android 12 and above, is now compatible with Android 10 and 11. As noted by 9to5Google, the change greatly expands the pool of users who can take advantage of AI-powered assistance on their phones and tablets.

With a recent app update, Google has lowered Gemini's minimum requirement, making its AI features accessible to far more users. Previously, Gemini required Android 12 or later to run. Thanks to the updated Gemini app, version 1.0.626720042, available from the Google Play Store, the assistant can now be installed and used on Android 10 devices.

The expansion, which reflects Google's goal of making its AI technology more widely available, was first spotted by Sumanta Das on X and later highlighted by Artem Russakovskii. When Gemini launched earlier this year, it was compatible only with the most recent versions of Android; the latest update shows the company's commitment to broadening its AI user base.

Testers on Android 10 devices report that Gemini is fully operational after updating the Google app and Play Services. Tests on an Android 10 Google Pixel showed Gemini working seamlessly, with a user experience similar to that on newer devices.

The wider compatibility matters for users of older Android devices, who now have access to the same AI capabilities as those with newer models.

Android 10 and 11 users can now access Gemini and can expect regular updates and new features. The move marks a notable step in Google's AI roll-out and paves the way for further improvements in functionality and accessibility across the Android ecosystem.
