What The Strict AI Rule in The EU Means for ChatGPT and Research

The European Union is about to enact the world's first comprehensive set of regulations governing artificial intelligence (AI). The EU AI Act imposes the strictest requirements on the riskiest AI models, with the aim of ensuring that AI systems are safe, respect fundamental rights, and adhere to EU values.

Professor Rishi Bommasani of Stanford University in California, who studies the social effects of artificial intelligence, argues that the act “is enormously consequential, in terms of shaping how we think about AI regulation and setting a precedent.”

The law arrives as AI advances rapidly. New iterations of generative AI models, such as GPT, which powers ChatGPT and was developed by OpenAI in San Francisco, California, are expected to be released this year. Meanwhile, systems already in use are being exploited for fraudulent schemes and the spread of disinformation. China already governs commercial uses of AI through a patchwork of rules, and US regulation is in the works. Last October, President Joe Biden signed the first US executive order on AI, requiring federal agencies to take steps to manage the technology's risks.

The legislation, which the governments of the member states passed on February 2, must now be formally approved by the European Parliament, one of the EU's three legislative branches; that vote is expected in April. Policy watchers anticipate that the text will remain unchanged, in which case the law will take effect in 2026.

While some scientists applaud the act for its potential to encourage open science, others worry that it could stifle innovation. Nature examines how the law will affect research.

How is The EU Going About This?

The European Union (EU) has opted to govern AI models according to their potential danger. This entails imposing more stringent laws on riskier applications and establishing distinct regulations for general-purpose AI models like GPT, which have a wide range of unanticipated applications.

The act prohibits AI systems that pose "unacceptable risk", such as those that infer sensitive traits from biometric data. High-risk applications, such as the use of AI in hiring and law enforcement, must meet certain requirements: developers must show that their models are safe, transparent, and explainable to users, that they respect privacy, and that they do not discriminate. Developers of lower-risk AI tools will still need to inform users when they are interacting with AI-generated content. The law applies to models operating in the EU, and any firm that breaks the rules risks a fine of up to 7% of its annual global turnover.

“I think it’s a good approach,” says Dirk Hovy, a computer scientist at Bocconi University in Milan, Italy. AI has quickly become powerful and ubiquitous, he says. “Putting a framework up to guide its use and development makes absolute sense.”

The law does not go far enough, however, leaving "gaping" exemptions for national security and military purposes, as well as loopholes for the use of AI in law enforcement and migration, says Kilian Vieth-Ditlmann, a political scientist at AlgorithmWatch, a Berlin-based non-profit organization that monitors how automation affects society.

To What Extent Will Researchers Be Impacted?

Very little, in theory. Last year, the European Parliament amended the draft legislation to include a provision exempting AI models developed purely for research, development, or prototyping. The EU has worked hard to ensure that the act does not affect research negatively, says Joanna Bryson, a researcher at the Hertie School in Berlin who studies AI and regulation. "They truly don't want to stop innovation, so I'd be surprised if there are any issues."

According to Hovy, the act is still likely to have an impact since it will force academics to consider issues of transparency, model reporting, and potential biases. He believes that “it will filter down and foster good practice.”

Physician Robert Kaczmarczyk of the Technical University of Munich, Germany, is concerned that the law may hinder small businesses that drive research and may require them to set up internal procedures in order to comply with regulations. He is also co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit dedicated to democratizing machine learning. “It is very difficult for a small business to adapt,” he says.

What Does It Signify For Strong Models Like GPT?

Following a contentious debate, legislators decided to regulate powerful general-purpose models, including generative models that produce code, images, and video, in a two-tier category of their own.

All general-purpose models are covered by the first tier, except those used only for research or released under an open-source license. These will have to meet transparency requirements, including disclosing their training methods and energy consumption, and must show that they respect copyright law.

General-purpose models that are considered to have “high-impact capabilities” and a higher “systemic risk” will fall under the second, much tighter category. According to Bommasani, these models will be subject to “some pretty significant obligations,” such as thorough cybersecurity and safety inspections. It will be required of developers to disclose information about their data sources and architecture.

According to the EU, "big" essentially means "dangerous": a model is considered high impact if it took more than 10^25 FLOPs (floating-point operations) to train. That is a high bar, says Bommasani, because training a model with that much computing power would cost between US$50 million and $100 million. The threshold should capture models such as OpenAI's current model, GPT-4, and could also include future versions of LLaMA, Meta's open-source rival. Models used only for research remain exempt, but open-source models in this tier are covered by the rules.
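To get a feel for the 10^25-FLOP threshold, one can use a common back-of-envelope estimate: training compute ≈ 6 × parameters × training tokens. The sketch below applies that rule of thumb; the model sizes and token counts are illustrative assumptions, not figures from the act or from any vendor.

```python
# Rough check of the EU's 10^25-FLOP "high-impact" threshold, using the
# common estimate: training FLOPs ~ 6 * parameters * training tokens.
# All model sizes and token counts below are hypothetical examples.

EU_THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

def is_high_impact(params: float, tokens: float) -> bool:
    """True if the estimated training compute exceeds the EU threshold."""
    return training_flops(params, tokens) > EU_THRESHOLD_FLOPS

# A 70-billion-parameter model trained on 2 trillion tokens:
# 6 * 70e9 * 2e12 = 8.4e23 FLOPs, well below the threshold.
print(is_high_impact(70e9, 2e12))

# A hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
# 6 * 1e12 * 10e12 = 6e25 FLOPs, above the threshold.
print(is_high_impact(1e12, 10e12))
```

By this estimate, only a handful of frontier-scale training runs would cross the line, which is consistent with Bommasani's $50 million to $100 million cost figure.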

Some scientists would rather focus on how AI models are used than on regulating the models themselves. "Smarter and more capable does not mean more harm," says Jenia Jitsev, an AI researcher at the Jülich Supercomputing Center in Germany and another co-founder of LAION. There is no scientific basis for pegging regulation to any measure of capability, Jitsev argues, likening it to deeming dangerous any chemical whose production takes more than a certain number of person-hours. "This is how unproductive it is."

Will This Support AI That is Open-source?

Advocates of open-source software and EU politicians hope so. According to Hovy, the act encourages the replication, transparency, and availability of AI material, which is equivalent to “reading off the manifesto of the open-source movement.” According to Bommasani, there are models that are more open than others, and it’s still unknown how the act’s language will be understood. However, he believes that general-purpose models—like LLaMA-2 and those from the Paris start-up Mistral AI—are intended to be exempt by the legislators.

According to Bommasani, the EU’s plan for promoting open-source AI differs significantly from the US approach. “The EU argues that in order for the EU to compete with the US and China, open source will be essential.”

How Will The Act Be Put Into Effect?

The European Commission plans to create an AI Office, advised by independent experts, to oversee general-purpose models. The office will develop ways to evaluate these models' capabilities and to monitor related risks. But even if companies such as OpenAI comply with the rules and submit, for example, their enormous data sets, Jitsev wonders whether a public body will have the resources to scrutinize submissions adequately. "The demand to be transparent is very important," they say. But little thought was given to how those procedures must be carried out.


Biden, Kishida Secure Support from Amazon and Nvidia for $50 Million Joint AI Research Program


As the two countries seek to enhance cooperation around the rapidly advancing technology, President Joe Biden and Japanese Prime Minister Fumio Kishida have enlisted Amazon.com Inc. and Nvidia Corp. to fund a new joint artificial intelligence research program.

A senior US official, briefing reporters ahead of Wednesday's official visit to the White House, said the $50 million project will be a collaborative effort between the University of Tsukuba, outside Tokyo, and the University of Washington in Seattle. The two nations are also planning a separate collaborative AI research program between Carnegie Mellon University in Pittsburgh and Tokyo's Keio University.

The push for greater research into artificial intelligence comes as the Biden administration is weighing a series of new regulations designed to minimize the risks of AI technology, which has developed as a key focus for tech companies. The White House announced late last month that federal agencies have until the end of the year to determine how they will assess, test, and monitor the impact of government use of AI technology.

In addition to the university-led projects, Microsoft Corp. announced on Tuesday that it would invest $2.9 billion to expand its cloud computing and artificial intelligence infrastructure in Japan. Brad Smith, the president of Microsoft, met with Kishida on Tuesday. The company released a statement announcing its intention to establish a new AI and robotics lab in Japan.

Kishida, who leads Asia's second-largest economy, urged American business executives on Tuesday to invest more in Japan's developing technologies.

“Your investments will enable Japan’s economic growth — which will also be capital for more investments from Japan to the US,” Kishida said at a roundtable with business leaders in Washington.


OnePlus and OPPO Collaborate with Google to Introduce Gemini Models for Enhanced Smartphone AI


As anticipated, original equipment manufacturers (OEMs) are integrating AI heavily into their products. Google is working with OnePlus, OPPO, and other companies to bring its Gemini models to their smartphones. As announced at the Google Cloud Next '24 event, the companies plan to introduce Gemini models on their phones later in 2024, becoming the first OEMs to do so. The Gemini models are designed to give users an enhanced artificial intelligence (AI) experience on their devices.

Thanks to OnePlus and OPPO's generative AI models, customers in China can already create AI content on the go with devices such as the OnePlus 12 and OPPO Find X7.

The AI Eraser tool was recently made available to all OnePlus customers worldwide. This AI-powered tool lets users remove unwanted objects from their photos. For OnePlus and OPPO, AI Eraser is only the beginning.

In the future, the businesses hope to add more AI-powered features like creating original social media content and summarizing news stories and audio.

AI Eraser is powered by AndesGPT, the large language model from OnePlus and OPPO. Although the Samsung Galaxy S24 and Google Pixel 8 series already offer this feature, it is still encouraging to see OnePlus and OPPO taking the initiative to build AI capabilities into their products.

With the release of the Gemini models, OnePlus and OPPO devices will be able to offer customers a more comprehensive and sophisticated AI experience. It is worth remembering that OnePlus and OPPO phones already run the Trinity Engine, which keeps everyday use remarkably smooth, and use AI and computational mathematics to enhance mobile photography.

More OEMs are expected to ship AI capabilities on their products in 2024. This is likely to benefit Google, because manufacturers will use Gemini as the foundation on which to build their features.


Meta Explores AI-Enabled Search Bar on Instagram


Meta is pushing to expand the user base for its generative AI-powered products. In addition to testing its Meta AI chatbot with WhatsApp users in countries such as India, the company is experimenting with placing Meta AI in the Instagram search bar, for both AI chat and content discovery.

When you type a query into the search bar, Meta AI initiates a direct message (DM) exchange in which you can ask questions or respond to pre-programmed prompts. Aravind Srinivas, CEO of Perplexity AI, pointed out that the prompt screen’s design is similar to the startup’s search screen.

Plus, it might make it easier for you to find fresh Instagram content. As demonstrated in a user-posted video on Threads, you can search for Reels related to a particular topic by tapping on a prompt such as “Beautiful Maui sunset Reels.”

Additionally, TechCrunch spoke with a few users who had the ability to instruct Meta AI to look for recommendations for Reels.

By using generative AI to surface new content from networks like Instagram, Meta hopes to go beyond text generation.

Meta confirmed the Instagram AI test to TechCrunch, but the company did not say whether it uses generative AI technology for search.

A Meta representative told TechCrunch, “We’re testing a range of our generative AI-powered experiences publicly in a limited capacity. They are under development in varying phases.”

Complaints about Instagram's search quality are easy to find, so it is not surprising that Meta would want to improve search with generative AI.

Meta may also want Instagram content to be as discoverable as TikTok's. Google unveiled a "perspectives" feature last year to display results from Reddit and TikTok, and reverse engineer Alessandro Paluzzi reported earlier this week on X that Instagram is developing a feature called "Visibility off Instagram" that could allow posts to appear in search-engine results.
