
What The Strict AI Rule in The EU Means for ChatGPT and Research


The nations that make up the European Union are about to enact the world’s first comprehensive set of regulations governing artificial intelligence (AI). The EU AI Act places its strictest rules on the riskiest AI models, with the aim of ensuring that AI systems are safe, respect fundamental rights, and adhere to EU values.

Professor Rishi Bommasani of Stanford University in California, who studies the social effects of artificial intelligence, argues that the act “is enormously consequential, in terms of shaping how we think about AI regulation and setting a precedent.”

The law is being passed as AI advances rapidly. New iterations of generative AI models, such as GPT, which was developed by OpenAI in San Francisco, California, and powers ChatGPT, are expected to be released this year. Meanwhile, systems that are already in place are being exploited for fraudulent schemes and the spread of disinformation. China already applies a patchwork of rules to commercial uses of AI, and US regulation is in the works. Last October, President Joe Biden signed the first US executive order on AI, requiring federal agencies to take steps to manage the risks the technology poses.

The legislation, which member-state governments approved on February 2, must now be formally endorsed by the European Parliament, one of the EU’s three legislative branches; that vote is expected in April. If the text remains unchanged, as policy watchers anticipate, the law will take effect in 2026.

While some scientists welcome the act for its potential to encourage open science, others worry that it could stifle innovation. Nature examines how the law will affect research.

How is The EU Going About This?

The European Union has opted to regulate AI models according to the risk they pose. This means applying stricter rules to riskier applications and setting out separate regulations for general-purpose AI models such as GPT, which have a broad range of unforeseen uses.

The act prohibits AI systems that carry “unacceptable risk”, such as those that infer sensitive traits from biometric data. High-risk applications, such as using AI in hiring or law enforcement, must meet certain obligations; for example, developers must show that their models are safe, transparent, and explainable to users, that they respect privacy, and that they do not discriminate. Developers of lower-risk AI tools will still need to inform users when they are interacting with AI-generated content. The law applies to models operating in the EU, and any firm that violates the rules risks a fine of up to 7% of its annual global turnover.

“I think it’s a good approach,” says Dirk Hovy, a computer scientist at Bocconi University in Milan, Italy. AI has quickly become powerful and ubiquitous, he says. “Putting a framework up to guide its use and development makes absolute sense.”

Others believe the law does not go far enough, leaving “gaping” exemptions for military and national-security purposes, as well as loopholes for the use of AI in law enforcement and immigration, says Kilian Vieth-Ditlmann, a political scientist at AlgorithmWatch, a Berlin-based non-profit organization that monitors how automation affects society.

To What Extent Will Researchers Be Impacted?

Very little, in theory. Last year, the European Parliament amended the draft legislation to add a clause exempting AI models developed purely for research, development, or prototyping. The EU has worked hard to ensure that the act does not negatively affect research, says Joanna Bryson, who studies AI and its regulation at the Hertie School in Berlin. “They truly don’t want to stop innovation, so I’d be surprised if there will be any issues.”

According to Hovy, the act is still likely to have an impact since it will force academics to consider issues of transparency, model reporting, and potential biases. He believes that “it will filter down and foster good practice.”

Physician Robert Kaczmarczyk of the Technical University of Munich, Germany, worries that the law could hamper the small companies that drive research, which might need to establish internal processes to comply with the regulations. He is also a co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organization dedicated to democratizing machine learning. “It is very difficult for a small business to adapt,” he says.

What Does It Signify For Strong Models Like GPT?

Following a contentious debate, legislators decided to regulate powerful general-purpose models, including generative models that produce code, images, and videos, placing them in a two-tier category of their own.

The first tier covers all general-purpose models except those used solely for research or released under an open-source license. These will have to meet transparency requirements, including disclosing their training methods and energy consumption, and must show that they respect copyright law.

General-purpose models that are considered to have “high-impact capabilities” and a higher “systemic risk” will fall under the second, much tighter category. According to Bommasani, these models will be subject to “some pretty significant obligations,” such as thorough cybersecurity and safety inspections. It will be required of developers to disclose information about their data sources and architecture.

According to the EU, “big” essentially means “dangerous”: a model is considered high impact if it takes more than 10²⁵ FLOPs (floating-point operations) of computing power to train. That is a high bar, says Bommasani, because training a model with that much computing power would cost between US$50 million and $100 million. The threshold should capture models such as OpenAI’s current model, GPT-4, and could also include future versions of LLaMA, Meta’s open-source rival. Open-source models in this tier are subject to regulation, although those used only for research are exempt.
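To get a sense of the 10²⁵-FLOP threshold, a common rule of thumb estimates transformer training compute as roughly 6 × parameters × training tokens. The sketch below applies that heuristic to two entirely hypothetical model sizes and checks them against the act’s cut-off; the 6ND approximation and the example figures are assumptions for illustration and are not part of the act itself.

```python
# Rough check of whether a training run crosses the EU AI Act's 10^25 FLOP
# threshold, using the common ~6 * parameters * tokens approximation for
# transformer training compute. Model sizes below are hypothetical examples.

THRESHOLD_FLOPS = 1e25  # "high-impact" cut-off named in the act


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute as 6 * N * D (forward + backward passes)."""
    return 6 * n_parameters * n_training_tokens


hypothetical_models = {
    "mid-sized open model": (7e9, 2e12),      # 7 billion parameters, 2 trillion tokens
    "frontier-scale model": (1.5e12, 15e12),  # 1.5 trillion parameters, 15 trillion tokens
}

for name, (params, tokens) in hypothetical_models.items():
    flops = estimate_training_flops(params, tokens)
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs, {status} the 10^25 threshold")
```

By this estimate, only training runs on the order of trillions of parameters and tens of trillions of tokens would clear the bar, which is consistent with the tens of millions of dollars in training cost mentioned above.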

Some scientists would rather focus on how AI models are used than on regulating the models themselves. “Smarter and more capable does not mean more harm,” says Jenia Jitsev, an AI researcher at the Jülich Supercomputing Center in Germany and another co-founder of LAION. According to Jitsev, there is no scientific basis for tying regulation to any measure of capability; they liken it to declaring dangerous any chemistry that takes more than a certain number of person-hours. “This is how unproductive it is.”

Will This Support AI That is Open-source?

Advocates of open-source software and EU politicians hope so. The act encourages making AI material available, replicable, and transparent, which Hovy says is almost like “reading off the manifesto of the open-source movement.” Some models are more open than others, and it remains unclear how the act’s language will be interpreted, says Bommasani. But he thinks legislators intend general-purpose models such as LLaMA-2 and those from the Paris start-up Mistral AI to be exempt.

According to Bommasani, the EU’s plan for promoting open-source AI differs significantly from the US approach. “The EU argues that in order for the EU to compete with the US and China, open source will be essential.”

How Will The Act Be Put Into Effect?

The European Commission plans to create an AI Office to oversee general-purpose models, advised by independent experts. The office will develop ways to evaluate the capabilities of these models and to monitor related risks. But even if companies such as OpenAI comply with the rules and submit, for example, their enormous data sets, Jitsev questions whether a public body will have the resources to scrutinize the submissions adequately. “The demand to be transparent is very important,” they say, but little thought was given to how these procedures would have to be carried out.



Google’s Isomorphic Labs Unveils AlphaFold 3, AI that Predicts Structures of Life’s Molecules


Google DeepMind and its sister company Isomorphic Labs have created a new artificial intelligence model that is purportedly more accurate than existing methods at predicting the configurations and interactions of all of life’s molecules.

The AlphaFold 3 system, according to co-founder of DeepMind Demis Hassabis, “can predict the structures and interactions of nearly all of life’s molecules with state-of-the-art accuracy including proteins, DNA, and RNA.”

Protein interactions are essential for drug discovery and development. Examples of these interactions include those between enzymes that are essential for human metabolism and antibodies that fight infectious illnesses.

The findings, published on May 8 in the academic journal Nature, could drastically cut the time and expense needed to create medicines with the potential to save lives, DeepMind said.

“We can design a molecule that will bind to a specific place on a protein, and we can predict how strongly it will bind,” Hassabis stated in a press release describing these new capabilities.

AlphaFold had already revolutionized research by making it much more straightforward to predict proteins’ 3D structures. Before AlphaFold 3’s improvements, however, the system could not forecast cases in which a protein binds to another molecule.

Although the tool is limited to non-commercial use, scientists are reportedly excited about its increased predictive power and its potential to speed up the drug discovery process.

“AlphaFold 3 allows us to generate very precise structural predictions in a matter of seconds,” according to a statement released by Isomorphic Labs on X.

“This discovery opens up exciting possibilities for drug discovery, allowing us to rationally develop therapeutics against targets that were previously difficult or deemed intractable to modulate,” the blog post continued.

The AlphaFold Server Login Process

The AlphaFold Server, a recently released research tool, will be available to scientists for free, according to a statement made by Google DeepMind and Isomorphic Labs.

Isomorphic Labs is reportedly collaborating with pharmaceutical companies to harness the potential of AlphaFold 3 in drug design, with the goal of tackling real-world drug design problems and, ultimately, creating novel, game-changing medicines for patients.

Since 2021, a database containing more than 200 million protein structures has made AlphaFold’s predictions freely available to non-commercial researchers. In academic works, this resource has been mentioned thousands of times.

According to DeepMind, researchers may now conduct experiments with just a few clicks thanks to the new server’s simplified workflow.

AlphaFold Server’s web interface accepts input for a variety of biological molecule types, for example as a FASTA file. After processing the job, the AI model displays a 3D view of the predicted structure.
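For readers unfamiliar with the format, the short Python sketch below writes a minimal FASTA-style file containing one made-up protein sequence and one made-up DNA sequence. The sequences and file name are hypothetical, and the exact input schema AlphaFold Server accepts is an assumption here; the sketch only illustrates the general FASTA layout of header lines and sequences.

```python
# Minimal sketch: writing a FASTA-style input file for a structure-prediction
# job. The sequences are made up, and the exact fields AlphaFold Server
# expects are an assumption based on the article's description.

records = {
    "example_protein": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",  # hypothetical amino-acid sequence
    "example_dna": "ATGGCGTACTGATCGATCGTAGCTAGCTAACGT",      # hypothetical nucleotide sequence
}

with open("job_input.fasta", "w") as handle:
    for name, sequence in records.items():
        handle.write(f">{name}\n")     # FASTA header line
        handle.write(f"{sequence}\n")  # one sequence per record

print(f"Wrote {len(records)} records to job_input.fasta")
```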



Phone.com Launches AI-Connect, a Cutting-Edge Conversational AI Service


Phone.com today unveiled AI-Connect, a revolutionary conversational voice artificial intelligence (AI) service. The newest development in Phone.com’s commercial phone system, AI-Connect offers callers and businesses a smooth and effective contact experience.

AI-Connect is specifically designed to handle inbound leads and schedule appointments without the clumsiness of cookie-cutter call routing or the expense of a contact center. This is ideal for small and micro businesses that need to take advantage of every opportunity to convert interest into sales but lack the luxury of an administrative team or a call center to handle the influx of prospects or sales calls.

Because it is built to hold genuine, free-flowing conversations with callers, AI-Connect can effectively manage duties such as call routing, schedule management, and answering frequently asked questions. These capabilities are enabled by modern automatic speech recognition (ASR), large language model (LLM), text-to-speech (TTS), natural language understanding (NLU), and natural language processing (NLP) technologies.

The real differentiator of AI-Connect is its capacity for goal-oriented, conversational communication. The company’s creative use of an LLM in conjunction with a hybrid NLU/NLP infrastructure provides excellent intent recognition. Also notable is how the new service leverages machine learning to deliver customized suggestions and detailed call metrics for every engagement.
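As an illustration of how such a pipeline fits together, here is a purely hypothetical Python sketch of a speech-to-intent-to-response loop. None of the function names correspond to Phone.com’s actual APIs; the transcription, intent-recognition, and reply steps are stand-in placeholders for the ASR, NLU/LLM, and TTS components described above.

```python
# Purely illustrative sketch of an ASR -> intent detection -> response loop,
# of the kind the article describes for AI-Connect. Every function here is a
# hypothetical placeholder, not Phone.com's actual API.

from dataclasses import dataclass


@dataclass
class Turn:
    transcript: str
    intent: str
    reply: str


def transcribe(audio_chunk: bytes) -> str:
    """Placeholder for an automatic speech recognition (ASR) step."""
    return "I'd like to reschedule my appointment for Friday"


def detect_intent(transcript: str) -> str:
    """Placeholder for hybrid NLU/LLM intent recognition."""
    text = transcript.lower()
    if "reschedule" in text:
        return "reschedule_appointment"
    if "cancel" in text:
        return "cancel_appointment"
    return "route_to_human"


def generate_reply(intent: str) -> str:
    """Placeholder for goal-oriented response generation (e.g. via an LLM)."""
    replies = {
        "reschedule_appointment": "Sure, which day works best for you?",
        "cancel_appointment": "I can cancel that. Can you confirm the date?",
        "route_to_human": "Let me connect you with the right person.",
    }
    return replies[intent]


def handle_turn(audio_chunk: bytes) -> Turn:
    transcript = transcribe(audio_chunk)
    intent = detect_intent(transcript)
    reply = generate_reply(intent)  # a real system would then synthesize speech (TTS)
    return Turn(transcript, intent, reply)


print(handle_turn(b"...fake audio bytes..."))
```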

Phone.com CEO and Co-Founder Ari Rabban stated, “AI-Connect is much more than just a service or new iteration of AI-enabled CX; it’s a strategic game-changer that strips away the burden of expensive, complicated technology designed for small businesses.” “AI-Connect, a component of our UCaaS platform, dismantles conventional barriers and gives companies of all sizes access to a realm of efficiency and expertise that would normally require significant time and investment.”

A professional voice greets customers and provides them with a number of easy options when they initiate a call to an AI-Connect script. AI-Connect guarantees that Phone.com customers maximize every engagement, regardless of their availability to answer, from easily arranging, rescheduling, or canceling appointments to smoothly connecting with a specific contact or department.

AI-Connect effectively filters out spam and other undesirable calls by utilizing sophisticated call screening capabilities, saving both business owners and callers important time.

The discussion between callers and AI-Connect is facilitated by sophisticated conversational design, which also optimizes call flow and delivers real-time responses that are most effective. Businesses may easily modify and implement AI-Connect to meet their specific needs thanks to the intuitive user interface (UI).

“We look forward to embarking on the next chapter of communications with great anticipation, as innovation is in our DNA,” said Alon Cohen, the acclaimed Chief Technology Officer of Phone.com, whose engineering prowess produced the first-ever VoIP call. Twenty years ago, the FCC’s Pulver Order, which removed certain IP-based communication services from conventional regulatory restrictions, ushered in a new age; now, Cohen says, the company is in a position to explore the transformational potential of AI-assisted interactions. “Our commitment to transforming communication is reaffirmed as we embark on a journey towards a future characterized by intelligent solutions.”

Phone.com is celebrating 15 years of consecutive year-over-year growth, driven by a strong clientele that includes more than 50,000 enterprises and an impressive increase in market share. Supported by an unwavering dedication to providing state-of-the-art services and technology at reasonable costs, the company’s approach works well for enterprises of all sizes, accelerating its trajectory of steady expansion.



Biosense Webster Unveils AI-Driven Heart Mapping Technology


Today, Biosense Webster, a division of Johnson & Johnson MedTech, announced the release of the most recent iteration of its Carto 3 cardiac mapping system.

Carto 3 Version 8 provides three-dimensional heart mapping for cardiac ablation procedures. Biosense Webster integrates it with technologies such as the FDA-reviewed Varipulse pulsed field ablation (PFA) system.

Carto Elevate and CartoSound FAM are two new modules that Biosense Webster has added to the software, designed to improve accuracy, efficiency, and repeatability in catheter ablation procedures for arrhythmias such as AFib.

CartoSound FAM is, according to Biosense Webster, the first application of artificial intelligence in intracardiac ultrasound. The algorithm automatically generates the left atrial anatomy before the catheter is inserted into the left atrium, which the company says saves time and produces a highly accurate map. Using deep-learning technology, the module generates 3D shells automatically.

New features of the Carto Elevate module include multipolar capabilities with the Optrell mapping catheter, which greatly reduce far-field potentials and produce a more precise activation map from localized unipolar signals. Elevate’s complex-signal identification picks out crucial areas of interest effectively and consistently, an improved Confidense module generates optimal maps, and pattern acquisition automatically monitors arrhythmia burden before and after ablation.

Jasmina Brooks, president of Biosense Webster, stated, “We are happy to announce this new version of our Carto 3 system, which reflects our continued focus on harnessing the latest science and technology to advance tools for electrophysiologists to treat cardiac arrhythmias.” For over a decade, the Carto 3 system has served as the mainstay of catheter ablation procedures, assisting electrophysiologists in their decision-making regarding patient care. With the use of ultrasound technology, better substrate characterization, and improved signal analysis, this new version improves the mapping and ablation experience of Carto 3.
