AI Is Being Used for Hacking by North Korea and Iran, According to Microsoft

Microsoft said on Wednesday that generative artificial intelligence is being used to launch or coordinate offensive cyber operations, chiefly by Iran and North Korea and, to a lesser extent, by Russia and China.

Microsoft said it worked with its business partner OpenAI to identify and neutralize numerous attacks that tried to take advantage of the AI technology the companies had created.

The company stated in a blog post that although the techniques were “early-stage” and neither “particularly novel nor unique,” it was still important to expose them publicly, given that US adversaries were using large-language models to expand their ability to breach networks and conduct influence operations.

Cybersecurity companies have long used machine learning for defense, chiefly to detect anomalous activity in networks. But its use by malicious actors and offensive hackers has grown since the introduction of large-language models such as OpenAI’s ChatGPT.

Microsoft, which has invested billions of dollars in OpenAI, also released a report on Wednesday noting that generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. That poses a threat to democracy in a year when more than 50 nations are holding elections, amplifying disinformation that is already circulating.

Microsoft offered a few examples. In each case, it said, all generative AI accounts and assets of the named groups had been disabled:

The models have been utilized by Kimsuky, a North Korean cyber-espionage group, to gather information from international think tanks that cover the nation and to produce content suitable for spear-phishing hacking operations.

Iran’s Revolutionary Guard has used large-language models to assist with social engineering, to troubleshoot software errors, and even to study how intruders might evade detection in a compromised network. That included drafting phishing emails, “one of which purports to be from an international development agency and another of which aims to entice well-known feminists to visit a feminism website created by the attacker.” The AI helps accelerate and improve the production of the emails.

The models have been used by the Russian GRU military intelligence agency, also known as Fancy Bear, to research radar and satellite technologies that may relate to the war in Ukraine.

Targeting a wide spectrum of sectors, universities, and governments from France to Malaysia, the Chinese cyber-espionage outfit Aquatic Panda has communicated with the models “in ways that suggest a limited exploration of how LLMs can augment their technical operations.”

Maverick Panda, a Chinese organization that has been pursuing US defense contractors and other related industries for over ten years, seems to be assessing large-language models’ usefulness as a source of information regarding “potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”

In a separate blog post published on Wednesday, OpenAI said its current GPT-4 chatbot offers “only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”

Cybersecurity Researchers Expect That to Change

“There are two epoch-defining threats and challenges,” Jen Easterly, the director of the US Cybersecurity and Infrastructure Security Agency, told Congress in April of last year. One is China; the other is artificial intelligence.

Easterly said at the time that the US needed to ensure AI was developed with security in mind.

The public release of ChatGPT in November 2022, along with subsequent releases by rivals such as Google and Meta, has drawn criticism for being recklessly hasty, with security treated largely as an afterthought during development.

Amit Yoran, CEO of the cybersecurity company Tenable, said, “Of course bad actors are using large-language models—that decision was made when Pandora’s Box was opened.”

Some cybersecurity professionals have complained that Microsoft would be more responsible if it focused on making large-language models more secure, rather than developing and selling defensive tools to address flaws in them.

“Why not create more secure black-box LLM foundation models instead of selling defensive tools for a problem they are helping to create?” asked Gary McGraw, a computer security veteran and co-founder of the Berryville Institute of Machine Learning.

Edward Amoroso, an NYU professor and former AT&T chief security officer, said that while AI and large-language models may not pose an obvious threat right away, they “will eventually become one of the most powerful weapons in every nation-state military’s offense.”
