OpenAI and Microsoft join forces to prevent state-linked cyberattacks

OpenAI, the company behind the AI chatbot ChatGPT, worked with Microsoft, its leading investor, to disrupt five cyberattacks. The attacks were linked to malicious groups tied to the militaries and governments of Russia, Iran, China, and North Korea.

A recent report by Microsoft found that these groups have been trying to use AI technology in their hacking efforts, specifically large language models (LLMs), AI systems that can generate responses mimicking human writing.

The cyberattacks were traced to five groups: two from China, named Charcoal Typhoon and Salmon Typhoon; one from Iran, called Crimson Sandstorm; one from North Korea, named Emerald Sleet; and one from Russia, known as Forest Blizzard.

These groups attempted to use ChatGPT for various malicious activities, such as researching companies and security tools, debugging code, creating harmful scripts, running phishing campaigns, translating technical documents, evading malware detection, and studying satellite and radar technology. OpenAI shut down their accounts as soon as these activities were detected.

After discovering these attacks, OpenAI also announced a policy banning state-backed hacking groups from using its AI products, while acknowledging that it cannot stop every harmful use of its technology.

Following a rise in AI-generated fraud, governments are paying closer attention to AI technology. In response, OpenAI launched a $1 million cybersecurity grant program in June 2023 to advance AI-powered cybersecurity.

Despite OpenAI's safeguards designed to keep ChatGPT from producing dangerous content, hackers have found ways to bypass these protections.

Over 200 organizations, including OpenAI, Microsoft, and others, recently joined forces with the U.S. government to create the AI Safety Institute and the United States AI Safety Institute Consortium (AISIC). This initiative, launched by President Joe Biden’s executive order in October 2023, seeks to ensure AI’s safe development and tackle issues like AI-generated fakes and cybersecurity threats.
