ESG Post

OpenAI reports increased misuse of AI for election disinformation

OpenAI reported that its artificial intelligence models have been exploited in several instances to generate fake content, including long-form articles and social media comments, with the intent to influence elections.

The startup noted that cybercriminals are increasingly leveraging AI tools, such as ChatGPT, for malicious purposes, including the creation and debugging of malware, as well as the generation of deceptive content for websites and social media platforms.

This year alone, OpenAI has thwarted over 20 such attempts. This includes a series of ChatGPT accounts identified in August that were used to write articles on subjects related to the U.S. elections. Additionally, in July, the company banned several accounts from Rwanda that were generating election-related comments for dissemination on the social media platform X.

OpenAI said that none of these attempts to sway global elections gained significant traction or built sustained audiences.

Since its last report on influence and cyber operations in May, OpenAI has “continued to develop new AI-powered tools that enable us to detect and analyze potentially harmful activities.” The company stated that while the investigative process still relies heavily on human judgment and expertise, these tools have significantly reduced some analytical steps from days to minutes.

Looking ahead, OpenAI plans to collaborate across its intelligence, investigations, security research, and policy teams to anticipate how malicious actors might exploit advanced models for harmful purposes and to devise appropriate enforcement strategies.

“We will continue to share our findings with our internal safety and security teams, communicate lessons to key stakeholders, and partner with industry peers and the broader research community to stay ahead of risks and enhance our collective safety and security,” the company added.

Concerns are mounting about the use of AI tools and social media to generate and spread fake content around elections, especially as the U.S. prepares for its presidential election. The U.S. Department of Homeland Security has warned of a growing threat from Russia, Iran, and China, which may seek to influence the November 5 elections by spreading false or divisive information created with AI.

Last week, OpenAI cemented its status as one of the world’s most valuable private companies following a $6.6 billion funding round; ChatGPT now counts 250 million weekly active users.