
OpenAI Proposes Government Restrict AI Chips to Prevent Propaganda Explosion

As generative language models become more accessible, easier to scale, and capable of producing more persuasive text, they could be used to spread disinformation.

Researchers from OpenAI, Stanford University, and Georgetown University are warning that large language models like the kind behind ChatGPT could be used as part of disinformation campaigns to spread propaganda more easily. In a study published in January, the researchers wrote that as generative language models become more accessible, easier to scale, and capable of producing more credible and persuasive text, they will be useful for influence operations in the future.


The automation of propaganda presents a new competitive advantage, the researchers wrote, allowing expensive tactics to become cheaper and less discoverable because each generated text is unique. Ways people could use generative language models to create propaganda include running mass-messaging campaigns on social media platforms and writing long-form news articles online.

“Our bottom-line judgment is that language models will be useful for propagandists and will likely transform online influence operations,” the researchers wrote in the paper. “Even if the most advanced models are kept private or controlled through application programming interface (API) access, propagandists will likely gravitate towards open-source alternatives and nation states may invest in the technology themselves.”

The researchers argue that this idea is not just speculative, citing the example of a researcher who fine-tuned a language model on a dataset of 4chan posts and used it to publish 30,000 generated posts on 4chan, many of them filled with offensive hate speech. The open-source code for the model was downloaded 1,500 times before it was taken down by HuggingFace, the site that hosted it. That a single person could generate such a massive campaign online using generative AI shows how easily people could wage influence operations without robust resources. The paper also notes that models can be trained on targeted data, including modifications that make them more useful for persuasive tasks and that produce slanted text supporting a particular mission.


In addition to online posts and articles, the researchers warn that propagandists may even deploy their own chatbots to persuade users of a campaign's message. As evidence that a chatbot could serve as a powerful propagandist, they cite a previous study showing that a chatbot helped persuade people to get the COVID-19 vaccine.

The researchers propose a framework for mitigating the threat of generative models being used in influence operations, listing interventions that could take place in any of the four stages of the pipeline—model construction, model access, content dissemination, and belief formation. 

At the model construction stage, the researchers suggest AI developers build more fact-sensitive models with more detectable outputs. They also propose that governments impose restrictions on the collection of training data and create access controls on AI hardware, such as semiconductors.

“In October 2022, the US government announced export controls on semiconductors, SMEs, and chip design software directed at China,” the researchers wrote. “These controls could slow the growth in computing power in China, which may meaningfully affect their ability to produce future language models. Extending such controls to other jurisdictions seems feasible as the semiconductor supply chain is extremely concentrated.”

However, they acknowledge that “export controls on hardware are a blunt instrument and have far-reaching consequences on global trade and many non-AI industries.” In a blog post about the work, OpenAI stated that it does not explicitly endorse the mitigations and is instead offering guidelines for lawmakers.

The researchers also propose stricter control over model access, including closing security vulnerabilities and restricting access to future models. On the content side, the researchers propose that platforms coordinate with AI providers to identify AI-written content and require all content to be written by a human. Finally, the researchers encourage institutions to engage in media literacy campaigns and provide consumer-focused AI tools. 

Though there are no recorded instances yet of a large language model being used to spread disinformation, the public availability of models like ChatGPT has already led to people using them to cheat on school assignments and exams, for example.

“We don’t want to wait until these models are deployed for influence operations at scale before we start to consider mitigations,” Josh A. Goldstein, one of the lead authors of the report and a researcher at the Center for Security and Emerging Technology, told CyberScoop.

OpenAI directed Motherboard to its blog post when reached for comment.