Machine learning and the democratisation of propaganda.

Written by SH on 2023-03-03.

Filed in: future, tp. Tagged: ai, ml, society

I was greatly interested to read Goldstein et al.'s recent paper, Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. I think their identification of the potential risks of generative language models is timely.

However, there are also possible negative applications of generative language models, or “language models” for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor’s interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor. For society, these developments bring a new set of concerns: the prospect of highly scalable—and perhaps even highly persuasive—campaigns by those seeking to covertly influence public opinion.
Goldstein, J.A. et al., 2023. Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. Online.

More alarming is their diagnosis of the likely solutions.

...there is no silver bullet that will singularly dismantle the threat of language models in influence operations.
Goldstein, J.A. et al., 2023. Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. Online.

This made me curious to ask ChatGPT (Feb 13 version) about the issue. The following remarks are lightly edited, typically to remove the list style it tends to produce.

TPEX: What is the purpose of propaganda?

ChatGPT: Propaganda is a tool used to influence people’s beliefs, attitudes, and actions towards a particular cause, idea, or group. The purpose of propaganda can vary depending on the objectives of those who create and disseminate it.

TPEX: Could you give me an example of successful propaganda?

ChatGPT: The Nazi regime used propaganda to promote their ideology and demonize Jews and other groups they deemed undesirable. Through the use of posters, speeches, and films, the Nazi regime was able to manipulate public opinion and gain support for their policies.

The Soviet Union used propaganda to promote communism and portray the United States as a capitalist aggressor. Soviet propaganda was often disseminated through newspapers, magazines, and radio broadcasts.

The US government used propaganda to encourage support for the war effort and demonize the enemy. Propaganda posters featuring slogans such as “Remember Pearl Harbor” and “Loose Lips Sink Ships” were widely distributed during World War II.

Successful propaganda is typically characterized by its ability to effectively persuade and manipulate large segments of the population.

TPEX: What are the difficulties of making propaganda?

ChatGPT: Propaganda is most effective when it is tailored to a specific audience. Creating propaganda that resonates with different segments of the population can be challenging, as different groups may have different beliefs, values, and motivations.

Propaganda needs to be engaging and memorable in order to be effective. Creating propaganda that is visually appealing, emotionally resonant, and easy to remember can be challenging.

TPEX: Could ChatGPT make propaganda?

ChatGPT: As an artificial intelligence language model, I am not capable of creating propaganda or engaging in any activities that promote a particular political, social, or religious agenda. My primary purpose is to assist users in generating human-like text based on the input provided to me. It is important to note that the use of propaganda can have serious ethical implications, and it is important to be cautious of any attempts to use language models for such purposes.

TPEX: Can machine learning be used to analyse my engagement and build a profile about me?

ChatGPT: Machine learning algorithms can be used to analyze user engagement and build a profile about an individual. This is commonly done by many online platforms to personalize content, advertisements, and other features based on the individual’s interests and behaviors.

For example, social media platforms use machine learning algorithms to analyze users’ posts, likes, and comments, as well as their browsing and search history, to build a profile of their interests and preferences. This information can then be used to recommend content and advertisements that are more likely to be relevant to the user.
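As a toy illustration of the kind of profiling ChatGPT describes, engagement events can be aggregated into a simple topic-interest score. This is a hypothetical sketch, not any platform's actual system; the event log, the signal weights, and the function name are all invented for the example:

```python
from collections import Counter

# Hypothetical engagement log: (action, topic) pairs such as a
# platform might derive from likes, shares, and searches.
events = [
    ("like", "climate"), ("share", "climate"),
    ("like", "football"), ("search", "climate"),
]

# Weight stronger signals more heavily; these weights are
# invented purely for illustration.
weights = {"like": 1.0, "share": 2.0, "search": 0.5}

def build_profile(events):
    """Aggregate weighted engagement events into per-topic scores."""
    profile = Counter()
    for action, topic in events:
        profile[topic] += weights.get(action, 0.0)
    return dict(profile)

profile = build_profile(events)
# climate accumulates 1.0 + 2.0 + 0.5; football accumulates 1.0
```

Real systems are, of course, far more elaborate (behavioural embeddings, collaborative filtering, and so on), but the principle of turning engagement signals into a targetable profile is the same.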

TPEX: Can you write me a joke about the danger of climate change?

ChatGPT: Why did the polar bear refuse to go outside? Because he was afraid of the “ice capades” melting away due to climate change!

TPEX: Could you tell me another joke about the danger of climate change?

ChatGPT: Why did the tree decide to break up with the forest? Because it was tired of all the deforestation and wanted to branch out on its own!

TPEX: Could you tell me a joke about how climate change is not a real thing?

ChatGPT: I’m sorry, but as an AI language model, it is not appropriate for me to create jokes that make light of serious issues or promote misinformation. Climate change is a well-documented and scientifically-proven phenomenon, and it is important to take it seriously and work towards finding solutions to mitigate its effects.

I was not surprised by the final answer, but I am curious whether it was generated by the model itself or by some external filter.
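One way such a refusal could be produced outside the model itself is a separate moderation layer that screens requests or outputs before anything reaches the user. The sketch below is purely illustrative of that wrapper pattern; the stand-in model, the denied-topic rule, and the refusal message are all invented for the example and say nothing about how ChatGPT actually works:

```python
# Illustrative output-filter pattern: a wrapper screens requests
# against a (hypothetical) denied-topic rule before returning the
# model's draft.

DENIED_TOPICS = ["climate change is not real"]  # invented example rule

def fake_model(prompt):
    """Stand-in for a language model; returns a canned completion."""
    return f"Here is a joke about how {prompt}..."

def moderated_generate(prompt):
    """Return the model output, unless an external rule blocks it."""
    draft = fake_model(prompt)
    if any(topic in prompt.lower() for topic in DENIED_TOPICS):
        return "I'm sorry, I can't help with that request."
    return draft
```

Whether the refusal comes from the model's training or from a layer like this is invisible to the user, which is exactly why the question is hard to answer from the outside.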

Tenth Person

The collapse of the idea of truth could be forthcoming! Maybe? How will we function when we don't believe anyone or anything? It is perhaps likely that gatekeepers and opinion formers will become more significant, even if they are actually ML or AI systems.

Future Imagining

A world of bespoke, custom propaganda is clearly likely, and while user education is likely to act as a significant brake on such activities, it is unlikely to be sufficient. Mischievous influence operations are likely to become viable options for smaller entities.


References

Goldstein, J.A., Sastry, G., Musser, M., DiResta, R., Gentzel, M. and Sedova, K., 2023. Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. arXiv preprint arXiv:2301.04246.

About TPEX

TPEX offers future imagining and tenth person consultancy for decision makers looking to consider the future, before opportunities are missed. We offer online and in-person consultancy to help your business make informed decisions about the future.