Weeks after its launch, ChatGPT is already threatening to upend many forms of our daily communication: the way we write emails, university papers, and countless other kinds of text. (NYT)
Developed by the company OpenAI, the application is a chatbot: it can automatically answer written questions in a manner often so close to human that it is impossible to tell it has been used. But beyond the surprise that machines can replace humans in creative forms such as poetry and screenwriting lies a much bigger threat: the replacement of humans by artificial intelligence in democratic processes, not through voting, but through lobbying.
ChatGPT can automatically compose comments submitted during the legislative process. It can write letters to the editors of local newspapers. It can comment on news articles, blog posts, and social media posts millions of times a day. It could imitate the work of the Russian Internet Research Agency in its attempt to influence the 2016 US elections, but without the multi-million-dollar budget and the hundreds of employees hired for the purpose.
Automatically generated comments are not a new problem. For some time we have faced the threat of bots, machines that automatically post content. Five years ago, at least one million automatically generated comments are believed to have been submitted to the FCC in connection with proposed net neutrality regulation. In 2019, a Harvard University student used an automated text generator to submit 1,001 comments in a public health consultation. Back then, submitting comments was simply a numbers game.
Danger to members of Congress
Since then, platforms have gotten better at removing “coordinated inauthentic behavior”. Facebook, for example, deletes more than a billion fake accounts a year. But such messages are only the beginning. Instead of flooding legislators’ inboxes with messages of support or jamming the Capitol switchboard with robocalls, an AI system as advanced as ChatGPT, trained on data from those processes, could single out legislators and influencers in critical positions, identify the weakest points in the legislative process, and manipulate them ruthlessly through direct communication.
When we, the people, do this, it is called lobbying. The most successful lobbyists combine precisely crafted messages with smart targeting tactics. Currently, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling rhetorical drone warfare is the lack of precise targeting. AI has the potential to provide that, too.
A system capable of interpreting political networks, combined with the application’s text-generation capabilities, could determine which member of Congress carries the most weight on a particular policy, such as corporate taxation or military spending. Like human lobbyists, such a system could target undecided representatives sitting on the committees that control a policy of interest, and then focus its efforts on members of the majority party when a bill comes up for a vote.
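The “political network” idea above can be made concrete with a toy sketch: treat legislators as nodes in a graph of, say, bill co-sponsorships, and rank them by a standard centrality measure. Everything below (the names, the edges, the choice of eigenvector centrality) is invented purely for illustration, not a description of any real system.

```python
# Toy sketch: rank hypothetical legislators by influence in an invented
# co-sponsorship network, using eigenvector centrality via power iteration.
cosponsorships = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D", "E"],
    "D": ["A", "C"],
    "E": ["C"],
}

def eigenvector_centrality(graph, iterations=100):
    # Start with equal scores, then repeatedly set each node's score to
    # the sum of its neighbours' scores, renormalising every round.
    scores = {node: 1.0 for node in graph}
    for _ in range(iterations):
        new = {n: sum(scores[m] for m in graph[n]) for n in graph}
        norm = sum(new.values())
        scores = {n: v / norm for n, v in new.items()}
    return scores

ranking = sorted(eigenvector_centrality(cosponsorships).items(),
                 key=lambda kv: -kv[1])
print(ranking[0][0])  # prints "C", the best-connected node in this toy graph
```

In this invented graph, “C” sits on the most co-sponsorships and so scores highest; a real targeting system would of course need far richer data, but the ranking step itself is this simple.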
Once the people and strategies are identified, a conversational bot like ChatGPT could generate the text to be used in emails, comments, and anywhere else writing is useful. Lobbyists could also target those individuals directly, and it is the combination that matters: comments on articles and social media have limited reach on their own, and knowing which legislators to target is not, by itself, enough to manipulate them.
Strong incentive to attack
This ability to understand and target individual actors online would create an AI hacking tool that exploits vulnerabilities in social, economic, and political systems with incredible speed and reach. Legislative systems would be a particular target, because the incentive to attack political decision-making systems is very strong. The data needed to train such systems is widely available, and the use of AI is very difficult to detect, especially when it is deployed strategically to manipulate human actions.
It is only a matter of time before the data needed to build such strategic targeting systems is assembled. Open societies generally rely on transparency, and most legislators are willing, at least formally, to receive and respond to messages that appear to come from their constituents.
It is entirely possible that an AI system could identify which members of Congress hold decisive sway over the leadership yet have a public profile low enough to attract little attention. It could then identify the public interest group with the most influence over that member’s public positions. It might even calculate the size of donation needed to sway that organization, or target ads carrying a strategic message at its members. For every policy goal: the right audience, and the right message at the right time. This makes the threat posed by AI-armed lobbyists greater than that posed by the expensive lobbying firms of Washington, which draw on decades of experience to find strategic ways of shaping political outcomes. That experience is scarce, and therefore expensive.
Faster and cheaper
In theory, however, artificial intelligence could achieve the same result much faster and more cheaply. The first-mover advantage is enormous in an ecosystem where public opinion and media narratives can become entrenched quickly, yet are equally subject to rapid change in response to chaotic events unfolding on a global scale.
In addition, the flexibility of AI could help exert influence across many policy-making processes and jurisdictions simultaneously. Imagine an AI-powered lobbying firm that could attempt to amend every bill submitted to the US Congress, or even to the legislature of every state. Lobbying firms tend to operate in a single state because of the complex differences in laws, procedures, and political structure between jurisdictions. AI could make it easier to exert influence across conventional political boundaries.
Just as educators will have to change how they administer exams and student assignments in light of ChatGPT, governments will have to change their relationship with lobbyists.
This technology could undoubtedly bring benefits to the democratic environment, chief among them accessibility. Not everyone can afford an experienced lobbyist, but software providing access to AI could be available to all. If we are lucky, such strategy-generating AI might even re-democratize democracy itself, giving comparable influence to the least powerful citizens.
However, larger and more powerful institutions are more likely to put the most effective AI techniques to use. Ultimately, executing the best influence strategy still requires money, and people inside the system who can navigate the corridors of the legislature. Influence is not just about delivering the right message to the right person at the right time; and while a text-generating bot can determine who the targets of such influence campaigns should be, for the foreseeable future humans will have to pay for them. So while it is impossible to predict what a future full of AI-armed lobbyists will look like, it will likely further increase the influence and power of those who already wield it.