By no means am I an authority on Artificial Intelligence (AI). As I often do, though, I like to pass along information or conversations I find particularly fascinating about topics such as AI. The brief conversation below was published in the Politico Digital newsletter and is worth reading:
The future of artificial intelligence in politics is already here. You just haven’t noticed it.
That’s according to digital strategist Eric Wilson, managing partner at Startup Caucus, an incubator for Republican campaign tech.
Wilson, who served as digital director for Marco Rubio’s 2016 campaign, has been using AI tools in one form or another for years, starting with machine-learning algorithms for social media and email spam filters. He is now watching the current wave of generative AI shape up as a potentially disruptive force in this election cycle.
Currently a senior vice president for strategy at the public-affairs firm Bullpen Strategy Group, Wilson says the handful of AI-enhanced ads that have run in the opening stages of the 2024 race are just the tip of the iceberg. There’s already a lot of AI being deployed beneath the surface of politics, in less dramatic ways.
But political operatives like Wilson also face a hurdle: Many of the most popular and powerful new generative AI models, such as OpenAI’s ChatGPT and Meta’s Llama, often draw a line at political content.
Though AI companies’ policies are still evolving, and the internal workings of their models are opaque, their applications regularly decline to engage with requests that seem overtly political. Asked Tuesday morning to generate a case for voting for Donald Trump over Joe Biden, the Llama-2-70b model responded, “[I]t goes against my programming rules to promote harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.” Asked to write a fundraising email for Biden, it refused to “assist with drafting fundraising emails or any other form of political campaigning.”
In an interview, Wilson vented his frustration with the limits imposed by AI platforms on campaign applications — and also argued that Microsoft’s late-July decision to ban political ads from its in-house advertising platform bodes poorly for campaigns’ ability to access large tech firms’ offerings in general.
Wilson would like to see tech platforms loosen their restrictive approach to political use, at least when it comes to properly registered campaigns and committees. This interview was edited for length and clarity.
We’ve seen a lot of warnings about AI’s potential effect on politics. You’re not worried about opening the floodgates?
The moral panic around AI, especially in politics, is way overblown. The way it’s going to be integrated into this cycle is very mundane: helping write press releases and social media copy, and just speeding up processes, as we’re seeing in every other industry.
The real applications are fundamentally boring and don’t live up to the panic around “Robots are going to take our democracy.”
How are you actually using it?
It’s definitely top of mind for me in all the projects that I’m working on. I use it personally to help with drafting blog posts, editing transcripts for my podcast, generating ideas for social media copy, and to generate the images I use for websites.
On a professional level, we’re investing in companies from Startup Caucus that have used AI for a while, but we’re also starting to look at what the possibilities are with large language models. We’re building products for clients that incorporate the latest in artificial intelligence to help them transcribe and analyze videos. It’s a sea change.
What’s the biggest obstacle to injecting AI into the political process?
The biggest roadblock is corporate policies around politics. It’s always nerve-racking building software on dependencies that you might not be allowed to use in the future. Microsoft, for example, banned all political ads on their advertising network last week. You have OpenAI saying it can’t be used for these political use cases.
Are there alternatives to the big corporate platforms? Are people using open-source models?
Yeah, if you’re serious about building for the political space, you start with one of the open large language models, and then the important part is training it on your data sets.
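For readers curious what that looks like in practice, here is a minimal sketch of the approach Wilson describes: take an open-weight language model and fine-tune it on your own corpus. It uses the Hugging Face transformers/peft/datasets stack with LoRA adapters; the base model, file name, and hyperparameters are illustrative assumptions, not details from the interview.

```python
# A minimal sketch of "start with an open model, train it on your data sets."
# Model name, data file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # any open-weight model would do

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Wrap the base model with small trainable LoRA adapters so fine-tuning
# only touches a tiny fraction of the weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# "Your data sets": a hypothetical JSONL file of your own text, e.g. past
# campaign emails, one {"text": ...} record per line.
dataset = load_dataset("json", data_files="campaign_corpus.jsonl")["train"]
dataset = dataset.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=3,
                           per_device_train_batch_size=4),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because only the adapter weights are trained, this runs on modest hardware and sidesteps any hosted provider’s usage policies entirely, which is the point Wilson is making.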
How else are campaigns getting around these roadblocks?
You say, “Draft a marketing email about this topic,” and then you go in and change it to have a fundraising call to action. Current AI models are set off by certain political terms or by requests to pitch candidates.
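As a concrete illustration of that workaround, here is a minimal sketch: prompt a model for politically neutral marketing copy, then splice in the call to action afterward. The model choice and every string below are invented for illustration, and in practice the second step is a human edit rather than a line of code.

```python
# Sketch of the workaround: request neutral "marketing" copy, then add the
# fundraising call to action yourself. All strings here are hypothetical.
from transformers import pipeline

generator = pipeline("text-generation",
                     model="mistralai/Mistral-7B-Instruct-v0.2")

# Step 1: a politically neutral request the model will answer without balking.
draft = generator(
    "Draft a short marketing email encouraging readers to support our "
    "organization's work on infrastructure policy.",
    max_new_tokens=300,
)[0]["generated_text"]

# Step 2: the human edit, approximated here by appending the real ask.
email = draft + "\n\nChip in $10 before Friday's deadline: example.com/donate"
print(email)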
That sounds like a pain.
There are also purpose-built tools on the right and the left for writing emails on behalf of campaigns.
Democrats have invested in a company called Quiller that helps write fundraising emails and they’ve got a number of candidates doing that. There’s another platform called Localist that’s nonpartisan, that helps campaigns with copywriting.
Elon Musk has marketed his new xAI as a politically incorrect alternative to existing chatbots. Does it have the potential to become the preferred AI for Republican campaigns?
I haven’t followed it.
What is happening in the world of professional politics, talking about the actual campaigns and people supporting candidates, is more mundane. It’s the ways to make operations smarter, more effective, more efficient, and that doesn’t have to be partisan.
That’s part of the frustration with these tech companies shutting off legitimate political actors, campaigns that are registered with the FEC, the IRS, that should be able to use cutting-edge technology to reach voters and advance their message.
Is that the standard you think should apply to political use cases for AI? Permit political use by properly registered campaigns and committees?
That’s one place I would start. Addressing the challenges of political speech requires more nuance than an outright ban, which is the preferred policy approach of big tech right now.
What about deepfakes?
We haven’t had a single instance of a deepfake used by legitimate political actors that wasn’t disclosed.
What about the concern that less scrupulous political actors will be able to use deepfakes to fool voters?
That risk exists in every industry, not just politics, and you’ve just made a very compelling case for why official candidates, campaigns and parties should be able to respond to the best of their abilities with all the tools that are available.
The way you stop the bad actors is not by punishing the good guys.