Can AI Enable More Effective Constituent Communications For Lawmakers? A New Study Says…Maybe
Each Member of the US House of Representatives speaks for 747,184 people - a staggering increase from 50 years ago. In the Senate, the disproportion is even more pronounced: on average, each Senator represents 1.6 million more constituents than her predecessor did a generation ago. That’s a lower level of representation than in any other industrialized democracy.
As the population grows (up more than 60% since 1970), so, too, does the volume of constituent communications.
But that communication is not working well. According to the Congressional Management Foundation, this overwhelming communication volume leads to dissatisfaction among voters who feel their views are not adequately considered by their representatives.
It is no wonder. Members of Congress are primarily concerned with broadcasting their accomplishments and less focused on authentic interaction with members of the public. Such interaction is difficult when a handful of staffers must contend with incoming messages via email, phone, fax, mail, and constantly evolving social media platforms. They can’t keep up.
A pioneering and important new study by two Cornell researchers, “Can AI communication tools increase legislative responsiveness and trust in democratic institutions?” (Government Information Quarterly, Volume 40, Issue 3, June 2023, 101829), sheds new light on the practical potential of AI to create more meaningful constituent communication.
Bottom Line Up Front: generative AI can increase constituent trust by enabling more personalized and responsive communication.
Sarah Kreps and Maurice Jakesch conducted two experiments. In the first, they compared constituents’ trust in various types of replies (a standard canned reply, a personalized AI-assisted reply, and a completely AI-generated response), with and without disclosure of AI use.
Let’s dig into the details briefly. Each constituent received four of the five possible types of replies to their query:
a reply that a person wrote with AI support;
a reply generated by AI with strong human oversight;
a reply generated autonomously by AI.
And either
a regular human-written reply; or
a generic boilerplate reply.
Then participants were asked to respond to the statement: “the legislator can be trusted.”
Depending on the treatment group, they either were or were not told when replies were AI-drafted.
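To make that design concrete, here is a minimal sketch in Python of how the reply types and the disclosure factor described above might be assigned. The condition labels and the assignment logic are my own simplified reading of the prose, not the authors’ actual protocol or materials.

```python
import random

# Hypothetical labels for the five reply types described above.
AI_CONDITIONS = [
    "human_written_with_ai_support",      # a person wrote it with AI assistance
    "ai_generated_with_human_oversight",
    "ai_generated_autonomously",
]
BASELINE_CONDITIONS = [
    "regular_human_written",
    "generic_boilerplate",
]

def assign_replies(disclose_ai_use: bool) -> dict:
    """One participant sees all three AI-involved reply types plus one of
    the two baselines (four of the five types), in random order."""
    replies = AI_CONDITIONS + [random.choice(BASELINE_CONDITIONS)]
    random.shuffle(replies)
    return {"replies": replies, "ai_use_disclosed": disclose_ai_use}

# Each participant then responds to the statement
# "the legislator can be trusted" for the replies they saw.
if __name__ == "__main__":
    print(assign_replies(disclose_ai_use=random.random() < 0.5))
```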
Their findings are telling. Standard, generic responses fare poorly in gaining trust. In contrast, all AI-assisted responses, particularly those with human involvement, significantly boost trust. As the authors put it, “Legislative correspondence generated by AI with human oversight may be received favorably.”
While the study found AI-assisted replies to be more trustworthy, it also explored how the quality of those replies shapes perception. When the study was conducted, ChatGPT was still in its infancy and more prone to hallucinations, so in a second experiment the researchers tested how people perceived higher-quality, relevant, and responsive AI-drafted replies against lower-quality, irrelevant ones.
The authors caution against overstating these results. Still, they suggest that AI-mediated communication could be a viable option for political leaders to manage the growing volume of citizen correspondence and the expectation of timely responses.
If I were a Member of Congress, I’d be eager to experiment with the 40 ChatGPT licenses made available by the Chief Administrative Officer of the US House of Representatives in 2023. This technology could potentially improve both responsiveness and efficiency in handling constituent communications.
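For an office experimenting along these lines, the workflow the study views most favorably, AI drafts with human review, is straightforward to prototype. Below is a minimal, hypothetical sketch using the OpenAI Python SDK; the model name, prompts, and review framing are my own illustrative assumptions, not anything prescribed by the House program or by the study.

```python
# Sketch of an "AI drafts, staff reviews" reply workflow.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and the model name and prompts below are placeholders.
from openai import OpenAI

client = OpenAI()

def draft_reply(constituent_message: str, member_positions: str) -> str:
    """Ask the model for a first draft; a staffer must edit and approve it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft constituent replies for a Member of Congress. "
                    "Be specific to the constituent's concern, rely only on "
                    "the positions provided, and flag anything you are unsure of."
                ),
            },
            {
                "role": "user",
                "content": f"Constituent message:\n{constituent_message}\n\n"
                           f"Member's stated positions:\n{member_positions}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_reply(
        "I'm worried about proposed cuts to rural broadband funding.",
        "The Member supports continued federal investment in rural broadband.",
    )
    print("DRAFT FOR STAFF REVIEW (do not send without human edits):\n", draft)
```

The human-review step is the point: in the study, the replies received most favorably were AI-drafted but human-overseen, so nothing in a workflow like this should go out the door unedited.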
But an important question remains: Does enhancing the quality of the response with AI lead to a deeper understanding of and engagement with constituents’ needs?
Improved AI-generated communication might alter public perception of politicians, but it's unclear if it will change how politicians perceive and respond to public concerns.
The assumption is that Members of Congress lack authentic interaction due to time and resource constraints. Yet, even as AI makes responding faster and easier, it’s uncertain if this will translate into a greater willingness to genuinely absorb and act on public feedback. Until AI leads to better listening, we've only addressed half the challenge.