Generative AI has the extraordinary capability to frame texts in ways that different audiences can understand and accept. This capability could be misused, for example to draw people toward political extremism, but there is also a positive opportunity: to use it as a tool that mediates between groups and encourages successful compromises and agreements. One of the most important contributions of the latest generation of AI could be to reinvent and reinvigorate constructive, collective conversation and deliberation in our society, locally, nationally, and even internationally.
Why might this be possible now, despite the experience with earlier tuning of social media, which seems to have pushed in the opposite direction, creating echo chambers and amplifying polarization? Social media algorithms are presumably optimized to build audiences that stay on sites longer so they will click on more advertising. Anger and outrage apparently do this. But it is plausible that people would be equally eager to participate in warm, constructive online activities.
We don’t want to leave this to luck, and we don’t have to. The current generation of AI can be trained to optimize for human satisfaction. A layer designed for tuning to human responses already exists; it could home in on positive emotions, communal good will, and constructive engagement rather than anger and outrage.
If pro-social AI training became dominant, our society could begin to move from its current strongly polarized and often paralyzed state with respect to problems and policies toward a more thoughtful, constructive mode of collective thought. Several possible routes could take us there. The most optimistic route would converge on this pro-social training of AI spontaneously: organizations using constructive-engagement AI collective-thinking tools could become so breathtakingly successful that they outcompete all peers. In this scenario, every company, social group, and country that employed such pro-social, constructive-engagement AI would visibly thrive. (This echoes the long-held understanding of the advantages democracies have over authoritarian regimes.) Demand for pro-social AI would then quickly lead to its dominance, in the same way that A/B testing apparently accelerated the dominance of polarizing social media.
Another route to pro-social AI can be observed in the regulatory regime that Europe is exploring with its Digital Services Act (DSA). Among the online platforms’ obligations and auditing requirements in the DSA are provisions that ask platforms to assess and mitigate “risks concern[ing] the actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes….” It is not yet clear how the fulfillment of these obligations will be measured in the audits, but simple indices could be developed that might test, e.g., for increasing or decreasing polarization and/or shared understanding of the issues at stake in major societal debates and decisions. Simply identifying such auditable indices of constructive engagement may provide an incentive for major digital platforms to train their AI systems to foster pro-social modes of public engagement.
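To give a concrete sense of what such an index might look like, here is a minimal, hypothetical sketch, not a proposed standard: the Esteban–Ray polarization measure from economics, applied to a distribution of opinion scores (e.g., five-point survey responses on a contested issue). The data below are invented for illustration; a real audit would need carefully sampled survey or platform data.

```python
# Hypothetical sketch of one possible auditable polarization index.
# Computes the Esteban-Ray polarization measure over opinion scores;
# all distributions below are invented for illustration only.

from collections import Counter

def esteban_ray(scores, alpha=1.0):
    """Polarization of a list of opinion scores (e.g., a 1-5 scale).

    P = sum_i sum_j p_i^(1+alpha) * p_j * |y_i - y_j|

    Higher values mean opinion mass is clustered at opposing poles;
    alpha weights how much large, cohesive camps count.
    """
    n = len(scores)
    p = {y: c / n for y, c in Counter(scores).items()}  # score frequencies
    return sum(
        (p[yi] ** (1 + alpha)) * p[yj] * abs(yi - yj)
        for yi in p for yj in p
    )

# Invented opinion distributions on a 1-5 scale:
consensus = [3] * 80 + [2] * 10 + [4] * 10   # most respondents near the middle
polarized = [1] * 50 + [5] * 50              # two opposing camps

# The polarized distribution scores higher, so a rising index over time
# could flag increasing polarization in a platform's public debates.
print(esteban_ray(consensus) < esteban_ray(polarized))  # True
```

An auditor could track such an index over repeated samples; the point is only that "polarization" can be made measurable enough to audit, not that this particular formula is the right one.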
Such routes to pro-social AI are experiments. We can certainly imagine them failing to catch on or, worse, producing an apparent social harmony that in fact converges on a distorted consensus, a shared misunderstanding of reality. And if we construct new indices of, e.g., constructive engagement with which to audit the online platforms, we must ensure that they are fairly designed and their outcomes monitored. For both these reasons, representative citizen oversight of these experiments and indices must be established. And we should remember that it is not a question of whether we should perform social experiments; they are now occurring whether we want them or not. The question is how we set the terms of the experiments and assess and guide their outcomes.
These and other routes should be explored quickly; the stakes are high. If the world remains logjammed by anti-social public conversation, solutions to our major problems are likely to fail.
Every major problem of our time thus needs this advance in collective thinking.
Moreover, the rapid advance of more powerful AI has itself raised new concerns about a near-future misalignment of its goals and our goals. These problems, too, will demand yet more powerful collective thinking among us humans—but that is a topic for another deliberation.
Saul Perlmutter is a 2011 Nobel Laureate in physics. As Professor of Physics at UC Berkeley and a senior scientist at Lawrence Berkeley National Laboratory, he leads initiatives in cosmology, data science, and science education. He currently serves on the President’s Council of Advisors on Science and Technology.