A generative moment
Tenth president of the Carnegie Endowment for International Peace and former justice of the Supreme Court of California
In recent months, quite a few people have been exploring—as I have—the question: “What is GPT-4?” In search of answers, I asked the model to explain legal concepts such as the “hard look” doctrine in administrative law and “jus cogens” in international law. The social theorist Max Weber made an appearance when I asked for an imagined transcript of a lively conversation involving the political philosopher Judith Shklar. I asked the system to map out the broad terrain of AI and law that has engaged a growing number of us. I sought an explanation (first for me, then for a hypothetical ten-year-old) of why someone might be interested in both California Supreme Court decisions about direct democracy and the risks posed by the nuclear balance in South Asia. I even requested hip hop verses weaving together the Asilomar principles on recombinant DNA and the fate of the fictional Jedi Order from Star Wars. Although the model struggled to say anything interesting about, for instance, debates on the act-omission distinction in law and philosophy, most answers displayed the fluency of a verbose person relatively well-versed in any subject I mentioned.
When I tired of the model’s occasional catechism that it’s no more than a language model and can’t answer deep moral or ethical questions, I asked it to play a character—one prone to flashes of symphonic creativity as well as dark episodes of insecurity—who could engage in suitable conversation without the disclaimer. The interaction became even livelier. The generative model’s capacity to blend parametric ingredients on command into a simmering brew of original content (the hip hop verses about Star Wars and the Asilomar principles) is astonishing. So yes, even though GPT-4 still hallucinates plenty and sometimes reverts to its clunky disclaimers, this moment in the history of technology is generative in more than one way. The heady mix of possibility, substantial risk, and raw strangeness is worth our careful reflection—not only about “what is GPT-4?” but also “what is GPT for?”
A technology now in use by many millions can communicate with human-like fluency. Though it generates words using a deceptively simple predictive process, the resulting socio-technical system—the model interacting with people and organizations through its interface—is enormously intricate.
It is sensitive, in its own way, to context and can evince a kind of creativity. Particularly for those of us who have been following the progress of AI systems over the last few years, the experience of using GPT-4 is both riveting and somewhat disconcerting. It’s true enough that large language models are not intelligent in the way humans are, and that they can be wrong. But their capabilities are extraordinary by early 21st-century standards. They make the Turing test seem irredeemably quaint. In “Economic Possibilities for Our Grandchildren,” John Maynard Keynes contemplated how the mechanics of compound growth and technological progress could deliver miraculous possibilities for society even in the near term. But he also cautioned that technological and social change merited careful judgment, because they could catalyze conflict and disruption.
Such judgment should take account of how these models can make our lives better. Their capacity for synthesis and easily translatable expertise—assuming we can get the hallucinations further under control—may give us a valuable resource to address human needs in new ways. Hundreds of millions of people still lack access to the careful judgment of a dedicated physician, a thoughtful lawyer, or a committed teacher who can help her class master advanced science or math. These generative AI models can help bridge those divides. They can help overcome language barriers in the courtroom and bring government benefits to vast numbers of people who struggle to navigate Byzantine agencies. The lonely, the misunderstood, the impatient, and the neuroatypical among us may come to yearn for the bespoke, ersatz caring of interactions with a generative model. In science, the painstaking trial-and-error cycle through which knowledge accretes may be on the cusp of massive acceleration as these models and their digital cousins train each other at ever-faster speeds.
Yet these possibilities come with burdens and questions, too. Nothing remotely guarantees that these systems will take over just the right portions of people’s jobs to render them more productive without widely disrupting employment. Even if the economic benefits clearly outweigh the costs, it is an open question whether societies will find the right ways to assuage the tumult and help people transition to other endeavors. Episodes from the industrial economy, such as the introduction of cars and agricultural fertilizers, cut in favor of taking second-order consequences and safety risks seriously from the beginning. Experience from law and history teaches us that the lines between advising and deciding, and between setting goals and implementing them, are quite blurry. If there is a sweet spot between relying on generative AI and trusting it too much, we will need iteration and reflection to get it right. The challenge of titrating the right amount of generative AI in our lives will likely recur, fractal-like, for the individual, the family, the private organization, the public institution, and society itself. Curating the right norms and targeted regulatory rules—such as carefully designed registries of model characteristics that could help prevent catastrophic risks, or limits on hidden advertising—is difficult in a single country, let alone in a fragmenting world competing aggressively to advance AI. Still, even GPT-4 would spot the flaw in the argument that fast technological progress precludes norms and standards for responsible use.
The questions get sharper and more intriguing still when we see generative AI as more than a means of generating text, graphics, sound, and video. Over the next few years, these models’ capacity is all but certain to grow—perhaps exponentially. In principle, the generative framework is not about text or media as such. In a sense, it allows people to effortlessly deploy a digital agent imbued with capabilities simulating certain qualities of human brain tissue to enable specific actions: deciding to hire, to code or depict things in particular ways, to trade a currency, to gain converts to the faith or the brand, or to know what the optimal action would be in the next ten seconds to make an angry person less distraught. The limits in principle are less a function of the systems’ capacity to act than of our willingness to let them do so, even if ostensibly on our behalf. That we will struggle to know how much engagement with these models or dilution of constraints on them is truly “right” for any given person or situation is not the point. We struggle with human action, too. But we have thousands of years of experience building norms and legal institutions to constrain human action and guide deliberation. With this emerging technology, not so much.
So it is wise to expect not only compelling and lively new chapters in the human story—written partly in human longhand and partly in machine learning model weights—but also soul-searching about our lives and institutions, distress, and conflict as we probe how commoditized intelligence can reshape who we become.
Mariano-Florentino (Tino) Cuéllar
Mariano-Florentino (Tino) Cuéllar is the tenth president of the Carnegie Endowment for International Peace. A former justice of the Supreme Court of California, he served two U.S. presidents at the White House and in federal agencies, and was a faculty member at Stanford University for two decades.