What do we want from machines?
It is imperative to ask this question as we become mesmerized by the advances of large language models (LLMs) such as ChatGPT and GPT-4. Clearly, it would be foolish to embrace new technologies just because they are impressive—nuclear weapons were impressive. It would be equally problematic to concentrate only on production or performance. What if new technologies increase gross domestic product but we, the humans, do not enjoy the fruits of this advance?
It seems natural, then, to focus on a more inclusive notion of what humans should seek from technological progress.
Aristotle could not have foreseen these bewildering changes, but he thought deeply about the question of “how shall I live?”
He proposed the notion of eudaimonia, which is often translated as “human flourishing.” Such flourishing goes beyond enjoying more goods and services or merely better health and comfort. It includes higher-order ideas, such as living a meaningful life, moral virtue, and a degree of human agency. Though flourishing is difficult to define precisely, most people have an intuitive sense of what it means.
Consider a new drug, let’s call it fantasia, that simultaneously extends life, makes us healthier and stronger, and boosts our happiness. It also massively reduces most of our material and calorie needs. With this drug, we could all lie in bed all day or play video games and find it pleasurable. But it would also induce people to withdraw from human interactions, political decisions, and economic activities. Most people, though impressed with this pharmaceutical achievement, would not want to take fantasia or unleash it on humanity. The loss of agency and the diminished social meaning that this Matrix-like environment would bring do not correspond to their notion of human flourishing.
LLMs are impressive. I count myself as a skeptic concerning the capabilities of current AI and dreams of artificial general intelligence. And yet, after several hours of interacting with GPT-4, I was impressed. One can debate whether AI tools are exhibiting emergent intelligence, but there is little doubt that they are performing tasks that require higher-order reasoning. They can draw analogies as well as create output that has no direct analogue on the web, such as explaining mathematical proofs in the style of Shakespearean sonnets. These algorithms have a level of natural language communication that is truly astounding, enabling them to engage in human-like conversation on many topics.
Nonetheless, it is only fair that we should question whether LLMs will contribute to human flourishing.
There is a scenario in which they do not. Like fantasia, they could remove many of the meaningful tasks we perform. Many of us could lose our jobs, with our analytical, writing, and even creative tasks transferred to LLMs. We could withdraw from human interactions, our real-world social networks, and political and social activity. We could lose much of our human agency.
This scenario might strike some as fanciful. But advances in LLMs will continue and may accelerate. Other artificial intelligence (AI) technologies will also make rapid progress in the years to come. In fact, more rudimentary digital tools—such as numerically controlled machinery, industrial robots, office software, and basic social media algorithms—are already in place and have boosted inequality. Millions of workers have lost their jobs to automation, and some have withdrawn from social activities. Our political and social realities have become more unreal thanks to social media.
Doom and gloom are not preordained. LLMs could contribute to human flourishing. But to do so, we need a change of focus, narrative, and architecture.
We need to focus on the right notion of human flourishing. We must abandon the conceit that top-down design of more powerful digital tools targeted at automating work and disempowering workers and citizens is a spectacular advance. We need to shift the tremendous energy of the tech industry towards creating new tasks for human workers, new ways in which humans can continue to work productively while benefiting from more powerful algorithms, and pathways through which citizens can be more active and engaged with other humans. This last category includes democratic participation and engagement in real social networks, rather than just virtual ones. New tasks and new forms of human productivity may sound like a tall order. But this is exactly what technological progress delivered in the first eight decades of the 20th century. Economic growth was rapid and its fruits broadly shared. Novel communication technologies facilitated democratic participation.
We need to change the narrative. The whole “machine intelligence” framing, going back at least to Alan Turing, has been a mistake. Rather than an abstract notion of intelligence, we need to embrace “machine usefulness”—a concept Simon Johnson and I discuss in our new book, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. As early computer science pioneers such as Douglas Engelbart understood and practiced, we should strive to make digital tools more useful to humans. That involves, in essence, empowering humans.
Beyond creating new productive tasks, AI technologies can improve the retrieval, filtering, and processing of information for decision-making tasks. That means, for example, shifting the emphasis from “replacing radiologists” towards “providing better information and better representation of data to radiologists.” It means providing tools that enable workers with unique human skills, as well as those with various limitations, to perform more complex tasks. Nurses could be empowered to diagnose patients and prescribe medications. Electricians could solve more complex problems thanks to expert diagnostic help from AI tools.
We can also employ AI to create new platforms for human interactions. The algorithm underlying Airbnb, which brings together tens of millions of people to exchange apartments and accommodations, is much simpler than most modern AI programs. But it has created economic value and meaningful social behavior by enabling people to trade, travel, and have new experiences.
Perhaps most controversially, I would argue that we also need to change the architecture of LLMs. Their current architecture is optimized to showcase machine intelligence and to prioritize automation. Thanks to this architecture, LLMs have impressed tens of millions of users by appearing intelligent and sounding authoritative (even when providing faulty information).
But this architecture fails to provide the tools for human flourishing. Users should remain the decision-makers while receiving accurately filtered and curated information. LLMs should play the role of an advanced and personalized version of Wikipedia, not that of a guru. Rather than giving canned answers and striving for authoritativeness, LLMs should offer recommendations to humans, together with the relevant context, an assessment of reliability, and an accurate account of the diversity of contrasting opinions and findings. This means, first and foremost, a commitment to keeping humans in the driving seat.
Changing the architecture does imply an overhaul of the tech industry and smaller fortunes for its captains. And that is perhaps our greatest challenge in ensuring that AI helps humans flourish.
I am grateful to Simon Johnson, Asu Ozdaglar, John Tasioulas, and Glen Weyl for discussions and comments.
Daron Acemoglu Institute Professor at MIT; fellow of NAS, APS, BAS, AAAS; winner of BBVA Frontiers of Knowledge Award, Nemmers Prize, Global Economy Prize; and author of The New York Times bestseller Why Nations Fail (with James Robinson), and Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity (with Simon Johnson).