
Everything everywhere all at once

By Sonja Lyubomirsky
Distinguished Professor of Psychology, University of California, Riverside
[Image: a close-up of digitally rendered cubes ranging from colorless to glowing vibrant shades of green.]

As an experimental social/personality psychologist, I have devoted nearly 35 years to studying happiness and flourishing. My laboratory’s bread-and-butter approach involves randomly assigning human participants to regularly practice positive strategies such as writing gratitude letters, performing random acts of kindness, and trying to behave in a more extroverted manner. These behaviors share a common thread. They almost always make people feel more connected to others. Reflecting on all the ways that your mom made your life easier, helping a colleague recover lost data, and chatting up the barista on a Sunday morning all promote a sense that we are all in it together and that someone else understands and appreciates us, if only for a fleeting moment. After decades of inquiry, the science of well-being has landed me on what sounds like a cliché: The secret to happiness lies in human-to-human connection.

In recent weeks, while exploring the seemingly magical outputs of GPT-4, my mind kept returning to a single question:

Will this rapidly evolving and revolutionary technology promote human connection or harm it?

On the positive side, AI has the potential to build, sustain, and improve our social connections, and to accomplish this in ways that are more immersive than Meta’s Mark Zuckerberg ever dreamed. The most exciting possibility is that GPT-4 and its descendants could serve as a 24/7 personal “life coach,” encouraging us to act, prompting us to reframe our thinking patterns, and offering social support, therapy, or timely happiness hacks. It could prompt us to practice a gratitude exercise when we are feeling dejected, tutor and role play with us to have better conversations, and teach us strategies (from meditation to empathy) at the moment we need them. To accomplish this, our AI coach will ultimately have access to the world’s greatest wisdom (and also its greatest foolishness), as well as all of the personal and physiological data we are willing or able to provide.

Examples of requests we might make of our future AI coach include:

  • Here are all my texts, social media posts, and emails; my journal entries; my Fitbit data; my sleep data; and my cortisol, immune marker, glucose, and heart rate levels for the past two years. Tell me how to live in a way that promotes superior physical health. Or construct a set of flourishing-enhancing goals or values for me. Or figure out what characteristics and activities represent the most authentic me.
  • Explain to me why I behaved as I just did (e.g., started that argument, quit that job). Or explain the motivations or agendas behind particular actions of others in my life (e.g., my best friend, my kid, my supervisor, POTUS).
  • Help me communicate with strangers, acquaintances, and loved ones so that the conversations make me (and them) feel understood, valued, and cared for.
  • Here are my diary entries and transcripts of all my conversations during the past week. Help me rethink my worst days in ways that fix the cognitive distortions I demonstrate. Reframe my last seven days as being more upbeat, more grateful, more awe-inspiring, or imbued with spiritual meaning.
  • Help me tell more compelling stories to my children at bedtime and to my friends, colleagues, and first dates, to engage, entertain, and educate them.
  • Write first drafts or improve my first drafts of important texts, apologies, birthday wishes, and love letters.
  • Using role playing or “AI goggles,” guide me through difficult conversations that reduce defensiveness (in myself or my partner), enhance or bring back chemistry to an intimate relationship, or end a friendship, romance, or business partnership when it is time.
  • If I am deeply lonely or have social difficulties (e.g., due to social anxiety, depression, or autism), help me to build or deepen my connections, make more friends, find relationship partners, and become more sociable.
  • Here is the entire corpus of my written output. Create a Sonja Lyubomirsky that is enough like me for my kids to interact with—share jokes, tell stories about their own grandkids, and ask for advice—after I am gone.

I’m an optimist, so the positive side is longer!


Yet it would be foolhardy not to consider possible harms. On the negative side, AI also has the potential to impair or ruin social connections in myriad ways.

The technology may lead us to rely on AI friends (like Replika) to the detriment of human relationships. Will the fully customized pornography that is surely on our horizon replace real-life sex? Will some of us become so attached to our AI friends that we prioritize them with our time, energy, and money, weakening or completely losing our human connections?

On the other hand, will individuals benefit by treating their AI friends as a bonus source of joy, comfort, and support that facilitates their connections in the real world? Will a strong stigma emerge for spending too much time with AI that counteracts some of the above-mentioned harms?

AI might impair our relationships by reducing authenticity. If I learn that a heartfelt love letter (or condolence note) was 99 percent AI-authored, will this news harm a budding romance or long-term friendship? More broadly, will AI amplify my suspicions about every communication or product I receive? Is this a real person? Who wrote or sketched or constructed this?

In light of the financial incentives of corporations that train, distribute, or manipulate AI, could I ever truly trust my AI coach’s advice? Indeed, I might end up suspecting many of its prompts as covers for advertisement, persuasion, or product placement.

And those possibilities are only the beginning! AI experts are predicting that the world is about to change in ways that the smartest minds cannot fully fathom (much as the invention of the internet led not just to online bookstores but also to apps, smartphones, and social media).

My own forecast anticipates a bifurcation or, more precisely, a trifurcation of effects—a pattern often seen in other areas—from the latest large language models like GPT-4 and their offspring. Let’s call it the Rule of Threes. I predict that roughly one-third of humans will benefit, blossom, and thrive; one-third will experience both positive and negative repercussions; and one-third (likely those without the financial or psychological resources to take advantage of it) will languish and even crash. Furthermore, who ends up in which tertile may shift depending on whether one is tracking the short-term or the long-term impacts. Scientists and mental health practitioners today might do well to start developing profiles—based on demographics, personality, mental health, and the like—that predict who will fall in which tertile and how we can help lift all boats.

An acquaintance described her first experience with GPT-4 as “seeing the face of God.” To say that AI is having a moment is an understatement. It is everything everywhere all at once. We need to come together now and ponder how to maximize flourishing and prevent calamity.

Lyubomirsky did not use AI to assist with the writing or revising of this essay. However, she did consult with friends and colleagues, who should take credit for the best ideas herein.

The views, opinions, and proposals expressed in this essay are those of the author and do not necessarily reflect the official policy or position of any other entity or organization, including Microsoft and OpenAI. The author is solely responsible for the accuracy and originality of the information and arguments presented in their essay. The author’s participation in the AI Anthology was voluntary, and no incentives or compensation were provided.

Lyubomirsky, S. (2023, June 12). Everything Everywhere All at Once. In: Eric Horvitz (ed.), AI Anthology. https://unlocked.microsoft.com/ai-anthology/sonja-lyubomirsky


Sonja Lyubomirsky

Sonja Lyubomirsky is Distinguished Professor of Psychology at the University of California, Riverside, and author of The How of Happiness (published in 39 countries). Her research on the science of happiness has received the Diener Award, the Peterson Gold Medal, and an honorary doctorate from the University of Basel.
