
AI’s Striding Edge

By Gillian Hadfield
Professor of Law and Economics, University of Toronto

My father loved to hike in the English Lake District, where he had spent time as a young boy during the Second World War. His favorite climb was Striding Edge to England’s third-highest peak, Helvellyn (the name he gave his last sailboat). Striding Edge looks like it sounds: a narrow rocky ridge falling off steeply on either side. Many make the hike to glorious views without incident, but sometimes there is tragedy. Even experienced hikers lose their footing in low visibility or sudden bad weather and fall to their deaths.

I think of Striding Edge as we witness the recent leap in large language models such as GPT-4. As a species, we are ascending on this narrow rocky path in hope of reaching great peaks. Visibility is low and the powerful winds that buffet our societies—geopolitics, economics, inequality—can make us lose our footing.

So why climb?

Surely not just to entertain ourselves with disquisitions in the style of the King James Bible on how to dislodge a peanut butter sandwich from a VCR. And not, given the risks, just to make it easier to book a hotel room, plan a dinner party, or come up with a better workout plan. We bookers of hotel rooms and planners of parties and workouts really don’t need world-altering help.

But lots of people in the world do. Despite significant reductions in global poverty and despair in past decades, we are still phenomenally unequal.

According to World Bank data, nearly 4 billion people live on less than $6.85 a day. Even in wealthy countries, millions live in poverty and insecurity. According to the Federal Reserve, 40 percent of Americans do not have enough on hand to cover a $400 emergency expense. Poverty makes life fragile and leaves people exposed to every other source of inequality in our societies.

What could a large language model mean to these billions of people—around the globe and here at home? Well, it won’t sprinkle magic fairy dust to transform our world into an egalitarian utopia.

But it could help us make life better for those who really need it. And that could justify the climb.

Figuring out how to make sure AI makes the least well-off better off will require resisting the gravitational pull of our current legal and economic systems—systems that, unchecked, will simply add these tools to the ledger of the wealthy. But I believe this is possible.

Let me give an example based on the work I have been doing for a few decades on access to justice. This is a global crisis. We live in a law-thick world, where just managing daily life requires constant engagement with legal rules, processes, requirements, and risks. Law is supposed to level the playing field between those with and those without power—making workplaces safer, environments cleaner, and markets fairer, and ensuring people are paid what they are owed and receive the protections and benefits to which they are entitled. But our overly complex legal systems box people out and trip them up as often as not. According to the U.N., over 4 billion people live “outside of the rule of law,” with little to no access to basic legal protections. They have no legal means to ensure they are paid for their work or protected against theft of their land or destruction of their livelihoods by the few with power and resources.


As I explored in my 2017 book Rules for a Flat World, this lack of basic legal infrastructure is a key reason that countries remain poor. But technology, now including advanced language models, offers the promise of providing what billions in legal and economic aid have not.

Imagine, for example, a smartphone-based method for registering simple contracts, fine-tuned on local standards and values to fairly adjudicate disputes. Technologies to build effective and locally responsive tools to support contracting, credit, and property protections are currently distant but on the horizon. We just need to make sure that these tools don’t simply boost the legal resources of the powerful corporations that already dominate the global economy.

And not so far off is the opportunity to substantially address the grotesquely unequal access to legal advice that we tolerate: corporations have it; individual workers, consumers, and citizens don’t. For a decade, I’ve been holding up my iPhone at law conferences on access to justice and asking, “Why don’t I have an app on this phone that can take a picture of any legal document I get—a contract, a subpoena, a notice to appear, a collection notice—and in seconds help me understand, in my own language, what it means and what I should or could do next?”

On its own, GPT-4 is not a lawyer in everyone’s pocket. Trained on the public internet, GPT-4 is at risk of reproducing online errors, making up cases and principles of law, or confusing jurisdictions. But GPT-4 could deliver legal help to individuals if it were trained on actual cases and statutes in the relevant jurisdictions and built to check and cite reliable, up-to-date legal sources. Now. At least three companies have been working with OpenAI to build exactly this functionality: Casetext, Harvey, and Ironclad. The only problem is that their technology is currently available only to lawyers and their corporate clients. But with regulatory reform and design modifications, it could be available to everyone. We need appropriate licensing standards for these legal technologies, and we need to fine-tune them to address the legal challenges of ordinary people. But we can build those regulatory regimes rapidly: I worked with the Utah Supreme Court to develop a blueprint for this in 2019, and it was implemented in 2020.

What does legal help have to do with human flourishing?
Lots.

Human flourishing depends on economic security and reasonable protection from oppression—from the mundane to the life-critical. People with easy and reliable access to legal help can say “no” when their employer says they must work overtime for free or sign an illegal contract. They have recourse if local officials refuse to issue a business permit. They can get help writing an effective letter to dissuade a landlord from pursuing an unlawful eviction. They will know what to tell the judge and what evidence to bring to court to challenge the eviction. They will be able to identify, and know what to do about, bogus notices to appear in court under threat of arrest unless they pay high fees. These abuses may sound petty, but they are consequential. In 2015, the U.S. DOJ reported on its investigation of policing in Ferguson, Missouri, launched after the killing of Michael Brown, one of the deaths that sparked the Black Lives Matter movement. The investigators found systemic legal abuses, such as bogus notices, behind the extraordinary number of outstanding arrest warrants in the city: 16,000 for a population of 21,000.

Think of the different life chances for a young person arrested on a misdemeanor drug charge whose wealthy parent can hire a lawyer compared to one whose parent cannot afford expensive legal help. One ends up with community service while the other ends up in jail and loses eligibility for college funds. Reliable legal help won’t solve economic inequality and racism, but it will take away a key tool that advantages the powerful.

Strong economic forces have brought us to where we are today, with awe-inspiring technologies set to transform everything. Those same forces put us at risk of being propelled off the narrow path ahead, destabilizing an already rocky global terrain, and preventing everyone from reaching higher peaks.

There’s no point building this technology if we cannot find our way. But I think we can.

The views, opinions, and proposals expressed in this essay are those of the author and do not necessarily reflect the official policy or position of any other entity or organization, including Microsoft and OpenAI. The author is solely responsible for the accuracy and originality of the information and arguments presented in their essay. The author’s participation in the AI Anthology was voluntary and no incentives or compensation were provided.

Hadfield, G. (2023, May 30). AI’s Striding Edge. In: Eric Horvitz (ed.), AI Anthology. https://unlocked.microsoft.com/ai-anthology/gillian-hadfield


Gillian K. Hadfield

Gillian K. Hadfield holds the Schwartz Reisman Chair in Technology and Society at the University of Toronto, where she is a professor of law and economics, and a CIFAR AI Chair at the Vector Institute for Artificial Intelligence. She is also the inaugural Director of the Schwartz Reisman Institute for Technology and Society. She has been a senior advisor to the policy research team at OpenAI since 2018 and is the author of Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy.
