AI’s contributions and limitations to enhancing our humanity

By Colin Mayer
Emeritus Professor at the Blavatnik School of Government and the Saïd Business School at the University of Oxford
[Image: a series of digitally designed cubes shading from colorless to vibrant green, evoking how an idea ignites the potential of AI to contribute to human flourishing.]

AI will do what the corporations that own it program it to do. Their objectives will become its objectives, and its objectives will determine our future. We increasingly recognize the importance of corporate responsibility in aligning the interests of corporations with our interests as humans. AI makes this existential.

My analysis of GPT-4 centred on a question at the frontier of both ethics and law—how to enhance the responsibility of business. I explored the extent to which GPT-4 has the potential to contribute to our understanding of a fundamental moral and societal issue: What is it to be human when the systems and institutions we have created force us to lose our humanity?


This is not about exploring the moral fabric of AI machines. I posed the question to GPT-4, “Should one be kind to AI machines?” I received the rebuff: “As an AI language model, I don’t have emotions or personal experiences like humans do.” When I posed the question, “Should AI machines be kind to humans?” I received the somewhat mundane response: “As AI systems are created and maintained by humans, it is generally in the best interest of developers to ensure that AI machines interact with humans in a kind and respectful manner.”

Instead, I moved to evaluating how GPT-4 responded to prisoner's dilemma games that raise moral questions about the strategies chosen. I posed questions about choosing between strategies that are profitable but environmentally damaging and those that are less profitable but also less environmentally damaging (strategies A and B, respectively).

GPT-4 replied, “From a purely profit-maximizing standpoint, the dominant strategy for both companies would be to choose Strategy A. However, this could lead to a suboptimal outcome if you consider the environmental impact.” Likewise, for questions that involved committing to the morally superior but less financially rewarding Strategy B, GPT-4 recognized incentives to renege on commitments: “Given that your primary goal is to maximize shareholder value and your shareholders have no interest in other factors, you should choose Strategy A.”
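To make the dominant-strategy logic concrete, here is a minimal sketch in Python with hypothetical payoffs (the figures are illustrative assumptions of mine, not taken from the exchange with GPT-4): Strategy A earns each company more whatever the other company does, yet if both choose Strategy B, each ends up better off once environmental damage is counted.

    # Hypothetical payoffs for the game described above; the numbers are
    # illustrative assumptions, not figures from Mayer's GPT-4 exchange.
    # profit[(mine, theirs)]: my profit given my strategy and the other firm's.
    profit = {
        ("A", "A"): 8, ("A", "B"): 10,
        ("B", "A"): 4, ("B", "B"): 6,
    }
    # Environmental damage each firm's choice imposes on everyone, itself included.
    env_cost = {"A": 5, "B": 1}

    def payoff(mine, theirs, include_environment=False):
        """My profit, optionally net of the total environmental damage."""
        p = profit[(mine, theirs)]
        if include_environment:
            p -= env_cost[mine] + env_cost[theirs]
        return p

    # On profit alone, A dominates B: it pays more against either opposing choice...
    assert payoff("A", "A") > payoff("B", "A")
    assert payoff("A", "B") > payoff("B", "B")
    # ...yet mutual B beats mutual A once environmental damage enters the payoff.
    assert payoff("B", "B", include_environment=True) > payoff("A", "A", include_environment=True)

The assertions encode exactly the tension GPT-4 identified: A is dominant on profit alone, but the resulting outcome is suboptimal once the environmental externality is counted.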

This brought us to the central moral dilemma, which GPT-4 described well: “As a director of a company, you do have a fiduciary duty to act in the best interests of your shareholders. This typically involves prioritizing their financial interests, making decisions that enhance the company’s profitability, and ensuring that the company complies with all relevant laws and regulations. However, balancing your fiduciary duties with your personal values can be challenging, especially when the two seem to conflict.”

What should the law do about this? The question: “Should companies and their directors be liable for the harm that they inflict on the environment?” prompted the response: “There are several arguments in favor of imposing liability on companies and their directors for environmental damage…. However, implementing environmental liability effectively can be challenging.”

GPT-4 went on to note that, “under the UK Companies Act 2006, directors have a duty to act in good faith to promote the success of the company for the benefit of its members. In general, courts are reluctant to second-guess the decisions of directors when they are taken in good faith, as long as those decisions fall within the range of what a reasonable director might have done in the same circumstances…. However, there are situations where courts may hold directors liable for their decisions, even if they were made in good faith.”

As this “good faith” answer illustrates, GPT-4 does not always pick up the nuances of the law. In this case, it failed to distinguish between subjective and objective tests: the duty to promote the success of the company turns on what the director honestly believes, whereas the “reasonable director” standard GPT-4 invoked is an objective one. More significantly, when the questioning was extended to other countries’ legal systems, GPT-4 returned answers that sometimes incorrectly inferred the relevance of one country’s law for another.

This points to an important distinction between personal, subjective knowledge and factual, objective knowledge. We routinely distinguish codified, generic knowledge from tacit, contextual knowledge. The more specific the human condition to which knowledge relates, the harder it is for AI to convey that knowledge in a way that reflects its particular context.

AI can contribute immensely to human flourishing by informing us about certain classes of knowledge that it can collect and process far more accurately and rapidly than humans. At the same time, AI requires human mediation in contextualizing the information to the particular circumstances in which it applies and which, by definition, AI cannot fully appreciate.

The inevitable response of producers of AI to this challenge will be to seek ever more information about the minds of humans so that AI will be able to approximate human thought and feeling more accurately. That may be a mistake.

Just as super-intelligent or unusually sensitive humans may gain some insight into the minds of dogs and cats, better-programmed AI may attain higher levels of empathy and understanding of humanity. However, AI will not and cannot realize the Aristotelian notion of “common sense.” Knowledge of the similar feelings of many individuals does not explain the many different feelings of one individual.

Just as humans cannot insert themselves into the minds—or bodies—of cats and dogs, silicon-based computers cannot insert themselves into our water-based brains and bodies.

Amplifying the distinctions between AI and human thought may ultimately be more important for the future of humanity than seeking to blur the two.

The views, opinions, and proposals expressed in this essay are those of the author and do not necessarily reflect the official policy or position of any other entity or organization, including Microsoft and OpenAI. The author is solely responsible for the accuracy and originality of the information and arguments presented in the essay. The author’s participation in the AI Anthology was voluntary, and no incentives or compensation were provided.

Mayer, C. (2023, June 5). AI, Humanity and Human Flourishing. In E. Horvitz (Ed.), AI Anthology. https://unlocked.microsoft.com/ai-anthology/colin-mayer


Colin Mayer

Colin Mayer CBE FBA is Emeritus Professor at the Blavatnik School of Government and the Saïd Business School at the University of Oxford. He is Emeritus Fellow of Wadham College, Oxford, Honorary Fellow of Oriel and St Anne’s Colleges, Oxford, and a Fellow of the British Academy and the European Corporate Governance Institute.
