AI and Governance: to what end?

Denis Balaguer
Nov 28, 2023


Asimov and the positronic board (by Stable Diffusion)

Asimov was wrong, and this helps explain what happened at OpenAI last week. It also holds lessons for the broader game of AI governance.

I am a big fan of the science fiction author Isaac Asimov, and my choice of undergraduate degree was directly influenced by everything of his I read when I was younger.

But one thing has always bothered me about the connection he drew between his two major works, the Robot series and the Foundation series (minimizing spoilers): the “Zeroth Law” of Robotics, which Asimov added to the famous three laws formulated in his classic stories about positronic brains. This law supersedes all the others: a robot may not harm humanity or, through inaction, allow humanity to come to harm.

Beyond the aesthetic aspect, this narrative device feels like a “deus ex machina” (unintentional irony). It is an arbitrary solution that conceals a serious foundational problem: how to define “the good of humanity” objectively and unequivocally.

This is a phenomenon discussed in economics, most notably in connection with Kenneth Arrow’s Impossibility Theorem. Arrow himself won the 1972 Nobel Prize in Economics.

The theorem, also known as the paradox of social choice, states that with three or more alternatives, no ranked voting system can simultaneously satisfy a small set of desirable fairness criteria, among them unanimity, independence of irrelevant alternatives, and the absence of a dictator. The implication is that, under realistic democratic conditions, there is no perfect method for transforming individual preferences into a coherent social choice.
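To make the paradox concrete, here is a minimal sketch in Python of the Condorcet cycle, the classic example that motivates Arrow’s result (the ballots below are illustrative, chosen to exhibit the cycle): three voters with perfectly reasonable individual rankings produce a collective preference that runs in a circle.

```python
from itertools import combinations

# Three voters, each ranking candidates A, B, C from most to least preferred.
# These are the classic cyclic preferences of the Condorcet paradox.
ballots = [
    ["A", "B", "C"],  # voter 1
    ["B", "C", "A"],  # voter 2
    ["C", "A", "B"],  # voter 3
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majority comparisons.
for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# Prints:
#   majority prefers A over B
#   majority prefers C over A
#   majority prefers B over C
# A beats B and B beats C, yet C beats A: majority rule yields no
# coherent social ranking, even though every individual ranking is coherent.
```

Arrow’s theorem generalizes this: no aggregation rule can be guaranteed to avoid such incoherence while preserving the fairness criteria above.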

As the economist Thomas Sowell once put it, “There are no solutions; there are only trade-offs.”

This context casts serious doubt on any claim to an objective analysis of “alignment.” To take a simple assessment: even though principles of caution are essential in AI research and application (as they are in many scientific and technological domains), how do we weigh them against the cost of “slowing down” the development of these solutions, which may mean forgoing potential growth in GDP per capita?

To take a historical example: the steam engine, a fundamental technology of the Industrial Revolution, is at the root of the climate issue centuries later. Would it have been better to halt this development in the 18th century, even if it meant foregoing the largest leap in economic and social development in human history?

If this is a controversial and difficult-to-resolve issue in the realm of social decisions, the events at OpenAI, read in light of the information available at the moment, highlight a more specific and particular dimension: that of corporate governance.

Companies’ governance systems include boards of directors that act as representatives of shareholders. Board members have well-defined legal duties, such as the duties of care, loyalty, and confidentiality that make up their fiduciary responsibility.

In a for-profit company, these duties are exercised in the interest of investors. This does not necessarily mean prioritizing short-term economic outcomes, but it certainly means pursuing the company’s sustainable competitive success, economic outcomes included.

OpenAI’s governance structure is an unusual model: a for-profit company subordinate to a nonprofit organization, with the board of directors attached to the latter. What principles should guide decision-making in such a structure? Even in a traditional company there is considerable ambiguity and dissent over strategic direction, and there the “objective function” is crystal clear: the company must keep generating returns on capital for decades to come.

How can direction be set in an institution like OpenAI, whose declared fiduciary duty is to humanity, as stated in the “OpenAI Charter”?

Whatever the much-discussed questions about the board members’ experience, it seems they found themselves at the heart of the paradox of social choice when exercising their fiduciary duty.

Asimov once said that we have a problem because “knowledge advances faster than wisdom.” He was right, as usual — despite the initial line of this text.

Let the events of the past weeks serve as an opportunity to add a little more wisdom to all the knowledge we have accumulated in the year since ChatGPT was launched.

While caution is always a good principle, it is worth considering this episode as a “cautionary tale” for discussions on AI governance. Lack of objectivity serves more to obscure than to clarify — and ultimately hinders good governance.
