What history can teach us about A.I.’s Great Leap Forward


The first in-person meeting between China’s Mao Zedong and Soviet leader Nikita Khrushchev in 1957 shows us how awesome potential can generate awful policy. It was the 40th anniversary of the October Revolution. Stalin was dead and had been denounced by Khrushchev the previous year. For Communists around the world, it was time to look forward, and so over 60 national parties met in Moscow to discuss the future of communism in the wake of the Second World War. Of all the delegations to come to Russia, only one, the Chinese delegation, was lodged in the Kremlin–in the rooms once belonging to Catherine the Great.

Mao came ready to make a point: demographics made it certain that China would be a world power, soon. So, at dinner one evening when Khrushchev bragged that the Soviet Union would eclipse US agricultural production in 15 years, Mao could not resist: “I can tell you that in 15 years, we may well catch up with or overtake [Britain’s production of steel].” Tragically, this became policy–the Great Leap Forward. The resulting collectivization and abrupt shift from farming to the production of steel was a disaster. Millions died.

Today, we stand at the threshold of another great potentiality–the advent of generative A.I. But history shows the start of a brave new adventure–whether it's the industrialization of China or the development of generative A.I.–is not the best time for projections. Thus, McKinsey’s recent estimate that generative A.I. could add “the equivalent of $2.6 trillion to $4.4 trillion annually” should prompt healthy suspicion (the U.K.’s entire GDP in 2021 was $3.1 trillion).

We find ourselves at the top of a mountain with a particularly scenic view. Everything is possible for A.I. because, actually, so little has happened. And like the Chinese demographic potential of the 1950s, the possibility for growth (in all senses) appears unbounded. Yet so much is unknown. Indeed, it would appear the most creative enterprises man has yet conceived may be disrupted first: writing, art, and especially music. This would not have been anyone’s guess 20 years ago; most would have picked accounting.

Leaders must engage with this new technology, mindful that projections made from atop mountains are often errant, and sometimes dangerous.

First, there is the issue of existing law. Regulations such as the EU’s GDPR and even some state omnibus privacy laws in the U.S. require companies to provide opt-outs from “automated decision-making.”

Any decision affecting the legal or privacy rights of an individual that is made exclusively by a machine or an algorithm must be accurate, fair, and subject to appeal. There must be a methodology for the review of individual cases. In some cases, individuals must be able to opt out, ask for their data, understand the conclusion reached by the A.I., and ultimately have their personal data deleted.

This means not only evaluating the A.I. programs themselves but also (and perhaps more so) their integration into and throughout existing programs and processes.

Then there is the question of future regulation, which will likely follow one of two paths. Regulations could be balkanized and politically erratic, as has been the case with cryptocurrencies. What will be possible in one jurisdiction will be prohibited in another. This will include both inputs (what data can we use to train/build/develop) and outputs (what can we do with the A.I.). Thus, the selection of jurisdictions (and datasets) at the outset will be critical. Here, predictive, strategic, and indeed political thought will be paramount. This appears the more likely path right now.

Alternatively, major world powers could harmonize their regulatory efforts. Rishi Sunak, the U.K. Prime Minister, recently announced that the U.K. will host a global summit on artificial intelligence, with harmonization as the clear goal of the event. His foreign secretary echoed these calls when chairing an A.I.-focused UN meeting on July 18. But a cursory review of the current state of legislation around the world indicates there is much work to be done.

The EU continues to consider an A.I. Act that would impose significant ex-ante obligations on purveyors of any high-risk A.I. system, obligations that could have the effect of virtually halting A.I. innovation in the region.

The U.S. has been more cautious and has yet to propose comprehensive federal legislation addressing the issue, although narrower bills have been proposed and a smattering of states and localities have addressed the use of A.I. in limited contexts.

China has so far prevented access to ChatGPT and very recently announced updated guidelines for generative A.I. But as China’s reaction to cryptocurrencies should have made clear, such regulations should not be considered the final word as China’s interests shift. Russia indicated at the July 18 meeting that the issue was complex, and the UN might not be the best place to tackle it.

Few phenomena can claim to revolutionize security, the economy, worker productivity, thought, art, discourse, and the very fate of man–but that is exactly what is claimed about A.I.

In terms of impact, it is being compared to the advent of electricity, the telegraph, and the printing press, and that may well understate the matter. The difference is that A.I. is inherently more unpredictable because, at a fundamental level, the arc of its development is beyond human understanding–and, to a degree, beyond our control.

We are at an inflection point. History will judge us, and judge us harshly, should we fail to appreciate the dangers in this vital moment, or conversely, stifle some great potential. We should remember the Great Leap Forward–great potential can deceive as much as it can excite. We must approach this new moment with humility, be ready to reassess our assumptions, and constructively engage with earnestly held criticisms–even if that means abandoning our aspirations in the face of danger.

Christian Auty is a partner with Bryan Cave Leighton Paisner and a leader of the firm’s U.S. Global Data Privacy and Security Team. He can be reached at christian.auty@bclplaw.com

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
