Artificial intelligence is taking the world by storm. ChatGPT and other new generative AI technologies have the potential to revolutionize the way people work and interact with information and each other. At best, these technologies allow humans to reach new frontiers of knowledge and productivity, transforming labor markets, remaking economies, and leading to unprecedented levels of economic growth and societal progress.
At the same time, the pace of AI development is unsettling technologists, citizens, and regulators alike. Even ardent techno-enthusiasts—including figures such as OpenAI CEO Sam Altman and Apple co-founder Steve Wozniak—are issuing warnings about how unregulated AI can lead to uncontrollable harms, posing severe threats to individuals and societies. The direst predictions concern AI’s ability to obliterate labor markets and make humans obsolete or—under the most extreme scenario—even destroy humanity.
With tech companies racing to advance artificial intelligence capabilities amid intense criticism and scrutiny, Washington is facing mounting pressure to craft AI regulation without quashing innovation. Different regulatory paradigms are already emerging in the United States, China, and Europe, rooted in distinct values and incentives. These different approaches will not only reshape domestic markets—but also increasingly guide the expansion of American, Chinese, and European digital empires, each advancing a competing vision for the global digital economy while attempting to expand its sphere of influence in the digital world.
As the race for AI dominance heats up, how states choose to govern artificial intelligence will have a profound impact on the future of technology and society. With the AI regulation debate in Washington at a critical juncture, the United States cannot afford to sit on the sidelines while China and Europe decide these fundamental issues for the world.
DIGITAL EMPIRES
When it comes to digital regulation, the United States is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach. The U.S. model reflects an uncompromising faith in markets and reserves a limited role for the government. It centers on protecting free speech, the free Internet, and incentives to innovate. Washington views digital technologies as a source of economic prosperity and political freedom and, consequently, a tool for societal transformation and progress—a view that is reflected in its reluctance to put constraints on AI. The U.S. approach to AI regulation is profoundly informed by a deep-seated techno-optimism and a relentless pursuit of innovation and technological progress, with U.S. tech companies revered as drivers of that progress.
Washington views AI as an opportunity to drive economic growth and solidify U.S. tech and military supremacy in the midst of an intensifying U.S.-Chinese tech competition and mounting geopolitical tensions. Washington’s single-minded focus on economic and geopolitical primacy has made regulation an afterthought. As a result, the United States has crafted no substantive federal AI legislation and simply suggested voluntary standards that tech companies can choose to adopt or ignore. The Blueprint for an AI Bill of Rights, for instance—a handbook published by the White House in October 2022—offers guidance for developers and users of AI on how to safeguard the American public’s rights in the AI age but ultimately places its trust in tech companies’ self-regulation. Prominent policymakers including Lina Khan, the chair of the Federal Trade Commission, have warned that leaving AI regulation in the hands of businesses could come at a steep cost, and they have argued that government regulation will be critical to ensuring that artificial intelligence technology benefits all. But comprehensive AI regulation remains a distant prospect in the United States, given the political dysfunction in Congress and persistent concerns among decision-makers that any such regulation would likely compromise innovation and undermine American technological leadership.
China, in contrast, has adopted a state-driven approach toward digital regulation as part of an ambitious effort to make China the world’s leading technology superpower. Beijing’s hands-on approach to the digital economy is also aimed at tightening the political grip of the Chinese Communist Party (CCP) by deploying digital technologies as a tool for censorship, surveillance, and propaganda. The Chinese government facilitated the growth of the country’s tech industry early on. In recent years, however, Beijing has undertaken a harsh and proactive crackdown on its tech sector in the name of advancing “common prosperity”—and to ensure that tech giants do not overpower the Chinese state.
Recognizing the potential economic and political benefits of AI, the Chinese government is heavily subsidizing new tools that will improve its ability to conduct mass surveillance of its citizens in the name of preserving social stability. China’s authoritarian technology regime gives it an incentive to regulate AI: although AI-powered facial recognition can aid Beijing’s efforts to exert political control, ChatGPT-type generative AI can undermine that control. Generative AI relies on large quantities of data, and the technology continues to evolve as it is being deployed. This poses a novel challenge to the Chinese censorship regime, which may struggle to keep up.
In the face of these potential challenges, Beijing is determined to maintain a tight grip on the country’s AI capabilities. In 2022, the Chinese government introduced landmark regulations targeting deepfake technologies and recommendation algorithms, which risk undermining Chinese citizens’ basic rights and trust in digital technologies—and also threaten the CCP’s control of China’s digital economy. In April, the government issued draft regulations on generative AI that hold developers responsible for prohibited or illegal content—including content that deviates from the political values of the CCP. These legislative developments suggest that the Chinese government is committed to steering the country’s AI future with a heavy hand, encouraging technological progress while ensuring that AI will not undermine social stability and the political control of the CCP.
The European Union has departed from both the United States and China in pioneering its own regulatory model that focuses on the rights of users and citizens. In the European view, AI heralds a digital transformation with such disruptive potential that it cannot be left to the whims of tech companies but must instead be firmly anchored in the rule of law and democratic governance. In practice, this means that governments need to step in to uphold the fundamental rights of individuals, preserve the democratic structures of society, and ensure a fair distribution of the benefits from the digital economy.
This European rights-driven approach is already reflected in pathbreaking EU regulations such as the General Data Protection Regulation, which protects citizens’ data privacy. The EU has also recently adopted the Digital Markets Act, which imposes obligations on so-called digital gatekeepers, including U.S. tech giants, to curtail their dominance and protect competition; and the Digital Services Act, which establishes rules holding online platforms accountable for the content they host. Advancements in AI are pushing Europe even further in this direction. EU lawmakers recently passed a comprehensive draft law known as the AI Act, which seeks to mitigate risks posed by AI and ensure that individuals’ fundamental rights are protected. Under the draft legislation, which is expected to be finalized by the end of this year, AI systems that exploit individuals’ vulnerabilities or manipulate human behavior will be prohibited. Predictive policing will be outlawed, as will the use of real-time facial recognition in public places, as it compromises fundamental rights and freedoms and places large parts of the population under constant surveillance. AI systems that can lead to discrimination in people’s access to employment or public benefits will also be tightly regulated.
The EU was on the verge of adopting the AI Act when OpenAI introduced ChatGPT to the public in November 2022, presenting European lawmakers with a thorny challenge: how to regulate a general-purpose AI that can be deployed toward both risky and safe ends. This question will likely dominate the last stages of the legislative process, but the European Parliament has already indicated that generative AI must comply with various transparency requirements and be designed in ways that do not violate fundamental rights or generate illegal content. Once this binding legislation is finalized, it will become the first comprehensive AI regulation in the world.
NEITHER MARKET NOR STATE
Washington’s, Beijing’s, and Brussels’s different visions for an age of artificial intelligence are a major step in the ongoing construction of three separate “digital empires” that compete for control over the future of technology and attempt to expand spheres of influence in the digital world as other countries look for guidance on AI legislation.
AI’s promise to fuel technological progress and economic growth, together with the challenges of regulating a fast-evolving technology, will likely cause some governments to opt for U.S.-style voluntary guidance. The American market-driven model has generated tremendous wealth and fueled enviable technological progress. At the same time, it is increasingly clear that the lack of regulation on U.S. tech companies has come with a price. Washington has remained blind to many market failures, including the repeated abuse of market power by leading tech companies, such as Google’s monopolization of digital advertising technologies to the detriment of its rivals. These market failures, together with the proliferation of disinformation and revelations regarding the tech companies’ exploitation of users’ personal data, are fueling widespread and growing distrust of tech companies. To curtail the outsized power of U.S. tech companies over overseas Internet users, governments across the world are now seeking to reassert control over their digital markets and rein in leading tech companies. Even the American public and U.S. lawmakers across the political spectrum are now calling for greater government oversight of the tech industry.
As the appeal of the United States’ approach wanes, the Chinese state-driven model is gaining ground. China is already building a Digital Silk Road, exporting AI-driven surveillance technologies and other digital infrastructure to governments around the world. Authoritarian governments find China’s model appealing given Beijing’s apparent ability to combine thriving innovations with political control. But generative AI may shift their views by revealing that with stricter control comes less innovation. Despite leading the world in AI-driven surveillance technologies, China continues to lag behind the United States in developing generative AI systems. This is in part because of the country’s censorship rules that limit the data that can be used to train foundation models—demonstrating that Internet freedom may better serve innovation and economic growth, at least in this class of digital technology.
If the U.S. market-driven model seems too permissive, and the Chinese state-driven model seems too restrictive, the European approach might represent a “Goldilocks” alternative: a third way that seeks to check corporate power while protecting fundamental rights and preserving democratic institutions. Amid mounting backlash against U.S. tech companies, governments across the world—including those in Australia, Brazil, Canada, and South Korea—are moving away from a market-driven framework and instead increasingly emulating European digital regulations to regain control over their digital economies.
The EU may shape global AI development regardless of whether other governments follow its regulatory approach. Tech companies often extend the EU’s stringent regulations across their global business operations to standardize their products and services worldwide—a phenomenon known as the Brussels Effect. AI developers who want to use European data to train algorithms, for instance, will be bound by the EU’s AI Act even beyond the EU’s borders. Should they want to escape the EU’s regulatory constraints, they will need to develop entirely new algorithms without European data. In this way, Europe may well have a hand in shaping AI regulation abroad as well as at home, globalizing Europe’s rights-driven norms and extending the EU’s digital sphere of influence in the process. In a telling example, in May OpenAI’s Sam Altman threatened to pull ChatGPT out of the EU, citing emerging regulatory constraints—only to reverse his threat days later amidst sharp criticism from European lawmakers. The dominance of Europe’s regulatory approach would likely generate a mixed response; some foreign citizens and governments may welcome Europe’s efforts and take comfort in knowing that the EU’s digital protections extend to them. Other foreign stakeholders, however, may well accuse the EU of regulatory imperialism, arguing that the Brussels Effect risks undermining innovation, economic growth, and societal progress everywhere—in addition to compromising foreign governments’ ability to regulate AI according to their own values and interests.
THE FUTURE OF THE AI REVOLUTION
As the EU and China lead the race to regulate AI, Washington needs to decide whether it wants to have a role in building the digital world of the future. The salience of the U.S.-Chinese tech competition may persuade Washington to continue to err on the side of unconstrained AI development, with U.S. policymakers insisting on the virtues of free markets and continuing to place trust in tech companies’ ability and desire to mitigate any harms associated with AI. The United States may also find itself incapable of regulating AI technology simply because of the dysfunction of the U.S. political process, which has already stymied meaningful AI legislation. Relentless lobbying by tech companies has further contributed to the endurance of the status quo.
Three recent developments, however, suggest that the United States may abandon its techno-libertarian approach and embrace AI regulation, aligning itself more closely with the EU. First, the domestic support for regulation is reaching a tipping point, with prominent AI experts and developers such as Altman and the AI pioneer Geoffrey Hinton joining lawmakers and the general public in their support for regulation. In this new political environment, regulatory inaction is harder to defend. Second, the United States may conclude that it prefers to set regulations jointly with the EU rather than have the EU regulate the U.S. market unilaterally via the Brussels Effect. Third, the United States’ and the EU’s shared concern about China’s growing global influence provides a strong impetus for closer transatlantic cooperation.
The United States has repeatedly stressed its desire to partner with the EU and other democratic allies to promote norms that are consistent with civil rights and democratic values and to consolidate a united democratic front against China and its digital authoritarian allies. The Biden administration increasingly draws ideological battle lines in the fight for tech dominance, framing the competition as a battle between techno-democracies and techno-autocracies. If that is indeed how Washington sees the contest, then the case for U.S.-EU cooperation is overwhelming. The United States and the EU could put aside their differences and develop joint standards on AI, designed to promote innovation, protect fundamental rights, and preserve democracy. However, as many parts of the world slide toward authoritarianism, Washington and Brussels will struggle to curtail the growing demand for Chinese surveillance technologies, risking the possibility that AI will often be deployed as a tool to undermine democracy.
In the coming years, there will be clear winners and losers not only in the race to develop AI technologies but also in the competition among the regulatory approaches that govern those technologies. These competing models will empower tech companies, governments, or digital citizens in different ways, with far-reaching economic and political consequences. How governments go about those choices will determine whether the unfolding AI revolution will serve democracy and deliver unprecedented prosperity, or lead to grave societal harms—or even unforeseeable catastrophe.
Anu Bradford is Professor at Columbia Law School and the author of the forthcoming book Digital Empires: The Global Battle to Regulate Technology.