Can global technology governance anticipate the future?

Reproducing the atmosphere of a large data centre using an infinity chamber at the Futurium in Berlin, a space for dialogue between governments, technology scientists, industry, and society. Photo by JOHN MACDOUGALL/AFP via Getty Images.

Technology governance is beset by the challenges of how regulation can keep pace with rapid digital transformation, how governments can regulate in a context of deep knowledge asymmetry, and how policymakers can address the transnational nature of technology.

Keeping pace with, much less understanding, the implications of digital platforms and artificial intelligence (AI) for societies is increasingly challenging as technology becomes both more sophisticated and more ubiquitous.

To overcome these obstacles, there is an urgent need to move towards a more anticipatory and inclusive model of technology governance. There are some signs of this in recent proposals by the European Union (EU) and the UK on the regulation of online harms.

Regulation failing to keep up

The speed of the digital revolution, further accelerated by the pandemic, has largely outstripped policymakers’ ability to provide appropriate frameworks to regulate and direct technology transformations.

Governments around the world face a ‘pacing problem’, a phenomenon described by Gary Marchant in 2011 as ‘the growing gap between the pace of science and technology and the lagging responsiveness of legal and ethical oversight that society relies on to govern emerging technologies’.

This ever-growing rift, Marchant argues, has been exacerbated by the increasing public appetite for and adoption of new technologies, as well as by political inertia. As a result, legislation on emerging technologies risks being ineffective or out of date by the time it is implemented.

Effective regulation requires a thorough understanding both of the underlying technology – its design, processes, and business model – and of how current or new policy tools can be used to promote principles of good governance.

AI, for example, is penetrating all sectors of society and spanning multiple regulatory regimes without regard for jurisdictional boundaries. As technology is increasingly developed and applied by the private sector rather than the state, officials often lack the technical expertise to adequately comprehend and act on emerging issues. This increases the risk of superficial regulation that fails to address the underlying structural causes of societal harms.

This significant knowledge asymmetry between those who seek to regulate and those who design, develop, and market technology is prevalent in most technology-related domains, including powerful online platforms and providers such as Facebook, Twitter, Google, and YouTube.

For example, the algorithms that social media companies use in their business models to promote online content – harmful or otherwise – remain largely inaccessible to governments and researchers, so, to a crucial extent, regulators are operating in the dark.

The transnational nature of technology also poses additional problems for effective governance. Digital technologies intensify the gathering, harvesting, and transfer of data across borders, challenging administrative boundaries both domestically and internationally.

While there have been some efforts at the international level to coordinate approaches to the regulation of – for example – AI and online content, more work is needed to promote global regulatory alignment, including on cross-border data flows and antitrust.

Reactive national legislative approaches are often based on targeted interventions in specific policy areas, and so risk failing to address the scale, complexity, and transnational nature of socio-technological challenges. Greater attention needs to be paid to how regulatory functions and policy tools should evolve to govern technology effectively, requiring a shift from a reactive and rigid framework to a more anticipatory and adaptive model of governance.

Holistic and systemic versus mechanistic and linear

Some recent proposals for technology governance may offer solutions. The EU has published a series of interlinked regulatory proposals – the Digital Services Act, the Digital Markets Act, and the European Democracy Action Plan – that integrate several novel and anticipatory features.

The EU package recognizes that the solutions to online harms such as disinformation, hate speech, and extremism lie in a holistic approach which draws on a range of disciplines, such as international human rights law, competition law, e-commerce, and behavioural science.

It combines light-touch regulation – such as codes of conduct – with hard-law requirements such as transparency obligations. Codes of conduct give digital platforms flexibility in how they meet requirements, and can be updated and tweaked relatively easily, enabling regulation to keep pace as technology evolves.

As with the EU Digital Services Act, the UK’s recent proposals for an online safety bill are innovative in adopting a ‘systems-based’ approach which broadly focuses on the procedures and policies of technology companies rather than the substance of online content.

This means the proposals can be adapted to different types of content, and differentiated according to the size and reach of the technology company concerned. This ‘co-regulatory’ model recognizes the evolving nature of digital ecosystems and the ongoing responsibilities of the companies concerned. The forthcoming UK draft legislation will also be complemented by a ‘Safety by Design’ framework, which is forward-looking in focusing on responsible product design.

By tackling the complexity and unpredictability of technology governance through holistic and systemic approaches rather than mechanistic and linear ones, the UK and EU proposals represent an important pivot from reactive to anticipatory digital governance.

Both sets of proposals were also the result of extensive multistakeholder engagement, including between policy officials and technology actors. This engagement broke down silos between the technical and policy/legal communities and helped bridge the knowledge gap between dominant technology companies and policymakers, facilitating a more agile, inclusive, and pragmatic regulatory approach.

Coherence rather than fragmentation

Anticipatory governance also recognizes the need for new coalitions to promote regulatory coherence rather than fragmentation at the international level. The EU has been pushing for greater transatlantic engagement on regulation of the digital space, and the UK – as holder of the G7 presidency in 2021 – aims to work with democratic allies to forge a coherent response to online harms.

Meanwhile, the OECD’s AI Policy Observatory enables member states to share best practice on the regulation of AI, and a growing number of states – including France, Norway, and the UK – are using ‘regulatory sandboxes’ to test and build AI or personal data systems that meet privacy standards.

Harriet Moynihan, Senior Research Fellow, International Law Programme, and Marjorie Buchser, Executive Director, Digital Society Initiative.
