New variants mean we’re only as strong as the weakest national health systems. Identifying those systems is hard

The news about omicron, the new coronavirus variant of concern, will cast a shadow on World Health Organization debates that were already due to take place next week over what a new pandemic treaty should include. The agenda for these discussions focuses on how to strengthen the systems for monitoring emergent pathogens, and for scrutinizing countries’ infrastructures for dealing with outbreaks.

As United Nations Secretary General Antonio Guterres often repeats, “We are only as strong as the weakest health system in our interconnected world.” Global health funders are already channeling money into strengthening health emergency preparedness in low- and middle-income countries. The World Bank has set up a financing vehicle to provide support to low-income countries, which has thus far disbursed over $41 million for health preparedness projects.

There is one big problem with helping countries prepare for future health emergencies: we lack good data that would help funders and policymakers understand which countries are most or least prepared, and which policy areas need strengthening. Different countries have wildly different policy environments. Without sufficient knowledge, it is hard to make informed decisions about where the money should go, how fast, and to what precise end.

This isn’t just an issue in the health sector — constructing good cross-national data on policy is inherently complicated, as indicated by the recent World Bank decision to scrap its Doing Business indicators, which sought to rank different countries’ business environments. But the continued threat of new variants, and the likelihood of entirely new infections developing and spreading in the future, means that pandemic preparedness is especially politically salient.

In our research, we analyzed three cross-national health emergency preparedness indicators that are available, looking at how they had been put together. Each of these represents a different way of figuring out which countries are well prepared, and which are badly situated to respond to a new pandemic. Each is biased in its own way.

Countries can evaluate their own policies — but this can be misleading

One approach to measuring health emergency preparedness across countries is simply to ask the countries themselves how well prepared they are to deal with a crisis. This is the approach the international community agreed on in the binding International Health Regulations, last revised after the SARS outbreak of the early 2000s. Under this system, countries score their own preparedness infrastructures and simply send the data to the WHO, which publishes it online.

Even though countries’ self-scores are published by the WHO, the WHO has no role in the scoring process and no mandate to second-guess the data that countries provide. It simply accepts them at face value. Inevitably, this means that these scores contain inaccuracies and guesstimates. Countries can exaggerate their capacities to deal with crises, if they want to look good. Alternatively, they can underreport how well-prepared they are, if they want to make a case to health donors for additional funding.

The WHO does its own assessments — but they are subject to groupthink

After the Ebola crisis, the WHO initiated the Joint External Evaluations after pressure from the United States. These evaluations were delegated to outside experts, who would fly in and conduct the assessments, on the assumption that countries could not always be counted on to accurately report how well-prepared they were for a health crisis. The outcomes — hefty reports with detailed qualitative evaluations and quantitative scores — would be unmarred by political gaming and could inform evidence-based policy design.

This only worked up to a point. The actual roster of experts used by the WHO was dominated by individuals from high-income countries, and their analyses had to follow a rigid template initially designed and piloted by the United States and its collaborators. These factors insulated the analysis from alternative perspectives: there was little input from experts outside rich countries, and the template imposed strict limits on what counted as relevant knowledge, limits that didn’t necessarily fit conditions on the ground.

Third-party evaluations have blind spots

The Global Health Security Index is the latest indicator to be developed. Launched in 2019 to extensive media coverage, it was the outcome of a collaboration between the think tank Nuclear Threat Initiative, the Johns Hopkins University Center for Health Security and the Economist Intelligence Unit, a private forecaster. The ambition was to use publicly available data from all countries to develop objective scores of preparedness for health crises.

This approach has obvious merits, as it diminishes the scope for subjective judgment and political pressure. However, the scoring is only as accurate as the questions asked of the data. And the scores had only limited success in predicting which countries would have relatively good coronavirus outcomes and which would have relatively bad ones. This has led the index’s creators to acknowledge that the effectiveness of countries’ policy responses was significantly affected by their political arrangements, the degree of political will to do something about the crisis, and the communication strategy. But all of these are highly difficult, if not impossible, to objectively quantify.

Scores across countries have clear limits

As the international policy community converges on the notion that countries with weak health systems pose an economic and social threat, the demand for more cross-national indicators will grow. Efforts are underway to develop and quantify social and governance indicators beyond those mentioned above. At the same time, there is a push to incorporate pandemic preparedness indicators into the decision-making processes of the likes of the Gates Foundation, the World Bank and others. The biggest risk here is accentuating political and economic imbalances on the basis of wobbly data, making the countries perceived as weak even more vulnerable.

Alexander Kentikelenis (@Kentikelenis) is associate professor of political economy and sociology at Bocconi University in Milan. Leonard Seabrooke (@LenSeabrooke) is professor of international political economy and economic sociology at Copenhagen Business School and research professor at the Norwegian Institute of International Affairs
