Saturday, May 4, 2019

The UN and Prevention in the Era of AI

As the convergence of AI with dual-use technologies intrudes increasingly into the political, social, economic, and security spheres, creating new potential for systemic vulnerabilities and distributive inequalities, the multilateral system needs to better understand and anticipate risks.
ELEONORE PAUWELS contributes to developing a common understanding of the emerging impacts of AI convergence on the United Nations’ prevention agenda. In ‘The New Geopolitics of Converging Risks: The UN and Prevention in the Era of AI’, published by the United Nations University Centre for Policy Research, she provides: an analysis of current trends in AI convergence; scenarios that examine emerging opportunities and risks; principles to guide how innovation should be deployed responsibly by actors in the multilateral system; and a recommendation for a foresight capacity housed within the UN and shared across key communities.
“The combined convergence and decentralization of AI and other emerging technologies is not only disrupting war and conflicts, but politics, social cohesion and human well-being,” PAUWELS states. “With enough technical and political preparedness, AI convergence could be harnessed to prevent human trafficking, reduce civilian casualties, anticipate and mediate conflicts.”
Combined with in-depth human and political expertise, algorithms will help combat hate speech and investigate forgeries, election fraud and violent crimes. Some of the most promising uses of AI and converging technologies will materialize in optimizing human health and preventing famine and epidemics, with tools in precision biotech and agriculture. “If humans can learn how to anticipate and mitigate unintended consequences, future lives could be empowered on our burdened planet.”
Governance actors, including States and the private sector, will need to adopt techniques of inclusive foresight to be resilient and adaptive enough in the face of hybrid security threats and emerging risks.
A second implication is that States in the Global South will be the first vulnerable targets in this new geopolitical landscape of virtual conflicts and cyber colonization, the author states. Urgent support to develop foresight and responsible innovation is needed for those States that are struggling to compete, build and secure capacity in the development and deployment of AI and converging technologies. They risk becoming vulnerable links or ‘ungoverned cyberspaces’ and may become more susceptible to dynamics of data-predation and value exfiltration. Such vulnerabilities may fuel and intensify a fierce competition for supremacy in technological convergence rather than foster digital cooperation.
Developing countries may lack the power, influence and foresight tools to shape responsible governance of converging technologies towards social benefits and away from political disruptions and weaponization. These fragile States could become a liability for a whole region.
Preventing such growing inequalities will rest on incentives coming from the multilateral system, PAUWELS writes. States interested in fostering responsible AI convergence could enter into mechanisms of digital cooperation with countries in the Global South to partner around mutually beneficial transfers of data, talent, technologies and security practices.
A third implication concerns the human side of global security risks, the author states. “Technological risks could have a powerful, long-term and corrosive impact on human security and wellbeing.” In contexts of rising uncertainty about job security and complex technological and social transformations, larger subsets of the world population may fear becoming useless and irrelevant classes.
Global human psychological and emotional wellbeing could recede. Underserved groups in societies may suffer from new forms of disempowerment. In turn, disempowerment and the lack of opportunity to participate in innovation will erode trust and social cohesion. Without proper citizen engagement and a means to contain the power of minority interests, technological development will proceed unhindered, for better or, quite possibly, worse. “The sheer speed of change will assuredly result not just in people who surrender their lives to intelligent and connected machines, but in societal disruptions that will be difficult to mitigate.”
PAUWELS states that it is time for governments to build a new social contract for the era of AI convergence. For instance, they could invest in measures that can reinforce networks of social cohesion and resilience. Such an effort would enhance the capacity of individuals, communities and systems to survive, learn, adapt and even transform in the face of political and socio-economic shocks and stresses.
“In the end, the most important message from this report is the need to harness the full force of the UN Charter and the multilateral system to shape technological progress according to a diversity of values and life experiences.”
The author proposes a path for the UN to build, guide and lead a Global Foresight Observatory for AI Convergence. The Observatory would be a constellation of key public and private sector stakeholders convened by a strategic foresight team within the UN to implement a shared foresight methodology. The Observatory would equip the UN to articulate tailored and robust scenarios from which innovative strategies can emerge; map and involve key stakeholders that reflect the unique ways in which technologies are converging; and develop coherent and responsible approaches to leverage innovation and technology for prevention.
