OpenAI, Google and a “digital anthropologist”: the UN forms a high-level board to explore AI governance
The halls of power are waking up to the potential and pitfalls of artificial intelligence. The big question will be how much of an impact they will have on the march of progress if (and when) there are missteps. Yesterday, the United Nations announced a new AI advisory board — 38 people from across government, academia and industry — with an aim “to undertake analysis and advance recommendations for the international governance of AI.”
The advisory board will operate as a bridging group, covering any other initiatives that the international organization puts together around AI, the UN said. Indeed, in forming a strategy and approach on AI, the UN has been talking for the better part of a month with industry leaders and other stakeholders, from what we understand. The plan is to bring together recommendations on AI by the summer of 2024, when the UN plans to hold a “Summit of the Future” event. The new advisory board is meeting for the first time today.
The UN said that the body will be tasked with “building a global scientific consensus on risks and challenges, helping harness AI for the Sustainable Development Goals, and strengthening international cooperation on AI governance.”
What is most notable about the board in these early days is its generally positive positioning. Right now, there are a number of people speaking out about the risks of AI, whether that comes in the form of national security threats, data protection or misinformation; and next week a number of global leaders and experts in the space will be converging in the UK to try to address some of this at the AI Safety Summit. It’s not clear how these and other initiatives formed on national and international levels will work together, or indeed enforce anything beyond their jurisdictions.
But in keeping with the ethos of the UN, the group of 38 — a wide-ranging list that includes executives from Alphabet/Google and Microsoft, a “digital anthropologist”, a number of professors and government officials — is high-level and takes a more positive-to-constructive position, with a focus on international development.
“AI could power extraordinary progress for humanity. From predicting and addressing crises, to rolling out public health programmes and education services, AI could scale up and amplify the work of governments, civil society and the United Nations across the board,” UN Secretary General António Guterres said of the aim of the group. “For developing economies, AI offers the possibility of leapfrogging outdated technologies and bringing services directly to people who need them most. The transformative potential of AI for good is difficult even to grasp.”
The UN refers to the group’s “bridging” role and it may be that it gets involved in more critical explorations beyond “AI for good.” Gary Marcus, who took part in a fireside chat at Disrupt in September to talk about the risks of AI, arrived for our conference in San Francisco on a red-eye from New York, where he was meeting with UN officials. While new innovations in areas like generative AI have definitely put the technology front and center in the mass market, Marcus’ framing of the challenges underscores some of the more concerning aspects that have been voiced:
“My biggest short-term fear about AI is that misinformation, deliberate misinformation, created at wholesale quantities is going to undermine democracy and all kinds of things are going to happen after that,” he said last month. “My biggest long-term fear is we have no idea how to control the AI that we’re building now and no idea how to control the AI that we’re building in the future. And that lays us open to machines doing all kinds of things that we didn’t intend for them to do.”