South Korea is set to host a mini-summit this week on risks and regulation of artificial intelligence, following up on an inaugural AI safety meeting in Britain last year that drew a diverse crowd of tech luminaries, researchers and officials.
Quick Read
Here’s a bullet-point summary of key details regarding the upcoming AI safety summit in Seoul:
- Continued Global Dialogue: Following an inaugural AI safety summit in the UK, South Korea will host a mini-summit in Seoul to further address risks and regulation of artificial intelligence. This event builds on global efforts to establish safety protocols for AI technologies.
- International Participation: The summit will see involvement from leaders like South Korean President Yoon Suk Yeol and British Prime Minister Rishi Sunak, along with digital ministers from several countries including the U.S., China, Germany, France, and Spain. Industry giants such as OpenAI, Google, Microsoft, and Anthropic will also participate.
- Agenda and Discussions: Discussions at the Seoul summit will focus on updates from global industry leaders on their progress since the UK summit, sharing best practices, and drafting action plans. Key topics include mitigating AI’s negative impacts on energy use, the workforce, and the spread of misinformation.
- Challenges in Reaching Consensus: Despite the global nature of the summit, reaching a consensus on AI governance remains challenging due to varying interests and technological capabilities among participating countries.
- Safety and Ethical Concerns: The summit will address existential risks posed by powerful AI models, including their potential to exacerbate fraud, misinformation, and bias across various sectors. The need for global governance and norms around AI is a primary focus.
- Leadership and Infrastructure: While South Korea aims to lead in formulating global AI governance, some critics suggest the country may lack sufficiently advanced AI infrastructure to assume a prominent role in these discussions.
The Associated Press has the story:
Things to know about an AI safety summit to be held in Seoul this week
SEOUL, South Korea (AP) —
The gathering in Seoul aims to build on work started at the U.K. meeting on reining in threats posed by cutting-edge artificial intelligence systems.
Here is what you need to know about the AI Seoul Summit and AI safety issues.
WHAT INTERNATIONAL EFFORTS HAVE BEEN MADE ON AI SAFETY?
The Seoul summit is one of many global efforts to create guardrails for a rapidly advancing technology that promises to transform many aspects of society but has also raised concerns about new risks, ranging from everyday harms such as algorithmic bias that skews search results to potential existential threats to humanity.
At November’s U.K. summit, held at a former secret wartime codebreaking base in Bletchley north of London, researchers, government leaders, tech executives and members of civil society groups, many with opposing views on AI, huddled in closed-door talks. Tesla CEO Elon Musk and OpenAI CEO Sam Altman mingled with politicians like British Prime Minister Rishi Sunak.
Delegates from more than two dozen countries including the U.S. and China signed the Bletchley Declaration, agreeing to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.
In March, the U.N. General Assembly approved its first resolution on artificial intelligence, lending support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is “safe, secure and trustworthy.”
Earlier this month, the U.S. and China held their first high-level talks on artificial intelligence in Geneva to discuss how to address the risks of the fast-evolving technology and set shared standards to manage it. There, U.S. officials raised concerns about China’s “misuse of AI” while Chinese representatives rebuked the U.S. over “restrictions and pressure” on artificial intelligence, according to their governments.
WHAT WILL BE DISCUSSED AT THE SEOUL SUMMIT?
The May 21-22 meeting is co-hosted by the South Korean and U.K. governments.
On day one, Tuesday, South Korean President Yoon Suk Yeol and Sunak will meet other leaders virtually. A few global industry leaders have been invited to provide updates on how they’ve been fulfilling the commitments made at the Bletchley summit to ensure the safety of their AI models.
On day two, digital ministers will gather for an in-person meeting hosted by South Korean Science Minister Lee Jong-ho and Britain’s Technology Secretary Michelle Donelan. Participants will share best practices and concrete action plans. They also will share ideas on how to protect society from potentially negative impacts of AI on areas such as energy use, workers and the proliferation of mis- and disinformation, according to the organizers.
The meeting has been dubbed a mini virtual summit, serving as an interim meeting until a full-fledged in-person edition that France has pledged to hold.
The digital ministers’ meeting is to include representatives from countries like the United States, China, Germany, France and Spain and companies including ChatGPT-maker OpenAI, Google, Microsoft and Anthropic.
WHAT PROGRESS HAVE AI SAFETY EFFORTS MADE?
The accord reached at the U.K. meeting was light on details and didn’t propose a way to regulate the development of AI.
“The United States and China came to the last summit. But when we look at some principles announced after the meeting, they were similar to what had already been announced after some U.N. and OECD meetings,” said Lee Seong-yeob, a professor at the Graduate School of Management of Technology at Seoul’s Korea University. “There was nothing new.”
It’s important to hold a global summit on AI safety issues, he said, but it will be “considerably difficult” for all participants to reach agreements since each country has different interests and different levels of domestic AI technologies and industries.
The gathering is being held as Meta, OpenAI and Google roll out the latest versions of their AI models.
The original AI Safety Summit was conceived as a venue for hashing out solutions for so-called existential risks posed by the most powerful “foundation models” that underpin general purpose AI systems like ChatGPT.
Pioneering computer scientist Yoshua Bengio, dubbed one of the “godfathers of AI,” was tapped at the U.K. meeting to lead an expert panel tasked with drafting a report on the state of AI safety. An interim version of the report released on Friday to inform discussions in Seoul identified a range of risks posed by general purpose AI, including its malicious use to increase the “scale and sophistication” of frauds and scams, supercharge the spread of disinformation, or create new bioweapons.
Malfunctioning AI systems could spread bias in areas like healthcare, job recruitment and financial lending, while the technology’s potential to automate a broad range of tasks also poses systemic risks to the labor market, the report said.
South Korea hopes to use the Seoul summit to take the initiative in formulating global governance and norms for AI. But some critics say the country lacks AI infrastructure advanced enough to play a leadership role in such governance issues.