Chatbot Gone Awry Starts Conversations About AI Ethics in South Korea

The “Luda” AI chatbot sparked a necessary debate about AI ethics as South Korea places new emphasis on the technology.

In Spike Jonze’s 2013 film “Her,” the protagonist falls in love with an operating system, raising questions about the role of artificial intelligence (AI), its relationship with users, and the broader social issues that emerge from both. South Korea briefly grappled with its own “Her” moment with the launch of the AI chatbot “Lee Luda” at the end of December 2020. But the Luda experience was not heart-wrenching or pastel-colored like “Her.” Instead, it highlighted the prejudices that exist within South Korean society and the risks posed by new technologies.

Lee Luda (a homonym for “realized” in Korean) is an open-domain conversational AI chatbot developed by ScatterLab, a South Korean start-up established in 2011. ScatterLab runs “Science of Love,” an app that provides dating advice based on analysis of text exchanges. The app has been downloaded over 2.7 million times in South Korea and Japan. Backed by giants such as NCSoft and SoftBank, ScatterLab has raised over $5.9 million.

Luda was created by ScatterLab’s PingPong team, its chatbot wing, which aims to “develop the first AI in the history of humanity to connect with a human.” Using deep learning and a corpus of more than 10 billion Korean-language conversation logs, Luda simulated a 163-centimeter-tall, 20-year-old female college student. Luda was integrated into Facebook Messenger, and users were encouraged to develop a relationship with her through regular, day-to-day conversations. While the chatbot’s goals seemed innocuous, the ethical problems beneath the surface emerged shortly after its launch.

Sexual Harassment, Hate Speech, and Privacy Breaches

Deep learning is a computing technique that simulates certain aspects of human intelligence (e.g., speech) by processing large amounts of data, and its performance generally improves as more data accumulates. The technique has been instrumental in advancing the field of AI in recent years. However, the downside of deep learning is that programs end up replicating existing biases in their datasets if developers do not control for them. Such programs are also vulnerable to manipulation by malicious users who “train” them by feeding in bad data, exploiting the “learning” element.
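
To make this mechanism concrete, consider the hypothetical sketch below (in Python; ScatterLab has not published Luda’s actual architecture, and the NaiveLearningChatbot class and its data are invented purely for illustration). It shows a toy chatbot that memorizes whatever users teach it. Because nothing filters the incoming data, a coordinated group can simply “outvote” benign exchanges:

```python
from collections import defaultdict

class NaiveLearningChatbot:
    """A toy chatbot that memorizes every (prompt, reply) pair users teach it."""

    def __init__(self):
        # prompt -> {reply: number of times users "taught" that reply}
        self.memory = defaultdict(lambda: defaultdict(int))

    def learn(self, prompt: str, reply: str) -> None:
        # No moderation or filtering: all user input becomes training data.
        self.memory[prompt.lower()][reply] += 1

    def respond(self, prompt: str) -> str:
        replies = self.memory.get(prompt.lower())
        if not replies:
            return "I don't know what to say yet."
        # The reply taught most often wins, so repetition becomes influence.
        return max(replies, key=replies.get)

bot = NaiveLearningChatbot()

# An ordinary user teaches a harmless association.
bot.learn("how are you?", "I'm doing great, thanks!")

# A coordinated group repeatedly teaches a toxic association,
# outvoting the benign data: the poisoning pattern described above.
for _ in range(50):
    bot.learn("what do you think of group X?", "[an offensive remark]")

print(bot.respond("how are you?"))                   # benign reply
print(bot.respond("what do you think of group X?"))  # poisoned reply
```

Real conversational systems learn statistical patterns with neural networks rather than a verbatim lookup table, but the failure mode is analogous: unmoderated user input becomes training data, and repetition becomes influence.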

In the case of Luda, ScatterLab used data from text conversations collected through Science of Love to simulate a realistic 20-year-old woman, and its personalization element allowed users to train the chatbot. As a result, shortly after its official launch on December 22, Luda came under the national spotlight when it was reported that users were training the chatbot to spew hate speech against women, sexual minorities, foreigners, and people with disabilities.

Screengrabs show Luda saying, “they give me the creeps, and it’s repulsive” and “they look disgusting,” when asked about “lesbians” and “black people,” respectively. Further, it was discovered that groups of users in certain online communities were training Luda to respond to sexual commands, which provoked intense discussions about sexual harassment (“Can AI be sexually harassed?”) in a society that already grapples with gender issues.

Accusations of personal data mishandling by ScatterLab emerged as Luda continued to draw nationwide attention. Users of Science of Love complained that they had not been aware their private conversations would be used in this manner, and Luda was also shown to be responding with random names, addresses, and bank account numbers drawn from the dataset. ScatterLab had even uploaded a training model of Luda to GitHub, which included data exposing personal information (about 200 one-on-one private text exchanges). Users of Science of Love are preparing a class-action lawsuit against ScatterLab, and the Personal Information Protection Commission, a government watchdog, has opened an investigation into ScatterLab to determine whether it violated the Personal Information Protection Act.

The Korea AI Ethics Association (KAIEA) released a statement on January 11, referring to its AI Ethics Charter and calling for the immediate suspension of the service. A coalition of civil society organizations, including the Lawyers for a Democratic Society, the Digital Rights Institute, the Korean Progressive Network Center, and People’s Solidarity for Participatory Democracy, released its own statement on January 13, denouncing the government’s promotion of the AI industry at the expense of digital rights and calling for a more stringent regulatory framework for data and AI.

In the end, ScatterLab suspended Luda on January 11, exactly 20 days after the launch.

Luda’s Legacies?

Seoul has identified AI as a core technology for its national agenda and has been explicit in its support for the industry as a means of attaining global competitiveness. For instance, Seoul launched its AI National Strategy in December 2019, expressing the goal of becoming a global leader in the sector. Support for the AI industry also features heavily in the Korean New Deal, the Moon administration’s 160 trillion won ($146 billion) COVID-19 recovery program. In addition, the government has signaled its intent to promote good governance of the technology, reforming privacy laws and issuing various directives across ministries. Internationally, South Korea has contributed to the OECD Principles on Artificial Intelligence and participates in the Global Partnership on AI as one of its 15 founding members, aligning itself with the global movement to promote “human-centered AI.”

However, the Luda incident has highlighted the gap between professed principles such as “human-centered,” “transparent,” or “fair” and the reality on the ground, as well as the difficulty of promoting innovation while ensuring good, effective governance of new technologies. Current regulations on data management and AI are unclear, inadequate, or non-existent. For instance, under the current privacy law, the maximum penalty for leaking personal information through poor data handling is a fine of 20 million won (approximately $18,250) or two years in prison, which may not be sufficient to deter poor practices by start-ups. On the other hand, industry stakeholders have expressed concerns that more burdensome regulation and decreased investment following the Luda incident could have a chilling effect on the innovation sector as a whole.

It is also critical not to gloss over the social factors underneath what seems to be merely a question of technology. The public first got hooked on the Luda story not just because of the AI or privacy elements, but because of the debates on identity politics it provoked. Consequently, public responses to the technological question may be shaped by pre-established perspectives on the social issues intertwined with it.

For instance, consider gender. In recent years, social movements and incidents such as the #MeToo movement and the busting of the “Nth Room” sexual exploitation ring have exposed South Korea’s ongoing struggles with sexual violence and gender inequality. For many, the sexualization of Luda and the attempts to turn the chatbot into a “sex slave” cannot be separated from these structural problems and women’s struggles in broader South Korean society. The controversy can also be linked to unequal gender representation in the innovation sector: according to the World Bank, women account for only about 25 percent of South Korea’s STEM graduates, which suggests that the engineers creating AI programs like Luda are less likely to take gender issues into consideration at the development stage.

Obviously, this issue is not particular to South Korea. In 2016, for instance, Microsoft launched its chatbot “Tay” and had to shut it down within hours after users trained it to make offensive remarks against certain groups. And the risks entailed in AI extend across its wide range of applications, well beyond chatbots. At the same time, however, the Luda incident clearly demonstrates the importance of country-specific social factors in driving these seemingly technological or regulatory issues, and, by extension, the relevance of differing attitudes toward privacy, surveillance, and governance, as well as policy environments that vary starkly across the globe.

The Luda incident helped provoke a truly national conversation about AI ethics in South Korea, demonstrating that AI ethics matters not in some vaguely futuristic, abstract way, but in an immediate and concrete manner. The controversy could become a watershed moment, adding momentum to the efforts of civil society organizations promoting the responsible use of AI in South Korea, where developmentalist and industrialist thinking about the technology still remains dominant.

Dongwoo Kim is a researcher at the Asia Pacific Foundation of Canada, a think tank based in Vancouver, B.C. He is the program manager of the Foundation’s “Digital Asia” research pillar, which focuses on innovation policies in the Asia Pacific region. Dongwoo is a graduate of Peking University, UBC, and University of Alberta.
