As AI’s impact grows, new research reveals why the input of people all over the world is essential
For many outside the tech world, “data” means soulless numbers. Perhaps it causes their eyes to glaze over with boredom. For computer scientists, by contrast, data means rows upon rows of rich raw material, there to be manipulated.
Yet the siren call of “big data” has been more muted recently. There is a dawning recognition that, in tech such as artificial intelligence, “data” equals human beings.
AI-driven algorithms are increasingly impinging upon our everyday lives. They assist in making decisions across a spectrum that ranges from advertising products to diagnosing medical conditions. It’s already clear that the impact of such systems cannot be understood simply by examining the underlying code or even the data used to build them. We must look to people for answers as well.
Two recent studies do exactly that. The first is an Ipsos Mori survey of more than 19,000 people across 28 countries on public attitudes to AI, the second a University of Tokyo study investigating Japanese people’s views on the morals and ethics of AI usage. By inviting those with lived experience to participate, both capture the mood among those living with the impact of artificial intelligence.
The Ipsos Mori survey found that 60 per cent of adults expect that products and services using AI will profoundly change their daily lives in the next three to five years. Latin Americans in particular think AI will trigger changes in social needs such as education and employment, while Chinese respondents were most likely to believe it would change transportation and their homes.
The geographic and demographic differences in both surveys are revealing. Globally, about half said AI technology has more benefits than drawbacks, while two-thirds felt gloomy about its impact on their individual freedom and legal rights. But figures for different countries show a significant split within this. Citizens from the “global south”, a catch-all term for non-western countries, were much more likely to “have a positive outlook on the impact of AI-powered products and services in their lives”. Large majorities in China (76 per cent) and India (68 per cent) said they trusted AI companies. In contrast, only 35 per cent in the UK, France and US expressed similar trust.
In the University of Tokyo study, researchers discovered that women, older people and those with more subject knowledge were most wary of the risks of AI, perhaps an indicator of their own experiences with these systems. The Japanese mathematician Noriko Arai has, for instance, written about sexist and gender stereotypes encoded into “female” carer and receptionist robots in Japan.
The surveys underline the importance of AI designers recognising that we don’t all belong to one homogeneous population, with the same understanding of the world. But they’re less insightful about why differences exist. “This is really necessary to understand because of the gap that often exists between the demographics developing AI and those impacted by it,” says Reema Patel, Ipsos Mori’s incoming head of deliberative engagement.
She is alluding to the fact that recent innovation in tech has been very much top-down, with AI systems designed largely by male computer scientists in Silicon Valley and China. To identify harms and improve benefits, Patel argues, developers and policymakers need to think more about how to involve people in the design and life cycle of algorithms.
Tabitha Goldstaub, the chair of the UK government’s AI council, says the studies are a “call to arms” for companies and governments building AI systems. “AI designers need to understand what people want, on a fundamental human level, not just what they think they need,” she tells me.
Wondering what a citizen-centred approach might look like, I found one answer in history. In 1982, the British philosopher Mary Warnock was appointed to lead an ethical committee debating the implications of the era’s most futuristic technology: in vitro fertilisation.
In her recommendations, which included the personal, religious and moral perspectives of more than 600 members of the public and hundreds of citizen groups, she wrote: “Feelings among the public at large run very high in these matters . . . Reason and sentiment are not opposed to each other in this field . . . We were therefore bound to take very seriously the feelings expressed in the evidence.”
Warnock’s guidance led to the first independent legislative body of its kind, the Human Fertilisation and Embryology Authority. It still exists today, a symbol of the power of the people.

Madhumita Murgia is the FT’s European tech correspondent