Ethiopian-born cognitive scientist Abeba Birhane among the TIME100 Most Influential People in AI
Addis Ababa, September 8, 2023 (FBC) – Ethiopian-born cognitive scientist Abeba Birhane and Ethiopian-born computer scientist of Eritrean descent Timnit Gebru have been listed among the TIME100 Most Influential People in Artificial Intelligence, TIME magazine confirmed.
Both Ethiopian-born women have joined an elite list of the most influential people in the artificial intelligence sphere, one that includes Elon Musk, founder and CEO of SpaceX and the world's richest person. Of the categories identified by TIME, Abeba and Timnit have been listed under "Thinkers."
A cognitive scientist by training, Abeba Birhane started on the path toward AI research when she noticed a vital task that almost nobody was doing. AI models were being trained on larger and larger datasets—collections of text and images—gathered from the internet, including from its darkest corners. But Abeba realized that as those datasets were ballooning from millions to billions of individual pieces of data, few people were systematically checking them for harmful material, which can result in AIs becoming structurally racist, sexist, and otherwise biased.
With a small group of fellow researchers, Abeba has forged the beginnings of a new discipline: auditing publicly accessible AI training datasets. The work can be taxing. “Most of the time, my screen is not safe for work,” says Abeba, who is now a senior adviser in AI Accountability at the Mozilla Foundation and an adjunct assistant professor at Trinity College Dublin. “I used to love working in a café; now I can’t.”
In a recent paper, currently under peer review, Abeba and her co-authors come to a startling conclusion: AI models trained on larger datasets are more likely to display harmful biases and stereotypes. "We wanted to test the hypothesis that as you scale up, your problems disappear," Abeba says. Their research indicated the reverse is true. "We found that as datasets scale, hateful content also scales."
Meanwhile, Timnit Gebru co-wrote one of the most influential AI ethics papers in recent memory, a paper arguing that the biases so prevalent in large language models were no accident, but rather the result of an intentional choice to prioritize speed over safety. The dispute over that paper precipitated her contentious departure from Google, where she had co-led the Ethical AI team.
Since leaving Google, Timnit has become a torchbearer in the world of responsible AI. As founder and executive director of the Distributed AI Research Institute (DAIR), she has built a space for a kind of interdisciplinary AI research that is rare at Big Tech companies. DAIR's hybrid model of research and community building has so far focused on two parallel tracks. On one track, Timnit and her colleagues interrogate the tech industry's dependence on poorly paid, precarious workers, many of them from the Global South. "A lot of people want to imagine the machine is sentient and that there are no humans involved," Timnit says. "That's part of a concerted effort to hide what's going on."
On the other track, Timnit has dedicated herself to researching the ideological roots of some of the technologists attempting to build artificial general intelligence. Those ideologies, for which Timnit and her colleague Émile P. Torres coined the acronym TESCREAL, shorthand for a long list of obscure "isms," not only have unsavory links to debunked pseudosciences like eugenics, the pair argue, but also predispose their followers to tolerate almost any means (rampant inequality, poverty, and worse) so long as the ends (humanity safely creating AI, for example) are made fractionally more likely. "Young computer-science students might feel like they have to follow this trajectory," Timnit says, referring to tech companies further concentrating wealth and power, extracting natural resources and human labor, and widening surveillance-based business models. "It's important to look at: What is the ideological underpinning? If you know that, you can ask, 'What if I had a different ideological underpinning? How would technology be developed in that case?'"