Rediet Abebe: A Rising Star in AI

This is Rediet Abebe, an AI researcher and new faculty member at the University of California, Berkeley (picture by Anoushnajarian). Berkeley seems very proud of its new assistant professor, and with good reason. Her insights about the way inputs to artificial intelligence are chosen have huge implications for social justice and discrimination, and are foundational to progress in the application of AI. Occupy Math has chronicled a number of people from under-represented groups who made huge contributions in math, but most of those posts were obituaries. Professor Abebe is a current phenom. She is a co-founder of Black in AI, an organization working to increase the number of Black people in artificial intelligence. This post will look at how this remarkable woman got where she is today and discuss some of the reasons why increasing the representation of underrepresented groups in AI is both important and a really good idea.

Rediet Abebe was born in Addis Ababa, the capital of Ethiopia, a country in the Horn of Africa. She was an excellent student with a great interest in mathematics. Ethiopia has a highly tracked education system that would have turned her into a medical doctor, because she was in the top tier of students. Occupy Math has written before about women being locked out of mathematics, but being forced to go to medical school is a fairly unique barrier to studying it. Sharp as a tack, the future Dr. Abebe won a scholarship to Harvard, avoiding her home country's prescriptive educational system.

After completing her undergraduate degree, Ms. Abebe was accepted to a doctoral program in math but, after a one-year intensive course in the subject, changed her mind and decided to go into artificial intelligence instead. The change was motivated by the fact that AI is more outward-looking and would give her tools to solve some serious problems she had identified. This does not mean she lost respect for math, as the following quote from a Quanta Magazine interview shows.

“Math and theoretical computer science force you to be precise. Ambiguity is a bug in mathematics. If I give you a proof and it’s vague, then it’s not complete. On the algorithmic side of things, it forces you to be very explicit about what your goals are and what the input is.”

While at Harvard, Ms. Abebe noticed that the Cambridge, MA public school system was plagued by resource shortages and a substantial achievement gap for its Black and Latine students. This is absurd, because Cambridge is a wealthy city in a wealthy metropolitan area. She dove in and started trying to solve the problem. Occupy Math has posted before that math can be used to document the impact of poverty, and Dr. Abebe is using AI to do exactly that. When a government is shortchanging its minority students or citizens, one of its big tools is to obfuscate the matter. Artificial intelligence can go through the details in a creative and inhumanly thorough manner to de-obfuscate matters. It can even be helpful in designing solutions. This is good not only for the victims of discrimination, but for all of society. As Occupy Math has noted, Poverty is Expensive for Everyone.

I’ve just read several articles about Rediet Abebe. Her leading characteristics include great intelligence, but the one that stands out is that she dives into problems and tries to solve them. She is a co-founder of Mechanism Design for Social Good, which, in its own words, is:

…a multi-institutional initiative using techniques from algorithms, optimization, and mechanism design, along with insights from other disciplines, to improve access to opportunity for historically underserved and disadvantaged communities.

Occupy Math has complained in the past about attempts to create a place for underrepresented groups in university faculty that were based on a market-commodity view of faculty. While Occupy Math was at Iowa State, the federal government threatened to cut grant funding to more than fifty universities if they did not hire Black faculty who, for the most part, did not exist. Any solution to the problems racism has caused in the academy must address the whole situation. The first step toward earning a PhD is usually a good K-12 school system, and that is missing in action for many Black and Latine Americans. Note that Dr. Abebe did not go through an American K-12 school system, and the first social injustice she identified and set to work on was exactly this pillar of racial and class discrimination. That said, Dr. Abebe is a co-founder of her Black advocacy organization; she has allies and colleagues, and they seem to have a viable community. It may be that, with help from Dr. Abebe and her friends, we are starting to turn the corner on this horrible situation.

Why do we need “Black in AI”?

Because AI is a tool that is shaped by its practitioners. Google has an app that tries to caption photos automatically. An early release of this app classified Black people as gorillas. At the time, a lot of media stories, including the linked piece from the BBC, called the app “racist”. This is arrant nonsense: the app is not racist. In spite of being artificial “intelligence”, the app lacks the cognitive scope to be racist; rather, the app was created by people operating in an environment of structural racism. This incident is a good example of anthropomorphizing a computer program, and calling the app racist somewhat conceals the real problem.

Occupy Math does not claim the app’s behavior was not racist (it was), but the racism should not be credited to mindless software. Google’s image-captioning algorithm is trained on a large set of examples and, from them, does a really good job of learning those examples. Here is the racism: there were few, if any, pictures of Black people in the training data. This is one good reason we need the organization Black in AI. The organization is actively performing research, which is a sign of a serious effort. When Occupy Math wants to bring out a new idea in his research community, he organizes workshops, special sessions at conferences, and special issues of research journals. This gets the idea out into the community, and the idea that Black people can be, and are, AI researchers is best presented by having their research appear in the academic world.
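To see why missing training data produces this kind of failure, here is a toy sketch. This is not Google's system or any real image model; it is a synthetic nearest-centroid classifier, with made-up points and labels, showing that a group absent from the training data is always forced into whatever trained label happens to be closest.

```python
# Toy illustration (NOT any real image system): a nearest-centroid
# classifier trained on data that omits one group entirely must assign
# that group's examples to some existing label. All data are synthetic.

def centroid(points):
    """Component-wise average of a list of points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def nearest_label(x, centroids):
    """Return the label whose centroid is closest to x (squared distance)."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])),
    )

# Training data covers only two labels; a third group is simply missing.
train = {
    "cat": [(1.0, 1.0), (1.2, 0.9)],
    "dog": [(4.0, 4.0), (3.8, 4.2)],
}
centroids = {lbl: centroid(pts) for lbl, pts in train.items()}

# An example from the unseen group cannot get a correct answer: the model
# has no label for it and silently picks the nearest trained one.
print(nearest_label((7.0, 7.0), centroids))  # prints "dog"
```

The classifier is not malicious; it is doing exactly what it was trained to do. The harm was built in when the training set was assembled, which is precisely where human choices (and human blind spots) enter.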

Turning back to Mechanism Design for Social Good, Dr. Abebe has noticed that things like the choice of training data, and other inputs to AI, are places where people have a great deal of power, power they may not even notice they have. Her explanation, also from the Quanta Magazine story, is “Mechanism design is like if you had an algorithm designed, but you were aware that the input data is something that could be strategically manipulated. So you’re trying to create something that’s robust to that.” This is a wonderful approach: incorporating robustness into the process of designing inputs. Occupy Math is developing a new course on data manipulation and visualization for a graduate program in data science, and Dr. Abebe’s idea, that the process of data science must be robust against choices of data that cause problems, will be a foundational principle in the course. Dream situation? Bringing in Professor Abebe as a guest lecturer.
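A minimal sketch of the robustness idea in that quote, using made-up numbers and not any specific mechanism from Mechanism Design for Social Good: if you aggregate reported values with a mean, a single strategic reporter can drag the outcome arbitrarily far, while a median bounds the influence of any one manipulated input.

```python
# Hypothetical example: five reporters submit a value; one of them
# exaggerates strategically. The mean is fragile, the median is robust.

def mean(reports):
    return sum(reports) / len(reports)

def median(reports):
    s = sorted(reports)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

honest = [10, 11, 9, 10, 10]
manipulated = honest[:-1] + [1000]  # one reporter inflates their report

print(mean(honest), mean(manipulated))      # 10.0 vs 208.0 — badly skewed
print(median(honest), median(manipulated))  # 10 vs 10 — unmoved
```

The design choice here is the point: nothing about the honest data changed, but picking an aggregator that anticipates manipulation changes what a bad actor can accomplish.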

The problem of bias in AI is pervasive. Using case law as training data for predictive policing and sentencing yields racist AIs, something we do not need on top of the existing, pervasive racism in law enforcement. Medical AIs have the same problem, favoring white patients for medical care. Data-driven AIs are very good at capturing the racism already in the system and enshrining it behind a digital wall. Dr. Abebe’s ideas are a potential antidote that could unleash the potential of AI to do wonderful things. If, for example, your doctor fed all your medicines through the right AI, then the roughly 40,000 drug cross-reactions discovered by AI data mining of medical cases would be at the doctor’s fingertips. Occupy Math strongly advocates for unleashing the potential of AI, and a big part of that is taking Dr. Abebe’s approach on board. It is wonderful to find a person like her improving the world.

I hope to see you here again,
So remember to wear your mask!
Daniel Ashlock,
University of Guelph
Department of Mathematics and Statistics
