Looking Through The Eye of AI

Afifa Areya

“More times than I can remember, people have told me and my friends that we’re not ‘good enough’ to be an engineer and pursue STEM, or that my robotics team, the FeMaidens, was not as good as other teams because we are all-girls,” said Megan Groppe ’20. For many young women, it is hard to earn a place in machine learning, yet the challenge does not stop there. Groppe combats this every day by working hard and leading the FeMaidens to success.

Humans are constantly making judgments. The ability to assess a situation allows them to make decisions throughout their daily routines. However, their perceptions of other people can lead to prejudice, which can create toxic relationships between different groups. It is only natural for humans to impose their imperfections onto their creations. As a result, bias is often mirrored in Artificial Intelligence (AI).

AI is becoming a target product for industries because of its utility. AI can take over menial jobs and lead to an increase in productivity. As a result, it is in high demand in service settings such as hospitals and medical offices, as well as in business applications, automation, and programming. In these settings, AI can improve efficiency and shorten the time it takes to complete certain tasks.

Artificial Intelligence is created using algorithms that learn from data. The data used to train these algorithms often lack versatility, which means the resulting models can be skewed toward a certain bias. Since data may be collected from only a certain region, group of people, or gender, algorithms that reflect those data tend to favor only those specific groups. Therefore, an AI built from such algorithms cannot be deployed for wide use.
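To make this concrete, here is a minimal sketch, using entirely made-up synthetic data, of how a model trained mostly on one group can end up far less accurate for an under-represented group. The group labels, features, and numbers below are illustrative assumptions, not anything drawn from the article.

```python
# A minimal sketch (synthetic, made-up data) of how skewed training data
# can produce a model that works well for one group and poorly for another.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate two-feature samples; `shift` moves this group's class boundary."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; Group B is barely represented.
Xa, ya = make_group(950, shift=0.0)   # well-represented group
Xb, yb = make_group(50, shift=2.0)    # under-represented group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: the accuracy gap is the bias.
for name, shift in [("Group A", 0.0), ("Group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

Running a sketch like this typically shows near-perfect accuracy for the well-represented group and close to coin-flip accuracy for the other, which is exactly the kind of imbalance that biased training data produces.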

Similar to humans, an AI can be trained to tackle a certain obstacle. In the process, it develops traits meant to help solve the problem the machine was designed to resolve. However, Dr. Koller, a professor at Stanford University specializing in AI, explains that in some cases an algorithm can latch onto results that do little to solve the actual problem. She explains that an AI created to predict bone fractures from X-ray images gathered across multiple hospitals can end up relying on which hospital an image came from to estimate the likelihood of a fracture. This is detrimental, because the algorithm uses the hospital to identify bone fractures, and the number of bone fractures clearly does not depend on the hospital it is from. Rather, it depends on the bone. Therefore, the AI cannot be utilized, because the algorithm has identified the wrong cause for the effects it is meant to detect, illustrating how bias creeps in through the data.
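Dr. Koller’s X-ray example describes what machine learning researchers call a spurious correlation: the model keys on which hospital an image came from rather than on the bone itself. The following sketch uses invented numbers and a stand-in “bone_signal” feature rather than real X-ray data, and simply shows how such a shortcut can come to dominate a model’s learned weights.

```python
# A minimal sketch (synthetic, made-up data) of a spurious correlation:
# the model leans on the hospital ID instead of the bone itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# In this invented training set, hospital 1 happens to see far more
# fractures than hospital 0, so the hospital ID alone "predicts" fractures.
hospital = rng.integers(0, 2, size=n)
fracture = (rng.random(n) < np.where(hospital == 1, 0.8, 0.2)).astype(int)

# A weak, noisy stand-in for the real signal in the image (the bone itself).
bone_signal = fracture + rng.normal(scale=2.0, size=n)

X = np.column_stack([hospital, bone_signal])
model = LogisticRegression().fit(X, fracture)

# The weight on the hospital feature tends to dwarf the weight on the bone
# signal, so the model breaks down at any hospital whose fracture rate
# differs from the ones it was trained on.
print("learned weights [hospital, bone_signal]:", model.coef_.round(2))
```

In other words, the shortcut works on the training data but says nothing about the bone, which is why such a model cannot be trusted outside the hospitals it was trained on.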

“I think that the first step to creating unbiased AI systems would be to encourage a diverse spectrum of people to be interested in AI and ensure equal opportunities for all students of all kinds of backgrounds and identities to pursue STEM and AI,” said Lauren Siu ’20.

However, the bias is better known within the field where AI is studied. Human bias is often the root cause of the bias that exists within the systems AI is built upon. Dr. Russakovsky, an assistant professor at Princeton University who co-founded a program to diversify the field of Artificial Intelligence, said, “A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities. We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

Bias in AI can only reflect what it was created from, and such social foundations can often harm the results. “The root of these problems is not only technological. It’s social. Using technology with this underlying social foundation often advances the worst possible things that are happening. In order for technology not to do that, you have to work on the underlying foundation as well. You can’t just close your eyes and say, ‘Oh, whatever, the foundation, I’m a scientist. All I’m going to do is math,’” said Timnit Gebru, a scientist working at Google. “Science has to be situated in trying to understand the social dynamics of the world because most of the radical change happens at the social level,” said Gebru.

It seems that Artificial Intelligence was established with the objective of making our work more efficient and accurate. However, reality speaks otherwise. Women studying in this field have had first-hand encounters with the issues underlying AI’s development. Technology can only be perfected so far while humans are incapable of living without judgment. As Dr. Russakovsky puts it, “I don’t think it’s possible to have an unbiased human, so I don’t see how we can build an unbiased A.I. system. But we can certainly do a lot better than we’re doing.”
