Opinion

You’re being watched: How AI learns who you are, uses it against you

Lily Huynh/The Cougar

On Thursday, Oct. 16, Dr. Safiya U. Noble delivered a presentation titled “AI and Our Future: How AI and Search Engine Algorithms Reinforce Oppression.” 

Before the presentation, I knew two things about artificial intelligence: first, that AI data centers extract water from low-income communities to sustain their operations, leaving residents to bear the financial burden; and second, that AI's power creates exploitative digital media and perpetuates faulty information.

After leaving, however, I had learned one more. It became clear that artificial intelligence carries an undercurrent of control that shapes every institution in the country, and that addressing it is urgent.

Surveillance Capital

Artificial intelligence operates as a system of surveillance. It gathers personal information, behaviors and identities as data points that fuel larger structures of control. Companies such as Waymo, Friend and Palantir Technologies are marketed as meaningful contributions to society when they are anything but.

Take Waymo, for instance. It offers a driverless robotaxi under the brand slogan "The World's Most Trusted Driver." The irony of this phrase is that Waymo is an algorithm, and trust is a human emotion; it should not be extended to an algorithm so casually.

What is most cynical about this play on words is that Waymo is a surveillance algorithm. The danger of companies like Waymo is that their surveillance often goes unnoticed because the data is sold between corporations. Noble pointed out how surveillance companies set up in urban areas; Black communities are often invisibly surveilled.

This monitoring is highly consequential; it facilitates the gentrification of urban spaces. Gentrification is a leading driver of the loss of neighborhood identity and heritage, as local history is overwritten or commodified, especially in Black and minority communities.

Leila Ullmann argues in her article "What Does Waymo See?" that the growing acquisition of data through digitization reflects a trend toward privatization, surveillance and capital extraction.

This means people become more data and less human. When a society is reduced to data points, human lives become numbers on a ledger, easily counted and easily discarded. This reinforces inequities embedded within both data systems and societal institutions.

Weapons of Recognition

Recognition tools are deployed in schools, supermarkets and airports, and they are used to racially profile communities. Notice anything off about that? Facial recognition algorithms are used inhumanely, and their outputs help propel racism.

In the 2019 article "What is Facial Recognition and How Sinister is It?" author Ian Sample explains how facial recognition is trained primarily on images of white men. As a result, the systems are prone to misidentifying the faces of women and people of color, making them less accurate. Lower accuracy leads to more people being profiled and questioned.

Although this article was published over six years ago, its concepts remain highly relevant. As facial recognition algorithms have evolved, are we foolish enough to believe their biases haven’t? 

Systemic injustice is fueled by systems that reinforce the stereotypes and racism many institutions were founded on. Through facial recognition failures, those who are already systemically oppressed are saddled with another layer of oppression as their data is sold digitally and marketed globally.

The 2020 documentary Coded Bias found that the darker your skin, the less likely an algorithm is to correctly confirm your identity. This is not simply a technical issue; it reflects deeper social and institutional biases that are now embedded in technology design.

“Search engines are not neutral because nothing is neutral. Algorithms are also political,” said Noble. 

Algorithms trained primarily on lighter-skinned faces or male-centric datasets systematically exclude large segments of the population, rendering those excluded virtually invisible and unaccounted for.

If these algorithms are not trained to identify women and people of color, then the racism and sexism embedded in their data produce harmful outcomes. Bias is built into their design, and the companies that create and deploy them are complicit in the results.

Final Remarks 

Unsurprisingly, AI companies have found ways to embed systemic oppression into their algorithms. The weaponization of facial recognition and the constant surveillance of urban communities for capitalist gain raise urgent questions about how society values human life and the ethics of reducing people to data points.

Such practices deepen the layers of oppression that have long shaped social and economic hierarchies in the U.S. Despite the severity of the issue, the means of combating this kind of power remain largely unresolved.

Nonetheless, I left Noble's presentation with a grounded understanding of AI and one additional conviction: algorithms and their data projects demand immediate accountability, as they wreak irreversible damage and deepen systemic injustice.

opinion@thedailycougar.com
