It is no secret that artificial intelligence has become increasingly integrated into our everyday lives in the past few years. While tools such as ChatGPT may have started as a last-minute fix for forgotten assignments, their capabilities now stretch across multiple industries. From logging data to spell-checking reports, the possibilities seem endless for this quickly developing software.
Unfortunately — like most technology — when AI falls into the wrong hands, the consequences can be devastating.
As AI has continued to advance, so have rates of online sexual harassment and deepfakes. A 2019 study revealed that 96% of deepfakes were classified as nonconsensual pornography. Numbers as high as these reflect a terrifyingly obsessive culture rooted in exploiting women.
When interviewed, Nina Jankowicz — former executive director of the Disinformation Governance Board of the United States — addressed the inherently misogynistic structure that so many of these deepfakes are built on.
“These models are trained on women’s bodies,” Jankowicz said. “Even if you feed a male image into the kind of face-swap tools that exist, it’s not going to work as well, because they have been created by men for the purpose of either demeaning women or pleasuring themselves.”
The fact that this technology has been designed for men to control women should spark concern in anyone inclined to downplay crimes such as these.
A case that most recently captured the attention of the general public centers on arguably one of the most powerful celebrities of our generation: Taylor Swift.
This past week, The New York Times reported on AI-generated explicit images of the singer that circulated on X, formerly known as Twitter — one of which was viewed over 47 million times. The images are suspected to be the product of NFL fans' mounting frustration with Swift over her presence at her significant other's football games — pointedly, another case in which male fragility erupted into vitriol toward a woman, though that is the beginning of a whole other conversation.
If an individual with as much social and monetary capital as Taylor Swift could not prevent this from happening, then there is no doubt that AI could be weaponized against any woman — or girl, as in the tragic case of Mia Janin.
Janin, a 14-year-old student from London, died by suicide in March 2021 after allegedly being cyberbullied by a group of boys from her school who had photoshopped her face onto different pornography performers.
This is the severity that should be taken into account when prosecuting the culprits of this type of sexual harassment — especially when younger and more vulnerable victims are involved. Currently, fewer than 10 states criminalize nonconsensual deepfake pornography or grant the victims of these crimes the right to sue their perpetrators.
As a whole, technological advancement is undoubtedly good for society. The tools we use to drive that advancement, however, must be recognized as morally neutral: a pendulum that can swing in either direction depending on who is pushing it. If humanity is to keep up with this ever-changing technology of our own creation, we need to do a better job of regulating it and protecting people from the horrors that sometimes accompany progress.
Hailey is a sophomore in Business.