Opinion | Artificial intelligence’s problem isn’t computers; it’s humanity


Photo Courtesy of Stephen McCarthy / Flickr

A female robot named Sophia is presented at a press conference during Web Summit 2019 in Lisbon, Portugal, on Nov. 6, 2019. Columnist Andrea Martinez argues that the abilities of artificial intelligence are exaggerated by fearmongers.

By Andrea Martinez, Columnist

The turn of the century has solidified computer science as the premier scientific field, with strides made every single day both in it and in related fields such as math, logic, physics, psychology and philosophy.

As time goes on and technology continues to advance, the fearful sentiment regarding artificial intelligence, or AI, among the general public remains largely the same. 

Popular figures like Elon Musk fearmonger about humanity’s technological creations turning against it, but experts in the field are quick to point out that AI is not nearly as intelligent as pop culture paints it to be.

AI is severely limited not only by the physical constraints computers face in storing large amounts of data but also by their technical inability to work exactly like a human brain.

In reality, society needn’t worry about AI like Sophia the Robot taking over any more than it should worry about humanity running itself into the ground, an argument that computer scientist and philosopher Brian Christian poses.


In his book, “The Alignment Problem,” Christian succinctly illustrates the issue with AI and machine learning. Hint: The problem is not AI; the problem is humans.

He describes the Alignment Problem as a major issue that computer systems face — their lack of a moral compass means computers will never be able to perform at the level of human beings.

Christian draws a parallel between Google Photos’ facial recognition software and Kodak’s “Shirley cards” of the mid-1950s. These cards, used to calibrate photo printing, were based on prints of models with porcelain skin and baby blue eyes, and no one else. As a result, Kodak film was unable to properly capture images of people of color.

That period of history is in the rearview; however, history repeats itself in new ways. A similar issue presented itself again when Google Photos’ experimental machine learning algorithm was found to be misidentifying Black people as gorillas.

While Google was quick to respond and correct the issue, the incident uncovered a major flaw in the field of machine learning: The sources used to compile training data lack diversity, and thus the data is inherently biased.

Christian’s comparison demonstrates that inherent bias is ingrained in technological systems. Shirley cards were not made for non-white people because Kodak chose not to work with models of color. 

Similarly, facial recognition software was not made for non-white people because technology manufacturers have failed to compile enough training data containing people of color. 

When technologies are built by small niches of a multicultural society, the result is an inaccurate picture of society as a whole. The world is not getting the advanced technology it was promised, nor the one people were so worried about.

As a result of biased systems, people of color are consistently left out in the cold, subject to the discrimination of technologies designed to keep the dominant niches on top.

In the same way that Shirley cards were instrumental in creating a racist foundation for photo imaging, training data sets continue to fall victim to racist foundations in an increasingly diverse community. These algorithms do nothing but reflect what humanity presents to them: a biased society.

Discriminatory facial recognition technology is representative of the ever-pervasive racism that has been ingrained in society since the Age of Exploration. As a reflection of society itself, computer algorithms cannot simply be “fixed” with the tweak of a line of code.

Computer systems only give back what humans provide them. Correcting the outputs of artificial intelligence and machine learning software means correcting the inputs of society. 

Andrea is a junior in LAS.

[email protected]