Opinion | Coexistence with machine intelligence requires opening the “black box”

Photo Courtesy of Xu Yu/Xinhua/Sipa USA/TNS

Chinese Go player Ke Jie, third from right, and other guests attend the opening ceremony of the Future of Go Summit before a match between him and Google’s artificial intelligence program AlphaGo in China on May 23, 2017. Columnist Clint argues we must be careful when developing artificial intelligence.

By Clint Dozier, Senior Columnist

The year 2020 was, well, terrible … in many ways. The COVID-19 pandemic brought the entire world to a near standstill; millions of people have been infected, and many have died. The economies of many major countries are now in recession, with millions having lost their jobs in the United States alone. And to top it all off, 2020 was the year we lost Eddie Van Halen, one of the greatest musical geniuses of all time and my personal guitar hero, to a lengthy battle with cancer.

This is not to say that nothing good happened in 2020. One area in particular saw more progress in a single year than perhaps ever before: the development of artificial intelligence (AI).

Among the most notable of these AI advancements were GPT-3, created by the company OpenAI, and AlphaFold, created by DeepMind, Google’s AI subsidiary.

To be frank, GPT-3 is cooler and more interesting than AlphaFold, but AlphaFold is, for lack of a better term, a way bigger deal. When I say GPT-3 is “cool,” I mean it is very impressive at first glance. GPT-3 is what’s known as a natural language model, which means it can interpret and process natural human language, and it does so shockingly well.

GPT-3 can do just about everything, from writing essays and answering questions to impersonating historical figures, writing computer code, designing websites and now generating images from nothing but a text description. The YouTube channel ColdFusion has an excellent video on GPT-3 with even more detail. There is a good deal of debate among experts, however, over exactly how impressive GPT-3 really is.


AlphaFold, however, is an amazing breakthrough, and that is not really disputed by experts. Simply put, AlphaFold has “solved” the decades-old protein folding problem in biology: the endeavor to determine a protein’s 3D shape from its amino acid sequence alone.

By “solved,” researchers mean that AlphaFold achieved a score of 87 in the Critical Assessment of protein Structure Prediction (CASP) competition, where a score of 90 or better is considered roughly equivalent to the experimentally determined structure. This will have far-reaching implications, especially for the speed and efficiency of medical research and drug development. AI researcher and podcast host Lex Fridman has an excellent YouTube video on this topic.

However, a major problem with these AI breakthroughs is that researchers don’t yet fully understand how the algorithms inside these systems work. That is to say, these AI systems make decisions to produce an output, but researchers don’t know why those decisions are being made. This is the problem of AI algorithms being a “black box.”

An algorithm is nothing more than a set of instructions. Think of an algorithm like a recipe. An algorithm can be as simple as “multiply these two numbers together” or as complicated as the human brain. Any form of intelligence, biological or not, is at bottom an algorithm or set of algorithms producing a result or solving a problem. Whenever you complete a task, from driving a car to doing algebra homework to just watching Netflix, algorithms are running on the hardware that is your brain.
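To make the recipe analogy concrete, here is a tiny sketch in Python (the function names are my own, purely for illustration). Every step is spelled out, so anyone can trace exactly why the output is what it is:

```python
# A minimal sketch: an algorithm is a recipe, a fixed list of steps.

def multiply(a, b):
    """The simplest recipe: one instruction, multiply two numbers."""
    return a * b

def largest(numbers):
    """A slightly longer recipe: find the biggest number in a list."""
    best = numbers[0]
    for n in numbers[1:]:  # step through the remaining numbers
        if n > best:       # keep whichever is bigger so far
            best = n
    return best

print(multiply(6, 7))            # 42
print(largest([3, 1, 4, 1, 5]))  # 5
```

With a recipe like this, there is no mystery: the instructions are the explanation.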

So why is that a problem for AI? Computer scientists have gotten so good at building complex algorithms, not just GPT-3 and AlphaFold but also the recommendation systems behind Facebook and YouTube, that they are increasingly unable to explain why a given system makes the decisions it does. Thus, these algorithms are a “black box.”
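To gesture at the contrast, here is a deliberately toy sketch of my own; it is nothing like the real architectures behind GPT-3 or AlphaFold, which have billions of parameters rather than three. But even at this scale, the numbers carry no human-readable reason for any given decision:

```python
# A toy "black box" (illustrative only): the behavior lives entirely in
# numeric weights, not in human-readable rules.
import random

random.seed(0)  # make the example reproducible

# Three weights standing in for a "learned" model. Each is just a number;
# none of them states *why* the model decides anything.
weights = [random.uniform(-1, 1) for _ in range(3)]

def predict(inputs):
    # The entire "decision process" is this weighted sum and a threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

print(weights)             # three opaque numbers
print(predict([1, 0, 1]))  # a yes/no decision, with no stated reason
```

Reading the weights tells you what the numbers are, not why the system behaves as it does, and that gap only widens as models grow to billions of parameters.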

Max Tegmark, AI researcher, physicist and author of “Life 3.0: Being Human in the Age of Artificial Intelligence,” recently appeared on the Lex Fridman Podcast and discussed the issue. 

Tegmark said of technologies like AlphaFold and GPT-3: “… you know what all of those have in common besides being powerful is we don’t fully understand how they work… they’re all basically black boxes…” Tegmark went on to discuss the research he is conducting at MIT to try to “demystify” the processes behind these AI systems. He stressed the need to solve this problem before something goes awry, not after.

This is an existentially relevant goal, and it will need to be realized before AI is deployed in ever more important and potentially dangerous situations, not after the damage is already done.

Facebook’s algorithm potentially leading to echo chambers and polarization is bad, but it can at least be remedied. An algorithm that’s driving your car, flying your plane or inventing new drugs has no room for error. That is why the black box problem is so important and why it must be overcome before an AI system does something catastrophic. The clock is ticking.

Clint is a senior in LAS.

[email protected]