Whatever sci-fi ideas one might have of ChatGPT, how it really functions may make you think twice about using it for schoolwork, or at least about how you use it.
Although convenient, ChatGPT’s low accuracy in answering both humanities and STEM prompts has raised concerns about the extent to which it can be used as a learning tool. As the world works out the place artificial intelligence holds in society, higher learning institutions have had to grapple with a flood of academic integrity violations.
“The problem with chatbots is that they’re fundamentally just a gimmick,” said David Sepkoski, professor in LAS and Thomas M. Siebel chair in History of Science. “I mean, there’s nothing intelligent about this at all — it’s just algorithms.”
Sepkoski opposes the University’s bylaws on generative AI usage, concerned that they fail to clarify when AI use constitutes academic misconduct for students and faculty.
ChatGPT functions by repeatedly generating the next most probable word to mimic human speech, without considering accuracy. The model breaks text into “tokens,” assigns each candidate token a probability of following the text so far and chooses from among the options with the highest probabilities.
That probability range can be quite wide: ChatGPT might select the next word with 99% probability, the one after with 82% and the next with 70%, Sepkoski said.
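To make that mechanism concrete, here is a minimal sketch of next-token sampling in Python. The candidate words and probabilities are invented toy values; a real model scores tens of thousands of tokens with a neural network rather than reading them from a table.

```python
import random

# Toy probability table for the next token after "The capital of France is".
# These numbers are made up for illustration; a real model computes them
# over a vocabulary of roughly 100,000 tokens.
candidates = {
    " Paris": 0.90,
    " a": 0.05,
    " located": 0.03,
    " Lyon": 0.02,  # plausible-sounding but wrong: accuracy is not the objective
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The model does not "know" the answer; it samples something likely.
print("The capital of France is" + sample_next_token(candidates))
```

Most of the time the sample lands on the likeliest word, but nothing in the procedure checks whether the chosen word is true.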
This mechanism is responsible for the factual errors in ChatGPT’s answers. The corpus, the collection of text the program is trained on, is another major issue: in scanning data from across the internet, generative AI cannot discern what is true, false or even sarcastic. It simply reproduces what exists.
As many people have begun using ChatGPT as they would Google, there are concerns over how it will contribute to alternative facts in what some have termed a “post-truth world.” One 2023 study by disinformation researchers found ChatGPT easily wrote out conspiracy theories as if they were fact, using “incredibly convincing human syntax.” The researchers reported there was no available tactic to mitigate the issue.
“It’s designed not to be accurate but rather to be entertaining,” Sepkoski said. “The entertainment undercuts the reliability and accuracy of something like GPT, and until people develop more specialized models, maybe that are specifically for the use in certain kinds of classes or whatever, students really shouldn’t trust, really, any of the results that they get from a GPT prompt.”
Outside of ChatGPT, Sepkoski said, there is a place for AI in education. For example, JSTOR now uses AI to improve usability by pulling up articles in response to a request provided in natural language by the user. This AI, trained specifically in a “one-directional” way, can be extremely useful and convenient in providing reliable information.
This solves a clearly identified problem for undergraduates unfamiliar with research materials and websites like JSTOR. Proximity searching, which finds documents where related terms appear near one another, has become a “holy grail in database searching,” Sepkoski said. Far more sophisticated than ChatGPT at this task, it can intelligently assess whether terms and topics are relevant to the request.
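As a rough illustration of the idea (a hypothetical toy, not JSTOR’s actual implementation), a proximity search asks whether query terms occur within a few words of each other:

```python
def within_proximity(text: str, term_a: str, term_b: str, max_gap: int = 5) -> bool:
    """Return True if term_a and term_b appear within max_gap words of each other."""
    words = text.lower().split()
    positions_a = [i for i, w in enumerate(words) if w == term_a.lower()]
    positions_b = [i for i, w in enumerate(words) if w == term_b.lower()]
    return any(abs(a - b) <= max_gap for a in positions_a for b in positions_b)

# "climate" and "policy" sit three words apart here, so this prints True.
print(within_proximity("new climate research shapes policy debates", "climate", "policy"))
```

Production search engines expose the same idea as a query operator; Apache Lucene, for instance, accepts queries like "climate policy"~5.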
On the other hand, ChatGPT is not as specialized; it often cannot identify the source of its information. Sepkoski, among other professors, reported students submitting ChatGPT-made bibliographies filled with fabricated sources.
“What problem is that solving?” Sepkoski said. “The only problem it seems to be solving is that people are lazy, and they would like to be lazier.”
OpenAI and Perplexity AI are being slammed with class action lawsuits from major news publications, YouTubers and Pulitzer-winning writers for using their material to train ChatGPT without permission or credit.
Despite these issues with how ChatGPT pulls information from sources, style guides including APA, MLA and the Chicago Manual of Style have made ChatGPT citable. The University’s bylaws on academic integrity allow the use of ChatGPT so long as its usage is acknowledged.
This sudden acceptance of ChatGPT in academia has left many in the humanities departments at the University feeling a sense of “bewilderment and hopelessness,” Sepkoski said.
“I’m very disappointed with the University’s overall lack of guidelines and support for instructors who are struggling with students who are misusing AI. I mean, they just want to pretend that the issue isn’t a big deal, and as I found out in the class, it was a big deal,” Sepkoski said.
The University’s policy on academic integrity follows a three-strikes system, the third strike being grounds for expulsion. The University Office for Student Conflict Resolution enforces disciplinary measures for offending students.
Bob Wilczynski, director of the Office for Student Conflict Resolution, reported 1,194 academic integrity violations within the past year, with 70-80 being repeat offenders. This is up from 1,000 the year before.
Due to the overwhelming numbers, Wilczynski said, the office has focused more on issuing warning letters than on handling individual cases.
Sepkoski sympathized with the administration over its lax disciplinary measures, citing the barrage of complaints from parents driven by a relatively new sentiment where “everyone feels entitled to an A in the class,” he said.
According to Wilczynski, those working at the Office for Student Conflict Resolution share this hopelessness. Discipline for a second infraction involves writing an essay reflecting on plagiarism; Wilczynski’s office has witnessed multiple instances of students using AI to write that very essay.
The University acknowledges that many students cheat because of time constraints that make workloads difficult to handle. Still, this is not an excuse, said Sepkoski, who noted he already assigns far less than his own professors did when he was earning his degrees.
Sepkoski believes tasks such as brainstorming and writing outlines should not be handed to ChatGPT, given its likelihood of spitting out incorrect information.
“It produces text that sounds like the way humans talk, but its content is actually really bad,” said V. N. Vimal Rao, professor in LAS. “So, (as) professors, (it’s) generally pretty easy for us to tell when somebody just wrote it themselves or when they use ChatGPT because if you know what you’re talking about, it just sounds really stupid.”
A dedicated researcher on the topic, Rao still feels AI has a place in the classroom. In his statistics course, he requires students to use ChatGPT to create graphs. In addition, his course utilizes Arist AI, a program that, although unrelated to OpenAI, offers a ChatGPT-like interface but references and summarizes course material directly from lectures. This is unlike ChatGPT, which generates answers from information scraped across the internet that may or may not be correct.
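The general pattern behind such tools, retrieving relevant course material before answering, can be sketched in a few lines of Python. This is a hypothetical toy, not Arist AI’s actual implementation: it scores lecture snippets by word overlap with the question and returns the best match for summarization.

```python
# Hypothetical retrieval step: find the lecture snippet most relevant
# to a student's question before any summarization happens.
lecture_notes = [
    "A p-value is the probability of results at least this extreme under the null hypothesis.",
    "A histogram displays the distribution of a numeric variable using binned counts.",
    "Correlation measures linear association and does not imply causation.",
]

def retrieve(question: str, snippets: list[str]) -> str:
    """Return the snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(snippets, key=lambda s: len(q_words & set(s.lower().split())))

# Grounding answers in course material, rather than the open web,
# keeps responses tied to what was actually taught.
print(retrieve("what does a p-value mean", lecture_notes))
```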
According to Rao, any course involving computer programming could make use of ChatGPT, a testament to the chatbot’s ability to write code.
Yet ChatGPT may not be an option even for STEM. In its latest version, ChatGPT achieved a 64% accuracy rate in answering math problems and reached only the 89th percentile on the math portion of the SAT. For coding, one Purdue study found ChatGPT solved programming problems with only 52% accuracy.
As the world increasingly integrates ChatGPT into its workings, there will be consequences for the future of labor and the workforce, Sepkoski said.
Sepkoski worries about skills such as critical analysis, writing in one’s unique voice and even brainstorming being written off as replaceable by algorithms out of convenience.
Sepkoski also attributes the University’s lack of urgency in developing more substantial disciplinary measures to its stake in the computer science industry and its effort to hop on “the bandwagon they want to be a part of,” he said.
Like Sepkoski, Rao is concerned about overreliance on AI. To guard against it, he teaches his course in a way that trains students to maintain the core thinking needed to grasp the material while making room for new tech developments.
Still, how much AI can power the future is up for debate; the resources its data centers expend to answer queries are an environmental concern, with one ChatGPT query estimated to use 25 times the energy of a Google search. As of May 2024, ChatGPT’s daily electricity usage was enough to power 180,000 American homes, and one search uses a plastic bottle’s worth of water to cool its servers.
When asked about the actual environmental feasibility of ChatGPT being a regular tool, Rao admitted, “(It) doesn’t matter how cool your job is if there’s no world to live in. Yeah, that’s not important. So I think that that is a very important ethical issue to consider.”
As with JSTOR’s new query function and Arist AI’s lecture referencing feature, many instructors do see potential in AI to improve efficiency and accessibility.
“I’m not a reactionary,” Rao said, likening the panic around AI to the panic in the 2000s surrounding the internet. “My nature is to say, let’s figure out how to live in this new world. And I don’t think we have yet.”