The introduction of artificial intelligence has significantly influenced universities across the country, and the University is no exception. Tools like ChatGPT, Atlas, Gemini and TurboAI have allowed students to do something never before possible: complete high-quality work on a mass scale without ever learning class material.
ChatGPT is a large language model trained on vast amounts of human-written text, and it is common to see recurring phrases when prompting the model to write. Professors have tried to solve the issue of AI plagiarism in the past by using AI-detection tools, but the rapid advancement of AI has made those tools less effective. Additionally, studies find that AI-checking tools are negatively biased against non-native English speakers, which heightens disparity in the classroom.
OpenAI’s newest product, Atlas, has also sparked concern among educational institutions. Marketed as a “webpage co-pilot,” the tool integrates an AI assistant directly into the browser to streamline daily tasks. Atlas also has an “agent mode,” which allows the AI to complete tasks without human interaction.
A user planning a vacation, for example, could prompt the agent to filter flights while they search for a hotel. With the user’s consent, Atlas can then purchase what it finds. Sam Altman, the CEO of OpenAI, told users this feature “allows AI to perform multi-step tasks autonomously like a virtual computer.”
Sam Blaker, senior in ACES, said the new tool makes multitasking more efficient.
“I have already installed (Atlas), and it is really useful,” Blaker said. “It helps me a lot with investing because I can ask it to do things in the background while I am focusing on a task.”
Students can also consult the tool to write essays, code projects or fill in answer blanks on quiz sites like PrairieLearn. This latest advancement allows students to offload work in a way never seen before. With the “agent” feature, for the first time, an assignment could be completed entirely by AI without any student interaction. The upgrade comes at a time when AI usage among students is at an all-time high.
Mike Szymanski, clinical research professor in Business, said he has noticed an uptick in AI usage among his students. In response, he conducted a pseudo-study in his BUS 301: Business in Action course. One group of students could use AI to complete assignments, while the other could not. Szymanski then tested both groups’ ability to recall information from the assignment 10 minutes after completing it. Then, 48 hours later, he performed the same test and noted his findings.
“The students that used AI had significantly lower recall and originality scores than those who worked with their brains,” Szymanski said. “When you normalize using AI, you’re basically skipping the part where you build familiarity. And then you leave not knowing what ‘good work’ actually looks like.”
Szymanski’s research is under review by an internal board, but his preliminary findings align with a broader Massachusetts Institute of Technology study conducted this past summer. The MIT study examined how using a large language model writing assistant affects cognitive engagement, brain connectivity, writing behavior and learning outcomes during essay writing tasks.
Blaker disagrees with the claim that AI use harms performance.
“The use of AI just makes things more efficient,” Blaker said. “I am able to do a lot more now than I would have been able to five years ago.”
Ritisha Bansal, freshman in Engineering, said she has witnessed heightened AI dependency among her peers.
“When midnight comes around and there’s only five minutes left to complete homework, they panic,” Bansal said. “So then you snip the question and get the answers from ChatGPT so you get it done. But this work isn’t your own, and when exams come around, you struggle because you have no idea what you actually learned.”
This act generally falls under plagiarism per the academic integrity handbook. As AI continues to advance, the University may need to update its academic integrity policies to reflect the new landscape and prevent further issues.
A recent New York Times article highlighted the AI usage among students at the University. The phrase “sincerely apologize” was used in student emails to professors to an overwhelming degree, prompting the instructors to bring up the issue in class.
Szymanski said he noticed a similar pattern of common phrasing when grading end-of-term reflection papers that were supposed to be personal and anecdotal.
“The pattern was unbelievably consistent,” Szymanski said. “I’d get five different students turning in papers that all opened with the exact same sentence, using the exact same wording taken from the syllabus. That’s how I knew they were just feeding the course description into ChatGPT and calling it ‘my personal growth.’”
Szymanski said he is wary of the future and believes the problem runs deeper than copy-and-paste dishonesty.
“After watching this play out in my own classroom, I honestly don’t think the crisis is that students are cheating,” Szymanski said. “I think the crisis is that they don’t see it as cheating. What I saw with my students is that AI didn’t kill their ability to write. It killed their willingness to try.”
