Fifty-six percent of surveyed college students admitted to using artificial intelligence to complete academic work, according to a 2023 CBS News report.
AI can make students' work more efficient by enhancing autocorrect features and helping generate ideas, but when used for classwork without a professor's permission, it violates the University of St. Thomas Undergraduate Student Academic Integrity Policy as a form of academic dishonesty.
To combat the rising number of students using AI, professors have begun using AI detection tools like Turnitin.com, but “they are not always accurate,” St. Thomas environmental studies professor Paul Lorah said.
In an attempt to navigate the new landscape of education, Lorah decided that the best way forward was to promote AI literacy in his classroom.
“I am experimenting with allowing students to use AI in upper-level courses, including Conservation Geography and the Geography of Global Health,” Lorah said. “On some assignments I allow them to interact with AI to generate ideas and even produce some text.”
Since beginning this approach, Lorah said that he has seen an astonishing result.
“Many students do not choose to use AI at all, and as the semester progresses those that do generally use it less and less,” Lorah said.
Lorah found that the biggest deterrent to AI use is the requirement to cite the AI engines that generate the text. He requires his students to check the accuracy of facts produced by AI, and he said that by the time they do so, “most students feel like they might as well summarize ideas in their own words.”
As positive as the results with AI literacy have been, Lorah has encountered instances of AI cheating.
“It is really, really obvious when that happens,” Lorah said.
Though AI-generated work can appear obvious to professors, verifying unapproved AI use is tricky. Turnitin.com has a margin of error of 15 points out of 100, according to a report from the University of Kansas. That uncertainty, combined with the need to rely on professors’ judgment, is a source of stress for some students.
One such student is Louise Strivers, a political science major at the University of California, Davis, who was falsely accused of using AI to write a brief, according to a 2023 Rolling Stone report. An AI detection tool flagged her work, and her professor reported her to the school’s administration. She spent two weeks on paperwork and appeals to clear her case, and she must now self-report the cheating allegation while applying to law school.
St. Thomas junior civil engineering students Donna Bruinius and Dylan Moreno agreed that concerns about AI use and detection are relevant.
“I think if they flag me for AI usage when I know I didn’t . . . I would hope they could at least have a discussion with me,” Bruinius said.
Moreno said that in a world where AI is now so prevalent, AI literacy may be the best solution.
“I think AI is a good tool, especially if you know how to use it right,” Moreno said.
Bruinius said familiarity with AI is helpful in professional fields.
“In the real world, they’re using AI,” Bruinius said. “I think it would be better for us to know how to use it instead of being shied away from it or shamed for using it.”
Lorah’s efforts to teach AI literacy in the classroom align with Bruinius and Moreno’s views.
“AI is probably going to generate creative destruction in unexpected ways that help some students and harm others,” Lorah said. “I think it makes sense to start experimenting with the technology now as I’d rather that my students are not the ones caught by surprise.”
Anna Brennan can be reached at brenn7501@stthomas.edu.