Gustavus Faculty Speaks on AI Challenges

Staff Writer - Evangelyn Hill

In our rapidly changing world, AI is an ever-growing concern for students and teachers alike, raising questions that are difficult to answer.

When is AI use acceptable? What are the limits of what it can do well? What ethical concerns does it raise?

College campuses are understandably struggling to keep up with the shifting landscape of AI. Like teachers across the nation, Gustavus professors are putting serious thought into their policies around AI.
Librarian Rachel Flynn suggested that “there’s a lot of capabilities in the technology to work towards knowledge production that’s more accessible.” However, she admitted to still having more questions than answers.

Yurie Hong is a Classics professor and co-leader of a campus AI survey. “I think AI can be useful for a limited number of things, but that it is incredibly ethically and environmentally problematic,” Professor Hong said.

Eric Vrooman, an English professor and director of the Writing Center at Gustavus, is concerned about the growing use of AI. Students are at college to become critical thinkers, he said, and writing is a crucial part of that.

“To pass that responsibility on to AI is a real problem,” he said.

Having put significant time and effort into studying AI's potential use in a college setting, Gustavus professors are prepared to coach students on how to use it safely and ethically.

Flynn, who also helped to lead the faculty and student AI survey, had two points of advice.

First, AI literacy is critical. Students need to understand what the technology is and what it isn't.
When prompting an AI, students should know the capabilities and limitations of the large language models behind it.

Second, she emphasized that students have agency around AI and its use. Despite the feeling that it's "a foregone conclusion that we will all use [AI]," as Flynn put it, students can still choose whether to use or avoid the technology.

Hong's advice instead emphasizes the importance of learning. "Learning is not efficient, and the purpose of doing your own work is to exercise your brain… AI can be an easy shortcut, but it also means that you don't learn anything, and it is not the best way to spend your time in college when you are spending a lot of money to learn," she said.

She likened college to a gym, explaining, “You may never be asked to write a paper on Shakespeare or analyze a historical text in your future career, but just like football players don’t get asked to do pushups or lift weights on the field, doing that brain exercise is what will enable you to perform well in the field of your choosing.”

Vrooman agreed, pointing out that AI shouldn't be used for brainstorming in particular, a shortcut he called "a negligent act."

Despite these and other concerns, Gustavus professors acknowledge the possibilities of AI.
Flynn highlighted how AI can bring information within reach of more people, making knowledge production more accessible.
She also wondered whether an ethical use of AI might be to make professors' work more efficient so they can spend more time in meaningful interactions with students.

Hong echoed this, saying that she is experimenting with AI use herself. She has used it to fill in conjugation tables for Greek words and to create other charts, tasks that would otherwise take time away from more important work.

Another concern is that avoiding AI entirely in class could leave students ignorant of the technology, a significant setback in itself.

Vrooman said it is “negligent to not advise students on how it is being used in their discipline.”
Flynn agreed, saying that it's necessary for professors to model ethical use of AI for their students.

While professors are cautious about AI, student opinion is currently a mixed bag. Flynn, who helped conduct the Gustavus campus survey on AI use, noted that she saw a spectrum of opinions.

Some (including professors) were early adopters of the technology, creating innovative ways to use AI in teaching and learning. On the opposite end of the spectrum were strong voices of opposition to AI use.
In her experience, the student voices speaking out against AI were even stronger than those advocating for it. But as each year's incoming students arrive more acclimated to AI, she expects vocal opposition to fade.

Her main worry is ensuring the campus's AI use is aligned with Gustavus's core values.

Ethical concerns about AI go beyond just how and when to use it. Professors continue to voice frustration with its environmental costs. As Vrooman noted, everyone who does a Google search is now complicit in the use of AI, since AI summaries appear automatically on multiple search engines.

“The first thing I say whenever AI comes up… is that I don’t see how it’s sustainable. The energy and water requirements for it are such that I don’t personally… I feel like ethically it’s more than just complicated. But the consequences are extreme,” Vrooman said.

He pointed to the accelerating use of AI and the energy concerns it brings: power grids may become overtaxed in areas with data centers, and the energy production fueling those centers can worsen global warming and pollution.

Across concerns about the environment, ethics, and student learning outcomes, the overarching theme of Gustavus professors' thoughts is uncertainty.

“Personally, I have a lot of questions still. I have real concerns about it as a scholar,” Flynn said.
