Editor-in-Chief: Grace LaTourelle
As artificial intelligence availability and usage increase globally and at Gustavus, the campus has been navigating how to address the concerns and issues arising from it. A special committee, composed of faculty and students, was created last summer to develop a document of principles outlining shared institutional values regarding AI. On Feb. 23, 2026, Student Senate passed a resolution endorsing the Guiding Principles for the Responsible Use of AI document.
“The principles are important because they serve as a shared guide for how we continue to sustain our liberal arts values in the context of AI, rather than reacting to AI as purely a technology disruption,” Assistant Provost of Faculty Development and Support Dave Stamps said.
According to AI Guiding Principles working group member and Student Senate Finance Chair Senior Chris Gatuza, the committee identified, synthesized, and homed in on five values regarding AI usage drawn from various on-campus and off-campus sources. The five guiding principles are Critical AI Literacy, Human-Centered Learning, Informed Agency, Integrity (transparency, consent, and privacy), and Stewardship & Environmental Responsibility.
According to Student Senate co-president and Junior Laura Sunnarborg, Stamps requested that Senate review the principles document and write a resolution endorsing it. The deliberation took place at the Senate meeting on Feb. 9.
Of the 35 members of Student Senate, only 24 are voting members. Many senators expressed concern over the principles outlined in the document. According to Sunnarborg, there was also confusion about whether the document was a statement of principles or a policy.
“When the document was returned to us…there were not any changes made in response to student feedback. I had some concerns about how feedback was collected and the framing of the issue to the broader campus,” Student Senate member and Junior Jonathan Ryan said.
Due to these concerns, the resolution to endorse the guiding principles failed in an 11-8 vote. On Feb. 23, AI committee members Maddelena Marinari, Guario Salivia, and Stamps addressed Senate at its meeting to clear up confusion around the principles. Following this clarification, another vote was held and the resolution to endorse the guiding principles document passed.
“The ultimate decision was to approve the document. This was done impromptu, with no advance warning, and right after a session in which the AI committee was allowed to relentlessly lobby in favor. This means that the committee can go ahead forming policies to their liking, with only vague, unenforceable values to limit them. We will certainly be seeing more AI in classes after this,” Student Senate member and Junior Benjamin Ryan said.
The research and development of this document, as well as its endorsement by Student Senate, Faculty Senate, and the President’s Executive Leadership Team, constituted phase one of a two-phase plan. The second phase will delve into policymaking: Faculty AI Fellows will create syllabus templates for “AI-required,” “AI-encouraged,” “AI-accepted,” and “AI-prohibited” classes, as well as write an implementation guide and an AI Literacy policy recommendation. Phase two will take place during the month of May.
“In the Faculty AI fellows program, a document not initially made available for students, we saw the guidelines being used to begin an expansion of AI use in coursework, including in all FTS and SIGX courses,” Jonathan Ryan said. “More concerning was the lack of mention of any plan to offset the environmental impacts or the risks to critical thinking present in AI expansion.”
While Student Senate was involved in the endorsement process in phase one, policy creation in phase two will largely be up to the Faculty AI Fellows and individual departments, and will not be brought to Student Senate, according to Sunnarborg and Gatuza.
Stamps explained that student input has been central to the development of the principles.
“The student voice remains essential. Student feedback has already meaningfully shaped this document. Continuing to involve students…will be critical to keeping this work relevant and accountable,” Stamps said.
However, concerns remain about students’ role and input in AI usage guidelines and policy on campus.
“…I do not believe there was enough student involvement in the creation of the guidelines. Many students are passionate about AI policy, but students largely were not directly involved in creating the policy outside of the one student member,” Jonathan Ryan said. “I think these guidelines will be detrimental to AI policy at Gustavus, because they take the discussion away from the students and puts it behind closed doors while claiming student support.”
Salivia commended Student Senate on their initial rejection of the guidelines and discussion surrounding concerns.
“I think the students are in the right place by questioning us, and by questioning the technologies, and by questioning the principles. And I invite students to continue doing that, because I think you guys have the most powerful voice…I think the students should continue to exercise that power…” Salivia said.
Faculty and students alike have described the changes they have observed on campus surrounding artificial intelligence.
“…Profs are struggling to define what is or isn’t out of bounds…while many students are panicking that they’ll be called out for plagiarism simply because they used the legitimate tools available to them,” Communication Studies professor Martin Lang said. “To be clear, some students are straight-up cheating and know it when they use AI. But the reality on the ground is much messier than that, and it’s going to take a lot of collaboration and mutual trust to sort it all out.”
Some professors have begun implementing AI in the classroom. Salivia explained that in one of his classes, he assigns an essay that students must write using an AI model of their choosing, so that they learn the tools available to them. However, Salivia has also noticed a change in exam performance among students who are doing side projects using AI.
“…the level of dependency that these students are generating by using artificial intelligence is worrisome,” Salivia said.
Sunnarborg pointed out that issues over academic integrity are complicated, as AI usage is often hard to prove or call out.
“While reading student responses to Moodle assessments this year, I have found an increased enthusiasm for em dashes,” Physics Professor Darsa Donelan exclaimed.
Many professors and students said that knowledge of how to use AI is central to the emerging issues. Whatever their stance on AI, Sunnarborg encouraged all students to do their research.
“Try to really understand how it works: what does it mean that it’s a large language model, how much water does it actually use in comparison to other industries…What is the data on how much it actually decreases your critical thinking skills…? Whatever side you’re on, do your research,” Sunnarborg said.
Phase two of the plan is projected to be finished over the summer, with the policies implemented next school year. Students are encouraged to bring any current or future concerns to Student Senate.