The Minnesota Daily

Serving the UMN community since 1900


Opinion: ChatGPT has its limits – our fear of plagiarism doesn’t

UMN Law School is changing its exams this year in response to fears about ChatGPT.
Image by Ava Weinreis
In the last few years, AI has advanced along a science-fictional trajectory.

Artificial intelligence is here. It’s actually been around for a while, speckling software with capabilities, like predictive text, that we now take for granted.

But it’s in the last few years that AI has advanced along a science-fictional trajectory, with companies releasing dozens of generative AI models. These inventions are capable of generating images and text, and one subset in particular has gained incredible attention and sparked deep wariness.

Large language models are trained on vast databases of written text and are commonly deployed as chatbots. These models famously include OpenAI’s ChatGPT but have also been built by Google, Microsoft and other companies into personal assistants and search engine aids like Bing.
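(For readers curious what “using ChatGPT” means beyond the chat window, here is a minimal sketch of querying such a model through OpenAI’s Python client. The model name and prompts are illustrative assumptions, not details from the reporting in this column.)

    from openai import OpenAI

    # Assumes the OPENAI_API_KEY environment variable is set.
    client = OpenAI()

    # The model name here is an illustrative assumption; any chat model works.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful study assistant."},
            {"role": "user", "content": "Summarize the Uniform Bar Exam in one sentence."},
        ],
    )

    # The model's reply comes back as ordinary text.
    print(response.choices[0].message.content)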

The immense power of these models has fueled fears of plagiarism.

In one survey, 48% of students admitted to having used ChatGPT for an at-home test or quiz. Of surveyed students, 53% had the model write an essay for them and 22% had it write an outline for a paper.

The rapid rollout of these tools, combined with limited knowledge of their abilities and the pandemic-era boom in online schooling, has contributed to a panic among educational institutions and instructors about the use of AI.

In recent years, law school exams have generally been open-book, said Kali Killmer, a first-year student at the University of Minnesota Law School. Those exams allowed students to reference textbooks, notes, class materials and, in some cases, even the internet. Hers will be different.

“But because of AI this year … the exam will still be in a virtual format in that it’ll be on my computer, but it’ll be on a lockdown browser completely,” Killmer said. “I won’t be able to do anything else on it, so all the supplemental materials you’re allowed to use on the exam you’ll have to bring in hard copy.”

Killmer cited two prominent AI-related incidents that gained national attention in the legal world.

In June, a judge in New York imposed sanctions on two lawyers who submitted a legal brief written with the help of ChatGPT. The chatbot had generated citations to six fake cases throughout the brief.

ChatGPT also passed the bar exam this March. The Uniform Bar Exam (UBE), designed by the National Conference of Bar Examiners, forms the shared core of the bar exam that would-be lawyers must take to practice law in more than 40 states. Not only did ChatGPT pass the exam, but it showed great improvement over attempts by previous generations of the model. ChatGPT is learning and advancing.

Killmer described how, in response to these events and others, her law school’s approach to plagiarism has changed.

“The general trend has been, like: ‘We don’t know what AI is going to be or how well it’s going to facilitate cheating on law school exams or on the bar exam. So we’re gonna precautionarily move to lockdown, hardcopy things until we have a better assessment,’” Killmer said.

The question is top of mind for many educators.

In January of this year, instructors at the University ran ChatGPT through four law school exams. The bot “performed on average at the level of a C+ student, achieving a low but passing grade in all four courses,” according to the resulting paper.

But for law students, like many other students, the incentive to use ChatGPT or other AI models for plagiarism is outweighed by the risks and difficulties.

Killmer used the example of legal citations, which new law students often struggle to learn. Using AI as a shortcut to writing citations is hardly worth the risk, since students are aware of AI’s unreliability and its tendency to invent entire cases that never existed. On time-restricted law exams, Killmer said, students don’t have time to double-check an AI’s work and make corrections.

For now, there are hard limits to the abilities of ChatGPT and other similar tools.

Everyday Google searches now turn up dozens of AI-generated blogs and articles. Amazon has been flooded with hundreds of AI-written books, often with no disclosure of their authorship. Clarkesworld, one of science fiction’s biggest publications, closed its submissions in February of this year after receiving almost as many AI-generated works as ones by real people.

The common thread of these AI creations? They all sucked.

“It’s difficult for me to believe that there’s a scenario where your average law student could use ChatGPT with any facility to come up with an exam answer that would be better than anything they could come up with on their own, just by the way that the exams are formatted,” Killmer said.

ChatGPT and other AIs are advancing quickly, but for now, the limits of their abilities are clear. Instead of engaging in conversations about the ethics of these tools or their role in the classroom and the professional world, institutions are erring on the side of blanket zero-tolerance policies with little nuance or dialogue.

I have my reservations about AI. I worry about how AI’s popularization will affect jobs and the value of different skills. I worry about how it will impact art and creativity. I worry about students using AI as a shortcut instead of learning critical analytical and writing skills they will need for the rest of their lives. I worry about AI’s ability to regurgitate racism or manipulate users. I miss living in a world where I didn’t have to worry about these things.

By relying on fear-mongering and excessive precaution, we risk creating students who are unaware of generative AI’s abilities and potential, and consequently unprepared to deal with it in the professional world. 

By creating classrooms where any mention of AI is taboo, we encourage curious students to view instructors as the enemy and certain subjects as unthinkable. This is the opposite of what an academic environment should cultivate.

Perhaps this is indeed the best way forward for the University Law School. But what about the rest of us? What conversations can be had around AI in our classrooms? 

Even when its use is prohibited, it is our responsibility to create spaces where we can talk about it honestly and analyze its abilities and limitations as they change in the coming years.
