After its launch last month, ChatGPT, the latest chatbot released by OpenAI, made the rounds online.
Alex, a sophomore at a university in Pittsburgh, started toying with the chatbot about a week after it was released, after finding out about it on Twitter. Within a couple of days, he got really excited by the quality of the writing it produced. The chatbot was good, he says—really good. (“Alex” is the name that this person provided to EdSurge. He only agreed to speak anonymously, for fear of repercussions for admitting to academic dishonesty.)
He’d found the chatbot around finals week, with everyone in a mad rush to finish papers. Most people seemed interested in asking the chatbot for jokes or stories, Alex says, but he was “instantly intrigued with the idea of using it to write a paper.”
After trying it out on some essay prompts he’d been assigned, though, he noticed some problems. The writing could be awkward. It would repeat phrases, or include inaccurate quotes. Those little things added up, making the writing seem like it didn’t come from a human. But Alex started tailoring the text, experimenting with breaking up and varying the kind of prompts he fed the chatbot. It seemed to take out some of the thankless legwork (or, some professors might argue, the work) from essay writing, only requiring a little pre-work and a touch of editing: “You can at least write papers 30 percent quicker,” he says.
Eventually, he says that the papers he and the bot were creating together passed plagiarism checkers with ease. He sang the chatbot’s praises to friends. “I was like Jesus walking around preaching the good word, teaching people how to use this,” is how he put it.
Something fundamental had changed: “I was literally just giddy and laughing, and I was like, ‘Dude, look at this,’ and everything is f*cking changed forever,” he says.
He wasn’t the only one he knew using the AI. But others were less careful about the process, he noted. They put a lot of faith in algorithmic writing, turning in essays without really going over them first.
A finance major, Alex also smelled opportunity. His pockets weren’t exactly flush. So, early on, before it had caught on, Alex sold a handful of papers—he estimates about five—for “a couple of hundred bucks.” Not a bad rate for a couple hours of work.
Cat and Mouse Game
The last few weeks have seen a rush of articles in the popular press detailing how students are using ChatGPT to write their papers. The Atlantic magazine put the question starkly: “The College Essay Is Dead.”
And the tool doesn’t just present a challenge to those teaching English classes. The AI chatbot can seemingly spit out answers to some questions of finance and math as well.
But like the internet—which provided the data the chatbot was trained on—ChatGPT’s output can be dicey. That means the essay answers it produces for students often include statements that aren’t factually accurate, and sometimes it just makes stuff up. It also writes racially insensitive and misogynistic things.
But Alex’s story shows that a little human input can correct such issues, which raises a question many professors are asking: Can plagiarism-detection tools catch these AI creations?
It turns out that the makers of Turnitin, one of the most widely used plagiarism-detection tools, aren’t breaking a sweat. “We’re very confident that—for the current generation of AI writing generation systems—detection is possible,” says Eric Wang, vice president of AI for the company.
Plagiarism is evolving, but it can still, in theory, be sussed out, he argues. That’s because unlike human writing, which tends to be idiosyncratic, machine writing is designed to use high-probability words, Wang says. It just lacks that human touch.
Put simply, essays written by chatbots are incredibly predictable. The words the machine writes are words that you expect, where you’d expect them. And this leaves, Wang says, a “statistical artifact” that you can test for. And the company says it’ll be able to help educators catch some of the cheats using algorithmic tools like ChatGPT sometime next year.
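Turnitin hasn’t published how its detector works, but the “high-probability words” idea Wang describes can be illustrated with a toy sketch: score a passage by how probable each of its words is under a simple unigram language model, so that text built from expected words in expected places scores higher. (The reference corpus, the function name and the unigram model below are illustrative assumptions for this sketch; real detectors rely on far more sophisticated statistics drawn from large language models.)

```python
from collections import Counter
import math

# A tiny reference corpus standing in for a language model's expectations.
# Real systems would estimate probabilities from a large language model.
REFERENCE = (
    "the quick brown fox jumps over the lazy dog "
    "the dog sleeps and the fox runs"
).split()
FREQ = Counter(REFERENCE)
TOTAL = sum(FREQ.values())


def predictability(text, smoothing=1.0):
    """Mean log-probability of each word under the unigram reference model.

    Values closer to zero mean the text uses high-probability words,
    i.e. it is more 'predictable' by this toy metric; very negative
    values mean the wording is unexpected. Laplace smoothing keeps
    unseen words from scoring zero probability.
    """
    words = text.lower().split()
    if not words:
        return float("-inf")
    vocab_size = len(FREQ) + 1  # +1 slot for unseen words
    log_probs = [
        math.log((FREQ.get(w, 0) + smoothing) / (TOTAL + smoothing * vocab_size))
        for w in words
    ]
    return sum(log_probs) / len(log_probs)


print(predictability("the dog runs over the fox"))   # more predictable (closer to 0)
print(predictability("zygote quasar mellifluous"))   # less predictable (more negative)
```

The gap between the two scores is the kind of “statistical artifact” Wang alludes to: a passage stitched entirely from high-probability words looks measurably different from idiosyncratic human prose.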
Who’re You Calling Unoriginal?
Whether or not you think pronouncing the college essay dead is premature, the concerns respond to a real trend.
Cheating, well, it’s all the rage.
As students burn out from the unprecedented stress and uncertainty they’ve been thrown into, they seem more tempted to take shortcuts. Universities have reported that cheating has, in some cases, doubled or even tripled since the start of the pandemic. For example: In the 2020-2021 school year, in the heat of the pandemic, Virginia Commonwealth University reported 1,077 instances of academic misconduct, a more than threefold increase.
The figures show that cheating has increased dramatically, but the actual figures may be undercounts, says Derek Newton, who runs The Cheat Sheet, a newsletter focused on academic fraud. People are reluctant to fess up to cheating, Newton says. Most of the academic integrity studies rely on self-reporting, and it can be hard to prove someone’s cheating, he adds. But he says it’s clear that cheating has “exploded.”
What’s causing that? As colleges have rushed to teach more students, they’ve turned to online programs. That creates good conditions for cheating because it reduces the amount of human interaction and increases students’ feelings of anonymity, Newton says. There’s also been an increase in the use of “homework help sites,” companies that provide on-demand answers and places for students to share exam answers, which he claims brings cheating to scale.
The problem? Students aren’t learning as much, and the value that colleges are supposed to bring to students isn’t there, in Newton’s view. And because it’s rare for students to cheat just once, he says, the rise in cheating degrades accountability and quality in the professions colleges train students for (including in fields like engineering). “So I view this problem as triply bad: It’s bad for the students. It’s bad for the schools. And it’s bad for all of us.”
Alex, the sophomore in Pittsburgh, sees the relationship between the chatbot and student a little differently.
He says it’s a “symbiotic relationship,” one where the machine learns from you as you use it. At least, the way he does it. “That helps with its originality,” he says, because it learns its user’s quirks.
But it also raises the question of what constitutes originality.
He doesn’t argue what he’s doing is right. “Obviously the whole thing is unethical,” he admits. “I’m telling you right now I committed academic dishonesty.”
He argues, though, that students have long used tools like Grammarly that offer specific suggestions on how to rework prose. And plenty of students already turn to the internet for the source material for their essays. For him, we’re just in a new reality that academia needs to reckon with.
And Alex guesses that word is spreading quickly among students about how to use ChatGPT to write papers. “There’s really no way to stop it,” he argues.
Even some college leaders seem open to revamping how they teach to meet the challenge of AI.
“I am encouraged by the pressure that #ChatGPT is putting upon schools & educators,” tweeted Bernard Bull, president of Concordia University Nebraska, this week. “As one who has been arguing for humanizing & de-mechanizing #education, it is an intriguing twist that a technological development like this may well nudge us toward more deeply human approaches.”