Since its release, ChatGPT has made my academic life much easier. Where I relied heavily on office hours and tedious Google searches in the past, now I can quickly double-check math problems and proofs, debug coding assignments and find new research sources. It’s helpful. But, more importantly, it’s convenient. This, it seems, is universally agreed upon.
“Typically what I do is I put the syllabus into ChatGPT and… generate a general outline of how I should break up the assignments… Any sort of readings I have to do, I typically put it in and have it give me a summary… and continue asking it questions… [on] concepts that I might not have fully gotten the first read,” said Hector Rodriguez, a fourth-year computer science and mathematics major.
“I use it occasionally when I’m writing applications because you have to speak really formally in them,” said Sarafina Stolz, a first-year geography and English double major.
Professors find it useful too — fun, even. Professor Matthew Hill, a teaching associate professor for the university’s Writing Program, explained that he dedicates class time to letting students use ChatGPT during peer review.
“My thinking was using ChatGPT for any student who was struggling to come up with some peer responses to get some ideas,” he explained.
In addition, he used a small language model in a sound-writing class he taught, enabling students to create songs and write about them. Outside of DU, he owned up to using it for his own gaming sessions, from writing games to helping him sharpen mechanics.
It’s obvious that the useful applications of large language models (LLMs) like ChatGPT are vast, but the question that has been nagging at me recently is: how does this actually work?
The answer is data. More specifically, data centers.
Because of their cooling needs, data centers rank among the 10 largest water consumers across industrial and commercial sectors in the U.S., with some of Google’s data centers consuming more than half a million gallons of water per day, according to the University of Illinois’s Center for Secure Water.
“Nobody, on a large scale, is thinking about what the long-term outcomes [of AI] are going to be,” said Dr. Faan Tone Liu, a teaching professor in DU’s computer science department.
Of course, data centers aren’t exclusively used for AI, and the Google center referenced actually services the likes of Google Drive and Gmail, technologies that have been around for over a decade. However, the demand for AI is only driving up the need for data centers and the resources they consume.
“It’s so hard because it’s the new thing happening right now. You don’t really want to be left behind,” said Rodriguez.
The maintenance of LLMs like ChatGPT isn’t the only problem. According to estimates by Google and researchers at UC Berkeley, the energy it took to train GPT-3, the model behind the original ChatGPT, was enough to meet the electricity demand of 120 average U.S. homes for one year. The bow on top was the 552 tons of carbon dioxide it produced along the way.
To illustrate the individual impact, researchers estimate that a ChatGPT query consumes roughly five times the energy of a basic web search, and that a short conversation with the chatbot can consume about half a liter of water.
And, the environment isn’t the only concern with AI.
“It’s used off of stolen property… It’s going to degrade our quality of expectations for art, [and] it’s going to take away jobs related to production companies such as film,” said Evelyn Stovin, a religious studies student in their fourth year.
Dr. Liu had different concerns: “It’s really damaging students’ ability to learn because there’s a process of struggle, and it creates pathways in the brain, and it’s not happening. I’m really seeing students’ educations damaged by it,” she said.
Not only that, but every user has a story of a blatantly wrong answer their model of choice has given them.
“Sometimes it’s wrong. It just spews out a random answer… [that’s] not found anywhere; it just made it up,” said Madisenne McAllister, a second-year international studies student.
Her experience is far from unique. In fact, ChatGPT hallucinates — that is, it invents fictitious answers — roughly 3% of the time. That rate may pass for casual use, but imagine an AI with the same success rate in a hospital: three out of every 100 patients would receive an incorrect diagnosis, dosage or surgery.
For some, these grievances are enough to swear off the software.
“How do I use it as a student? I don’t. How do I use it outside of that role? I don’t,” said Corbin Lew, a first-year computer science and game design student.
His abstinence stems from his distaste for the stolen data ChatGPT was built on. Stovin cited the same reasons for swearing it off, along with environmental factors.
But most, myself included, find themselves typing a query after a quick glance at their watch and another at the paper they’re writing, or the code they’re debugging, or the schedule they’re planning. When AI is free, easy to access and fast, it’s hard to say no. It’s the path of least resistance, and as humans, we’re bound to take it.
However, everything has its cost, and for AI, that price manifests in the staggering resources it requires to function, whether it be electricity, water or (stolen) data. And, while we can operate these technologies now, with the voracious demand for AI, we simply don’t have the resources to maintain the current growth in the industry.
“We are in an electricity-rich environment, and it’s a straight line up as to how much more energy we are going to need as a country. So, it’s not just AI that is fueling these energy concerns. AI is a huge culprit, though,” Professor Hill explained.
And he’s right. It isn’t AI, oil or logging that’s causing the global temperature to rise. It’s us. It’s a mindset so fixed on the achievements of the present that it forgets to consider the future.
I don’t understand why this technology can’t wait. Why not develop ChatGPT on a smaller scale so that it’s not only saving electricity and water, but also so that it’s given more time to produce correct answers, to allow lawmakers and educational institutions to form regulations around it and to require proper consent on the data it’s trained on?
If I’m to make those demands, though, it only makes sense that I should apply the same ideology to my own relationship with AI. Why do I need to reach the answer to my homework so fast? What am I sacrificing by taking a shortcut to that solution?
Very few of us can dictate how much funding companies like OpenAI receive, what processes they follow to maintain their hardware or what data they use to train their models. However, we can control our usage of it. That is, after all, what makes it so valuable — the creators and marketers know that we use it, and they know that if we use it for long enough, we’ll need it.
By refusing to sacrifice our time and attention to AI, we strip it of its fundamental power. It remains an incredibly useful tool, but one whose price is too high to hand it freely to the masses, and especially to large corporations. As humans, we can’t be blamed for walking a path of convenience. To do so knowing that the path eventually destroys itself, however, is suicidal.