Someday I’ll write about poems again. I have a list of them I want to get to, but I’ve had trouble focusing on that lately, for reasons.
I’m going to preface my thoughts here by saying I’m focusing exclusively on large language models and generative AI and how I think they’re negatively affecting teaching and learning in a college environment. I’m not going to get into other forms of machine learning, or use cases for LLMs generally, or the ethics of using tools that have been trained on artists’ work with neither permission nor compensation, or the crazy amounts of resources they use to generate their content. I’m also not going to offer any portents of doom, though I am going to share this image I saw online earlier today.
Before I get into my problems with the ways I’m seeing LLMs introduced into the classroom, I want to talk a bit about what I think teaching is all about, about what I’m trying to accomplish in my classroom full of college students learning about technical communication.
When I think about my own time as a student, first as a chemistry major and then as an English major and graduate student, I tend to separate what I learned into data and skills. I learned what the atomic number of oxygen was and how to light a Bunsen burner. I learned who TS Eliot was and how to read his poems. I think that’s pretty common.
The latter category, the how part, the skills, often gets a lot of attention from people who equate getting a college degree with getting a job. They often reduce college to learning skills for the workplace. I say as much to my students every semester when I’m telling them why they’re doing a group project: employers want employees who know how to work together efficiently. Other departments also tailor what they teach and how they teach it based on what employers tell them they want.
And there’s a logic to that. College is expensive, students are often going into debt for a degree that has a history of leading to higher earnings over the course of their lives, and employers would prefer to hire people who don’t need a lot of remedial training to become productive cogs in their machines. There’s a reason the claim that college is a form of job training makes a kind of sense.
Except that college isn’t job training. Think back to the kinds of skills you learned in college. I’ll use the chemistry lab example since that was my major for a couple of years before calculus convinced me my future involved less math and more words. Had I completed my degree and gotten a job as a chemist, no doubt some of what I learned in my labs would be very useful, but not in a job-training way. For example, learning how to safely operate a chemical hood is pretty basic stuff, but the chances that I would be using the same hood in my workplace that I did in my college class are pretty close to zero. And my employer would know that going in. What they’d expect me to know is more like “when and why should I use a hood?” and “what is likely to happen if I violate safety protocols?” How to use the specific hood would be covered in training and documentation.
This is an important distinction, because in college we don’t tend to teach students how to do things. In pretty much every field, what you learn in school is the underlying architecture of the discipline, how things work and why, so that when you get a job you can figure out the specific tools and systems in front of you more quickly. You have to be able to figure out new things because the ground shifts so quickly.
Sometimes what you’re learning is how to use new technology. When I started my current teaching job, I had to learn how to use Canvas (in a hurry—I was hired as a sub after the semester started and took over a class the day after I got my laptop) after never having used a course management system before.
Sometimes you have to adjust to bigger changes. Lawyers have to adjust to changes in law on the fly all the time, whether those changes are made legislatively or administratively or by judges who decide something isn’t constitutional anymore. If they depended on what they’d learned the law was in law school, they’d be terrible at their jobs. Doctors, accountants, engineers, historians, archaeologists, psychologists, you name it. Every field evolves, and these days evolves rapidly, and you have to adjust to it.
My point here is that what we learned in school was not how to do things per se but how to solve problems in specialized fields and how to adapt to new information and discoveries.
Some smartass is going to read this and think “yeah so LLMs are just the newest thing to adapt to, get over it boomer” to which I will say first, I’m GenX motherfucker and second, I’m getting to that.
One reason students and graduates alike accept the metaphor of college as job training is that we kind of sell it to them that way in how we approach assessment. I have my students produce a series of documents that I put grades and comments on, and based on how they execute those documents, they get a grade at the end of the semester which determines whether or not they progress toward their degree. Maybe it affects their financial aid, a scholarship. Maybe it affects their visa status if they’re from another country. No matter how much I tell my students that the truly important thing is whether or not they learn how to respond effectively to an audience or translate complicated technical subjects into clear language for non-experts, to them the grade can be more important because it can impact them in very concrete ways.
So it has never been a surprise to see students try to find ways to skip the work and get a high-quality finished product. They think, with reason, that the essay, the lab report, the exam is the important thing, no matter how much we may emphasize that the real value is in learning the underlying basics of their field, in learning the how and why of it.
LLM companies feed into this more than any other kind of tech I’ve seen, even more than online paper mills. At least paper mills make no pretense about what they’re doing. They don’t sell themselves as a tool for you to use. They’re very transactional.
LLM companies say what they’re providing are tools for greater productivity and creativity. It’s possible that for some subset of users they are, though most of the claims I’ve seen to that effect are pretty dubious, and more than a few studies have shown that these tools reduce writers’ connection to the text they’re used to create and can undermine the skills of experienced users with continued use.
But our students aren’t experienced users, not in their fields. That’s why they’re students. They’re supposed to be getting baseline knowledge and skills in their specializations here so that they’ll be prepared to adapt to changes in their fields for years to come. And there’s no way to bypass the work that’s necessary to gain those skills.
When I lecture about this to my students, one of the things I mention is that if they use an LLM to generate text for an assignment and then don’t get the grade they think it deserves, they’re going to have a hard time understanding why and how it fell short. To be clear: this is not about academic dishonesty or plagiarism. Unless a student admits to me that they used an LLM to complete an assignment or leaves the prompt in there (which some of my colleagues have had happen), the chances I could build a case against them that would get sanctions at the university level are pretty low. And I don’t want to do that. I’m not a cop and I try not to act like one.
They’re going to have a hard time getting why and how their document fell short because they lack a fundamental understanding of how language works. They haven’t practiced enough to have muscle memory for it yet. LLMs are powerful machines, and we’re being told they require no training to operate, but it seems like every day there’s another story about a lawyer being called before a judge for submitting a brief full of hallucinated cases, or people saying they’re in love with their AI partner, or people substituting a chatbot for a therapist and spiraling into depression or worse.
What I’ve seen suggested in some places is that we need to be the ones who train our students how to use these tools. Driver’s Ed for LLMs, I guess. I think that presumes our students are experienced enough to use them effectively, and I don’t think that’s accurate, and I teach mostly second- and third-year college students. I heard from a friend of mine with kids slightly older than my twins that teachers in middle school are using these tools in the classroom, and that’s outrageous to me. That’s like giving the car keys to someone who can’t reach the pedals yet, and the car is all engine and no brakes.
I highly recommend Charlie Warzel’s piece in The Atlantic, “AI Is a Mass-Delusion Event,” for more on this. Thanks for sticking with me this long.