At work, these days I often feel like I’ve been dropped on an alien planet as I watch my colleagues embrace AI with unbridled glee. There are more than a few who will pull up ChatGPT in committee meetings to craft verbiage for various types of paperwork, as if we didn’t all have brains in our heads and weren’t perfectly capable of coming up with a paragraph of uninspired prose on our own. I have colleagues whose command of English is so poor that students perpetually complain in teaching evaluations and whose proposal drafts I could barely understand despite the topics being decidedly in my area of expertise. Then, virtually overnight, these colleagues have started producing prose that is perfect at the sentence level.
Perhaps the worst thing is the rush to get more: more courses, more faculty hires, more of everything having to do with AI. There are cross-departmental hiring initiatives focused on developing AI technologies. Students are flocking to courses on AI like … students flocking to courses on AI.
My own graduate students seem nervous about job prospects, talking about how they will lose their edge if other candidates are using AI (not sure for what, but whatever) and they’re not. To that I say that, if AI can make you lose your edge, you never had much of an edge to begin with. The students seem perturbed, but I’m not sure they believe me.
Part of why everything seems so surreal is that I spend a lot of time interacting with writers and visual artists in online spaces, and the disgust toward the use of generative AI among the creatives is absolute. I found my own first novel on the pirated-books database LibGen that Meta used (uses?) to train its AI. I refuse to submit my work to magazines or book publishers that use AI in any part of the publishing process, and my author friends are no different.
I have tried to bring up the plagiarism inherent in training AI at work several times, but no one seems to care. They care about student cheating—seeing colleagues who’ve all but automated their way out of teaching (prerecorded videos, automated grading) now bitching about having to go back to paper tests would make me want to laugh if it didn’t make me want to cry—but not about what has gone into developing these tools. Bringing up the theft of creative work that goes into training large language models falls on deaf ears and makes me seem out of touch, bordering on crazy, perhaps, for not accepting the inevitability of a technological fad. No one among my colleagues seems to give a shit about what the creatives want or where the training data comes from, presumably because none of them have real contact with artists… But I also fear that many tech people harbor true disdain toward artists and are now showing their hand.
It feels so disorienting to work among people who drink the Kool-AI-d and have to pretend like I don’t think it’s the absolute worst fucking idea in a long time or that it’s all going to crash spectacularly while we keep hiring and hiring and hiring…
Look, I know there are definitely things that AI should be doing, things that humans can’t do even in principle, such as sifting through unconscionably high numbers of new molecules to find new antibiotics or synthesizing new inorganic materials for specific functions.
But digital art? Writing? Translation? Hey, even coding? Worst of all, firing people so AI can do their jobs shoddily? Why would we do that? Why don’t my colleagues care? And shouldn’t we at universities be concerned with the societal implications of what we do?
What say you, blogosphere? Thoughts on the supposed inevitability of AI slop drowning out all creative thought?