Kool-AI-d

At work, these days I often feel like I’ve been dropped on an alien planet as I watch my colleagues embrace AI with unbridled glee. There are more than a few who will pull up ChatGPT in committee meetings to craft verbiage for various types of paperwork, as if we didn’t all have brains in our heads and weren’t perfectly capable of coming up with a paragraph of uninspired prose on our own. I have colleagues whose command of English is so poor that students perpetually complain in teaching evaluations and whose proposal drafts I could barely understand despite the topics being decidedly in my area of expertise. Then, virtually overnight, these colleagues have started producing prose that is perfect at the sentence level.

Perhaps the worst thing is the rush to get more: more courses, more faculty hires, more of everything having to do with AI. There are cross-departmental hiring initiatives focused on developing AI technologies. Students are flocking to courses on AI like … students flocking to courses on AI.

My own graduate students seem nervous about job prospects, talking about how they will lose their edge if other candidates are using AI (not sure for what, but whatever) and they’re not. To that I say that, if AI can make you lose your edge, you never had much of an edge to begin with. The students seem perturbed, but I’m not sure they believe me.

Part of why everything seems so surreal is that I spend a lot of time interacting with writers and visual artists in online spaces, and the disgust toward the use of generative AI among the creatives is absolute. I found my own first novel on the pirated-books database LibGen that Meta used (uses?) to train its AI. I refuse to submit my work to magazines or book publishers that use AI in any part of the publishing process, and my author friends are no different.

I have tried to bring up the plagiarism inherent in training AI at work several times, but no one seems to care. They care about student cheating—seeing colleagues who’ve all but automated their way out of teaching (prerecorded videos, automated grading) now bitching about having to go back to paper tests would make me want to laugh if it didn’t make me want to cry—but not about what has gone into developing these tools. Bringing up the theft of creative work that goes into training large language models falls on deaf ears and makes me seem out of touch, bordering on crazy perhaps for not accepting the inevitability of a technological fad. No one among my colleagues seems to give a shit about what the creatives want or where the training data comes from, presumably because none of them have real contact with artists and therefore simply do not care… But I also fear that many tech people harbor true disdain toward artists and are now showing their hand.

It feels so disorienting to work among people who drink the Kool-AI-d and have to pretend like I don’t think it’s the absolute worst fucking idea in a long time or that it’s all going to crash spectacularly while we keep hiring and hiring and hiring…

Look, I know there are definitely things that AI should be doing, things that humans can’t do even in principle, such as sifting through unconscionably high numbers of new molecules to find new antibiotics or synthesizing new inorganic materials for specific functions.

But digital art? Writing? Translation? Hey, even coding? Worst of all, firing people so AI can do their jobs shoddily? Why would we do that? Why don’t my colleagues care? And shouldn’t we at universities be concerned with the societal implications of what we do?

What say you, blogosphere? Thoughts on the supposed inevitability of AI?

4 responses to “Kool-AI-d”

  1. It’s irritating because my husband’s company’s largest expense is the cost of data to train their ML models. Yet somehow these other companies have managed to pirate data and gamble that they won’t have to make amends, and for the most part they haven’t (the Anthropic settlement is barely a drop in the bucket).

    And they’re not paying the full cost of the energy they consume, and people using the AI aren’t paying the full cost of their use either. Everything would be going a lot slower if people had to face their actual costs, as economic models suggest they should. They’re borrowing from the future in ways I’m not sure are rational, and government isn’t stepping in to stem spillover costs. I really wish we had a functioning government, and I wish that Europe were taking these concerns more seriously as well.

  2. My fear is: we are going to turn into idiots who happily push buttons and are no longer able to think. It is so tempting to “outsource” anything that is hard to AI. Writing an email. Drafting an abstract. Solving a problem.

    The issue is – if you don’t use it, you lose it. If you don’t use a language, you start to forget it. If you don’t do trigonometry in your daily life – most likely, you’ve lost it. So what does it say about the future of our cognitive abilities if we just stop doing things that are challenging?

    There’s been a lot of buzz about AI in my line of work (medical communications), but most of my colleagues have been very cautious about it. In my work group, I haven’t seen AI actually being used for much beyond generating a few sentences to summarize a poster. That may change.

    I can see AI-translation being useful in science. Because then you could translate articles, or posters, or abstracts into your native language. I can also see how this could be abused…

  3. On one hand, I love ChatGPT for professional correspondence. It takes my emails and translates them into corporatese; they’re better received than before, they take less time to compose, and I’m getting what I want more often.

    On the other, as a doctor, it is a little terrifying to see my colleagues embrace it without questioning it. Yesterday I was handing off a patient who had a neuromuscular condition to a colleague, and colleague said, “Doesn’t patient need malignant hyperthermia precautions because of XYZ disease?” and I was like, “XYZ disease has nothing to do with malignant hyperthermia. Did AI tell you that?” AI had told her that, and it was wrong. And… well, I did think colleague was an idiot before, and now I think so even more, especially when it’s easy enough to find an article on XYZ disease with a teensy bit of effort, and this level of expertise and effort is just part of being a physician in our specialty. IMO they should be embarrassed.

  4. I also cannot understand the point of AI in most of the cases in which I have seen it used. I used to joke that people would use AI to expand bullet points into long emails, then use AI to compress them back into bullet points. I’m not sure it’s a joke now.

    But I really don’t understand how scientists in non-AI fields end up being such pushers of the technology. It writes functional but not particularly good code that requires a lot of work to make useful. It writes prose that is, well, bad. It says things that are, well, wrong. And it has yet to be particularly useful for me scientifically. Colleagues tell me AI is now as good as a first-year graduate student, but even if that is true, it is more or less useless to me at that level. I do not have graduate students to help me do my research; I have them so I can help them learn to do research.

    And while they acknowledge my issues, there is always supposedly some new, fancier tech that solves the problem. I just don’t get it. I feel like I don’t even live in the same universe they do.
