Insider management has hinted at using AI in our newsroom, but AI can never replace the work of real humans

A picket sign reading “Don’t replace me with ChatGPT.” Insider Union members picketed outside One Liberty Plaza after going on a ULP strike on Friday. Joey Hadden

  • On more than one occasion, Insider’s management has hinted that AI could have a prominent future in the newsroom.

  • Journalists worry about AI being used to cut costs, as well as its lack of accuracy.

  • While AI can be helpful to get gears turning, it will never get us to the right destination.

On April 13, Nich Carlson, Insider’s global editor-in-chief, sent a companywide memo with the subject line: “AI at Insider: We can use it to make us faster and better. It can be our ‘bicycle of the mind.’” In it, he mentioned that he’s used ChatGPT to “think about how and what I wanted to say in this memo, do casual background research for a post I assigned, brainstorm headline ideas, and prepare for a live interview.” 

While he did acknowledge that there are “challenges” to the technology and included the mandate that reporters were not to use it to “write sentences that you put into your scripts or articles,” he followed up with the note that “this may change in the future.” The word “may” raised many eyebrows, as a sign-up link for a pilot group for anyone interested in experimenting with the tool as a “word processing aid” closely followed.

The memo also included a bullet-point list of ways reporters might find use for ChatGPT right now, from asking “AI to explain tricky, unfamiliar concepts” to asking it to “make suggested edits to your writing to make it more readable and concise,” the latter of which, it suggested, would “save your editors precious time.”

First came ChatGPT; then came layoffs 

On April 20 — just one week later — Insider announced in an email that it planned to lay off 10% of its staff, which included 20% of the Insider Union. An Insider spokesperson told Gizmodo that the layoffs are unrelated to the burgeoning use of AI in the newsroom. However, after the announcement of the Insider Union strike on June 2, Henry Blodget, the CEO of Insider, sent a companywide email saying the strike would “give us an opportunity to accelerate some initiatives that we believe will help us do an even better job of serving our audience over the long run.” 

While his memo does not explicitly give the green light to accelerate experimentation with AI in the newsroom, it’s hard not to wonder whether the “initiatives” mentioned are related to the breathless excitement expressed just a week before the layoffs over the “potential opportunities” our management believes artificial intelligence may present when it comes to creating content. 

Union and non-union Insider employees aren’t the only people in the industry worried about how AI use could play a role in cost- and job-cutting, even if management claims it will help rather than hurt us. BuzzFeed — another company that recently started experimenting with AI — also announced layoffs on April 20, saying the shuttering of BuzzFeed News would result in 15% of the company being laid off. 

Additionally, the Writers Guild of America, or WGA, has been on strike since May 2 and is working to address head-on the concerns about how the use of artificial intelligence will affect the industry, asking for regulated “use of artificial intelligence on MBA [minimum basic agreement]-covered projects.” 

Specifically, the WGA’s proposal stipulates the following: “AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.” As of yet, per the WGA negotiations status page, this portion of the contract has been rejected, with a counteroffer of annual meetings to discuss advancements in technology. Notably, while the WGA strike is ongoing, the Directors Guild of America, or DGA, reached a tentative agreement on June 3, part of which includes restrictions on the use of AI, the Los Angeles Times reported.

AI wants what humans have — accuracy and empathy

Though AI can be helpful for writers in some capacities — such as generating basic ideas or prompts to get out of a period of writer’s block — it has its limits. Some of those limits are implied in the name; by its very nature, it is artificial, and the vast majority of the content it creates lands as such. 

It may occasionally do a sufficient job of coughing up something meant to be skimmed rather than thoroughly read, work that exists only to pass along the barest bones of information without any soul; after all, it doesn’t have one. But ChatGPT is not to be trusted beyond its actual capacity, as it frequently delivers information that’s factually incorrect. Not only is that bad journalism, it’s potentially dangerous to our audience, our sources, and Insider’s credibility. 

Among the AI tool’s recent offenses: it served up nonexistent legal cases when prompted to create a legal brief, which was then filed in court. The lawyer who used it was “unaware of the possibility that its content could be false” and asked the program itself to verify its information; it insisted the cases it had cited were real. 

It also named a real professor in an invented sexual-assault case generated for a lawyer’s research study. The case didn’t exist, and in an interview with The Washington Post, the professor said that learning via email that he had been named in the study was “quite chilling,” adding that “an allegation of this kind is incredibly harmful.” 

After a mass shooting at Michigan State University in February — a time when people typically lean on each other — administrators at Vanderbilt University chose instead to lean on AI to send their message of condolence to students. The office of equity, diversity, and inclusion at Vanderbilt’s Peabody College of Education and Human Development used ChatGPT to draft the consolation email. The Guardian reported that university officials have since apologized, acknowledging that using ChatGPT to draft the note showed “poor judgment.”

And in late May, the National Eating Disorders Association, or NEDA, decided to replace its hotline workers with Tessa, a “wellness chatbot,” just days after those workers had unionized, per Vice. In a blog post highlighting the importance of the hotline — and the human, non-robot workers who staff it — Abbie Harper called the implementation of the chatbot “union busting, plain and simple.” 

Harper concluded the post by stating, “The support that comes from empathy and understanding can only come from people.” Her words proved prescient; just a week later, The Cut reported on the organization’s decision to pull Tessa. “It came to our attention last night that the current version of the Tessa Chatbot, running the Body Positive program, may have given information that was harmful,” NEDA wrote on Instagram about the decision.

AI can never replace actual workers, including Insider Union members, who are currently on an indefinite unfair-labor-practice, or ULP, strike that includes a fight for a fair contract valuing the humanity we pour into our work every day. After all, as that same email from Blodget said, “Insider is an inspiring and amazing team.” All we’re asking for is to be valued for the work we do on that team.

Business Outsider is a strike publication of the Insider Union, which is a unit of The NewsGuild of New York.

Follow our Twitter for updates on the strike, and if you enjoyed this content and would like to throw in some cash for our members who are losing wages every day that we strike for a fair contract, feel free to visit our hardship fundraiser here. Wanna help us tell the boss to reach a deal? Let Nich Carlson and Henry Blodget know you support us by sending a letter. 
