The Atlantic

Before a bot steals your job, it will steal your name

By Jacob Sweet

11 Aug 2023 · 5 min read


In May, Tessa went rogue. The National Eating Disorder Association’s chatbot had recently replaced a phone hotline and the handful of staffers who ran it. But although it was designed to deliver a set of approved responses to people who might be at risk of an eating disorder, Tessa instead recommended that they lose weight. “Every single thing that Tessa suggested were things that led to the development of my eating disorder,” one woman who reviewed the chatbot wrote on Instagram. Tessa was quickly canned. “It was not our intention to suggest that Tessa could provide the same type of human connection that the Helpline offered,” the nonprofit’s CEO, Liz Thompson, told NPR. Perhaps the organization didn’t want to suggest a human connection, but why else give the bot that name?

The new generation of chatbots can not only converse in unnervingly humanlike ways; in many cases, they have human names too. In addition to Tessa, there are bots named Ernie (from the Chinese company Baidu), Claude (a ChatGPT rival from the AI start-up Anthropic), and Jasper (a popular AI writing assistant for brands). Many of the most advanced chatbots (ChatGPT, Bard, HuggingChat) stick to clunky or abstract identities, but there are now many new additions to the already endless customer-service bots with real names (Maya, Bo, Dom).
