The Atlantic

What Have Humans Just Unleashed?

By Charlie Warzel

16 Mar 2023 · 10 min read

Editor's Note

The people building AI applications don’t seem to know what they’re building, “even as they rush to build,” writes The Atlantic’s Charlie Warzel; we are entering an era of “radical uncertainty.”

GPT-4 is here, and you’ve probably heard a good bit about it already. It’s a smarter, faster, more powerful engine for AI programs such as ChatGPT. It can turn a hand-sketched design into a functional website and help with your taxes. It got a 5 on the AP Art History test. There were already fears about AI coming for white-collar work, disrupting education, and so much else, and there was some healthy skepticism about those fears. So where does a more powerful AI leave us?

Perhaps overwhelmed or even tired, depending on your leanings. I feel both at once. It’s hard to argue that new large language models, or LLMs, aren’t a genuine engineering feat, and it’s exciting to experience advancements that feel magical, even if they’re just computational. But nonstop hype around a technology that is still nascent risks grinding people down because being constantly bombarded by promises of a future that will look very little like the past is both exhausting and unnerving. Any announcement of a technological achievement at the scale of OpenAI’s newest model inevitably sidesteps crucial questions—ones that simply don’t fit neatly into a demo video or blog post. What does the world look like when GPT-4 and similar models are embedded into everyday life? And how are we supposed to conceptualize these technologies at all when we’re still grappling with their still quite novel, but certainly less powerful, predecessors, including ChatGPT?
