Let's start with the end. If you know anything about the state of artificial intelligence, it's that many of the people advancing the technology are gravely concerned about the technology they're advancing. Two statements stand out. The first was a petition, following the March release of OpenAI's GPT-4, calling for a six-month pause on training any AI system exceeding GPT-4's capabilities. The signatories - a loose association of AI geniuses (Turing Award-winner Yoshua Bengio), tech barons (Elon Musk) and moths to a flame (Andrew Yang) - asked: "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
The second statement, issued in May, was an escalation of both stakes and prestige - a Met Gala of doom. Signed by nearly all the major AI company CEOs and most of the top AI research scientists, this statement was just 22 words long. Which really helped the 'E' word pop: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."