AI is not a machine that ‘thinks for itself’; it’s a tremendously promising tool for humans to use.

We can’t escape artificial intelligence (AI), not because the Terminator is hunting us down but because the media can’t stop talking about it. Whether it is ChatGPT, self-driving cars, or Elon Musk’s humanoid robots, the AI conversation seems to be everywhere. The misconception that AI can think for itself is almost as pervasive, and it adds to the public’s anxiety about being replaced.

Hysteria around so-called thinking machines is rampant. Monmouth polling reveals that nearly three in four Americans believe devices with the “ability to think for themselves” would hurt jobs and the economy, and a majority believe AI is either already more intelligent than humans or soon will be. These pervasive concerns stem from a misunderstanding of how AI actually functions.

Computer scientist Jaron Lanier argues that our entire understanding of AI is wrong, starting with the term “artificial intelligence” itself. An AI system such as ChatGPT, a large language model (LLM) that generates conversational responses to text prompts, does not think independently; it reproduces patterns found in its training data. Humans also rely on pattern recognition, but we aren’t limited to it. ChatGPT and similar models are. Intelligence isn’t just the ability to predict the next step in a sequence; it entails the ability to reason through abstraction, as François Chollet argued in “On the Measure of Intelligence,” his paper on evaluating AI for general intelligence.
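To make the distinction concrete, here is a minimal sketch of pattern-based next-word prediction. This toy bigram model is not how ChatGPT is actually built (real LLMs use neural networks trained on vast corpora), but it illustrates the same core mechanic: continuing a sequence from statistical patterns alone, with no understanding or abstract reasoning involved. The training text and variable names are invented for illustration.

```python
# Toy illustration: a bigram model that "predicts" the next word purely
# from patterns in its training text. A deliberate oversimplification of
# an LLM, but the core operation is the same kind of thing: continue a
# sequence based on observed patterns, not on reasoning.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
)

# Record which words follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def predict_next(word: str) -> str:
    """Pick a plausible next word based only on observed patterns."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"  # the model has never seen this word before
    return random.choice(candidates)

# Generate a short continuation, one pattern-matched word at a time.
word = "the"
sequence = [word]
for _ in range(6):
    word = predict_next(word)
    sequence.append(word)
print(" ".join(sequence))  # e.g. "the cat sat on the rug the"
```

Scale that idea up by billions of parameters and the output becomes fluent, but the underlying operation is still pattern continuation, not abstract thought.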

Read the full National Review article here.

Isaac Schick is a policy analyst at the American Consumer Institute, a nonprofit education and research organization. For more information about the Institute, visit www.TheAmericanConsumer.Org or follow us on Twitter @ConsumerPal.
