2023 was the year of artificial intelligence (AI) headlines, but 2024 may be the year of its adoption, according to business owners in the Forbes 2023 survey. However, adoption may hit a speed bump, as a heavy-handed approach to regulation appears politically popular given widespread concerns about data collection. What many policymakers overlook is that consumer actions don't match opinion polls. Failing to examine this "privacy paradox" leads policymakers to limit consumer choice in a misguided effort to serve the public's wishes. This is especially harmful to AI, which relies heavily on data.

Policymakers aren't entirely at fault for leaning toward heavy AI regulation, since consumers poll very differently than they act. When asked, the public claims to be concerned about the implications of AI for data privacy. Looking at consumer opinions on data collection more broadly, a Pew Research poll found that 81 percent of Americans believe the risks of data collection outweigh the benefits.

Given these results, it's unsurprising that, according to a Pew study, 72 percent of the public thinks the government needs to do more to regulate data privacy. Yet despite this overwhelming negativity, 67 percent of people say they have little to no understanding of what companies actually do with their data.

These facts constitute a "paradox" because public concern produces almost no discernible change in behavior. Applications like TikTok continue to grow in popularity despite alarm about the app. Likewise, many apps and sites that have sparked privacy concerns continue to be used, including ChatGPT, an AI language model.

There is a noticeable gap between the opinions consumers express and the choices they make when using digital products. This discrepancy was the subject of a 2017 paper, "The Privacy Paradox," which compiled and reviewed 32 studies on attitudes toward data privacy and actual behavior. The paper found a persistent gap between how much consumers say they value their data and how their actions reflect those beliefs. Even relatively minor gains in convenience can outweigh most privacy concerns.

AI is just another digital tool that uses consumer data to optimize its outputs, only more so. At its heart, a language model like ChatGPT is advanced pattern recognition software. ChatGPT "learns" from interacting with users and from scouring databases. The more interactions ChatGPT has, and the more data it can access (interactions themselves being data), the better its answers become.

The value of ChatGPT and future AI models is already evident in their growing use by consumers. That use advances AI further by supplying more data for formulating responses and solving problems. Restricting AI's ability to use consumer data through government intervention would, ironically, take more value from consumers than it would provide. People are aware that their data is being collected, and 81 percent are concerned about it, but this doesn't stop them from using sites that collect it. Policymakers should keep this in mind before brazenly pushing for regulations in 2024 that would hinder the products consumers actually use, even if doing so caters to how they claim to feel.

Isaac Schick is with the American Consumer Institute, a nonprofit education and research organization. For more information about the Institute, visit www.TheAmericanConsumer.Org or follow us on X @ConsumerPal.