As consumers come to grips with the uses of ChatGPT, fears over worst-case scenarios have been grabbing headlines. Like all new technologies, ChatGPT has its critics, who believe it will change the world as we know it, and not for the better. Despite the bleak picture some cynics paint, their concerns are generally nothing new in the tech space.
Consumers who have “chatted” with customer service online are probably familiar with chatbots. Most of these bots are limited to preprogrammed responses, and in this context they primarily filter customers before bringing in an actual person to help. ChatGPT, which is built on GPT-3 (short for Generative Pre-trained Transformer 3), is the next wave of this technology. Rather than being limited to set responses, it can generate uncannily human-like responses to prompts.
The technology’s human-like responses, combined with its ability to answer follow-up questions and admit mistakes, have raised concerns across many disciplines, including higher education. These concerns were magnified when the Wharton School of the University of Pennsylvania released research showing that the artificial intelligence (AI) program was capable of passing the final examination for one of its core courses.
There is further speculation that the technology could pass other exams, such as the U.S. medical licensing exam or the bar exam. Professionals whose work has typically been safe from technology takeovers, such as paralegals, analysts and consultants, may be feeling pressure from this new wave of automation.
Undoubtedly, ChatGPT will change how learning is tested. However, it isn’t the first technology to reshape higher education. Calculators, spell check, programs like Grammarly and Excel have all automated and streamlined processes that used to be performed manually and required huge amounts of time and energy. These changes didn’t ruin education, but they did change it.
A 1986 article in the Journal for Research in Mathematics Education found that, rather than undermining mathematics instruction, the use of calculators in classrooms improved student performance and attitudes. While the authors found that performance varied somewhat by grade level, their overall findings suggested that incorporating technology into education does not guarantee lower performance or diminished knowledge. Educators should take the same approach to ChatGPT as they have to other technologies and look for ways to incorporate the tool to improve learning.
Another critique of the technology centers on the lobbying industry. An opinion piece originally published in The New York Times asserted that ChatGPT’s ability to produce content would “hijack democracy” by amplifying the current lobbying process. That process involves sending emails, producing social media content and targeting key lawmakers, all with the intention of influencing the public and decision-makers.
These concerns aren’t new. The descriptions of what lobbyists armed with ChatGPT could do simply replicate what lobbyists already do. Targeting key lawmakers, producing content and trying to influence the conversation is the business of lobbying firms all over Washington. AI didn’t create this environment, although it may magnify what already exists by making content production easier.
Like the invention of the steam engine, the harnessing of electricity and the widespread adoption of personal computers, new technology has the potential to disrupt and change existing practices. Yet science hasn’t stalled because information is easily accessible, mathematicians are still needed despite calculators, and the written word hasn’t disappeared because of spell check.
Concerns over the implications of AI may be warranted. However, those who believe ChatGPT will bring nothing but destruction speak from a predictable place of fear and protectionism.