Should You Be Worried About ChatGPT?

ChatGPT has taken the world by storm. What this means, however, is not yet clear. We must reflect on how it works and the problems associated with it if we are to evaluate whether ChatGPT is good or bad.

What is ChatGPT used for?

ChatGPT is a chatbot, so it looks like a normal messaging service: you type something, and it responds. It can answer questions much like a search engine and can also perform tasks for you.

It is being used creatively, to generate anything from essays to songs, and for problem solving, such as fixing programming issues or explaining difficult topics.

Why has ChatGPT become so popular?

ChatGPT is part of a wider generative AI boom. Over the last year, AI-generated images have flooded social media. These image generators simply need you to type out a prompt, and they then produce a realistic picture based on what you describe.

This raises more questions than it answers about the crossover between art and technology, including whether a human even needs to be involved in producing art. See our article on “AI and Art” for more on this.

The main difference between ChatGPT and these image generators is the conversational format. ChatGPT is good at imitating “the human-like ability to ‘talk’ and respond in smooth, natural, instant dialogue”. It remembers details of your conversation, builds them into its subsequent responses, and even asks for clarification if it doesn’t fully understand your request.

This interactivity means that obtaining information through the chatbot, or asking it to complete tasks, feels more personalized and direct than a Google search. With 100 million monthly users already, we must be mindful of its issues.
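Broadly speaking, this conversational “memory” comes from the chat format itself: each time you send a message, the conversation so far is passed back to the model as context. The Python sketch below illustrates that loop; generate_reply is a hypothetical placeholder, not a real chatbot API.

```python
# Minimal sketch of a chat loop. The bot's "memory" is simply the
# growing message history that is re-sent with every new request.
# generate_reply is a hypothetical placeholder, not a real model API.

def generate_reply(history):
    # A real system would send the full history to a language model;
    # here we just report how much context the model would receive.
    return f"(reply conditioned on {len(history)} earlier messages)"

history = []  # list of (speaker, text) turns
for user_text in ["Hi, I'm Ada.", "What's my name?"]:
    history.append(("user", user_text))
    reply = generate_reply(history)  # the whole conversation goes in
    history.append(("assistant", reply))
    print(user_text, "->", reply)
```

Because the second question arrives together with the first exchange, the model can “remember” the name without having any memory of its own.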

Why is ChatGPT bad?

Given its popularity, David Rozado, in an MDPI study, highlights how it could be “misused for societal control, spreading misinformation, curtailing human freedom, and obstructing the path towards truth seeking”.

Biases and hallucinations

Rozado presented the bot with fifteen political orientation quizzes. Despite the chatbot claiming to hold no political opinions, fourteen out of the fifteen tests showed “a preference for left-leaning viewpoints”.

This becomes concerning when users ask questions without definitive answers, such as those related to gender roles or who to vote for.

Another common problem is ‘hallucinations’, where the bot confidently makes a plausible-sounding but factually incorrect claim. Recently, a journalist asked it to write a quarterly earnings report for Tesla, which it appeared to do with ease, free of grammatical errors. However, the numbers turned out to be completely made up and did not correspond to any real Tesla report.

But why does this happen?

GPT stands for ‘Generative Pre-trained Transformer’. ‘Pre-trained’ refers to how the model is trained on a large corpus of text data to predict the next word in a passage. This is so it can ultimately produce human-like text.

ChatGPT therefore produces ‘human-like text’: text that sounds right but is not necessarily true. This is like how the AI image generators create an image that looks real, not one that is real.
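To make this concrete, here is a toy illustration of next-word prediction in Python. It is a drastic simplification (a bigram frequency table built from an invented three-sentence corpus, nowhere near a real transformer), but it captures the key point: the procedure always picks a statistically likely continuation and never checks whether the result is true.

```python
# Toy next-word predictor: a bigram frequency model.
# This only illustrates the pre-training objective described above
# (predict the next word); the corpus is invented for the example.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word . "
    "the model produces human-like text . "
    "the text sounds plausible ."
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate text by repeatedly choosing the likeliest continuation.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
# The output reads fluently, but nothing in the procedure checks
# whether a generated claim is true -- only whether it is likely.
```

Scaled up by many orders of magnitude, the same objective yields remarkably fluent text; truthfulness, however, is never part of the training signal, which is one way hallucinations arise.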

Should we worry about ChatGPT?

Recently, an image of Pope Francis sporting a luxurious white puffer jacket spread rapidly online. The image is a ‘deepfake’: an AI-generated image or video that alters real content into something new. This picture was created using Midjourney, which generates images from users’ prompts.

The stir this image caused shows how easily synthetic media can be mistaken for the real thing, at least for long enough to spread widely before being fact-checked.

Similarly, ChatGPT can generate essays in seconds, a capability already being used for homework, exams, and even fake scientific papers. This is worrying given the sheer quantity of online content we consume and the speed at which we consume it. As more synthetic content appears, we may have to re-evaluate how we consume media.

What should we do?

ChatGPT is a tool, so how it is used determines its value.

ChatGPT is good for solving programming problems, getting ideas for creative projects, and sourcing information that can be factually supported. For questions without definitive answers and more complex requests, it is less effective.

ChatGPT is bad if it is misused, consciously or not. Its developers need to be more transparent about these issues. Until then, we need to critically question the sources of its claims, look for supporting evidence, and apply the same scrutiny to everything we see and read online.

For ethical and moral questions, we should keep listening to human voices and let AI help with the logical and mechanical ones.

If we can overcome these issues, generative AI could greatly benefit us, including in healthcare and education. See our article “Artificial Intelligence” for examples of how AI is already being implemented. Additionally, if you are researching the potential of ChatGPT, consider submitting to the newly opened topic “AI Chatbots: Threat or Opportunity?”.

 
