The world is talking about ChatGPT, a major development in computing. Within days of its launch last November, millions of people were trying out this AI-enabled chatbot, experimenting with intelligent, responsive dialogue between human and machine.
At TITAN we are also exploring the use of large language models to augment human capabilities, so we thought we would take a quick look at how to use these new technologies responsibly - through our lens of countering disinformation - and explore how we can ensure solutions like ChatGPT provide benefits to users.
What is ChatGPT?
ChatGPT is a large language model developed by OpenAI that is capable of generating human-like responses to natural language prompts. It is trained on a massive amount of data and is designed to understand and respond to a wide range of questions and topics. Whilst it is not smart enough (yet) to replace humans, it sounds pretty authoritative in its replies to questions, and if you did not know where the answers were coming from, you could easily think you were interacting with another person.
This exciting innovation throws up many potential uses for ChatGPT-like technology, from providing customer service and personal assistance to content creation and research. However, amidst the buzz of promise and transformation, there are also some concerns about misuse, including how these smart chatbots could be used to spread disinformation.
One of the biggest concerns about ChatGPT is that it can generate responses to questions that are indistinguishable from human-written text. This new AI capability means it could be leveraged to create fake news stories or to spread false information across social media channels, which could have serious consequences for public opinion and could even influence political outcomes.
To address these concerns, TITAN believes it is important for people to be aware of the possible challenges surrounding ChatGPT and other similar technologies, and to use them responsibly. Here are a few things to bear in mind when leveraging these highly sophisticated language models:
Consider the source: When you receive a response from ChatGPT, consider where the information comes from. As with any other source of information, it is important to evaluate its credibility. ChatGPT is a machine learning model that generates responses based on patterns it has learned from large datasets; it does not have the ability to verify the accuracy of the information it generates. For example, TITAN researchers found the model generated fake credentials when asked to provide references for a piece of text. Users should therefore be cautious and evaluate the accuracy and credibility of the information generated before accepting it as true.
TITAN's vision is to create a 'fact-checking state of mind' when consuming online content, and this should also apply to responses from chat agents. Everyone should question the veracity of online information and be able to think through and explore its accuracy themselves before coming to their own opinion. The aim of TITAN's conversational agent is not to provide 'answers' to the user, but rather to guide them through critical thinking processes with prompts that help them undertake their own investigations.
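The point above about pattern-based generation can be illustrated with a deliberately tiny sketch. This is not ChatGPT's actual architecture (which is vastly more sophisticated), and the toy corpus below is invented for illustration; the idea is simply that a model built purely from word co-occurrence statistics can produce fluent-looking text while having no notion of whether that text is true.

```python
import random
from collections import defaultdict

# A toy corpus: the "training data" the model learns patterns from.
corpus = (
    "the chatbot generates fluent text . "
    "the chatbot sounds authoritative . "
    "fluent text sounds convincing . "
    "convincing text is not always accurate ."
).split()

# Build bigram statistics: which words tend to follow which.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start, length=8, seed=0):
    """Emit text by repeatedly sampling a learned follower word.

    The model only knows co-occurrence patterns; it has no way
    to check whether the sentence it produces is accurate.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output reads superficially like a sentence, yet nothing in the process ever consulted a fact - which is exactly why generated text should be verified rather than taken at face value.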
Be aware of biases: Like all machine learning models, ChatGPT is not immune to biases. If the data used to train the model is biased in any way, this could influence the responses generated by ChatGPT. For example, if the data used to train the model includes more information about one political party than another, this could result in biased responses. Users should be aware of the potential for bias and evaluate responses accordingly.
TITAN uses Socratic Thinking principles to help users ask the right questions when assessing content for accuracy. Socratic questioning requires people to identify and defend their original positions regarding their thoughts and beliefs. The user is asked to account for themselves, rather than recite facts, including the motivations and biases upon which their views are based.
Use multiple sources: When using technologies like ChatGPT to find information, it is a good idea to consult multiple sources to verify the accuracy of the information. Just because ChatGPT generates a response does not mean that it is accurate or unbiased. By consulting multiple sources, users can get a more complete picture of the information and make more informed decisions.
TITAN's conversational agent understands a user's critical thinking level and guides the person through the fact-checking process with appropriate prompts and guidance, including how to verify content using a wide range of information sources.
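The multiple-sources habit can be sketched as a simple consensus check. This is an illustrative toy, not TITAN's agent or any real fact-checking pipeline: the source names and answers below are made up, and real verification would involve far richer comparison than matching strings.

```python
from collections import Counter

def cross_check(claims):
    """Given {source: answer} pairs, report the majority answer,
    the share of sources that agree with it, and any dissenters -
    a crude stand-in for consulting multiple sources.
    """
    counts = Counter(claims.values())
    answer, agreeing = counts.most_common(1)[0]
    return {
        "answer": answer,
        "agreement": agreeing / len(claims),
        "dissenting": [s for s, a in claims.items() if a != answer],
    }

# Hypothetical example: one source (the chatbot) disagrees with the rest.
claims = {
    "chatbot": "event happened in 2019",
    "news archive": "event happened in 2021",
    "official record": "event happened in 2021",
}
result = cross_check(claims)
print(result["answer"], result["agreement"], result["dissenting"])
```

A real checker would need to normalise phrasing and weight source reliability; the point is simply that one confident answer, on its own, is not evidence.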
In conclusion, whilst ChatGPT has hugely exciting potential as a powerful tool for information gathering and communication, people should be aware of the issues surrounding disinformation and misuse. These new smart chatbot technologies could potentially be used to generate manipulated content, such as deepfakes or fake news stories, so as with other types of online content, users should be wary of anything that seems too good to be true, or that seems designed to elicit a specific emotional response from them. Here at TITAN we believe that by taking a cautious and responsible approach - not assuming everything produced is accurate, and questioning what they read - users can avoid being misled and can derive real benefits.
Note: TITAN is currently in an early co-creation phase exploring user needs regarding disinformation identification.