It is important for you to critically evaluate and fact-check any information generated by ChatGPT, and to use it as a tool for idea generation and refinement rather than a replacement for your own critical thinking and research (Maynard, "ChatGPT in the Classroom").
AI Is Not Always Trustworthy
- ChatGPT was trained using a massive dataset of text written by humans that was pulled from the Internet.
- AI responses can reflect the biases of the humans who wrote the text used in the training dataset.
- ChatGPT is not connected to the live Internet, and the data used to train it was collected prior to 2021, so it may not know about recent events or publications.
- AI tools sometimes "hallucinate" or make things up.
Example: ChatGPT has been known to invent citations for articles that don't actually exist.
Fact-Check AI
Don't assume the information provided is correct. Here are two basic strategies for evaluating information provided by an AI tool:
Lateral Reading
- Look to see if other reliable sources contain the same information and can confirm what ChatGPT says. This could be as simple as searching for a Wikipedia entry on the topic or doing a Google search to see if a person, thing, event, etc. that ChatGPT mentions actually exists.
- Looking at multiple sources is the essence of lateral reading, and it helps you avoid relying on the bias or errors of any single source. One quick way to automate this kind of existence check is sketched below.
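As a small illustration of that first lateral-reading step, the Python sketch below checks whether a topic ChatGPT mentions has a matching Wikipedia entry. It uses Wikipedia's public opensearch API; the topic string is only a placeholder, and a match tells you the subject exists, not that ChatGPT's claims about it are accurate.

```python
import requests

def wikipedia_titles(topic: str, limit: int = 3) -> list[str]:
    """Return up to `limit` Wikipedia article titles matching `topic`.

    An empty list is a hint (not proof) that the person, thing, or
    event ChatGPT mentioned may not exist.
    """
    response = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "opensearch",
            "search": topic,
            "limit": limit,
            "format": "json",
        },
        headers={"User-Agent": "fact-check-example/0.1"},  # polite identification
        timeout=10,
    )
    response.raise_for_status()
    # opensearch returns [query, titles, descriptions, urls]
    return response.json()[1]

# Placeholder topic: replace with whatever ChatGPT mentioned.
print(wikipedia_titles("lateral reading"))
```

Treat the result the same way you would a manual search: a hit is a starting point for reading further, not a verdict.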
Check Citations
- If an AI tool provides a reference or citation, confirm that the source actually exists.
- Copy the citation into a search tool like Google Scholar or the Library's Quick Search to see if the source appears. If it doesn't, the citation needs further investigation (one way to automate this check is sketched after this list).
- Do a Google search for the lead author. Make sure they are a real person with credentials relevant to the topic.
- If the source is real, check that it actually contains the information that the AI tool says it does. Read the source or its abstract!
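If you want to automate the first step, confirming that a cited article exists at all, the hedged Python sketch below queries the Crossref REST API, a public index of scholarly publications, for a citation's title. The title used here is only a placeholder; no match does not prove a citation is fake, and a match still leaves you to confirm that the source actually says what the AI tool claims.

```python
import requests

def crossref_lookup(citation_title: str, rows: int = 3) -> list[dict]:
    """Search Crossref's public index for works matching `citation_title`.

    Returns a few candidate matches as {"title": ..., "doi": ...} dicts
    so you can compare them against the AI-generated citation.
    """
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_title, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return [
        {"title": item.get("title", [""])[0], "doi": item.get("DOI", "")}
        for item in items
    ]

# Placeholder title: replace with the title from the AI-generated citation.
for match in crossref_lookup("Lateral reading and the nature of expertise"):
    print(match["title"], "->", match["doi"])
```

Even when a DOI comes back, follow it and read the source or its abstract before trusting the claim it supposedly supports.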