It is important for you to critically evaluate and fact-check any information provided by a generative AI tool. AI should not be relied upon as a replacement for your own critical thinking and research.
AI is Not Always Trustworthy
- Generative AI creates new content based on learned patterns from existing data. (Examples: ChatGPT, MS Copilot, Gemini)
- AI responses can reflect the biases of the humans who wrote the text used to train the AI tool.
- Some versions of AI tools are not "connected" to the Internet; they draw only on training data with a fixed cutoff date and can't provide up-to-date information.
- AI tools sometimes "hallucinate" or make things up.
Example: These tools can create citations for articles that don't actually exist.
Fact-Check AI
Don't assume the information provided is correct. Here are two basic strategies for evaluating information provided by an AI tool:
Lateral Reading
- Look to see if other reliable sources contain the same information and can confirm what the AI tool says. This could be as simple as searching for a Wikipedia entry on the topic or doing a Google search to see if a person, thing, event, etc. that the tool mentions actually exists.
- Consulting multiple sources strengthens your lateral reading and helps you avoid the bias of any single source.
Check Citations
- If an AI tool provides a reference or citation, confirm that the source actually exists.
- Copy the citation into a search tool like Google Scholar or the Library's Quick Search to see if the source appears. If it doesn't, investigate further before trusting it.
- Do a Google search for the lead author. Make sure it's a real person with relevant credentials.
- If the source is real, check that it actually contains the information that the AI tool says it does. Read the source or its abstract!