Consider these risks when using generative AI tools for study or work.
- Do not provide any private information when using these tools.
- Verify any information provided by generative AI tools with credible sources and check for missing information.
- Acknowledge any generative AI tools you use in your assignments or work and describe how you used them. For example, include the tool's name, model or version, the date you used it and how you used it.
- Be sure to check with your instructor if you plan to use generative AI tools to help you complete assignments or discussions.
Academic integrity
If you use AI tools and present the work as your own, you put your academic integrity at risk.
Our ChatGPT and other generative AI tools referencing guide has tips on how to cite or acknowledge your use of these tools.
Incomplete, inaccurate or offensive information
Information provided by generative AI tools may be:
- Incorrect (because of incorrect training data or the algorithm misinterpreting it)
- Out of date (because the training data is not up to date)
- Biased or offensive (because the training data was biased or included offensive content)
- Lacking common sense (because the AI can't actually think)
- Lacking originality (because the AI is only putting together the words of others)
Here is ChatGPT's summary of its own limitations:
Generative AI tools, such as ChatGPT, have limitations on what information they can provide. ChatGPT doesn’t have access to:
- Personal information or private data.
- Events that happen after its knowledge cutoff.
- Information not present in the dataset used for training.
- Information that is not available in written or spoken form.
- Information about very specific or niche topics that are not available publicly.
Source: Answer provided by OpenAI’s ChatGPT version 3 on 27 January 2023 (edited for brevity).