Body
Statement of Best Practice
Generative Artificial Intelligence (AI) tools are here, and all indications point to their prevalence only increasing in the foreseeable future. As such, Miami's Information Security Office (ISO) offers the following guidance and considerations for navigating the use of AI in the classroom and in offices across our campuses.
Contact
- Information Security Office
Guidance
Information security, data privacy, and general concerns
- Sensitive information must not be entered, in whole or in part, as a prompt into a generative AI tool. AI tools should only be used with data classified as public
- Please note: Users may input sensitive information in the following instances:
- The user has access to a paid team version of ChatGPT through a University department
- The user is utilizing the University instance of Google Gemini
- When possible, users should opt out of permitting entered data to be used for training or improving models
- AI tools do not provide any expectation of privacy or confidentiality for any information used as an input or prompt
- AI-generated material may be inaccurate, misleading, or even entirely fictional. Validate the accuracy of information produced by AI tools
- AI-generated material may exhibit biases and discriminatory ideas or content
- The National Institute of Standards and Technology (NIST) provides a draft approach to managing the risks posed by AI
- Not all generative AI tools are created equal, and some may not meet all users' accessibility needs (a chatbot interface, for example, may not meet accessibility standards)
Thoughts and recommendations for students
- The use of generative AI tools and the content they produce should be done in an ethical and responsible way. Miami’s Academic Integrity policy can be found here, and the Responsible Use of Computing Resources section of our Policy Library can be found here
- Ensure you clearly understand what is considered authorized use of generative AI and what is not for each of your classes
- Authorized use of AI for assignments and assessments should include acknowledgement and citation of where and how AI tools were used
- The use of generative AI is not a substitute for critical thinking; at present, it does not produce content at the level of analysis and synthesis expected of collegiate work
Thoughts and recommendations for instructors
- Academic Integrity has provided resources for faculty regarding AI use here
- The Center for Teaching Excellence has provided information about incorporating AI into instruction here
- Create clear AI use and misuse policies for each course
- Design assignments and assessments such that AI generated content or assistance is either expected, of little or no use, or inaccessible during completion of the assignment or assessment
- It is reasonable to assume that students are proficient in using AI tools
- Student course work should not be entered into AI tools in order to generate feedback (at least not without the student’s prior consent)
- Many tools claim to detect AI-generated content, but the reliability of such detection is often highly questionable. The ISO does not recommend such tools, nor does it recommend relying on the accuracy of their detections
- Generative AI can assist students with clarifying thoughts, brainstorming, grammar, and understanding complex concepts. An understanding of how to use AI effectively can serve students well in their future endeavors
- If requiring or recommending that students use generative AI tools, carefully choose tools that are accessible to all students
Thoughts and recommendations for staff
- AI-generated code should not be used in university systems and services without appropriate testing and human review
- Miami data must not be used as prompt input or consumed for the training of AI models
Note
- This is a living document; as we learn more, we will continue to update it