Statement of Best Practice
Miami University encourages the exploration and responsible adoption of AI technologies to enhance learning, research, and operational efficiency. Miami's Information Security Office (ISO) provides this guide to faculty, staff, and students as a framework for using these tools while protecting university data, maintaining academic integrity, and ensuring equity.
Contact
- Information Security Office
Guiding Principles
- Accountability: AI is a co-pilot, not an autopilot. You are responsible for the accuracy and ethics of any output you publish, submit, or implement.
- Data Stewardship: The use of AI must protect the privacy of individuals and the security of University data. Public AI models learn from user input; any data entered should be treated as no longer private.
- Transparency: The use of AI should not be hidden or obfuscated. When AI is used to assist in decision-making, it should be disclosed appropriately.
- Equity and Access: Not all AI tools are free or accessible to all users. Avoid requiring tools that create a financial or accessibility barrier. Users should also recognize that AI tools are trained on large but limited sets of historical data, and their output will be influenced by, and may amplify, any biases within that data. A person should review any AI output of consequence with appropriate rigor.
Guidance
For Students
- Check the Syllabus: How AI can or can't be used is course- or assignment-specific. When in doubt, talk to your instructor.
- Cite It: If you use AI to assist with work you submit, be transparent about it. Misrepresenting AI output as your own original thought may be considered academic misconduct.
- Process over Product: AI can generate a "result", but it cannot replace the learning that happens during the struggle of writing or problem-solving.
- Question the Source: AI is not a "truth engine" and what it produces may include incorrect, incomplete, biased, or outright fabricated information. Use critical thinking skills to identify and correct this kind of misinformation in AI output.
- Miami’s Academic Integrity policy can be found here, and the Responsible Use of Computing Resources section of our Policy Library can be found here.
For Faculty
- Set Clear Expectations: Clearly communicate to students the acceptable and unacceptable uses of AI tools for each course and assignment in syllabi and assignment instructions.
- Detection Limitations: Do not rely solely on AI detection software. Research indicates these tools have high false-positive rates, particularly for non-native English speakers. Use them as a conversation starter, not as absolute proof of cheating.
- Student Privacy: Do not upload student work into AI tools for grading or feedback without explicit consent, as this may violate FERPA or the student's intellectual property rights.
- Transparency and Disclosure: Clearly disclose the methods and tools, including AI applications, used in research publications and presentations.
- Bias Mitigation: Actively work to identify and mitigate biases in AI models and data used in research, particularly in studies involving sensitive populations.
- Academic Integrity has provided resources for faculty regarding AI use here.
- The Center for Teaching Excellence has provided information about incorporating AI into instruction here.
For Staff and Administration
- Verify Everything: AI is prone to "hallucinations". Always fact-check AI-generated reports or communications before they go to stakeholders.
- Human Oversight: Whether you are in HR, Finance, or IT, any use of AI to automate decisions regarding employment, admissions, or grading requires "Human-in-the-Loop" oversight. The use of AI to assist in high-stakes decisions such as these should be disclosed and consented to. Staff must act as a "bias filter" to ensure the AI's "recommendation" aligns with the university's values.
- Operational Efficiency: Use AI for drafting emails, summarizing meeting notes, or organizing data, but ensure sensitive data is entered only into approved tools appropriate for its classification.
- The National Institute of Standards and Technology (NIST) provides a draft approach to managing risks posed by AI here.
Data Privacy and Approved AI Tools
The security of your prompt depends on which tool you use.
| Data Classification | Public and Personal Tools (e.g. Free ChatGPT, Free Gemini) | University-Approved Tools (e.g. Enterprise Gemini) |
| --- | --- | --- |
| Public Data (e.g. directory information, news) | Permitted | Permitted |
| Internal Only (e.g. meeting notes, drafts) | Caution | Permitted |
| Confidential (e.g. student work and grades, PII) | PROHIBITED | Consult |
| Restricted (e.g. SSNs, research IP) | PROHIBITED | PROHIBITED |
When in doubt, before entering information into an AI tool, ask:
- What is the classification of this information?
- Is this tool approved as an Enterprise Application, or for Conditional Use for this specific use case?
- Could this information be retained or reused outside Miami University control?
- Would disclosure create legal, contractual, or reputational risk?
If you're still unsure, contact the Data Owner and the Information Security Office.
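For teams building intake forms or internal tooling around this matrix, the decision logic above can be encoded as a simple lookup. The following is a minimal, illustrative sketch in Python; the function and category names are hypothetical, and it is not an official ISO tool or part of University policy.

```python
# Illustrative sketch of the data classification matrix above.
# All names here are hypothetical; this is not an official ISO tool.

RULES = {
    # (data classification, tool category): guidance from the matrix
    ("public", "personal"): "Permitted",
    ("public", "enterprise"): "Permitted",
    ("internal", "personal"): "Caution",
    ("internal", "enterprise"): "Permitted",
    ("confidential", "personal"): "PROHIBITED",
    ("confidential", "enterprise"): "Consult",
    ("restricted", "personal"): "PROHIBITED",
    ("restricted", "enterprise"): "PROHIBITED",
}

def ai_input_guidance(classification: str, tool_category: str) -> str:
    """Return the matrix guidance for a classification/tool pairing,
    defaulting to the safest answer for anything unrecognized."""
    return RULES.get(
        (classification.lower(), tool_category.lower()),
        "PROHIBITED - contact the Data Owner and the Information Security Office",
    )

print(ai_input_guidance("Internal", "personal"))        # Caution
print(ai_input_guidance("Confidential", "enterprise"))  # Consult
```

The safe default mirrors the guidance above: anything not explicitly permitted should be treated as prohibited until the Data Owner and the Information Security Office say otherwise.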
AI Tools currently approved for Enterprise or Conditional Use
The following tools have been approved for Enterprise or Conditional Use. If you aren't sure whether a particular tool has been approved for your use case, it may need a Conditional Use Risk Review, which can be initiated with the form here.
Enterprise Applications
- Anthology ALLY
- Google Gemini
- Google NotebookLM
- AI-enabled features within Enterprise Applications (e.g. Slack AI, Workday AI, Zoom AI)
Conditional Use Applications
- Beautiful AI
- Cascade AI
- Claude AI
- Claude Pro
- Consensus AI
- CoPilot
- Diffy AI
- Elicit AI
- Flow XO
- Google AI Studio
- Google Gemini Pro / Google AI Pro
- Google Gemini Code Assist
- Google Opal
- Grok
- JetBrains AI
- LM Studio
- Midjourney
- Moxie AI
- Narratize
- Notta AI
- OpenAI API
- OpenAI ChatGPT
- OpenAI ChatGPT Teams
- OpenAI Dall-E 3
- Otter.ai
- Perplexity AI
- PlayAI
- SaneBox AI
- Scite
- SolDel Grok
- SparkAI
- Speechify
- Topaz Video AI
- VisibileAI
- Windsurf
If a tool is not listed above, or if your specific use case with a listed tool has not undergone the Conditional Use Risk Review process, the tool is not considered approved for use with non-public data.
Note
- This guide is a living document and will be reviewed and updated regularly to adapt to rapidly evolving AI technologies, emerging best practices, and changes in the legal and regulatory landscape. Feedback from the University community is encouraged to ensure the guide remains relevant and effective.
This guide was developed with the assistance of AI.