AITS AI Guidelines for Safe Use
These guidelines support the responsible, secure, and compliant use of
artificial intelligence (AI) tools by AITS and System Office staff working
with University data. Approved AI services have been vetted for security,
legal, and data privacy considerations and are intended to help employees
use AI effectively without introducing unintended risk. Typical use cases
include (but are not limited to): data analysis, automation, research
support, code and script generation, chatbots, and productivity assistance.
Scope: These guidelines apply to AITS and System Office
employees, contractors, and approved external participants who use AI tools
with University data.
Use Approved Tools Only
- Use only AI platforms and services that have been approved for handling University data.
- Do not use unapproved AI tools for University work.
- Avoid using personal versions of AI tools (e.g., ChatGPT, Gemini, Claude, Copilot, Perplexity) with University data. If you do use a personal tool for other purposes, disabling settings such as "Use my data to train the model" or "Help improve the model for everyone" is generally advised.
- Ensure the service has completed required security, legal, and data privacy reviews.
- Follow any usage instructions, constraints, or limitations provided by AITS, Information Security, or Data Governance teams.
Understand the Data You’re Using
Only use AI tools with data you're allowed to share. If you're unsure, contact AITS AI Requests (aitsairequests@uillinois.edu).
- ✅ Public: Information intended for public release; no adverse impact if disclosed. Generally appropriate for AI.
- ⚠️ Internal: Non-public university data intended for internal use; potential operational impact if disclosed. Use with caution.
- ⚠️ Sensitive: Role-restricted or sensitive university data; serious privacy or operational risk if improperly shared. Approved tools only.
- 🚫 High Risk: Regulated or high-risk data (e.g., SSNs, credit cards, protected health information). Do not use with AI: do not enter this data into AI tools unless the tool has special approval or you've received an exception.
- Know how your data is classified: high-risk, sensitive, internal, or public.
- Examples of sensitive or high-risk data include PII, PHI, FERPA-protected records, SSNs, financial data, and proprietary information.
- Be mindful of data volume: aggregating large sets of records (e.g., tens of thousands) increases risk even when individual records seem low-risk.
- Approval of a tool does not imply unrestricted use of sensitive or high-risk data.
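As a reading aid, the classification-to-guidance mapping above can be expressed as a small lookup. This is a minimal sketch, not an official AITS policy mechanism; the key names are illustrative assumptions.

```python
# Illustrative mapping of the data classifications above to AI guidance.
# Assumption: key names are informal shorthand, not official AITS terms.
GUIDANCE = {
    "public": "Generally appropriate for AI",
    "internal": "Use with caution",
    "sensitive": "Approved tools only",
    "high_risk": "Do not use with AI",
}

def ai_guidance(classification: str) -> str:
    """Look up the guidance for a classification, defaulting to the most
    restrictive option when the classification is unknown."""
    key = classification.lower().replace(" ", "_").replace("-", "_")
    return GUIDANCE.get(key, GUIDANCE["high_risk"])

print(ai_guidance("Public"))
print(ai_guidance("High Risk"))
```

Defaulting unknown classifications to the most restrictive guidance mirrors the document's "when in doubt, ask" posture.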
Requesting Review of Unapproved Use Cases or Tools
To request review of an AI tool or use case that has not yet been approved, contact AITS AI Requests (aitsairequests@uillinois.edu).
Prepare and Submit Data Thoughtfully
- Minimize data shared with AI tools: submit only what is necessary for the task.
- For testing or experimentation, use anonymized, synthetic, or sample data instead of real records.
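The data-minimization steps above can be sketched as a simple pre-submission redaction pass. This is a hypothetical illustration, not an AITS-provided tool; the regex patterns are assumptions and would not catch every identifier, so they are no substitute for checking your data's classification.

```python
import re

# Illustrative patterns for common high-risk identifiers. Assumption:
# real redaction requirements come from Data Governance, not this sketch.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace likely high-risk identifiers with placeholders before
    text is submitted to an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jdoe@uillinois.edu, SSN 123-45-6789."))
```

For experimentation, generating fully synthetic records is safer still, since redaction can miss identifiers that don't match a known pattern.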
Review and Validate AI Outputs
- AI-generated outputs must be reviewed by a human before use in decision-making.
- Do not rely solely on AI outputs for critical, operational, or compliance-related decisions without appropriate human review.
- Validate summaries, recommendations, code, and predictions.
- Document how AI outputs were used, where appropriate.
Protect Outputs and Share Responsibly
- Do not share AI-generated outputs that include sensitive data via public channels (e.g., non-University email, chat, personal accounts) unless explicitly allowed.
- Disclose AI use where appropriate, especially in reports, official communications, or externally shared materials.
- Maintain records of AI tools used for sensitive projects.
- Follow University guidance on ethical and responsible use of generative AI.
- When in doubt, consult AITS AI Requests (aitsairequests@uillinois.edu).
Training
Human Resources Guidelines
- System Office Human Resources provides guidance on the use of artificial intelligence (AI) in employment and hiring processes for the University of Illinois System, ensuring compliance with state law and university policies.
- For more information, see Hiring and Employment.
Report Issues and Concerns
- Please direct questions, unexpected behavior, suspected data exposure, or security concerns to AITS AI Requests (aitsairequests@uillinois.edu).
- Notifications are reviewed by the AITS AI team and AITS Enterprise Systems Assurance.