Generative AI: Nine Practices for Safe Use at Work

1. Know what data you're allowed to put in. 

Dartmouth-sanctioned tools (Claude Enterprise, Microsoft Copilot) are approved for most institutional data, but not all of it. Controlled Unclassified Information (CUI), export-controlled (ITAR/EAR) research, protected health information (PHI), and unredacted student records governed by FERPA have specific handling requirements, and consumer or unsanctioned AI tools are not approved for any institutional data. When in doubt, treat the AI tool like a public website and ask yourself whether you'd paste the same content there.

2. Use sanctioned tools, not personal accounts. 

Sign in with your Dartmouth credentials to enterprise versions of AI tools. Personal ChatGPT, personal Claude, personal Gemini, and free AI sites do not carry our contractual protections — your prompts may be retained, used for training, or exposed in ways the enterprise versions are not. If a tool isn't on the approved list, don't use it for work.

3. Use the most current, patched version of the tool. 

AI vendors push security fixes, model updates, and new safeguards frequently. Prefer the browser version when possible — it's always current. For desktop apps, mobile apps, and browser extensions, enable automatic updates and don't dismiss update prompts. Install only from official sources (vendor sites, Apple App Store, Google Play, official browser web stores), and uninstall AI tools you no longer use so they don't sit on your device as a dormant attack surface.

4. Verify before you trust. 

Generative AI produces fluent, confident output that can be wrong. Citations can be fabricated, legal references can be invented, code can contain subtle vulnerabilities, and summaries can omit material facts. Treat AI output as a draft from a capable but unverified colleague — check sources, test code, and confirm anything that will inform a decision, a communication, or a deliverable.

5. Don't paste secrets, credentials, or keys. 

API keys, passwords, session tokens, private SSH keys, and connection strings should never go into a prompt, even when asking the AI to "help debug." Once a secret has been pasted into a prompt, treat it as compromised: rotate the credential.
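
A lightweight local check can catch the most common slip-ups before text ever leaves your machine. The sketch below is illustrative only: the patterns, the check_for_secrets name, and the sample text are assumptions rather than a Dartmouth-provided tool, and a real secret scanner would use a much broader rule set.

    import re

    # Illustrative patterns only; real scanners cover far more credential formats.
    SECRET_PATTERNS = {
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "private key block": re.compile(r"-----BEGIN (?:RSA |OPENSSH |EC )?PRIVATE KEY-----"),
        "key or password assignment": re.compile(r"(?i)(api[_-]?key|token|passwd|password)\s*[=:]\s*\S+"),
        "connection string with credentials": re.compile(r"(?i)[a-z+]+://[^\s:@]+:[^\s@]+@"),
    }

    def check_for_secrets(prompt_text: str) -> list[str]:
        """Return warnings for likely secrets in text you are about to paste into an AI tool."""
        return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt_text)]

    if __name__ == "__main__":
        draft = "Here is my config: DB_PASSWORD=hunter2, please help me debug the connection."
        for warning in check_for_secrets(draft):
            print(f"Possible {warning} found; remove or redact it before sending.")

Even with a check like this, the safest habit is still to redact secrets by hand before pasting anything into a prompt.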

6. Be cautious with attachments and links from AI output. 

AI tools can be manipulated through prompt injection — instructions hidden inside documents, web pages, or emails the AI reads on your behalf. If an AI assistant suggests you visit a URL, run a command, install a package, or send information somewhere, evaluate it the same way you'd evaluate an email from an unknown sender.

7. Keep humans in the loop for consequential decisions. 

Don't let AI tools auto-send emails, auto-approve transactions, auto-modify production systems, or make decisions about people (hiring, grading, admissions, discipline, accommodations) without human review. Agentic features and connectors amplify both productivity and blast radius — the more an AI can do on your behalf, the more important your review becomes.
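
One common pattern for keeping a person in the loop is to route any consequential action an agent proposes through an explicit approval step before it runs. The sketch below is a generic illustration, not a feature of any particular AI product; the ProposedAction structure, the action names, and execute_with_review are hypothetical.

    from dataclasses import dataclass

    # Actions that should never run without a person approving them first (illustrative list).
    CONSEQUENTIAL = {"send_email", "approve_transaction", "modify_production", "decide_about_person"}

    @dataclass
    class ProposedAction:
        name: str          # e.g. "send_email" (hypothetical action name)
        description: str   # human-readable summary of what the agent wants to do

    def execute_with_review(action: ProposedAction) -> bool:
        """Run low-risk actions directly; require explicit human approval for consequential ones."""
        if action.name in CONSEQUENTIAL:
            answer = input(f"Agent wants to: {action.description}\nApprove? [y/N] ")
            if answer.strip().lower() != "y":
                print("Action declined; nothing was executed.")
                return False
        print(f"Executing: {action.description}")  # placeholder for the real call
        return True

    if __name__ == "__main__":
        execute_with_review(ProposedAction("send_email", "email the draft report to the whole department"))

The design choice that matters is that the default answer is no: if the reviewer does nothing, nothing happens.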

8. Document AI's role in your work. 

When AI materially contributes to research, code, written deliverables, or analysis, note it. This protects you against academic-integrity questions, supports reproducibility, helps colleagues evaluate the work, and aligns with emerging publisher, sponsor, and accreditor expectations. Your department or sponsor may have specific disclosure requirements — check before you publish or submit.

9. Report problems quickly. 

Contact the ITC Information Security team if you suspect institutional data went into an unsanctioned tool, an AI account was compromised, an AI assistant did something unexpected on your behalf, or output you relied on caused a downstream issue. Early reporting almost always produces a better outcome than waiting for the problem to be discovered later.