Dartmouth Chat

Tags: AI, GenAI, LLM

Dartmouth Chat is a feature-rich web chat application for interacting with Large Language Models (LLMs), both locally hosted and those provided by LLM vendors like OpenAI and Anthropic.

This service is currently considered experimental and is free of charge for all users for the time being. Please see the walkthrough for in-depth information on how to use this service.

Frequently Asked Questions

1) What are the data privacy protections in place for those using this service?

Dartmouth Chat is hosted and run on Dartmouth infrastructure. That means that all your data, such as chat history and uploaded documents, is stored on Dartmouth computers. Data is not shared between users, nor does Dartmouth process or use the information for any kind of model training. Models marked as "Local" are hosted on Dartmouth infrastructure, and data sent to Local models (like chat messages or uploaded images and files) never leaves Dartmouth's systems.

Models marked as "Cloud" in the model selection menu are hosted remotely by third-party providers, such as OpenAI and Anthropic. Usage of these models therefore falls under the respective provider's terms. However, unlike a personal account at a provider's own web chat app (e.g., chatgpt.com), Dartmouth Chat generally falls under the provider's more secure business terms rather than the general terms governing personal accounts.

Here are links to the relevant terms organized by provider, including quotes of some specific relevant sections (as of the writing of this article):

Anthropic:

- Business terms: https://www.anthropic.com/legal/commercial-terms

[...] Anthropic agrees that Customer (a) retains all rights to its Inputs, and (b) owns its Outputs.

- Data Processing Addendum: https://www.anthropic.com/legal/data-processing-addendum

- Data Handling & Retention for Commercial Customers: https://privacy.anthropic.com/en/articles/7996885-how-do-you-use-personal-data-in-model-training#h_1a7d240480

By default, we will not use your Inputs or Outputs to train our models.

Google:

- Terms for Paid Services: https://ai.google.dev/gemini-api/terms#paid-services

Google doesn't use your prompts (including associated system instructions, cached content, and files such as images, videos, or documents) or responses to improve our products.

- Data Processing Addendum: https://business.safety.google/processorterms/

Mistral:

- Terms for La Plateforme: https://mistral.ai/terms#additional-terms-for-la-plateforme

[W]e do not use Your Data to improve, enhance, or train our models or the Services or for any other purpose than to provide the Services and to monitor abuse.
[Y]ou (i) retain all ownership rights in Input and (ii) own all Output. 

OpenAI:

- Business terms: https://openai.com/policies/business-terms/

[...] you (a) retain all ownership rights in Input and (b) own all Output. [...]  We will not use Customer Content to develop or improve the Services.

- Enterprise privacy terms: https://openai.com/enterprise-privacy/

2) Are there usage limits for the models available through Dartmouth Chat?

For the Local models (currently Mistral 7B and Llama 3.2 11B), there are no usage limits. For the Cloud models, a daily token limit is enforced. Each model has a different price associated with its token usage, indicated in the model selector menu in Dartmouth Chat by a number of dollar signs ($): the more dollar signs, the more expensive the tokens and the faster you will hit the daily limit.

The limit applies per day and resets at midnight Eastern Time. If you hit the limit, switch to a Local model or try again the following day.

3) Are a model's responses influenced or supplemented by data external to the LLMs?

By default, answers provided via Dartmouth Chat are sourced directly from the selected model. There are opt-in options to use web search results or pull in your own data sources.

4) Where can I find information on using the APIs exposed through Dartmouth Chat?

Check out the tutorial on the Dartmouth Chat API and the LangChain Dartmouth Cookbook.
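As a rough illustration, many web chat platforms like Dartmouth Chat expose an OpenAI-compatible chat completions API. The sketch below shows what a request payload might look like; the endpoint URL, model name, and authentication scheme are assumptions for illustration only, so consult the Dartmouth Chat API tutorial for the actual values.

```python
# Hedged sketch of an OpenAI-style chat completions request.
# The endpoint URL and model name below are ASSUMPTIONS, not documented
# values -- see the Dartmouth Chat API tutorial for the real ones.
import json
import urllib.request

API_URL = "https://chat.dartmouth.edu/api/chat/completions"  # assumed endpoint
API_KEY = "YOUR_API_KEY"  # obtained from your account settings (assumed)


def build_chat_request(prompt: str, model: str = "llama-3.2-11b") -> dict:
    """Build an OpenAI-style chat completion payload for a single user turn."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_chat_request("What is Dartmouth famous for?")

# Actually sending the request would look like this (commented out so the
# sketch runs without credentials or network access):
# req = urllib.request.Request(
#     API_URL,
#     data=json.dumps(payload).encode(),
#     headers={
#         "Authorization": f"Bearer {API_KEY}",
#         "Content-Type": "application/json",
#     },
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```

The LangChain Dartmouth Cookbook covers higher-level wrappers around calls like this, so you typically would not need to construct raw requests yourself.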

5) Does Research Computing provide a more secure service for working with LLMs including secure data storage and transmission?

Please reach out to research.computing@dartmouth.edu to discuss the available options.
