Dartmouth Chat walkthrough

 
 
 


 

Navigating Model Offerings

  1. When you go to chat.dartmouth.edu, you will see this page:



     
  2. In the center of the page and in the upper left-hand corner, you will see the name of the currently selected model. The default model is GPT-4o mini.


     
  3. To change the selected model, click the “⌄” sign next to the current model name in the upper left-hand corner:


     
  4. When the “⌄” sign is clicked, a drop-down menu appears with several models to choose from:


     
  5. Click the name of the model that you would like to use. (Note that each user has a daily usage limit, and models are marked with “$” signs to indicate their relative expense. Locally hosted models such as Llama 3.2 11b have no such limits.)
  6. For example, if we click Llama 3.2 11b, we see that that model is now selected for the chat session:

Writing a Prompt

  1. Let’s try out the “Overcome procrastination give me tips” suggested prompt. Note that the arrangement of suggested prompt boxes may differ from the screenshot above; you might have to scroll horizontally to find the “overcome procrastination” box. When we click on the box, it populates the input field with text for the prompt:


     
  2. Now let’s click the up-arrow sign to the right of the input field to submit our prompt:


     
  3. When we click that button, the model takes a few moments to generate a response; occasionally it takes longer. After the response is generated, your session should look something like this. Note that generated responses vary from run to run, so yours will not match the screenshot exactly:

Comparing Responses between Several Models Side-By-Side

  1. Perhaps we would like to compare the response to this prompt from Llama 3.2 11b with that of another model. We can do this by clicking the “+” sign next to the model name:


     
  2. When we click the “+” sign, a new line of text appears beneath the currently selected model: “Select a model ⌄ –”


     
  3. Click the “⌄” sign next to “Select a model” to open the drop-down menu of models. Let’s choose Mistral 7b:


     
  4. We can repeat this process of adding several models to the chat session. To remove a model, click the “–” sign next to its name. For now, let’s compare the responses from Llama 3.2 11b and Mistral 7b to the prompt.
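For readers who prefer scripting, a side-by-side comparison like this can also be sketched programmatically. This assumes, though the guide itself does not confirm it, that the Dartmouth Chat backend exposes an OpenAI-compatible chat-completions endpoint; the endpoint URL and model identifiers below are illustrative placeholders, not documented values.

```python
# Sketch: sending the same prompt to two models for comparison.
# ASSUMPTION: an OpenAI-compatible endpoint exists at this URL; both the
# URL and the model names are placeholders, not confirmed by this guide.
import json
import urllib.request

API_URL = "https://chat.dartmouth.edu/api/chat/completions"  # assumed

def build_requests(models, prompt):
    """Build one chat-completions payload per model for the same prompt."""
    return [
        {"model": m, "messages": [{"role": "user", "content": prompt}]}
        for m in models
    ]

def send(payload, api_key):
    """POST a single payload; returns the decoded JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    reqs = build_requests(
        ["llama-3.2-11b", "mistral-7b"],  # placeholder model identifiers
        "Give me tips to overcome procrastination.",
    )
    # To actually send them (requires a valid key and network access):
    # for r in reqs:
    #     print(send(r, "YOUR_API_KEY")["choices"][0]["message"])
    print(json.dumps(reqs, indent=2))
```

The point of the sketch is the shape of the requests: the prompt is identical across payloads, and only the `model` field varies, which mirrors what the side-by-side UI does for you.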
     

Editing Formerly Sent Prompts

  1. Let’s first edit the prompt to include more context about the user. We will add: “I am a Dartmouth student that is trying to cope with being able to manage lots of coursework while also getting more involved in clubs on campus.” To edit the prompt, hover over the input bubble “Could you start by asking me about instances when I procrastinate the most and then give me some suggestions to overcome it?” This reveals a pencil icon and a clipboard icon beneath the input bubble:


     
  2. Click the icon of the pencil. The input bubble now becomes editable:


     
  3. Now, let’s add the additional information to the input, “I am a Dartmouth student that is trying to cope with being able to manage lots of coursework while also getting more involved in clubs on campus.”


     
  4. Now, let’s hit send. In a few moments, we see both the Llama 3.2 11b output and Mistral 7b output side-by-side, allowing us to compare the responses. We also see that the responses are now more specifically relevant to Dartmouth students after editing the prompt with that additional context.
     

Adding Website Article or Paper from URL to the Prompt

  1. Let’s now prompt the chatbot with an example of when we, as the user, procrastinate the most. Perhaps we procrastinate the most when we have to read a long research publication in preparation for an assignment. The user may want to then include the file of a research paper that they need to read for an upcoming assignment in the prompt to get a response more relevant to their current situation.

    To include an online report, article, video, etc., in the prompt, use the following format: “#{URL}”. The “#” signals that the URL that follows should be uploaded as a file along with the prompt.

    In this case, let’s include the following “Gender Shades” research publication: https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

    For the prompt, let’s attach this paper by writing the following in the input bubble:  #https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf 



     

  2. Now, hit Enter (or Return on a Mac) on your keyboard.


     

  3. It should take a few moments for the URL to fully upload as a file. When it has finished loading, you will see a document icon on the red portion of the pop-up:


     

  4. Now that the document is uploaded, let’s write our prompt. First, erase the “#https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf” text. Then send the following prompt: I procrastinate the most when I have to read a long research publication to prepare for an upcoming assignment. For example, I need to read the attached “Gender Shades” research paper for an upcoming assignment. Please help me work through this paper in preparation for the assignment.


     

  5. Let’s submit that by clicking the up-arrow bubble, or hit enter/return. The responses are now relevant to the content in this particular paper.
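The “#{URL}” convention used above is just a text prefix, so prompts that attach a URL are easy to construct in code. A minimal sketch; the helper name is our own and is not part of the Dartmouth Chat interface:

```python
def attach_by_url(url: str) -> str:
    """Prefix a URL with "#" so the chat input uploads it as a file.

    Mirrors the "#{URL}" convention described in this guide; the helper
    itself is illustrative, not part of Dartmouth Chat.
    """
    return f"#{url}"

paper = "https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf"
print(attach_by_url(paper))
# prints "#https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf"
```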
     

Adding Locally Saved File to Prompt

  1. In the “Gender Shades” research paper, there is a map of the globe that describes the distribution of skin color across various continents. Let’s download this map, upload it to the input box, and ask the chatbot to analyze the map.

    First, let’s save this image to our computer: https://cdn.britannica.com/59/61759-004-9A507F1C.gif

    Because Llama and Mistral cannot process large images, let’s change the currently selected models from Llama 3.2 11b → Gemini 1.5 Flash and from Mistral 7b → GPT-4o Mini.
      

    To do that, click the “⌄” sign next to each model name in the upper left corner of the interface to reveal the drop-down menu of models, and choose Gemini 1.5 Flash and GPT-4o Mini. You should now see that the selected models have changed accordingly.


     

  2. Now, let’s upload the saved image to the input box. To do this, first click the “+” sign at the far left of the input bubble. A pop-up menu will appear that lists “Web Search” and “Upload Files”:


     

  3. Click “Upload Files” and select the image file of the map saved to your computer. You should see a small preview of the uploaded image in the input bubble.
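Saving the map image in step 1 can also be done from a short script rather than the browser. A standard-library sketch; the filename helper is our own, and the actual download line is commented out so the snippet has no side effects when run:

```python
import urllib.request  # needed only if you uncomment the download line
from urllib.parse import urlparse
from pathlib import PurePosixPath

MAP_URL = "https://cdn.britannica.com/59/61759-004-9A507F1C.gif"

def filename_from_url(url: str) -> str:
    """Derive a local filename from the last path segment of a URL."""
    return PurePosixPath(urlparse(url).path).name

local_name = filename_from_url(MAP_URL)  # "61759-004-9A507F1C.gif"
# Uncomment to download the image into the working directory (needs network):
# urllib.request.urlretrieve(MAP_URL, local_name)
print(local_name)
```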

Image Analysis

  1. Now, let’s write the prompt for the chatbot to analyze the map based on country and skin color. Let’s input: “Please analyze the attached map featured in the ‘Gender Shades’ publication.”


     
  2. When we submit that input, we see responses generated analyzing the map in relation to the “Gender Shades” publication. 

    Always remember to verify analyses like these, and all AI-generated responses, against credible sources. LLMs are always subject to error, particularly because of the text-predictive nature of these systems.
     

Adding Website Video from URL to Prompt

  1. The authors of the “Gender Shades” publication, Dr. Joy Buolamwini and Dr. Timnit Gebru, hosted a one-hour live video discussing the paper and topics related to it: https://www.youtube.com/watch?v=8JSxbZyivuE. Let’s ask these chatbots to summarize the video, highlighting key points to help us prepare for our upcoming assignment on the paper. To do that, let’s upload the video by entering the following in the input bubble: #https://www.youtube.com/watch?v=8JSxbZyivuE


Summarizing Video from URL

  1. Let’s now write the following in the input bubble: “Please summarize the attached video, highlighting key points relating to the ‘Gender Shades’ publication.”
  2. If the chatbots refuse to summarize the video, you may need to create a new session: click the note-writing icon on the left panel. If the left panel is not already open, first click the three-line icon in the upper-left corner of the interface near the model names, then click the revealed note-writing icon.

  3. If you start a new session, make sure to upload the video via URL to the input bubble along with the summary prompt. You can now compare the generated summaries of the video between the models.
     

Adding Several Models to Compare Responses to Formerly Sent Prompt

  1. Let’s add Mistral 7b and Llama 3.2 11b to the selected models so we can compare summaries across all four models. Click the “+” sign twice, then click the “⌄” sign next to each “Select a model” text and choose Mistral 7b and Llama 3.2 11b from the two drop-down menus.


     
  2. Let’s now hover over our bubble in the chat (“Please summarize the attached video ‘AJL Gender Shades 5th Anniversary Celebration’, highlighting key points related to the ‘Gender Shades’ publication”) and click the revealed pencil icon:


     
  3. Let’s re-send the prompt:


     
  4. Now we can compare the generated summaries across all four models. Make sure to verify them against points made in the actual video, as there is always a chance of inaccurate information.

Adding Results from a Web Search to Your Response

  1. LLM training data starts going stale the minute a model is published. To pull current information into a response, add Web Search results to your prompt: click the “+” sign to the left of the input bubble and toggle on Web Search:
  2. A green web-search indicator will appear near your prompt. Ask a question such as “Who is the current president of Dartmouth College?” to test the functionality.