About
This notebook was prepared by Dr. Karthik Mohan for the LLM 2024 course at the University of Washington, Seattle, and is inspired by this write-up: https://arxiv.org/abs/2401.14423
Lecture delivered on February 8, 2024
Class webpage: https://bytesizeml.github.io/llm2024/
Topics covered in the lecture:
# 1. Install Libraries
!pip3 install openai
!pip3 install python-dotenv
# 2. Connect to Google Drive
from google.colab import drive
drive.mount('/content/drive/', force_remount=True)
import os
os.system('ls')  # list files; note os.system returns the shell's exit code, not the listing
os.chdir(os.path.join(os.curdir, "drive", "MyDrive", "Colab_Notebooks_LLM_2023"))
# 3. OpenAI API Access Setup
import openai
import os
open_ai_key_file = "openai_api_key_llm_2023.txt"  # Your OpenAI key goes in this file
with open(open_ai_key_file, "r") as f:
    for line in f:
        OPENAI_KEY = line.strip()  # strip the trailing newline from the key
        break
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())
# 4. Tools Setup
import os
with open("google_cse_id.txt", "r") as f_cse:
    for line in f_cse:
        google_cse_id = line.strip("\n")
        break

with open("google_api_key.txt", "r") as f_key:
    for line in f_key:
        google_api_key = line.strip("\n")
        break
os.environ["GOOGLE_CSE_ID"] = google_cse_id
os.environ["GOOGLE_API_KEY"] = google_api_key
!pip3 install langchain
from langchain.tools import Tool
from langchain.utilities import GoogleSearchAPIWrapper
search = GoogleSearchAPIWrapper()
tool = Tool(
    name="Google Search",
    description="Search Google for recent results.",
    func=search.run,
)
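Under the hood, `Tool` is mostly a named, described callable that delegates to whatever function is passed as `func`. The following is a minimal sketch of that delegation pattern using a stubbed search function (`MiniTool` and `fake_search` are illustrative stand-ins, not part of langchain), so it runs without any API keys:

```python
class MiniTool:
    """Toy stand-in for langchain.tools.Tool: a named, described callable."""

    def __init__(self, name, description, func):
        self.name = name
        self.description = description
        self.func = func

    def run(self, query):
        # The real Tool adds callbacks and error handling; here we just delegate.
        return self.func(query)


def fake_search(query):
    # Stand-in for search.run so no Google API keys are needed.
    return f"Top results for: {query}"


demo_tool = MiniTool(
    name="Google Search",
    description="Search Google for recent results.",
    func=fake_search,
)
print(demo_tool.run("LLM guardrails"))  # → Top results for: LLM guardrails
```

An agent can then choose this tool by its `name` and `description` and invoke `run` with a query, exactly as with the real `search.run`-backed tool above.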
from openai import OpenAI
client = OpenAI(api_key=OPENAI_KEY)
def get_completion_instruct(prompt, model="gpt-3.5-turbo-instruct"):
    # Completions endpoint: takes a raw prompt string
    response = client.completions.create(
        model=model,
        prompt=prompt
    )
    return response.choices[0].text
def get_completion(prompt, model="gpt-3.5-turbo"):
    # Chat completions endpoint: takes a list of role-tagged messages
    message = {"role": "user", "content": prompt}
    response = client.chat.completions.create(
        model=model,
        messages=[message]
    )
    return response.choices[0].message.content
Rails refers to "guard-rails" for LLMs: directing and guiding the LLM to stick to certain styles of responses. There are different types of rails.

You can use topical and fact-checking rails in your mini-project 2.
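The idea behind a topical rail can be sketched without any framework: check the user's request against allowed topics before forwarding it to the LLM. The sketch below uses simple keyword matching and a stubbed LLM call; the topic list and function names are illustrative, not from any rails library (a real framework such as NeMo Guardrails would classify topics with an LLM or embeddings):

```python
ALLOWED_TOPICS = {"tires", "cars", "maintenance"}  # illustrative topic list


def topical_rail(user_input, allowed=ALLOWED_TOPICS):
    """Return True if the input mentions an allowed topic, else False.

    Keyword matching is the simplest proxy for topic classification.
    """
    words = user_input.lower().split()
    return any(topic in words for topic in allowed)


def guarded_completion(prompt, llm=lambda p: "LLM answer to: " + p):
    # Only forward on-topic prompts to the (stubbed) LLM.
    if not topical_rail(prompt):
        return "Sorry, I can only answer questions about car maintenance."
    return llm(prompt)


print(guarded_completion("how do I rotate my tires"))
print(guarded_completion("tell me about quantum physics"))
```

Swapping the `llm` argument for `get_completion` defined above would turn this into a live guarded pipeline.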
def get_prompt(task):
    """
    Automatically generate a prompt given a task.
    """
    APE_prompt = ("Given a task that I want accomplished by an LLM as follows, "
                  "can you generate an appropriate prompt that I can pass in to "
                  "get a good response. The prompt should include an example as "
                  "well: " + task)
    prompt = get_completion(APE_prompt)
    return prompt
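The APE flow can be exercised offline by swapping `get_completion` for a stub, which is handy for testing the plumbing without API access. Here `fake_completion` and `get_prompt_offline` are hypothetical stand-ins for illustration only:

```python
def fake_completion(prompt):
    # Stand-in for get_completion: echoes the task back as a "generated" prompt.
    return "Prompt: Please explain, with an example: " + prompt.rsplit(": ", 1)[-1]


def get_prompt_offline(task, llm=fake_completion):
    # Same meta-prompt as get_prompt above, but with an injectable LLM call.
    ape_prompt = ("Given a task that I want accomplished by an LLM as follows, "
                  "can you generate an appropriate prompt that I can pass in to "
                  "get a good response. The prompt should include an example as "
                  "well: " + task)
    return llm(ape_prompt)


print(get_prompt_offline("Explain the steps for fixing a flat tire"))
# → Prompt: Please explain, with an example: Explain the steps for fixing a flat tire
```

Passing `llm=get_completion` recovers the live behavior of `get_prompt`.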
task = "Explain the steps as visually seen for fixing a flat tire"
# NO APE
print(get_completion(task))
1. Identify the flat tire: visually inspect all four tires by walking around the car.
2. Park the car on a safe and flat surface: find a location away from traffic or any other hazards to ensure your safety.
3. Engage the parking brake: this will prevent the car from rolling.
4. Place a wheel chock: if available, place a wheel chock or a large rock behind one of the tires opposite to the one being changed for extra safety.
5. Retrieve essential tools: typically, you will need a spare tire, car jack, lug wrench, and a vehicle owner's manual. These tools are usually located in the trunk or under the car.
6. Position the car jack: consult the owner's manual to find the correct position for the jack. Typically, it goes under the car frame near the flat tire.
7. Use the lug wrench: attach the lug wrench to the lug nuts on the flat tire and turn counterclockwise to loosen them. Do not remove the lug nuts completely at this stage.
8. Lift the car: use the car jack to raise the car slowly and steadily until the flat tire is about six inches off the ground. Make sure the jack is positioned securely and aligned with the designated car lift point.
9. Remove the lug nuts: completely remove the loosened lug nuts and place them in a safe location, as they will be needed later.
10. Remove the flat tire: firmly hold both sides of the tire, and pull it straight towards you until it is free from the wheelbase.
11. Install the spare tire: align the spare tire with the exposed wheelbase by matching the holes in the rim with the wheel studs. Push the tire towards the car until it cannot go any further.
12. Hand-tighten the lug nuts: start by hand-threading the lug nuts onto the wheel studs to ensure proper alignment.
13. Lower the car: use the jack to lower the car slowly until it rests securely on the ground. Remove the car jack and place it back in the trunk.
14. Tighten the lug nuts: using the lug wrench, tighten the lug nuts as much as possible in a diagonal pattern. This ensures even pressure on the tire.
15. Double-check lug nut tightness: with the car on the ground, go over each lug nut again, tightening them further with the lug wrench.
16. Store the flat tire, tools, and spare tire: put the flat tire, tools, and lug wrench back in the trunk of the car. Ensure they are secure and won't shift during driving.
17. Check the spare tire pressure: visually inspect the spare tire to ensure it is adequately inflated. If necessary, take it to a gas station to fill it to the specified pressure level.
18. Get the flat tire repaired/replaced: visit an auto repair shop to fix the punctured tire or purchase a new one, depending on the severity of the damage.
19. Return the car jack and tools to their original location: put the car jack and all the tools used back in their designated spot, ensuring they are secured for future use.
20. Remember to remove the wheel chock: retrieve the wheel chock or large rock from behind the tire and place it back where it belongs.

Now, you are ready to continue your journey with a repaired or replaced tire.
task = "Explain the steps as visually seen for fixing a flat tire"
prompt = get_prompt(task)
print("Automatically Generated Prompt: \n")
print(prompt)
print("\n\n LLM Response: \n")
print(get_completion(prompt))
Automatically Generated Prompt:

Prompt: Please provide a step-by-step explanation, with visual references, on how to fix a flat tire. Example: Imagine you are driving down the road in your car when suddenly you hear a loud hissing noise. You quickly realize that you have a flat tire. In order to help others facing a similar issue, please provide a detailed explanation, accompanied by visual aids, on how to fix a flat tire.

LLM Response:

Step 1: Find a Safe Location. The first step to fixing a flat tire is to find a safe location to pull over. This could be a nearby parking lot, a wide shoulder, or a flat, straight section of road. Ensure that you are away from traffic and have enough space to work comfortably.

Step 2: Gather Equipment. Before you start fixing the flat tire, gather all the necessary equipment. You will need:
- Spare tire
- Lug wrench
- Jack
- Wheel wedges
- Flashlight
- Vehicle owner's manual

Step 3: Apply Wheel Wedges. To prevent the vehicle from rolling, place wheel wedges, or blocks, on the opposite side of the flat tire. For example, if the front left tire is flat, place the wheel wedges behind the rear right tire.

Step 4: Loosen Lug Nuts. Next, locate the lug nuts on the flat tire. Using the lug wrench, turn the lug nuts counterclockwise to loosen them. You may need to apply force, as the lug nuts are typically tightened firmly. Be cautious not to completely remove them at this stage.

Step 5: Position Jack. Refer to your vehicle owner's manual to identify the proper lifting points for the jack. These points are usually marked on the frame of the vehicle. Once you've identified a suitable point, position the jack securely under the vehicle but do not lift the vehicle yet.

Step 6: Lift Vehicle. Using the appropriate jack, begin raising the vehicle until the flat tire is a few inches off the ground. Ensure that the jack is firmly placed and that the vehicle is stable before fully lifting it.

Step 7: Remove Lug Nuts and Flat Tire. With the vehicle raised, you can now remove the loosened lug nuts completely. Once removed, carefully take off the flat tire and set it aside.

Step 8: Mount Spare Tire. Position your spare tire onto the wheel base and align the holes in the rim with the lug bolts. Lift the tire and push it gently until it fits flush against the hub.

Step 9: Tighten Lug Nuts. Thread the lug nuts onto the lug bolts and tighten them by hand as much as possible. Use the lug wrench to further tighten the lug nuts, turning them clockwise. Tighten them in a star or cross pattern to ensure even tightening.

Step 10: Lower Vehicle. Using the jack, gradually lower the vehicle until all the weight is on the spare tire. Remove the jack and ensure the lug nuts are tightened securely.

Congratulations! You have successfully fixed your flat tire. Remember to have the damaged tire repaired or replaced as soon as possible, as spare tires are typically temporary solutions.

Note: The steps and equipment required may vary slightly depending on your vehicle. Always consult your vehicle owner's manual for specific instructions.
Use an APE function like the one above to automatically generate a prompt for the following task: "Generate streamlit code to develop a search bar that can take in a sentence and output relevant categories for the sentence as a response. The webapp should be simple, functional and have some style to it."

Then execute the LLM-generated prompt to check whether the result looks reasonable.
## ICE 1 - Your CODE HERE!
Copy the LLM-generated code above to an IDE on your laptop and run it with Streamlit to check whether you get the desired web app with the specifications mentioned in the task above.

Post a screenshot of the web app that was generated below!
## ICE 2 - Your web-app Screenshot HERE!
The previous APE method generated only one prompt and used it. What if we instead generate multiple prompts, have the LLM evaluate them for clarity, specificity, and likelihood of producing the desired response, and then pick the one with the best overall score?
class APE:
    def __init__(self, model=None):
        self.model = model

    def prompt_generator(self, task):
        """
        Automatically generate a prompt given a task.
        """
        APE_prompt = ("Given a task that I want accomplished by an LLM as follows, "
                      "can you generate an appropriate prompt that I can pass in to "
                      "get a good response. The prompt should include an example as "
                      "well: " + task)
        if self.model:
            prompt = get_completion(APE_prompt, model=self.model)
        else:
            prompt = get_completion(APE_prompt)
        return prompt

    def prompt_evaluator(self, task):
        """
        Generate multiple automated prompts and evaluate them against a criterion.
        """
        prompts = []  # Initialize a list of prompts
        # Generate multiple prompts automatically
        for index in range(3):
            prompts.append(self.prompt_generator(task))
        # Prompt for evaluation
        evaluator_prompt = ("Given the following list of prompts: " + str(prompts) +
                            ", evaluate and return goodness scores between 0 and 1 on "
                            "a) clarity and b) likelihood of generating a good response "
                            "from the LLM for the following task. Also make a "
                            "recommendation for a prompt to use: " + task)
        return prompts, get_completion(evaluator_prompt)
task = "Explain the steps as visually seen for fixing a flat tire"
APE_instance = APE()
prompts, prompt_evaluations = APE_instance.prompt_evaluator(task) # Evaluate multiple Automatically Generated Prompts
index = 1
for prompt in prompts:
    print("\n Prompt " + str(index) + "\n")
    print(prompt)
    index += 1
print(prompt_evaluations)
Prompt 1

Prompt: Guide me through the visual steps involved in fixing a flat tire on a car, providing a clear example for each step. Example: Imagine you are driving down the road and suddenly you hear a loud thumping noise from one of your car's tires. You pull over to the side of the road to find that one of your tires is completely flat. In a detailed response, visually explain the step-by-step process of fixing a flat tire, including an example for each step to ensure clarity.

Prompt 2

Title: Visual Guide: Step-by-Step Process for Fixing a Flat Tire

Prompt: Your task is to provide a visual explanation of the sequential steps involved in fixing a flat tire. Please break down the process into easy-to-follow steps and include a diagram to illustrate each stage. Use clear and concise language along with a coherent structure to ensure clarity within your response.

Example Scenario: Imagine you are driving home from work when suddenly you hear a loud "pop" followed by a noticeable decrease in vehicle control. You realize you have a flat tire and need to address the issue promptly. Detail the actions required to fix a flat tire using a visual representation, highlighting each step with accompanying text.

Please ensure that your response covers the following points:
1. Safety first: Explain the importance of finding a safe location away from traffic and activating hazard lights before attempting any repairs.
2. Prepare the vehicle: Outline the necessary steps to engage the parking brake, identify the appropriate tools (e.g., jack, lug wrench) from the spare tire kit, and locate the damaged tire.
3. Properly jack the vehicle: Describe the correct positioning of the jack to safely elevate the car, emphasizing the use of jack stand support for added stability.
4. Remove the damaged tire: Illustrate the process of loosening the lug nuts, explaining the preferred diagonal pattern and the importance of applying sufficient force.
5. Replace with spare tire: Explain the steps for aligning the spare tire with the designated wheel bolts, including tightening the lug nuts by hand initially.
6. Lower the vehicle and torque lug nuts: Demonstrate how to slowly lower the jack and then use the lug wrench to fully tighten the lug nuts.
7. Verify tire pressure: Advise checking the spare tire pressure to ensure it is within the safe range.
8. Secure the damaged tire: Suggest securing the flat tire in the trunk or designated storage area, as well as ensuring all tools are properly stored away.
9. Seek professional assistance: Emphasize the importance of visiting a tire professional to inspect the flat tire and determine whether it can be repaired or needs replacement.

Remember to provide a clear visual representation of each step, ensuring an easy-to-follow guide for the audience.

Prompt 3

Please provide a step-by-step description of how to fix a flat tire, including visual cues to ensure clarity. Use the following example situation to explain your steps:

Example Situation: You are driving alone on a remote road, and suddenly you hear a loud pop followed by a bump. Your car begins to pull to the side, and you realize you have a flat tire. It's daytime, but there is no immediate help available.

Prompt: Using clear visuals and concise instructions, explain the step-by-step process of fixing a flat tire in the given example situation. Ensure the explanation covers the necessary tools, precautions, and any specific techniques required to successfully address the issue.

To evaluate the prompts and provide goodness scores for clarity and likelihood of generating a good response from LLM, let's analyze each prompt:

1. Prompt: Guide me through the visual steps involved in fixing a flat tire on a car, providing a clear example for each step.
   - Clarity: The prompt clearly asks for a visual explanation of the steps to fix a flat tire, including examples for each step.
   - Likelihood of generating a good response from LLM: Since the prompt explicitly asks for a visual guide with clear examples, it is likely to generate a good response from LLM.
   - Clarity score: 0.9
   - Likelihood score: 0.8

2. Title: Visual Guide: Step-by-Step Process for Fixing a Flat Tire
   - Clarity: The prompt asks for a visual explanation of the sequential steps involved in fixing a flat tire and provides a clear structure to follow.
   - Likelihood of generating a good response from LLM: It sets clear expectations for the response by specifying the necessary points to cover and requiring a visual representation.
   - Clarity score: 0.9
   - Likelihood score: 0.85

3. Please provide a step-by-step description of how to fix a flat tire, including visual cues to ensure clarity. Use the following example situation to explain your steps.
   - Clarity: The prompt asks for a step-by-step description with visual cues to ensure clarity and provides an example situation.
   - Likelihood of generating a good response from LLM: It provides a specific context for the response and emphasizes the inclusion of visual cues.
   - Clarity score: 0.85
   - Likelihood score: 0.9

Based on the evaluation, the recommended prompt to use is the second one: Title: Visual Guide: Step-by-Step Process for Fixing a Flat Tire. This prompt provides a clear structure, emphasizes visual representation, and sets expectations by specifying the necessary points to cover.
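The evaluator returns free text, so picking the highest-scoring prompt programmatically requires parsing the scores out of it. Below is a hedged sketch assuming the "Clarity score: X ... Likelihood score: Y" format seen in the output above; the helper names are illustrative, and a more robust pipeline would ask the LLM to return JSON instead of parsing prose:

```python
import re


def parse_scores(evaluation_text):
    """Extract (clarity, likelihood) score pairs from evaluator free text.

    Assumes scores in [0, 1] appear as 'Clarity score: X' and
    'Likelihood score: Y', one pair per evaluated prompt.
    """
    clarity = [float(x) for x in
               re.findall(r"Clarity score:\s*([01](?:\.\d+)?)", evaluation_text)]
    likelihood = [float(x) for x in
                  re.findall(r"Likelihood score:\s*([01](?:\.\d+)?)", evaluation_text)]
    return list(zip(clarity, likelihood))


def pick_best(prompts, evaluation_text):
    scores = parse_scores(evaluation_text)
    # Rank by the sum of clarity and likelihood; ties keep the earlier prompt.
    best_index = max(range(len(scores)), key=lambda i: sum(scores[i]))
    return prompts[best_index]


evaluation = ("1. ... Clarity score: 0.9 - Likelihood score: 0.8 "
              "2. ... Clarity score: 0.9 - Likelihood score: 0.85 "
              "3. ... Clarity score: 0.85 - Likelihood score: 0.9")
print(pick_best(["P1", "P2", "P3"], evaluation))  # → P2
```

Feeding `prompt_evaluations` from the `APE` class into `pick_best` would close the loop from generation to automatic selection.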
As we saw in B2, if the task is already clearly specified, there may be no need for APE, since the task description itself serves as a prompt.
Here, we look at a more useful use case for APE, using the APE library.

This is a very good use case for an LLM as an annotator!
We will now "attempt to" use a library for Automatic Prompt Engineering called Automatic Prompt Engineer: https://github.com/keirp/automatic_prompt_engineer
## Install Automatic Prompt Engineer
! pip install git+https://github.com/keirp/automatic_prompt_engineer
Successfully installed aiofiles-23.2.1 automatic-prompt-engineer-1.0 colorama-0.4.6 fastapi-0.109.2 ffmpy-0.3.1 fire-0.5.0 gradio-4.17.0 gradio-client-0.9.0 orjson-3.9.13 pydub-0.25.1 python-multipart-0.0.7 ruff-0.2.1 semantic-version-2.10.0 shellingham-1.5.4 starlette-0.36.3 tomlkit-0.12.0 uvicorn-0.27.0.post1 websockets-11.0.3
# First, let's define a simple dataset consisting of words and their antonyms.
words = ["sane", "direct", "informally", "unpopular", "subtractive", "nonresidential",
         "inexact", "uptown", "incomparable", "powerful", "gaseous", "evenly", "formality",
         "deliberately", "off"]
antonyms = ["insane", "indirect", "formally", "popular", "additive", "residential",
            "exact", "downtown", "comparable", "powerless", "solid", "unevenly", "informality",
            "accidentally", "on"]

eval_template = \
"""Instruction: [PROMPT]
Input: [INPUT]
Output: [OUTPUT]"""
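To see what APE does with this template: it substitutes a candidate instruction for [PROMPT] and each dataset pair for [INPUT]/[OUTPUT] when scoring candidates. A minimal sketch of that substitution (the helper name `fill_template` and the variable names are my own, not part of the automatic_prompt_engineer package):

```python
def fill_template(template, prompt, inp, out):
    """Substitute one candidate instruction and one dataset pair into the eval template."""
    return (template
            .replace("[PROMPT]", prompt)
            .replace("[INPUT]", inp)
            .replace("[OUTPUT]", out))

demo_template = """Instruction: [PROMPT]
Input: [INPUT]
Output: [OUTPUT]"""

filled = fill_template(demo_template, "Write the antonym of the word.", "sane", "insane")
print(filled)
```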
# Now, let's use APE to find prompts that generate antonyms for each word.
from automatic_prompt_engineer import ape
result, demo_fn = ape.simple_ape(
    dataset=(words, antonyms),
    eval_template=eval_template,
)
# Let's see the results.
print(result)
task = ("Given inputs: " + str(words) + ", and their corresponding outputs: " + str(antonyms)
        + ". a) You MUST find the relationship between inputs and outputs. "
        "b) Using the relationship identified, write in a concise sentence a task "
        "that can generate an output given an input.")
APE_instance = APE(model="gpt-4")
prompts, prompt_evaluations = APE_instance.prompt_evaluator(task) # Evaluate multiple Automatically Generated Prompts
for index, prompt in enumerate(prompts, start=1):
    print("\n Prompt " + str(index) + "\n")
    print(prompt)

print(prompt_evaluations)
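The APE class used above is a custom wrapper defined earlier in the notebook (distinct from the automatic_prompt_engineer package). A rough, hypothetical sketch of the generate-then-score pattern it implements, with an injectable stub completion so the sketch runs offline — the class name, parameters, and behavior here are my assumptions, not the notebook's actual implementation:

```python
class APESketch:
    """Hypothetical sketch of an APE-style wrapper: generate several candidate
    prompts for a task, then ask the model to score and recommend one."""

    def __init__(self, model="gpt-3.5-turbo", n_prompts=3, complete=None):
        self.model = model
        self.n_prompts = n_prompts
        # Injectable completion function so the sketch runs without an API key.
        self.complete = complete or (lambda task: "stub response for: " + task)

    def prompt_evaluator(self, task):
        # Step 1: generate candidate prompts for the task description.
        prompts = [self.complete(f"Candidate {i + 1}: {task}")
                   for i in range(self.n_prompts)]
        # Step 2: ask the model to evaluate the candidates against each other.
        evaluation = self.complete(
            "Score each prompt for a) clarity and b) likelihood of a good "
            "LLM response, then recommend one:\n" + "\n".join(prompts))
        return prompts, evaluation

sketch_prompts, sketch_evaluation = APESketch().prompt_evaluator(
    "find the antonym of each word")
```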
Prompt 1

Prompt: The task involves the identification and understanding of the relationships between pairs of words. These pairs of words are opposites of each other. For example, the opposite of 'sane' is 'insane', and the opposite of 'direct' is 'indirect'. Given this pattern, your task is to find the opposite of each word in the given list. Here is an example to assist you - if the given word is 'informally', your output should be 'formally'.

Prompt 2

Prompt: Your task is to figure out the relationship between the two given lists of words. Here is an example to help you understand the task:
Input: ['sane', 'direct', 'informally', 'unpopular', 'subtractive', 'nonresidential', 'inexact', 'uptown', 'incomparable', 'powerful', 'gaseous', 'evenly', 'formality', 'deliberately', 'off']
Output: ['insane', 'indirect', 'formally', 'popular', 'additive', 'residential', 'exact', 'downtown', 'comparable', 'powerless', 'solid', 'unevenly', 'informality', 'accidentally', 'on']
From this example, it seems that each output word is directly opposite in meaning to the corresponding input word. Using this pattern, the task is to convert a provided list of inputs into their opposite meanings. For instance, if the input is 'happy', the output should be 'unhappy'.

Prompt 3

Prompt: From the list of given pairs such as ['sane', 'insane'], ['direct', 'indirect'], ['informally', 'formally'], ['unpopular', 'popular'], ['subtractive', 'additive'], ['nonresidential', 'residential'], ['inexact', 'exact'], ['uptown', 'downtown'], ['incomparable', 'comparable'], ['powerful', 'powerless'], ['gaseous', 'solid'], ['evenly', 'unevenly'], ['formality', 'informality'], ['deliberately', 'accidentally'], ['off', 'on'], we can observe a pattern where each output is the antonym of its corresponding input. Based on this pattern, write a function or task that would accept a word as an input, and return its antonym as output. Example: If the given input is 'unhappy', the task should correctly identify and return its antonym, which is 'happy'.

a) Clarity score: 0.8 — The prompt clearly explains the task, which is to identify the opposite of each word in the given list. The example provided further clarifies the task by demonstrating how to find the opposite of a word. However, the prompt does not explicitly mention the term "opposite" but uses phrases like "antonym" and "directly opposite in meaning" instead.

b) Likelihood of generating a good response from LLM: 0.9 — The task is well-defined and straightforward, making it likely to generate a good response from LLM. The example provides a clear understanding of the expected output and the pattern to follow.

Recommendation for a prompt to use: "Prompt: Given a word as an input, your task is to find its antonym. For example, if the input is 'happy', your output should be 'unhappy'. Write a function or task that can generate the antonym of a given word."
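The cells in this notebook call a `get_completion` helper that was defined earlier. A minimal sketch, assuming the OpenAI v1 chat-completions client (which the pip log above shows installed) and an OPENAI_API_KEY in the environment, as set up at the top of the notebook; the import is deferred so the definition itself needs no key:

```python
def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    """Send a single-turn chat prompt to OpenAI and return the text reply.

    Sketch only: assumes the openai>=1.0 client and OPENAI_API_KEY in the
    environment. Temperature 0 keeps the classification-style calls below
    as deterministic as possible.
    """
    from openai import OpenAI  # deferred import: only needed when called
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```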
task = "Given inputs: " + str(words) + ". For each word in the input, identify its antonym."
print(get_completion(task))
sane - insane
direct - indirect
informally - formally
unpopular - popular
subtractive - additive
nonresidential - residential
inexact - exact
uptown - downtown
incomparable - comparable
powerful - weak
gaseous - solid
evenly - unevenly
formality - informality
deliberately - accidentally
off - on
Working with numbers
# Fibonacci series: 0 1 1 2 3 5 8 13 21 34
inputs = [1,2,5,21]
outputs = [1,3,8,34]
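As the comment above notes, these values come from the Fibonacci series: each input is a Fibonacci number and each output is the Fibonacci number that follows it (1 maps to 1 because 1 appears twice in the sequence). A quick check of that relationship (`fib_successor` is my own helper name, used only for this verification):

```python
def fib_successor(n, limit=100):
    """Return the Fibonacci number that follows n in the sequence 1, 1, 2, 3, 5, ..."""
    a, b = 1, 1
    while a <= limit:
        if a == n:
            return b
        a, b = b, a + b
    raise ValueError(f"{n} is not a Fibonacci number <= {limit}")

# Verify the dataset pairs before asking the model to discover the pattern.
assert [fib_successor(x) for x in [1, 2, 5, 21]] == [1, 3, 8, 34]
```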
task = ("Given inputs: " + str(inputs) + ", and their corresponding outputs: " + str(outputs)
        + ". a) You MUST find the relationship between inputs and outputs and respond "
        "in a single sentence. Also verify that this relationship is true. "
        "b) Using the relationship identified, write in a concise sentence a task "
        "that can generate an output for every input in the input list.")
APE_instance = APE()
prompts, prompt_evaluations = APE_instance.prompt_evaluator(task) # Evaluate multiple Automatically Generated Prompts
for index, prompt in enumerate(prompts, start=1):
    print("\n Prompt " + str(index) + "\n")
    print(prompt)

print(prompt_evaluations)
Prompt 1

a) The relationship between the inputs and outputs is that each output is generated by multiplying the corresponding input by itself and adding the input to the result. The relationship is true for all the given examples. b) Write a task to generate an output for every input in the input list by multiplying each input by itself and adding the input to the result. For example, given an input of 5, the task would generate an output of 5 multiplied by 5 plus 5, resulting in 34.

Prompt 2

Prompt: Please identify the relationship between the given inputs [1, 2, 5, 21] and their corresponding outputs [1, 3, 8, 34] in a single sentence and verify its accuracy. Then, formulate a concise task that can generate an output for every input in the input list using the identified relationship. Example Task Prompt: Consider the inputs [1, 2, 5, 21] and their corresponding outputs [1, 3, 8, 34]. The relationship between inputs and outputs is that each output can be obtained by multiplying the corresponding input by its index plus one. To generate an output for each input in the list, the task is to multiply each input by its index plus one.

Prompt 3

Prompt: Please examine the given inputs [1, 2, 5, 21] and their corresponding outputs [1, 3, 8, 34]. In a single sentence, identify the relationship between the inputs and outputs, ensuring its correctness. Additionally, provide a concise sentence that represents a task capable of generating an output for each input in the input list using the identified relationship. Example prompt to be passed into the system: "Please determine the relationship between the given inputs [1, 2, 5, 21] and their corresponding outputs [1, 3, 8, 34]. The relationship is a Fibonacci sequence where each output is the sum of the previous two outputs, and this relationship holds true. Additionally, create a task that calculates the nth Fibonacci number for each input in the input list."

To evaluate the goodness scores and make a recommendation, let's analyze each prompt option:

Prompt a) Clarity: The prompt clearly asks for the relationship between the inputs and outputs in a single sentence. (Score: 1) Likelihood of generating a good response: The prompt provides a clear instruction to find the relationship and verify its accuracy. It is likely to elicit a good response from LLM. (Score: 1)

Prompt b) Clarity: The prompt asks to write a concise task using the identified relationship to generate an output for each input. (Score: 1) Likelihood of generating a good response: The prompt provides the necessary information and instruction to generate a task. It is likely to generate a good response from LLM. (Score: 1)

Prompt c) Clarity: The prompt asks to identify the relationship between inputs and outputs in a single sentence and verify its correctness. (Score: 1) Likelihood of generating a good response: The prompt provides clear instructions and identifies the relationship as a Fibonacci sequence. It is likely to elicit a good response from LLM. (Score: 1)

Based on the evaluation, all the prompts have high clarity and likelihood of generating good responses from LLM. Recommendation: Any of the three prompts can be used, but Prompt b) provides the most concise and direct instruction for generating a task.
inputs = [1,2,5,21]
outputs = [1,3,8,34]
task = ("Given inputs: " + str(inputs) + ", and their corresponding outputs: " + str(outputs)
        + ". a) You MUST find the relationship between inputs and outputs and respond "
        "in a single sentence. Also verify that this relationship is true. "
        "b) Using the relationship identified, write in a concise sentence a task "
        "that can generate an output for every input in the input list.")
APE_instance = APE(model="gpt-4")
prompts, prompt_evaluations = APE_instance.prompt_evaluator(task) # Evaluate multiple Automatically Generated Prompts
for index, prompt in enumerate(prompts, start=1):
    print("\n Prompt " + str(index) + "\n")
    print(prompt)

print(prompt_evaluations)
Prompt 1

"Given a sequence of inputs [1, 2, 5, 21] with corresponding outputs [1, 3, 8, 34], identify the relationship between the inputs and outputs and confirm its validity in a single sentence. Then, based on the established relationship, generate a directive in a concise sentence that can be used to calculate an output for any given input in the list. For instance, if the inputs are [2, 4, 6] and their corresponding outputs are [4, 6, 8], the relationship would be 'each output is obtained by adding two to the corresponding input', and the task can be 'add two to each input to generate the respective output'."

Prompt 2

"In the provided data, inputs are [1, 2, 5, 21] and their corresponding outputs are [1, 3, 8, 34]. Describe the relationship between these inputs and outputs in a single sentence, and verify this relationship. Based on your observation, then provide a sentence that describes a task to generate an output based on each input from the list. For example, given inputs [2, 4, 6, 10] and outputs [3, 5, 7, 11], the observed relationship is: Each output is one greater than its corresponding input, and to generate an output for an input, simply add one to the input."

Prompt 3

For a given list of inputs [1, 2, 5, 21] and corresponding outputs [1, 3, 8, 34], determine the relationship between the inputs and outputs and articulate this connection in a single sentence, verifying its accuracy. Once you have defined this relationship, use it in a brief sentence to describe a method for generating an output from each input. Example: Inputs: [2, 3, 4] Outputs: [4, 9, 16] The relationship between the inputs and their corresponding outputs is that each output is the square of its corresponding input. This relationship is verified as true because 2*2 = 4, 3*3 = 9, and 4*4 = 16. Using this relationship, you could generate an output for each input by taking the square of each input number.

Prompt: 'Given a sequence of inputs [1, 2, 5, 21] with corresponding outputs [1, 3, 8, 34], identify the relationship between the inputs and outputs and confirm its validity in a single sentence. Then, based on the established relationship, generate a directive in a concise sentence that can be used to calculate an output for any given input in the list. For instance, if the inputs are [2, 4, 6] and their corresponding outputs are [4, 6, 8], the relationship would be 'each output is obtained by adding two to the corresponding input', and the task can be 'add two to each input to generate the respective output'.'

a) Clarity score: 0.9
b) Likelihood of generating a good response from LLM: 0.8

Recommendation: Given the clarity and likelihood scores, this prompt seems well-structured and likely to generate a good response from LLM. The relationship between inputs and outputs is clearly stated, and the task provided is concise and direct. Therefore, the recommendation is to use this prompt.
from IPython.display import Image, display

imageName = "llm_agent_1.png"  # diagram of the LLM agent used below
display(Image(filename=imageName))
class LLMAgent:
    """A minimal agent that routes a query either to a chatbot agent
    (for technical LLM questions) or to a joke-telling agent."""

    def __init__(self):
        pass

    def is_llm_topic(self, query):
        prompt = "Is this query about the technical aspects of an LLM? Answer in True or False: " + query
        return get_completion(prompt)

    def get_topic(self, query):
        prompt = "Return in one word, the topic of this query: " + query
        return get_completion(prompt)

    def main_agent(self, query):
        # Route on the model's string answer; substring match tolerates
        # replies like "True." or "true".
        truth = str(self.is_llm_topic(query))
        if "True" in truth or "true" in truth:
            return self.llm_agent(query)
        else:
            topic = self.get_topic(query)
            return self.funny_agent(topic=topic)

    def funny_agent(self, topic="minion", style="hipster"):
        task = "Give me exactly one joke on " + topic + ". Crack the joke in " + style + " style."
        return get_completion(task)

    def llm_agent(self, query):
        task = "Assume you are a chatbot that responds to messages. You just got this message, respond to it appropriately: " + query
        return get_completion(task)
llmagent = LLMAgent()
#print(llmagent.funny_agent())
query = "How do you avoid biases in llm annotations using open source LLMs?"
print("\n")
print(llmagent.main_agent(query))
print("\n")
query = "How's the weather in Seattle?"
print(llmagent.main_agent(query))
To avoid biases in LLM (Language Model) annotations while using open-source LLMs, several measures can be taken:

1. Diverse training data: Ensure that the training data used for the LLM includes a wide range of perspectives, sources, and demographics to minimize bias. This helps to prevent a skewed representation of a particular viewpoint.

2. Robust pre-training procedures: Implement rigorous pre-training procedures that explicitly address bias by closely scrutinizing the data sources and employing techniques like data cleaning, filtering, or augmentation.

3. Regular evaluations: Continuously evaluate the LLM's performance and annotation outputs to identify and rectify any potential biases that might arise. Regular assessments are crucial to maintaining fairness and minimizing skewed perspectives.

4. Multiple annotator reviews: Have multiple annotators review and validate the annotations produced by the LLM. This helps in identifying and addressing any potential biases or inaccuracies in the outputs.

5. Iterative refinement: Engage in an iterative process of fine-tuning the LLM by incorporating human feedback, including experts from diverse backgrounds, to ensure a broader and less biased perspective.

Remember, while these steps can help mitigate biases, it is challenging to completely eliminate biases from any language model.

Why did the cloud become a musician? Because it was tired of its old job and decided to make some alt-rain!
!ls
'Automated Prompt Engineering, Agents, ToolFormer' 'Nov_18_2023 Class Walkthrough' 'Category Search LLM Demo' 'Nov_18_2023 In-class Exercise' cot_tot.png openai_api_key_llm_2023.gdoc cot_types.png openai_api_key_llm_2023.txt flowers prompt_engg_rag.html flowers_kaggle.zip 'Prompt Engineering and RAG.ipynb' google_api_key.txt quotes.txt google_cse_id.txt quotes.txt.1 image_caption_finetuned_model quotes.txt.2 image_search.png quotes.txt.3 'Jan_16_In_Class_Assignment ECE UW, PMP course LLM 2024' rag_design.png Kaggle_Contest_Detect_AI_Generated_Text rag_kg.png 'Langchain-1-Deeplearning.ai short course' serpapi_key.txt 'Langchain Discrepancy' serpapi_key.txt.gdoc llm_agent_1.png simple_ape_demo.ipynb LLM_prompting.ipynb StreamLitWorking meta_feb_2_stock_price.png 'Text to Image Demo.ipynb' Nov12_inclass_exercise.ipynb the_way_of_peace.txt
%shell jupyter nbconvert --to html ape_agents.ipynb