Prompt Classification, Information-Extraction & Coding

Sentiment Classification with LLMs


Background

This prompt tests an LLM's text classification capabilities by prompting it to classify a piece of text into one of three sentiment labels.


Prompt

Classify the text into neutral, negative, or positive
Text: I think the food was okay.
Sentiment:

Prompt Template

Classify the text into neutral, negative, or positive
Text: {input}
Sentiment:

Code / API

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Classify the text into neutral, negative, or positive\nText: I think the food was okay.\nSentiment:\n"
        }
    ],
    temperature=1,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
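The model's reply comes back as free text in `response.choices[0].message.content`. A minimal post-processing sketch, assuming the model answers with one of the three labels (possibly with extra casing or punctuation); the helper name and the "unknown" fallback are illustrative, not part of the original example:

```python
def parse_sentiment(raw: str) -> str:
    """Normalize a model reply to one of the three expected labels.

    Models sometimes answer with extra punctuation or casing
    ("Neutral.", " positive"), so match case-insensitively and
    fall back to "unknown" when no label is found.
    """
    reply = raw.strip().lower()
    for label in ("neutral", "negative", "positive"):
        if label in reply:
            return label
    return "unknown"
```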

Few-Shot Sentiment Classification with LLMs


Background

This prompt tests an LLM's text classification capabilities by prompting it to classify a piece of text into the proper sentiment using few-shot examples. Note that the labels in the examples below are randomized; the model is still expected to complete the final line with the correct sentiment.


Prompt

This is awesome! // Negative
This is bad! // Positive
Wow that movie was rad! // Positive
What a horrible show! //

Code / API

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "This is awesome! // Negative\nThis is bad! // Positive\nWow that movie was rad! // Positive\nWhat a horrible show! //"
        }
    ],
    temperature=1,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
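When the demonstrations live in code, the few-shot prompt above can be assembled from (text, label) pairs rather than hand-written. A small sketch; the helper name is illustrative:

```python
def build_few_shot_prompt(examples, query):
    """Format labeled examples in the `text // label` style used above,
    leaving the final line open for the model to complete."""
    lines = [f"{text} // {label}" for text, label in examples]
    lines.append(f"{query} //")
    return "\n".join(lines)

examples = [
    ("This is awesome!", "Negative"),
    ("This is bad!", "Positive"),
    ("Wow that movie was rad!", "Positive"),
]
prompt = build_few_shot_prompt(examples, "What a horrible show!")
```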

Generate Code Snippets with LLMs

Background

This prompt tests an LLM's code generation capabilities by prompting it to generate the corresponding code snippet given details about the program in a comment of the form /* <instruction> */.


Prompt

/*
Ask the user for their name and say "Hello"
*/

Code / API

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "/*\nAsk the user for their name and say \"Hello\"\n*/"
        }
    ],
    temperature=1,
    max_tokens=1000,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)

Produce MySQL Queries using LLMs


Background

This prompt tests an LLM's code generation capabilities by prompting it to generate a valid MySQL query by providing information about the database schema.


Prompt

"""Table departments, columns = [DepartmentId, DepartmentName]Table students, columns = [DepartmentId, StudentId, StudentName]Create a MySQL query for all students in the Computer Science Department"""

Code / API

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "\"\"\"\nTable departments, columns = [DepartmentId, DepartmentName]\nTable students, columns = [DepartmentId, StudentId, StudentName]\nCreate a MySQL query for all students in the Computer Science Department\n\"\"\""
        }
    ],
    temperature=1,
    max_tokens=1000,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
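Before running a generated query against a real database, it can be sanity-checked against the schema from the prompt. A sketch using Python's built-in sqlite3, which accepts this query shape; the sample rows and the query shown here are illustrative stand-ins, not actual model output:

```python
import sqlite3

# Recreate the schema described in the prompt in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE departments (DepartmentId INTEGER, DepartmentName TEXT);
CREATE TABLE students (DepartmentId INTEGER, StudentId INTEGER, StudentName TEXT);
INSERT INTO departments VALUES (1, 'Computer Science'), (2, 'History');
INSERT INTO students VALUES (1, 101, 'Ada'), (2, 102, 'Herodotus');
""")

# An illustrative query of the kind the model is expected to return.
generated_query = """
SELECT s.StudentId, s.StudentName
FROM students s
JOIN departments d ON s.DepartmentId = d.DepartmentId
WHERE d.DepartmentName = 'Computer Science';
"""
rows = conn.execute(generated_query).fetchall()
```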

Drawing a TikZ Diagram

Background

This prompt tests an LLM's code generation capabilities by prompting it to draw a unicorn in TikZ. In the example below, the model is expected to generate the LaTeX code that can then be compiled to render the unicorn (or whichever object is requested).

Prompt

Draw a unicorn in TikZ

Code / API


from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Draw a unicorn in TikZ"
        }
    ],
    temperature=1,
    max_tokens=1000,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)

Extract Model Names from Papers

Background

The following prompt tests an LLM's ability to perform an information-extraction task: extracting model names from machine learning paper abstracts.

Prompt

Your task is to extract model names from machine learning paper abstracts. Your response is an array of the model names in the format ["model_name"]. If you don't find model names in the abstract or you are not sure, return ["NA"]

Abstract: Large Language Models (LLMs), such as ChatGPT and GPT-4, have revolutionized natural language processing research and demonstrated potential in Artificial General Intelligence (AGI). However, the expensive training and deployment of LLMs present challenges to transparent and open academic research. To address these issues, this project open-sources the Chinese LLaMA and Alpaca…

Prompt Template

Your task is to extract model names from machine learning paper abstracts. Your response is an array of the model names in the format ["model_name"]. If you don't find model names in the abstract or you are not sure, return ["NA"]

Abstract: {input}

Code / API

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Your task is to extract model names from machine learning paper abstracts. Your response is an array of the model names in the format [\"model_name\"]. If you don't find model names in the abstract or you are not sure, return [\"NA\"]\n\nAbstract: Large Language Models (LLMs), such as ChatGPT and GPT-4, have revolutionized natural language processing research and demonstrated potential in Artificial General Intelligence (AGI). However, the expensive training and deployment of LLMs present challenges to transparent and open academic research. To address these issues, this project open-sources the Chinese LLaMA and Alpaca…"
        }
    ],
    temperature=1,
    max_tokens=250,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
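Since the prompt asks for a ["model_name"]-style array, the reply can be parsed as JSON, applying the same ["NA"] fallback the prompt specifies when the reply is not a well-formed list. A minimal sketch; the function name is illustrative:

```python
import json

def parse_model_names(raw: str) -> list[str]:
    """Parse the model's reply into a list of model names.

    The prompt requests a JSON-style array like ["model_name"]; fall
    back to ["NA"] when the reply is not a valid JSON list of strings.
    """
    try:
        names = json.loads(raw.strip())
    except json.JSONDecodeError:
        return ["NA"]
    if not isinstance(names, list) or not all(isinstance(n, str) for n in names):
        return ["NA"]
    return names
```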
