In this notebook we explore small prompt-engineering techniques and tips that help us get responses from the model that better fit our needs.
!pip install -q openai==1.1.1
# If you need an API key from OpenAI:
#https://platform.openai.com/account/api-keys
import openai
openai.api_key="your-openai-key"
model = "gpt-3.5-turbo"
#model = "gpt-4o"
1 Formatting the Answer with Few-Shot Samples
To get the model's response in a specific format we have several options, and one of the most convenient is to use Few-Shot Samples: presenting the model with pairs of user queries and example responses.
Large models like GPT-3.5 respond well to the examples provided, adapting their responses to the specified format.
Depending on the number of examples given, this technique is called Zero-Shot, One-Shot, or Few-Shot.
One shot should be enough, and it is advisable to use six at most. It is important to remember that this information is passed with every query and takes up space in the input prompt.
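Since the shots travel with every request, it can be worth measuring how many tokens they cost. Below is a minimal sketch using the tiktoken library (an assumption: it is installed separately; cl100k_base is the encoding used by the gpt-3.5-turbo family):
import tiktoken

# Encoding used by the gpt-3.5-turbo / gpt-4 family of models.
enc = tiktoken.get_encoding("cl100k_base")

shots = """Who won the 2010 f1 championship?
Driver: Sebastian Vettel.
Team: Red Bull Renault."""

# Rough estimate: tokens spent on the examples alone, before any user question.
print(len(enc.encode(shots)), "tokens used by the shots")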
# Function to call the model.
def return_OAIResponse(user_message, context):
    # Work on a copy so the caller's context list is not mutated.
    newcontext = context.copy()
    newcontext.append({'role':'user', 'content':"question: " + user_message})
    response = openai.chat.completions.create(
        model=model,
        messages=newcontext,
        temperature=1,
    )
    return response.choices[0].message.content
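The helper calls the API with temperature=1, which keeps answers varied. When format stability matters more than variety, a lower temperature is usually safer. A minimal variant under that assumption (the name return_OAIResponse_deterministic is ours, for illustration):
def return_OAIResponse_deterministic(user_message, context):
    # Same call as above, but temperature=0 makes the formatting more reproducible.
    newcontext = context.copy()
    newcontext.append({'role':'user', 'content':"question: " + user_message})
    response = openai.chat.completions.create(
        model=model,
        messages=newcontext,
        temperature=0,
    )
    return response.choices[0].message.content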
In this zero-shot prompt we get the correct answer, but without formatting, since the model includes whatever information it sees fit.
#zero-shot
context_user = [
    {'role':'system', 'content':'You are an expert in F1.'}
]
print(return_OAIResponse("Who won the F1 2010?", context_user))
Sebastian Vettel won the F1 2010 World Championship driving for Red Bull Racing.
Large, capable models like GPT-3.5 and GPT-4 only need one shot to learn the output format we expect.
#one-shot
context_user = [
    {'role':'system', 'content':
     """You are an expert in F1.
     Who won the 2000 f1 championship?
     Driver: Michael Schumacher.
     Team: Ferrari."""}
]
print(return_OAIResponse("Who won the F1 2011?", context_user))
Driver: Sebastian Vettel.
Team: Red Bull Racing.
Smaller models, or more complicated formats, may need more than one shot. Below is a sample with two shots.
#Few shots
context_user = [
    {'role':'system', 'content':
     """You are an expert in F1.
     Who won the 2010 f1 championship?
     Driver: Sebastian Vettel.
     Team: Red Bull Renault.
     Who won the 2009 f1 championship?
     Driver: Jenson Button.
     Team: BrawnGP."""}
]
print(return_OAIResponse("Who won the F1 2006?", context_user))
Driver: Fernando Alonso.
Team: Renault.
We have been creating the prompt without using OpenAI's roles, and as we have seen, it worked correctly.
However, the proper way to do this is to build the prompt using these roles, which makes the model's learning process more effective.
By not feeding it the whole prompt as a single system command, we let the model learn from the conversation, which is more practical for it.
# Recommended solution
context_user = [
    {'role':'system', 'content':'You are an expert in F1.\n\n'},
    {'role':'user', 'content':'Who won the 2010 f1 championship?'},
    {'role':'assistant', 'content':"""Driver: Sebastian Vettel. \nTeam: Red Bull. \nPoints: 256. """},
    {'role':'user', 'content':'Who won the 2009 f1 championship?'},
    {'role':'assistant', 'content':"""Driver: Jenson Button. \nTeam: BrawnGP. \nPoints: 95. """},
]
print(return_OAIResponse("Who won the F1 2019?", context_user))
Driver: Lewis Hamilton.
Team: Mercedes.
Points: 413.
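With many examples, writing the role-based context by hand gets tedious; it can also be built programmatically from (question, answer) pairs. A minimal sketch (the helper name build_few_shot_context is ours, not part of the OpenAI SDK):
def build_few_shot_context(system_msg, examples):
    # Turn (question, answer) pairs into alternating user/assistant messages.
    context = [{'role': 'system', 'content': system_msg}]
    for question, answer in examples:
        context.append({'role': 'user', 'content': question})
        context.append({'role': 'assistant', 'content': answer})
    return context

examples = [
    ('Who won the 2010 f1 championship?',
     'Driver: Sebastian Vettel.\nTeam: Red Bull.\nPoints: 256.'),
    ('Who won the 2009 f1 championship?',
     'Driver: Jenson Button.\nTeam: BrawnGP.\nPoints: 95.'),
]
context_user = build_few_shot_context('You are an expert in F1.', examples)
print(return_OAIResponse("Who won the F1 2019?", context_user))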
We could also address this task with a more classical prompt, describing what we want and the format we want it in.
However, it is important to understand that in this case the model follows instructions, whereas when using shots it learns, in real time, during inference.
context_user = [
    {'role':'system', 'content':"""You are an expert in F1.
    You are going to answer the question of the user giving the name of the driver,
    the name of the team and the points of the champion, following the format:
    Driver:
    Team:
    Points: """
    }
]
print(return_OAIResponse("Who won the F1 2019?", context_user))
Driver: Lewis Hamilton
Team: Mercedes
Points: 413
Few-Shot samples can also be used for classification.
context_user = [
    {'role':'system', 'content':
     """You are an expert in reviewing product opinions and classifying them as positive or negative.
     It fulfilled its function perfectly, I think the price is fair, I would buy it again.
     Sentiment: Positive
     It didn't work bad, but I wouldn't buy it again, maybe it's a bit expensive for what it does.
     Sentiment: Negative
     I wouldn't know what to say, my son uses it, but he doesn't love it.
     Sentiment: Neutral
     """}
]
print(return_OAIResponse("I'm not going to return it, but I don't plan to buy it again.", context_user))
Sentiment: Neutral
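To run this classifier over a batch of reviews, you can loop and parse the label out of the reply. A minimal sketch (it assumes the model keeps to the 'Sentiment: <label>' format, which the shots encourage but do not guarantee):
reviews = [
    "Great value, works exactly as advertised.",
    "Broke after two days, very disappointed.",
]

for review in reviews:
    answer = return_OAIResponse(review, context_user)
    # Assumes the reply contains 'Sentiment: <label>'.
    label = answer.split("Sentiment:")[-1].strip().rstrip(".")
    print(f"{label}: {review}")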