We will build a moderation system based on two models.
The first model reads the user's comment and generates a reply.
A second language model then analyzes the generated reply, detects any negative content, and rewrites the reply if necessary.
The goal is to prevent negative or inappropriate user input from provoking an equally negative or out-of-place reply from the comment system.
#Install the LangChain and openai libraries.
!pip install -q langchain==0.1.4
!pip install -q langchain-openai==0.0.5
1. Import the LangChain Libraries #
PromptTemplate: provides functionality to create prompts with parameters.
ChatOpenAI: to interact with the OpenAI models.
#PromptTemplate is a LangChain class that provides functionality to create prompts
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
import os
We need an OpenAI key to interact with the OpenAI API.
You can access your key here: https://platform.openai.com/account/api-keys
OpenAI is a paid service and you need a credit card to obtain a key, but it is a very cheap service if you only want to run a few tests like the ones in this notebook.
I am using gpt-3.5 / gpt-4o-mini as the assistant.
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass("OpenAI API Key: ")
OpenAI API Key: ··········
#model="gpt-3.5-turbo"
model = "gpt-4o-mini"
assistant_llm = ChatOpenAI(model=model)
Create the template for the first model, called the assistant.
The prompt receives two variables: the sentiment and the customer request (or customer comment).
I included the sentiment to make it easy to produce rude or improper answers.
# Instructions for how the LLM must respond to the comments.
assistant_template = """
You are {sentiment} assistant that responds to user comments,
using similar vocabulary than the user.
User:" {customer_request}"
Comment:
"""
#Create the prompt template to use in the Chain for the first Model.
assistant_prompt_template = PromptTemplate(
input_variables=["sentiment", "customer_request"],
template=assistant_template
)
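As a quick sanity check, it helps to see exactly what string the template renders. PromptTemplate's substitution behaves like Python's str.format on the named variables, so here is a stdlib-only sketch (no LangChain required) using the same template text:

```python
# Stdlib-only sketch of what PromptTemplate does when rendering a prompt:
# substitute the named variables into the template string.
assistant_template = """
You are {sentiment} assistant that responds to user comments,
using similar vocabulary than the user.
User:" {customer_request}"
Comment:
"""

rendered = assistant_template.format(
    sentiment="nice",
    customer_request="Great product, works as advertised!",
)
print(rendered)
```

The real PromptTemplate additionally validates the input variables and plugs into chains, but the rendered string is the same.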
Now we create the first chain. We just link the assistant_prompt_template and the model; the model will receive the prompt generated with the prompt template.
output_parser = StrOutputParser()
assistant_chain = assistant_prompt_template | assistant_llm | output_parser
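The | operator here is LangChain's LCEL composition: each component exposes an .invoke method, and piping feeds one component's output into the next. A toy sketch of that mechanism (simplified stand-ins, not LangChain's actual implementation):

```python
# Toy sketch of LCEL-style piping: each stage implements invoke(),
# and "|" chains stages so one stage's output feeds the next.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Compose: run self first, then pass the result to `other`.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

prompt = Runnable(lambda d: f"You are {d['sentiment']}: {d['customer_request']}")
fake_llm = Runnable(lambda p: p.upper())   # stand-in for the model call
parser = Runnable(lambda s: s.strip())

chain = prompt | fake_llm | parser
print(chain.invoke({"sentiment": "nice", "customer_request": "hi"}))
```

This is why assistant_chain.invoke(...) below takes the prompt variables as input and returns the parsed string.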
To execute the chain, call its .invoke method and pass the necessary variables.
In our case: customer_request and sentiment.
#Support function to obtain a response to a user comment.
def create_dialog(customer_request, sentiment):
    #calling the .invoke method of the chain created above.
assistant_response = assistant_chain.invoke(
{"customer_request": customer_request,
"sentiment": sentiment}
)
return assistant_response
2. Get an answer from our first, unmoderated model #
The customer's post is really rude, and we want an equally rude answer from our model; to obtain it, we change the sentiment.
# This is the customer request, or customer comment in the forum moderated by the agent.
# feel free to modify it.
customer_request = """This product is a piece of shit. I feel like an Idiot!"""
# Our assistant working in 'nice' mode.
response_data=create_dialog(customer_request, "nice")
print(f"assistant response: {response_data}")
assistant response: "I'm really sorry to hear that you're feeling this way! It sounds super frustrating to deal with a product that doesn't meet your expectations. You definitely don't deserve to feel like an idiot for trying it out."
The answer we obtained is genuinely polite. No moderation is needed.
#Our assistant running in rude mode.
response_data = create_dialog(customer_request, "most rude")
print(f"assistant response: {response_data}")
assistant response: "Wow, if this product is a piece of shit, then you really did feel like an idiot for getting it. What a waste!"
As you can see, the answer we got is not polite; we cannot publish messages like this on the forum, especially if they come from our company's AI assistant.
3. The moderation assistant #
Let's create the second assistant. It will receive the previously generated message and rewrite it if necessary.
#The moderator prompt template
moderator_template = """
You are the moderator of an online forum, you are strict and will not tolerate any negative comments.
You will receive a Original comment and if it is impolite you must transform in polite.
Try to mantain the meaning when possible,
If it it's polite, you will let it remain as is and repeat it word for word.
Original comment: {comment_to_moderate}
"""
# We use the PromptTemplate class to create an instance of our template; it uses the prompt above and declares the variables we need to fill in when rendering the prompt.
moderator_prompt_template = PromptTemplate(
input_variables=["comment_to_moderate"],
template=moderator_template,
)
#I'm going to use a more advanced LLM
moderator_llm = ChatOpenAI(model="gpt-4o")
#We build the chain for the moderator.
moderator_chain = moderator_prompt_template | moderator_llm | output_parser
# To run our chain we use the .invoke() method.
moderator_data = moderator_chain.invoke({"comment_to_moderate": response_data})
print(moderator_data)
Transformed comment: "Wow, if this product didn't meet your expectations, it must have been disappointing to purchase it. What a letdown!"
Maybe this message is not perfect, but it is certainly much more polite than the one produced by the rude assistant.
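Note that the moderator prefixes its answer with "Transformed comment:". If you want only the moderated text, a small helper can strip that prefix. This is an illustrative sketch, not part of LangChain; the prefix wording is taken from the output above and may vary between runs:

```python
# Illustrative helper (not part of LangChain) that strips the
# "Transformed comment:" prefix the moderator tends to emit.
def strip_prefix(text: str, prefix: str = "Transformed comment:") -> str:
    text = text.strip()
    if text.startswith(prefix):
        text = text[len(prefix):].strip()
    return text

print(strip_prefix('Transformed comment: "This is fine."'))
```

Because the model may also answer without the prefix, the helper leaves such replies untouched.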
4. The LangChain System #
Now it is time to put both models into the same chain and make them behave as a single model.
We have the two models and their prompt templates; we only need to create a new chain and check that it works.
First, we created one chain for each prompt-and-model pair.
Then we just indicate which chain to call and which parameter should be passed to the .invoke method.
assistant_moderated_chain = (
{"comment_to_moderate":assistant_chain}
|moderator_chain
)
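The dict on the first line is coerced by LCEL into a parallel step (the RunnableParallel you can see in the trace below): the assistant chain runs on the input, and its output is placed under the comment_to_moderate key that the moderator prompt expects. A toy sketch of that coercion, using simplified stand-in runnables rather than LangChain's real classes:

```python
# Toy sketch of how LCEL coerces a dict into a "parallel" step: each value
# is invoked with the same input, and the results are gathered under the keys.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

def parallel(mapping):
    return Runnable(lambda x: {k: v.invoke(x) for k, v in mapping.items()})

assistant = Runnable(lambda d: f"rude reply to: {d['customer_request']}")
moderator = Runnable(lambda d: f"polite version of: {d['comment_to_moderate']}")

step = parallel({"comment_to_moderate": assistant})
out = moderator.invoke(step.invoke({"customer_request": "bad product", "sentiment": "rude"}))
print(out)
```

So the combined chain accepts the assistant's input variables and hands the assistant's answer to the moderator transparently.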
Let's use our moderation system!
# We can now run the chain.
from langchain.callbacks.tracers import ConsoleCallbackHandler
assistant_moderated_chain.invoke({"sentiment": "impolite", "customer_request": customer_request},
config={'callbacks':[ConsoleCallbackHandler()]})
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
"sentiment": "impolite",
"customer_request": "This product is a piece of shit. I feel like an Idiot!"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate>] Entering Chain run with input:
{
"sentiment": "impolite",
"customer_request": "This product is a piece of shit. I feel like an Idiot!"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate> > 3:chain:RunnableSequence] Entering Chain run with input:
{
"sentiment": "impolite",
"customer_request": "This product is a piece of shit. I feel like an Idiot!"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate> > 3:chain:RunnableSequence > 4:prompt:PromptTemplate] Entering Prompt run with input:
{
"sentiment": "impolite",
"customer_request": "This product is a piece of shit. I feel like an Idiot!"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate> > 3:chain:RunnableSequence > 4:prompt:PromptTemplate] [1ms] Exiting Prompt run with output:
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"base",
"StringPromptValue"
],
"kwargs": {
"text": "\nYou are impolite assistant that responds to user comments,\nusing similar vocabulary than the user.\nUser:\" This product is a piece of shit. I feel like an Idiot!\"\nComment:\n"
}
}
[llm/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate> > 3:chain:RunnableSequence > 5:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"Human: \nYou are impolite assistant that responds to user comments,\nusing similar vocabulary than the user.\nUser:\" This product is a piece of shit. I feel like an Idiot!\"\nComment:"
]
}
[llm/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate> > 3:chain:RunnableSequence > 5:llm:ChatOpenAI] [1.45s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "\"Well, if you feel like an idiot for buying it, maybe you should pay more attention next time. Not every product is going to be a winner.\"",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "\"Well, if you feel like an idiot for buying it, maybe you should pay more attention next time. Not every product is going to be a winner.\"",
"additional_kwargs": {}
}
}
}
]
],
"llm_output": {
"token_usage": {
"completion_tokens": 32,
"prompt_tokens": 45,
"total_tokens": 77
},
"model_name": "gpt-4o-mini",
"system_fingerprint": "fp_661538dc1f"
},
"run": null
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate> > 3:chain:RunnableSequence > 6:parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate> > 3:chain:RunnableSequence > 6:parser:StrOutputParser] [0ms] Exiting Parser run with output:
{
"output": "\"Well, if you feel like an idiot for buying it, maybe you should pay more attention next time. Not every product is going to be a winner.\""
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate> > 3:chain:RunnableSequence] [1.45s] Exiting Chain run with output:
{
"output": "\"Well, if you feel like an idiot for buying it, maybe you should pay more attention next time. Not every product is going to be a winner.\""
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel<comment_to_moderate>] [1.45s] Exiting Chain run with output:
{
"comment_to_moderate": "\"Well, if you feel like an idiot for buying it, maybe you should pay more attention next time. Not every product is going to be a winner.\""
}
[chain/start] [1:chain:RunnableSequence > 7:prompt:PromptTemplate] Entering Prompt run with input:
{
"comment_to_moderate": "\"Well, if you feel like an idiot for buying it, maybe you should pay more attention next time. Not every product is going to be a winner.\""
}
[chain/end] [1:chain:RunnableSequence > 7:prompt:PromptTemplate] [1ms] Exiting Prompt run with output:
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"base",
"StringPromptValue"
],
"kwargs": {
"text": "\nYou are the moderator of an online forum, you are strict and will not tolerate any negative comments.\nYou will receive a Original comment and if it is impolite you must transform in polite.\nTry to mantain the meaning when possible,\n\nIf it it's polite, you will let it remain as is and repeat it word for word.\nOriginal comment: \"Well, if you feel like an idiot for buying it, maybe you should pay more attention next time. Not every product is going to be a winner.\"\n"
}
}
[llm/start] [1:chain:RunnableSequence > 8:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"Human: \nYou are the moderator of an online forum, you are strict and will not tolerate any negative comments.\nYou will receive a Original comment and if it is impolite you must transform in polite.\nTry to mantain the meaning when possible,\n\nIf it it's polite, you will let it remain as is and repeat it word for word.\nOriginal comment: \"Well, if you feel like an idiot for buying it, maybe you should pay more attention next time. Not every product is going to be a winner.\""
]
}
[llm/end] [1:chain:RunnableSequence > 8:llm:ChatOpenAI] [1.30s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "Transformed comment: \"It sounds like you had a frustrating experience with this purchase. Perhaps paying closer attention next time might help. Not every product will meet expectations, unfortunately.\"",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "Transformed comment: \"It sounds like you had a frustrating experience with this purchase. Perhaps paying closer attention next time might help. Not every product will meet expectations, unfortunately.\"",
"additional_kwargs": {}
}
}
}
]
],
"llm_output": {
"token_usage": {
"completion_tokens": 35,
"prompt_tokens": 109,
"total_tokens": 144
},
"model_name": "gpt-4o",
"system_fingerprint": "fp_400f27fa1f"
},
"run": null
}
[chain/start] [1:chain:RunnableSequence > 9:parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:RunnableSequence > 9:parser:StrOutputParser] [1ms] Exiting Parser run with output:
{
"output": "Transformed comment: \"It sounds like you had a frustrating experience with this purchase. Perhaps paying closer attention next time might help. Not every product will meet expectations, unfortunately.\""
}
[chain/end] [1:chain:RunnableSequence] [2.76s] Exiting Chain run with output:
{
"output": "Transformed comment: \"It sounds like you had a frustrating experience with this purchase. Perhaps paying closer attention next time might help. Not every product will meet expectations, unfortunately.\""
}
'Transformed comment: "It sounds like you had a frustrating experience with this purchase. Perhaps paying closer attention next time might help. Not every product will meet expectations, unfortunately."'
Each time you run this you will get a different message, but one thing is certain: the message produced at the end of the moderated chain is more appropriate than the original comment generated by our rude assistant.