What is a Prompt?
A prompt is the input given to an LLM to guide its output. For example, if you enter "Write a poem about trees" or "Translate the following into Simplified Chinese", the model will generate the corresponding poem or translation. Prompts are not limited to text; they can also take other forms such as images or audio.
Prompt Methods
In-Context Learning (ICL) is a powerful technique that lets a large language model (LLM) learn to perform a new task from example context supplied in the prompt, without changing the model's parameters.
Few-Shot Prompting is a technique commonly used with LLMs: a small number of examples is included in the prompt to guide the model through a task. These in-prompt examples help the model understand the task requirements and produce output that matches expectations.
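To make this concrete, here is a minimal sketch of assembling a few-shot prompt as a string. The sentiment-classification task and the example pairs are illustrative assumptions, not taken from the article; the point is only the shape: labeled examples first, then the new query left for the model to complete.

```python
def build_few_shot_prompt(examples, query):
    """Concatenate labeled input/output pairs, then the new query."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The final "Sentiment:" is left blank for the model to fill in.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("Great food, friendly staff.", "Positive"),
    ("Cold soup and slow service.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "The dessert was wonderful.")
print(prompt)
```

The resulting string would be sent to the model as-is; the examples carry the task definition instead of an elaborate instruction.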
Chain of Thought (CoT) is a prompting technique for improving LLM performance: it guides the model to think and reason step by step so that it produces more accurate and coherent answers on complex tasks. By mimicking how humans solve problems in stages, it strengthens the model's reasoning ability.
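A common zero-shot form of this is simply asking the model to reason before answering. The sketch below contrasts a direct prompt with a CoT-style prompt; the arithmetic question is a made-up placeholder, not an example from the article.

```python
question = "A shop sells pens at 3 yuan each. How much do 7 pens cost?"

# Direct prompting: the model is expected to answer immediately.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompting: the model is asked to reason in steps
# before committing to a final answer.
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step, then give the final answer "
    "on a line starting with 'Answer:'."
)

print(cot_prompt)
```

On multi-step problems, the second form tends to elicit intermediate reasoning that makes the final answer easier to verify.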
How can pseudo-code be used to control an LLM?
When a task gets complex, for example requiring the model to produce a specific JSON format, or involving multiple branches where each branch runs several interrelated sub-tasks, the prompting methods above no longer fit well. A prompt is, in essence, a control instruction for the LLM. So how can we use pseudo-code to precisely control the LLM's output and define its execution logic?
Pseudo-code is a concise, informal way of describing an algorithm or program logic in natural language. It neither follows strict programming syntax nor belongs to any particular programming language. Its purpose is to make algorithm design and programming ideas easier to understand and communicate, without getting bogged down in the details of a specific language.
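As a quick illustration (my own example, not from the article), the logic "collect the even numbers from a list and report their sum" might read in pseudo-code as:

```
function sum_of_evens(numbers):
    total = 0
    for each n in numbers:
        if n is even:
            add n to total
    return total
```

Nothing here would compile, yet any reader, and an LLM, can follow the intent. That gap between precision of logic and looseness of syntax is exactly what makes pseudo-code useful in prompts.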
Please split the sentences into short segments, no more than 1 line (less than 80 characters, ~10 English words) each.
Please keep each segment meaningful, e.g. split from punctuations, "and", "that", "where", "what", "when", "who", "which" or "or" etc if possible, but keep those punctuations or words for splitting.
Do not add or remove any words or punctuation marks.
Input is an array of strings.
Output should be a valid json array of objects, each object contains a sentence and its segments.
Array<{
  sentence: string;
  segments: string[];
}>
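Because the prompt pins down a machine-readable schema, the model's reply can be checked programmatically. Below is a hedged sketch (my own code, not from the article) of validating a reply against the Array<{sentence, segments}> shape, including the prompt's rule that splitting must not add or remove words or punctuation.

```python
import json

def validate_segments(raw: str):
    """Parse the model's reply and check it against the expected schema."""
    data = json.loads(raw)  # raises ValueError if the reply is not valid JSON
    assert isinstance(data, list), "output must be a JSON array"
    for item in data:
        assert isinstance(item["sentence"], str)
        assert isinstance(item["segments"], list)
        # Re-joining the segments must reproduce the sentence; we ignore
        # spaces so that whitespace at split points is not penalized.
        joined = "".join(item["segments"]).replace(" ", "")
        assert joined == item["sentence"].replace(" ", ""), "words were added or removed"
    return data

sample = '[{"sentence": "Hello world, and goodbye.", "segments": ["Hello world,", " and goodbye."]}]'
print(validate_segments(sample))
```

If validation fails, a common pattern is to feed the error message back to the model and ask for a corrected reply.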
Organizing a subtitle transcript with pseudo-code
Reorganizing a subtitle transcript is a relatively complex task. Imagine writing a program for it: there would be many steps, such as first extracting the chapters, then the speakers, and finally organizing the dialogue by chapter and speaker. With pseudo-code, we can decompose the task into sub-tasks; for each sub-task we do not even have to write concrete code, only describe its execution logic clearly. The model then executes these sub-tasks step by step and assembles the results into the final output.
The prompt defines functions that extract the pieces we need, such as subject, speakers, chapters, and paragraphs:

Your task is to re-organize video transcripts for readability, and recognize speakers for multi-person dialogues. Here is the pseudo-code for how to do it:
def extract_subject(transcript):
    # Find the subject in the transcript and return it as a string.

def extract_chapters(transcript):
    # Find the chapters in the transcript and return them as a list of strings.

def extract_speakers(transcript):
    # Find the speakers in the transcript and return them as a list of strings.

def find_paragraphs_and_speakers_in_chapter(chapter):
    # Find the paragraphs and speakers in a chapter and return them as a list of tuples.
    # Each tuple contains the speaker and their paragraphs.

def format_transcript(transcript):
    # extract the subject, speakers, chapters and print them
    subject = extract_subject(transcript)
    print("Subject:", subject)
    speakers = extract_speakers(transcript)
    print("Speakers:", speakers)
    chapters = extract_chapters(transcript)
    print("Chapters:", chapters)

    # format the transcript
    formatted_transcript = f"# {subject}\n\n"
    for chapter in chapters:
        formatted_transcript += f"## {chapter}\n\n"
        paragraphs_and_speakers = find_paragraphs_and_speakers_in_chapter(chapter)
        for speaker, paragraphs in paragraphs_and_speakers:
            # if there are multiple speakers, print the speaker's name before each paragraph
            if len(speakers) > 1:
                formatted_transcript += f"{speaker}:"
            for paragraph in paragraphs:
                formatted_transcript += f" {paragraph}\n\n"
        formatted_transcript += "\n\n"
    return formatted_transcript

print(format_transcript($user_input))
Below is pseudo-code for generating images. Follow its logic and draw the images with DALL-E:
images_prompts = [
  {
    style: "Kawaii",
    prompt: "Draw a cute dog",
    aspectRatio: "Wide"
  },
  {
    style: "Realistic",
    prompt: "Draw a realistic dog",
    aspectRatio: "Square"
  }
]
images_prompts.forEach((image_prompt) => {
  print("Generating image with style: " + image_prompt.style + " and prompt: " + image_prompt.prompt + " and aspect ratio: " + image_prompt.aspectRatio)
  image_generation(image_prompt.style, image_prompt.prompt, image_prompt.aspectRatio);
})
With pseudo-code, we can control the LLM's output and define its execution logic far more precisely than with natural-language descriptions alone. For complex tasks, and especially for tasks with multiple branches where each branch runs several interrelated sub-tasks, describing the prompt in pseudo-code is clearer and more accurate.