I've been thinking about a talk I'm planning to give later this summer on AI and creativity. I'm still working on the presentation, but I wanted to share a few brief thoughts on AI prompt engineering since it's pivotal to how we interface with large language models (LLMs) like ChatGPT. And honestly, writing my thoughts out helps me think through the talk.
Prompt engineering involves more than just drafting quick prompts for working with AI large language models; it's a process that can be, and often is, iterative (much like the writing process). For instance, the same prompt can generate different outputs, and the same output can be generated with different prompts. So it's a process that also requires a lot of thought, care, curiosity, and perhaps even patience.
In short, prompts are what trigger an LLM to generate text. They're the inputs we provide to the model, which then predicts the most likely sequence of text to follow; more specifically, a prompt can include instructions, context, or examples for the model to follow. Inputs are most often text but can also be audio, and outputs don't have to match the input format: some models accept text and generate audio or even images.
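To make that input/output loop concrete, here's a minimal sketch of sending a single text prompt to a chat model with the OpenAI Python SDK. The SDK usage reflects my understanding of the v1+ client, and the model name is just a placeholder for whichever model you have access to.

```python
# Minimal sketch: send one text prompt, get one text completion back.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap in whatever model you use
    messages=[
        {"role": "user", "content": "In one sentence, what is prompt engineering?"}
    ],
)

print(response.choices[0].message.content)
```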
Below are some simple prompt examples. They're not the holy grail by any stretch, but each influences the output text the LLM generates from the user input a bit differently (there's a short code sketch after them showing one way the templates might get filled in).
Question: {the question}
Answer: {the AI's answer/response}
Instruction: Answer the question based on the context below. Keep your response concise.
Context: {your custom context}
Question: {the question}
Answer: {the AI's answer/response}
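And here's a rough sketch, in plain Python, of how that second template might be assembled before it's sent to a model; the context and question below are made up purely for illustration.

```python
def build_prompt(context: str, question: str) -> str:
    """Fill in the instruction/context/question template shown above."""
    return (
        "Instruction: Answer the question based on the context below. "
        "Keep your response concise.\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Hypothetical values, just to show the shape of the final prompt.
prompt = build_prompt(
    context="My upcoming talk explores how writers can use LLMs as creative collaborators.",
    question="What are two ways an LLM could support a writer's drafting process?",
)
print(prompt)
```

The point isn't the code itself; it's that the template gives the model some structure to latch onto.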
The more vague or general the prompt, the more likely the model will generate a response that is also vague or general. The more specific the prompt, the more likely the model will generate a response that is also specific. Similarly, the more examples (or context) provided in the prompt, the more likely the model will generate a response that is similar to the examples provided.
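A quick way to see that in practice is to stack a handful of examples in front of the real question, the so-called few-shot style of prompting. The sketch below is only illustrative, and the example pairs are made up.

```python
# Made-up question/answer pairs that serve as in-context examples.
examples = [
    ("What instrument has 88 keys?", "The piano."),
    ("What instrument has six strings?", "The guitar."),
]

question = "What instrument has four strings and is played with a bow?"

# Stack the examples ahead of the real question so the model mirrors their format.
few_shot_prompt = "\n\n".join(
    [f"Question: {q}\nAnswer: {a}" for q, a in examples]
    + [f"Question: {question}\nAnswer:"]
)
print(few_shot_prompt)
```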
The benefit of providing context varies; for instance, you might want to give the LLM context about the domain you're working in, such as research. Depending on your needs, you may benefit from providing context, or you may be fine without it. Again, prompt engineering is an iterative process.
While there's so much more to unpack and dig into with prompt engineering and LLMs, such as zero-shot learning, few-shot learning, hallucination, temperature, and more, I'll save that for a future post. For now, I'll underscore that prompt engineering helps us steer the LLM we're using to cater to diverse queries, from research and play to writing and beyond, much like in the context of my upcoming talk: the intersection of AI and creativity.
Prompt on 🤘🏽