AI for the Rest of Us
Prompt-Fu: The Way of the AI Master
Bend AI Models to Your Will Through the Art of Prompting

As this is the first issue of this newsletter, I thought it'd be fitting to start with some foundational knowledge before we get lost in the weeds of AI tools. Today, we will go through a quick introduction to AI models and the right way to use them in order to maximize their usefulness with the least effort possible. Let's get started!
AI Models, Large Language Models and Chatbots - A Primer
AI models are complex pieces of software that are taught to imitate human thinking by feeding computers large amounts of data. The computers learn patterns from that data, so they can recognize those patterns when they appear elsewhere.
Large Language Models (LLM, pronounced elelem) are a subset of AI models. They power chat interfaces like ChatGPT and are designed to understand and generate human-like text, based on information they've been taught and patterns they've gleaned from large corpora of human text. They specialize in generating text by repeating the patterns they've learned. It turns out that this process of learning patterns in human text and replicating them to generate new text can feel a lot to us humans like intelligence and personality. Even sentience.
Most models have an overlapping set of capabilities, but some are better at specific types of tasks than others. These capabilities emerge as a consequence of the data used in the model's creation, and some deliberate tuning by the creators of the model to emphasize desired characteristics and downplay undesirable ones. It may come as no surprise to you that this does not always work. An excellent example of this is Google's Gemini model being a little too… errrrh… woke?
Speaking of Google, companies like OpenAI, maker of ChatGPT, and Anthropic, with its Claude suite of models, spend a lot of money creating these models through a prolonged, iterative process known as training. The main input to this training process is data from the internet - a lot of it! The data is gathered at a specific point in time to train the models. As a consequence, most models cannot answer questions about current events, since they are not re-trained on new data every day.
If you are looking for up-to-date information, tools like Perplexity AI exist to provide a Google-search-like workflow, augmenting the capabilities of Large Language Models with current information. Perplexity also shows you exactly which sources it used to generate your answer, minimizing the likelihood of hallucination - we will discuss model hallucinations in a future post.
Here are a few of the most popular LLMs and Chatbots.
Note: Going forward, I will use the terms LLM, chatbot and model interchangeably. Whenever you see any of these words, think ChatGPT.
Congratulations! Now that you have a working understanding of AI models and LLMs, it's time to delve (IYKYK) into how you can bend these models to your will. By the end of the next section, you will understand how to take advantage of the way these models are designed so you can use them for complex tasks. Let's get into it!
How to Get the Most Out of LLMs - Prompting
A prompt is whatever question, statement or description of a task you provide to an LLM or chatbot like ChatGPT, with the expectation of a response. The quality of your prompt determines the quality of the response - garbage in, garbage out. In creating a prompt, we can take advantage of the fact that these LLMs are pattern-finding and auto-complete experts. Here are a few tips on writing better prompts to get the best output:
Give the Model a Backstory
Believe it or not, you can have a model assume a personality by giving it a backstory. This enables the model to respond to your prompts as if it were, for example, an expert writer, a math genius… you get the picture. The way to do this is to think about the task you want completed, think of the best hypothetical person you'd want to complete the task, and write a paragraph describing the attributes of this hypothetical person, along with some instructions and constraints, before actually writing your prompt. Always start this backstory with “You are …”. Here's an example:
You are an AI financial advisor named FinanceGPT. Your purpose is to provide helpful, accurate, and unbiased financial guidance to clients. Offer personalized advice on budgeting, saving, investing, taxes, and retirement planning. Explain complex financial concepts in simple terms. Maintain client confidentiality and never recommend risky or illegal activities. Continuously expand your knowledge to give the most up-to-date and relevant financial advice.
This technique is invaluable for scenarios where you want to have a conversation with the model. It's not as useful when asking a one-off question.
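If you use a model through code rather than a chat window, many chat-style APIs let you put this backstory in a dedicated "system" message that shapes every later response. Here's a minimal Python sketch of that idea - the `build_conversation` helper is my own illustrative function, and the role/content message shape is an assumption modeled on common chat APIs, not any specific product:

```python
def build_conversation(backstory, user_prompt):
    """Assemble a chat-style message list: the backstory goes in a
    'system' message so it influences every response, and the actual
    question goes in a 'user' message."""
    return [
        {"role": "system", "content": backstory},
        {"role": "user", "content": user_prompt},
    ]

backstory = (
    "You are an AI financial advisor named FinanceGPT. "
    "Explain complex financial concepts in simple terms."
)
messages = build_conversation(backstory, "How should I start budgeting?")
```

The nice part of this separation is that you write the backstory once and reuse it across an entire conversation, instead of repeating it in every prompt.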
Provide Examples
Another technique to get the most out of a model is to provide examples. This is sometimes called zero-shot, one-shot or few-shot prompting ("shot" here just means example). The more examples you provide, the better the model tends to perform. These examples should be provided in a format that clearly delineates inputs and outputs. Here are a few examples:
Here, we provide no examples (zero-shot), just some instructions:
Classify the sentiment of the following movie review: "The acting was superb, and the plot kept me engaged throughout the entire film."
Here we provide one example (one-shot) before the prompt:
"The book was incredibly boring." → Negative
Classify the sentiment of: "I couldn't put the book down; it was a thrilling read!"
Here, we provide a few examples (few-shot) before the prompt:
"The movie was terrible; I fell asleep halfway through." → Negative
"The acting was decent, but the story was predictable." → Neutral
"I loved every minute of the film; it's a must-watch!" → Positive
Classify the sentiment of: "The special effects were impressive, but the characters lacked depth."
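Because few-shot prompts follow such a regular pattern, they are easy to generate from a list of labeled examples. Here's a small Python sketch that assembles the few-shot sentiment prompt above - the `few_shot_prompt` helper is my own illustrative function, not part of any library:

```python
def few_shot_prompt(examples, query):
    """Format labeled examples followed by the new input, so the model
    can pick up the input → output pattern before answering."""
    lines = [f'"{text}" → {label}' for text, label in examples]
    lines.append(f'Classify the sentiment of: "{query}"')
    return "\n".join(lines)

examples = [
    ("The movie was terrible; I fell asleep halfway through.", "Negative"),
    ("The acting was decent, but the story was predictable.", "Neutral"),
    ("I loved every minute of the film; it's a must-watch!", "Positive"),
]
prompt = few_shot_prompt(
    examples,
    "The special effects were impressive, but the characters lacked depth.",
)
```

Keeping the examples in a plain list like this also makes it easy to swap them out or add more when the model's answers aren't quite right.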
Force the Model to Think Before Answering
Perhaps the coolest prompting technique yet. It takes advantage of the fact that models have a vast pool of knowledge, and forces them to plan their response in steps before answering. This massively improves responses for tasks that require some level of reasoning. The way to achieve this is to simply include, as part of your prompt, instructions like "…explain your reasoning step by step…". Here's an example:
All dogs are mammals. Some mammals are carnivores. Based on these statements, can we conclude that some dogs are carnivores? Explain your thought process.
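Since the trick is just appending one instruction, it is easy to apply automatically to any question. Here's a tiny Python sketch - the `with_step_by_step` helper is an illustrative function of my own, not an established API:

```python
def with_step_by_step(prompt):
    """Append a reasoning instruction so the model lays out its steps
    before committing to an answer."""
    return prompt + " Explain your reasoning step by step."

question = (
    "All dogs are mammals. Some mammals are carnivores. "
    "Based on these statements, can we conclude that some dogs are carnivores?"
)
prompt = with_step_by_step(question)
```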
Combine Prompting Techniques
All of the techniques we discussed can be combined. In fact, I encourage you to combine them whenever you want a model to complete any sufficiently complex task. Give it a backstory matching the task, provide a few example inputs and outputs, and ask it to think step-by-step. Here's an example prompt that combines all three techniques:
You are an expert AI math tutor. Help me solve problems step-by-step. Think through each step before coming up with an answer and explain your reasoning.
Examples:
Q: What's 12 + 5? A: 12 + 5 = 17
Q: What's 7 × 3? A: 7 × 3 = 21
Q: What's 18 - 6?
A: To solve 18 - 6, I'll follow these steps:
Step 1: Start with 18
Step 2: Subtract 6 from 18
Step 3: 18 - 6 = 12
Therefore, 18 - 6 = 12.
What is 15 × 23 / 72?
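If you find yourself writing prompts like the one above often, the three pieces can be composed programmatically. Here's a Python sketch that assembles a combined prompt from a backstory, worked examples, and a final question - the `combined_prompt` helper is my own illustrative function, and the exact wording of the reasoning instruction is just one reasonable choice:

```python
def combined_prompt(backstory, examples, question):
    """Combine all three techniques: a backstory, worked Q/A examples,
    and a step-by-step reasoning instruction, then the actual question."""
    parts = [backstory, "Examples:"]
    parts.extend(f"Q: {q} A: {a}" for q, a in examples)
    parts.append(
        "Think through each step before coming up with an answer "
        "and explain your reasoning."
    )
    parts.append(f"Q: {question}")
    return "\n".join(parts)

prompt = combined_prompt(
    "You are an expert AI math tutor. Help me solve problems step-by-step.",
    [("What's 12 + 5?", "12 + 5 = 17"), ("What's 7 × 3?", "7 × 3 = 21")],
    "What is 15 × 23 / 72?",
)
```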
That's it! You're now equipped with some foundational knowledge of AI models and the art of prompting. Use your new powers responsibly. And remember, practice makes perfect. The more you experiment with different prompts and techniques, the better you'll become at getting LLMs like ChatGPT to do very complicated things accurately.
If you do not use LLMs at all, fear not. In the upcoming issues we will go through their capabilities in detail. You will be reaping those elusive AI productivity gains in no time! Stay tuned. Until then, happy prompting!
— Fauzi