Few-Shot Prompting | Use Cases & Examples

Few-shot prompting is a technique where you provide an AI model with examples (usually 2–5) before asking it to perform a similar task. The goal is to steer the model’s behavior by showing it the pattern you want, instead of relying on instructions alone.

This approach is particularly useful with large language models (LLMs), allowing them to infer patterns and produce more accurate outputs. Few-shot prompting is used in a variety of applications, including classifying text, rewriting content in a specific style or tone, and generating code snippets.

Few-shot prompting example
Classify the sentiment of each message as Positive, Neutral, or Negative:

Message: I love croissants!
Sentiment: Positive

Message: Donuts are okay.
Sentiment: Neutral

Message: I can’t stand muffins.
Sentiment: Negative

Message: Bagels are delicious.
Sentiment:

Shot-based prompting

Shot-based prompting refers to the practice of providing the AI model with varying numbers of examples to guide its responses. The word “shot” refers to an example.

Engineers can use shot-based prompting to instruct the AI models they’re working on. Generative AI users can also use this method to get more precise results from chatbots like QuillBot’s AI Chat or ChatGPT.

What is zero-shot prompting?

Zero-shot prompting involves giving the AI only instructions, with zero examples. The AI relies entirely on its pre-trained knowledge to perform the task. This method works well for straightforward tasks or when the AI is expected to understand general instructions.

Zero-shot prompting example
Classify the sentiment of each message as Positive, Neutral, or Negative:

Message: Bangkok is an amazing city.
Sentiment:

Zero-shot prompting is fast and simple and generally fine for simple tasks, assuming the AI model has been trained with enough data beforehand. For more nuanced or complex tasks, on the other hand, zero-shot prompting may be less accurate than one-shot or few-shot prompting.
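
In code, a zero-shot prompt is nothing more than the instruction plus the new input. Below is a minimal Python sketch (the task wording is illustrative, and the print call stands in for sending the string to whatever model or chat interface you use):

# Zero-shot: instructions only, no examples.
instruction = "Classify the sentiment of each message as Positive, Neutral, or Negative:"
new_message = "Bangkok is an amazing city."

# The model must rely entirely on its pre-trained knowledge to complete this.
prompt = f"{instruction}\n\nMessage: {new_message}\nSentiment:"
print(prompt)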

What is one-shot prompting?

One-shot prompting provides the AI with one example before asking it to complete the task. This example acts as a reference, showing the format and expected output.

One-shot prompting example
Classify the sentiment of each message as Positive, Neutral, or Negative:

Message: Bangkok is an amazing city.
Sentiment: Positive

Message: Barcelona is okay—pretty but too many tourists.
Sentiment:

One-shot prompting often improves accuracy compared to zero-shot because the AI sees a specific example of the task.

Few-shot prompting

Few-shot prompting builds on zero- and one-shot prompting: this time, you provide the AI with several examples.

What is few-shot prompting?

Few-shot prompting provides the AI model with a small set of examples (usually 2–5), giving it enough context to recognize patterns while still allowing flexibility. This approach strikes a balance between guidance and adaptability, making it an effective technique for many tasks.

Few-shot prompting example
Classify the sentiment of each message as Positive, Neutral, or Negative:

Message: Bangkok is an amazing city.
Sentiment: Positive

Message: Barcelona is okay—pretty but too many tourists.
Sentiment: Neutral

Message: I hated Brussels.
Sentiment: Negative

Message: Istanbul is fascinating.
Sentiment:
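
Programmatically, a few-shot prompt is just that same pattern assembled into a single string. Here is a minimal Python sketch (the helper name build_few_shot_prompt is made up for illustration); note that passing a single pair in examples yields a one-shot prompt, and an empty list yields a zero-shot prompt:

# Assemble a few-shot prompt from (input, label) example pairs plus a new input.
def build_few_shot_prompt(instruction, examples, new_message):
    blocks = [f"Message: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Message: {new_message}\nSentiment:")  # left open for the model to complete
    return instruction + "\n\n" + "\n\n".join(blocks)

examples = [
    ("Bangkok is an amazing city.", "Positive"),
    ("Barcelona is okay—pretty but too many tourists.", "Neutral"),
    ("I hated Brussels.", "Negative"),
]
print(build_few_shot_prompt(
    "Classify the sentiment of each message as Positive, Neutral, or Negative:",
    examples,
    "Istanbul is fascinating.",
))  # Reproduces the prompt shown above, ready to send to an LLM.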

How few-shot prompting works

Few-shot prompting relies on in-context learning (or “learning by example”), where the model infers the task and how to structure its output from the examples provided. Each example acts as a micro-training signal inside the prompt.

Step-by-step, few-shot prompting works like this:

  1. You give examples. Each includes an “input” and a corresponding “output.” In the first example above, the input would be “Bangkok is an amazing city,” and the output would be “Positive.”
  2. The AI model analyzes patterns across examples. It looks for repeating patterns, like the label types and formatting you’ve used. It also looks for any implicit rules present in the examples (e.g., tone, level of detail, and reasoning steps).
  3. The model infers the task. Based on the patterns it identifies in the analysis step, the AI infers what you want it to do next.
  4. The model generalizes the pattern. The AI next attempts to generalize based on the examples you’ve given it. In other words, it anticipates how it could apply the logic and constraints present in the examples to other inputs it hasn’t “seen” yet.
  5. You give a new input. Sometimes this is included in the few-shot prompt itself, in which case the AI addresses it after analyzing the completed examples. In other cases, people send the completed examples first and then provide the new input in a follow-up message.
  6. The model generates an output in the same format. By following the patterns from the examples, the model produces the final result that matches the structure and logic you’ve demonstrated.

This approach works because LLMs are trained on vast amounts of text and can generalize patterns. But while LLMs can “remember” broad knowledge from their training, they don’t automatically know which pattern you want in a specific situation.

This is why few-shot prompting helps: it narrows the space of possible behaviors by providing concrete examples. Essentially, you are telling the AI, “From all the behaviors you could choose, here is the one I want. Do it just like this.”

Note
Under the hood, LLMs rely on contextual inference. This means that during generation, the model predicts the next token (unit of text) based on the entire prompt—including all examples—and attempts to replicate the pattern that statistically best fits those examples.
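
A minimal end-to-end sketch of this flow, assuming the OpenAI Python SDK purely as a stand-in (any chat-style LLM API works the same way, since the examples travel inside the prompt and the model continues the pattern it infers):

from openai import OpenAI  # pip install openai; assumes the OPENAI_API_KEY environment variable is set

client = OpenAI()

prompt = """Classify the sentiment of each message as Positive, Neutral, or Negative:

Message: Bangkok is an amazing city.
Sentiment: Positive

Message: I hated Brussels.
Sentiment: Negative

Message: Istanbul is fascinating.
Sentiment:"""

# The model predicts the continuation token by token, conditioned on the whole
# prompt, so it reproduces the Message/Sentiment pattern the examples establish.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g., "Positive"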

Applications of few-shot prompting

Few-shot prompting is versatile and can be applied across multiple domains, especially those that rely on creative outputs or specialized knowledge.

Applications of few-shot prompting

Application | What it does | Why few-shot works well
Text classification | Assigns labels like sentiment (e.g., “positive”), topic, urgency, or category (e.g., “spam”) | Examples show the model the label set and the formatting you want.
Information extraction | Pulls structured data (e.g., dates, names, entities, product specs) from unstructured text | Examples demonstrate the exact desired fields and output format.
Style and tone transformation | Rewrites content to match a specific tone, style, or persona | Examples show the target voice more precisely than instructions alone.
Summarization | Produces summaries with consistent length, structure, or emphasis | Examples set the expected summary length and level of detail.
Machine translation | Translates text while preserving terminology or style conventions | Examples guide the model toward domain-specific vocabulary or translation norms that generic instructions can’t specify precisely.
Code generation | Produces functions, snippets, or patterns in a consistent style | Examples teach formatting, naming, and logic patterns.
Data formatting or normalization | Converts messy text into consistent formats (e.g., dates, lists, addresses) | Examples demonstrate strict formatting rules exactly as you want them.
Domain adaptation | Makes the model behave like it is tuned for a niche field (legal, medical, finance) | Examples encode the norms, terminology, and expected output conventions of the domain.
Creative content generation | Produces copy or other content in a specific style or format (e.g., marketing content) | Examples signal the desired creativity level, structure, and stylistic boundaries.
Specialized domain adaptation | Adjusts outputs to discipline-specific requirements such as regulatory language, technical standards, or scientific phrasing | Examples restrict the model to field-appropriate terminology and precision.
Question-answering systems | Generates answers in a specific response style (concise, detailed, structured, citation-style, etc.) | Examples indicate the expected answer format and depth, reducing ambiguity.
Conversational scenarios | Produces dialogue that follows a tone, persona, or interaction style (e.g., AI customer support) | Examples demonstrate how the conversation should flow, including pacing, politeness level, and role behavior.
Tip
Before getting started with few-shot prompting, it might be worth checking if there’s an AI tool designed specifically for the task you want to complete. For example, QuillBot’s Paraphraser can rewrite text according to your desired tone, and Translate can translate text between 50+ languages.

If task-specific tools don’t give you the output you’re looking for, then it may be worth trying few-shot prompting with an LLM (e.g., AI Chat or ChatGPT) to show the AI examples of exactly what you’re looking for.

Benefits and limitations of few-shot prompting

Few-shot prompting has benefits and limitations that you should keep in mind before using it.

Some of the benefits of few-shot prompting are:

  • Requires fewer examples than traditional training
  • Provides flexible, real-time guidance
  • Generally more accurate than zero- or one-shot prompting
  • Helps the model adapt to a variety of tasks without retraining

On the other hand, some of the limitations of few-shot prompting are:

  • Example quality directly affects the accuracy of output
  • Complex tasks may require additional techniques like chain-of-thought prompting
  • The model might focus on superficial patterns rather than understanding the task
  • May cause the model to overfit its output to the examples (i.e., copying the examples too rigidly instead of generalizing the underlying pattern)

Best practices for few-shot prompting

If you want to try few-shot prompting, here are some best practices you should follow to get optimal results:

  • Use 2–5 examples. Using too few examples doesn’t give the model enough information to infer the pattern, while using too many introduces noise and increases the risk of overfitting. Most LLMs perform best when the examples are minimal but representative.
  • Keep examples short, clear, and clean. Avoid verbosity and complex or ambiguous examples. Check your grammar and spelling to make sure the text you provide is easy to understand. Format all examples the same way so as not to confuse the AI model. Even small inconsistencies (e.g., shifting formats or variations in tone) can cause the model to mix patterns or produce unpredictable outputs.
  • Include diverse examples. This is especially important when the task has variation. For example, if you’re classifying sentiment, don’t only show examples that are obviously positive or negative. Include borderline cases, neutral statements, mixed-emotion statements, and even ambiguous inputs. This prevents the model from assuming that only extreme sentiments exist and reduces the chance that it will force new inputs into the limited patterns shown in the prompt.
  • Test the model’s performance and adjust. Run multiple inputs through your prompt and look for consistent errors, like misclassifications, formatting drift, or overfitting. Then tweak the examples, reorder them, or replace weak ones until the model reliably produces the behavior you want. Few-shot prompting is iterative, and small adjustments often lead to substantial gains in output quality; a minimal testing loop is sketched after this list.
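
As a sketch of that test-and-adjust loop, the snippet below runs a small labeled test set through a few-shot prompt and reports the misses. It reuses the OpenAI Python SDK as an illustrative stand-in, and the test cases are made up:

from openai import OpenAI  # assumes the OPENAI_API_KEY environment variable is set

client = OpenAI()

FEW_SHOT = """Classify the sentiment of each message as Positive, Neutral, or Negative:

Message: Bangkok is an amazing city.
Sentiment: Positive

Message: Barcelona is okay—pretty but too many tourists.
Sentiment: Neutral

Message: I hated Brussels.
Sentiment: Negative

Message: {message}
Sentiment:"""

# Deliberately mixed and borderline cases, per the diversity advice above.
test_cases = [
    ("Istanbul is fascinating.", "Positive"),
    ("The hotel was fine, nothing special.", "Neutral"),
    ("Never flying with that airline again.", "Negative"),
]

misses = 0
for message, expected in test_cases:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": FEW_SHOT.format(message=message)}],
    )
    predicted = response.choices[0].message.content.strip()
    if predicted != expected:
        misses += 1
        print(f"MISS: {message!r} -> {predicted} (expected {expected})")

print(f"{len(test_cases) - misses}/{len(test_cases)} correct")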

Few-shot prompting examples

Here are some examples showing how few-shot prompting works in practice across real-world applications. To demonstrate how these work with an LLM, all of the examples were run through QuillBot’s AI Chat; the final output in each example is the model’s response.

Information extraction

Information extraction is a common use case for few-shot prompting, allowing AI models to pull structured data from unstructured text. By providing a few examples, you can teach the model to extract data in exactly the format you want.

Few-shot prompting for information extraction example
Extract the relevant details and display them in a table:

Input: John Smith bought 3 tickets for the concert on 11/12/2025.

Output:

Name | Tickets | Date
John Smith | 3 | 11/12/2025

Input: Jordan Keats bought 5 tickets for the concert on 11/27/2025.

Output:

Name | Tickets | Date
Jordan Keats | 5 | 11/27/2025

Input: Sarah Lee purchased 2 items on 10/11/2025.

Output:

Name | Tickets | Date
Sarah Lee | 2 | 10/11/2025
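
When an extraction prompt like this is run from code, the model’s pipe-delimited reply is straightforward to convert into structured data. Here is a minimal Python sketch of that parsing step (the reply string is hard-coded to stand in for a real model response):

# Parse a pipe-delimited table like the one the model returns above.
reply = """Name | Tickets | Date
Sarah Lee | 2 | 10/11/2025"""

header, *rows = [line.split("|") for line in reply.splitlines()]
keys = [cell.strip() for cell in header]
records = [dict(zip(keys, (cell.strip() for cell in row))) for row in rows]
print(records)  # [{'Name': 'Sarah Lee', 'Tickets': '2', 'Date': '10/11/2025'}]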

Style and tone transformation

Style and tone transformation is a powerful application of few-shot prompting, enabling AI models to rewrite text according to a desired voice or level of formality. You can show the AI how to consistently produce output in the same style for new sentences.

Few-shot prompting for style and tone transformation example
Rewrite the following sentences in a formal tone:

Input: Hey, can you send me that report?

Output: Could you please send me the report?

 

Input: Do you wanna grab lunch later?

Output: Would you like to get lunch together later?

 

Input: I’m gonna be late to the meeting.

Output: I will be late to the meeting.

Code generation

Few-shot prompting can also be applied to code generation, helping AI models produce functions, scripts, or snippets that follow a consistent style and logic. You can guide the model to use the correct syntax, naming conventions, and structure for new code inputs.

Few-shot prompting for code generation example
Input: Calculate the area of a rectangle with width 5 and height 10

Output: function rectangleArea(width, height) { return width * height; }

 

Input: Calculate the area of a triangle with base 8 and height 6

Output: function triangleArea(base, height) { return (base * height) / 2; }

 

Input: Calculate the area of a circle with radius 7

Output: function circleArea(radius) { return Math.PI * radius * radius; }

Creative text generation

Creative text generation is a popular application of few-shot prompting, allowing AI models to produce original content while following your specific guidelines. Your examples guide the AI to maintain consistent mood, structure, and creativity across new outputs.

Few-shot prompting for creative text generation example
Complete a short poem in a happy tone:

Input: Roses are red, violets are blue,

Output: Sunflowers smile, and skies are bright too.

 

Input: There once was a girl from Kentucky,

Output: And in love she was very lucky.

 

Input: Rain falls softly, clouds above,

Output: Bringing fresh hope and kindness with love.

Few-shot learning

Few-shot learning is a machine learning approach that enables AI models to learn and generalize from a small number of examples. This is different from traditional training, which requires massive datasets. Few-shot learning attempts to mimic humans’ ability to learn new concepts after receiving minimal examples.

Few-shot learning underlies few-shot prompting. While few-shot prompting refers specifically to giving examples within a prompt to guide an AI model, few-shot learning is the general ability of a model to learn and generalize from a small number of examples.

In other words, few-shot learning is the broad capability of learning from limited examples, while few-shot prompting is one specific technique for drawing on that capability in LLMs.

Frequently asked questions about few-shot prompting

What is the difference between zero-shot and few-shot prompting?

Zero-shot prompting is when an AI model is given only instructions with no examples, relying entirely on its pre-trained knowledge. Few-shot prompting, on the other hand, provides the AI with 2–5 examples to show the desired pattern or output.

Few-shot prompting often produces more accurate results for complex tasks because the model can infer the structure and formatting from examples.

Want to try zero-shot or few-shot prompting today? Try it with QuillBot’s AI Chat.
