In-Context Learning in Large Language Models (LLMs)
How AI Learns on the Fly
In-context learning is like giving an AI a crash course right in the moment—show it how, and it does the rest without ever retraining.
Imagine you’re teaching a friend how to tie a tie. Instead of sending them to a course or having them read a manual, you do a quick demo right before a big event. They watch you, then immediately try it themselves. That’s a lot like in-context learning (ICL) for large language models (LLMs).
In-context learning (ICL) is a capability of large language models (LLMs) that allows them to perform new tasks without any updates to their internal parameters. Instead, the model "learns" directly from examples or instructions provided within the input prompt during inference.
In other words, ICL is when an LLM learns how to do a task—like classifying movie reviews or translating a sentence—just by seeing examples and instructions within your prompt. No extra training, no updates to the model’s internal “brain”. It’s as if the model says, “Okay, got it”, and then tries to replicate that behavior right away.
In-Context Learning Techniques
Zero-Shot Learning
The model is given an instruction or request without any task-specific examples. It relies entirely on its pre-trained knowledge and how clearly you phrase your instruction.
Please classify the sentiment of the following text as Positive, Negative, or Neutral:
Text: "I am extremely excited about this opportunity."
One-Shot Learning
The model is given exactly one example of the task in the prompt before it must perform the task on new input. This single example illustrates how the task should be done.
Classify the sentiment of the following text as Positive, Negative, or Neutral.
Example:
Text: "I love sunny days, they make me so happy!"
Sentiment: Positive
Now classify:
Text: "I'm disappointed with the slow service."
Sentiment:
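One common way to pass the single demonstration is as a prior user/assistant exchange rather than one long prompt string. Again a sketch, under the same assumptions as the zero-shot example above:

```python
# One-shot sketch: the demonstration is passed as a prior user/assistant turn.
# Same assumptions as before (openai client, illustrative model name).
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Classify the sentiment of the text as Positive, Negative, or Neutral."},
    # The single worked example:
    {"role": "user", "content": 'Text: "I love sunny days, they make me so happy!"'},
    {"role": "assistant", "content": "Positive"},
    # The new input we actually want classified:
    {"role": "user", "content": 'Text: "I\'m disappointed with the slow service."'},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)  # e.g. "Negative"
```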
Few-Shot Learning
The model is provided with a few examples (input-output pairs), demonstrating the task more extensively. Additional examples help the model better understand and perform the task, often leading to better results than one-shot learning.
Classify the sentiment of the following texts as Positive, Negative, or Neutral:
Example 1:
Text: "This movie is fantastic; I enjoyed every minute."
Sentiment: Positive
Example 2:
Text: "I can't stand the taste of this coffee."
Sentiment: Negative
Example 3:
Text: "It's fine, nothing special."
Sentiment: Neutral
Now classify:
Text: "I was really impressed by the presentation today."
Sentiment:
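With several demonstrations, it helps to assemble the prompt from a list of labeled examples, so you can add, drop, or swap demonstrations without hand-editing a long string. Here’s a small sketch, still assuming the same illustrative client and model:

```python
# Few-shot sketch: build the prompt programmatically from labeled example pairs.
from openai import OpenAI

client = OpenAI()

examples = [
    ("This movie is fantastic; I enjoyed every minute.", "Positive"),
    ("I can't stand the taste of this coffee.", "Negative"),
    ("It's fine, nothing special.", "Neutral"),
]

def build_prompt(new_text: str) -> str:
    """Assemble a few-shot sentiment prompt from the example pairs above."""
    lines = ["Classify the sentiment of the following texts as Positive, Negative, or Neutral:", ""]
    for i, (text, label) in enumerate(examples, start=1):
        lines += [f"Example {i}:", f'Text: "{text}"', f"Sentiment: {label}", ""]
    lines += ["Now classify:", f'Text: "{new_text}"', "Sentiment:"]
    return "\n".join(lines)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": build_prompt("I was really impressed by the presentation today.")}],
)
print(response.choices[0].message.content)  # e.g. "Positive"
```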
Choosing which technique to use depends on the model’s capabilities and the task at hand.
![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/6c9ef4fc-9a3a-43b1-832d-56def53875da/Screen_Shot_2025-01-19_at_6.19.22_PM.png?t=1737307885)
Advantages of In-Context Learning
From the techniques above, three advantages stand out:
No Extra Training Needed: Saves time and computational resources by eliminating the need for fine-tuning or retraining.
Flexible: A single large language model can handle diverse tasks by simply adjusting the prompt.
Fast Experimentation: Allows quick iterations on different tasks or instructions without specialized model updates.
But like any technique, in-context learning has its limits.
Limitations of In-Context Learning
Context Window Limits: Only so many examples or instructions can fit before hitting the model’s input size restriction (see the token-counting sketch after this list).
Prompt Sensitivity: Small changes in how you phrase or structure the prompt can drastically affect performance.
No Persistent Memory: The model doesn’t retain new knowledge beyond each session unless prompts or external tools reintroduce it.
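One practical way to respect the context window limit is to count the tokens your prompt consumes before you send it. The sketch below assumes the `tiktoken` tokenizer library and a made-up 8,000-token budget; the right encoding and limit depend on the model you actually use.

```python
# Token-budget check sketch (assumes the `tiktoken` library; the 8,000-token
# budget and encoding name are illustrative, not tied to a specific model).
import tiktoken

MAX_TOKENS = 8_000
enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(prompt: str, reserved_for_answer: int = 256) -> bool:
    """Return True if the prompt plus the space reserved for the reply fits the budget."""
    return len(enc.encode(prompt)) + reserved_for_answer <= MAX_TOKENS

prompt = "Classify the sentiment ..."  # your assembled few-shot prompt
if not fits_in_context(prompt):
    print("Prompt too long: drop some examples or shorten the instructions.")
```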
Potential Use Cases
Question Answering: Providing a few examples of Q&A pairs to guide how the model answers.
Classification: Showing labeled examples (spam vs. not spam) in the prompt to classify new inputs.
Summarization: Including a sample article and its summary, then asking for a summary of a new article.
Code Generation: Providing code snippets to demonstrate a coding style or framework usage.
💡 Bottom Line: In-context learning is like giving a Model a quick crash course, right in the prompt. It’s easy, immediate, and doesn’t require fiddling with the AI’s deeper internals. By mastering prompt design and example selection, you can shape how an LLM behaves for countless tasks—without ever touching its underlying code.