Context Engineering for Agents


An Introduction to Context Engineering

According to @dexhorthy, "everything that makes agents good is context engineering." Check out his talk at the AI Engineer World's Fair on June 3, 2025.

We agree.

You shall know a word by the company it keeps. ~ J.R. Firth, A Synopsis of Linguistic Theory, 1930-1955 (1957)

Today, we might say:

You shall know an LLM application (or agent) by the context it keeps. ~ AI Makerspace

We were pumped to see Dex - the man who coined the phrase - jump into our Discord last week following our event on Context Engineering in July!



Grasp Context Engineering in Four Ideas

1️⃣ Everything is context engineering

The GPT-3 paper, “Language Models are Few-Shot Learners” (2020) by Brown et al., introduced the world to the idea of in-context learning for GPT-style transformers.

In-context learning refers to a model's ability to temporarily learn from prompts. ~ Wikipedia

“Emerging Architectures for LLM Applications” (2023) by Bornstein and Radovanovic outlined a useful framework we will use to think about the modern LLM application stack.

“The stack we’re showing here is based on in-context learning which is the design pattern we’ve seen the majority of developers start with (and is only possible now with foundation models).”

The three key patterns for building, shipping, and sharing production LLM applications today are: Prompt Engineering, RAG, and Agents.

Def Prompt Engineering = Giving the LLM instructions in the context window
~= In-Context Learning
Def Retrieval Augmented Generation (RAG) = Giving the LLM access to new knowledge
~= Dense Vector Retrieval + In-Context Learning
Def Agents = Giving the LLM access to tools
~= Enhanced Search and Retrieval + In-Context Learning

*There are other ways to define agents that are less practical, as we've discussed elsewhere.
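The three definitions above differ only in what gets placed into the context window. Here is a toy sketch (no real LLM calls) of how each pattern fills it; every function and name below is illustrative, not from any particular library, and a naive keyword match stands in for dense vector retrieval:

```python
def prompt_engineering(task: str) -> str:
    """Prompt Engineering: put instructions directly in the context window."""
    return f"You are a helpful assistant.\n\nTask: {task}"

def rag(task: str, knowledge_base: dict) -> str:
    """RAG: retrieve relevant text and add it to the context window."""
    retrieved = [doc for key, doc in knowledge_base.items() if key in task.lower()]
    return "Context:\n" + "\n".join(retrieved) + f"\n\nTask: {task}"

def agent(task: str, tools: dict) -> str:
    """Agents: describe available tools so the LLM can request calls to them."""
    specs = "\n".join(f"- {name}: {fn.__doc__}" for name, fn in tools.items())
    return f"Available tools:\n{specs}\n\nTask: {task}"

def web_search(query: str) -> str:
    """Search the web and return top results."""
    ...

kb = {"refund": "Refunds are processed within 5 business days."}
print(rag("What is your refund policy?", kb))
print(agent("Find recent news about MCP.", {"web_search": web_search}))
```

However the retrieval or tool-calling is implemented, the result is the same move: assembling the right tokens into the context window before generation.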

2️⃣ There is no in and out, there is only context

Think of interacting with an LLM as a single, contiguous sequence of tokens rather than a strict “input” followed by a strictly separate “output.”

LLMs process this sequence from left to right, attending to at most a fixed number of tokens: the size of the model's context window.

Thus context, typically associated with LLM inputs, is better thought of as a continuous stream of information flowing from input, through the LLM, to the output.

In a multi-agent system, we can think of context as flowing into one LLM (agent), out of another, and back again, in cycles, until a final response is ready to be delivered as the application's output to the user.
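To see why "input" and "output" blur together, note that a chat-formatted conversation is flattened into one contiguous token stream before generation, and the model's output is simply more tokens appended to that stream. A minimal sketch (the delimiter format below is illustrative; real chat templates vary by model):

```python
def to_sequence(messages: list) -> str:
    """Flatten chat messages into one contiguous sequence.
    Real models do this with a tokenizer and a model-specific chat template."""
    return "".join(f"<|{m['role']}|>{m['content']}<|end|>" for m in messages)

conversation = [
    {"role": "system", "content": "You are concise."},
    {"role": "user", "content": "What is context engineering?"},
]
prompt = to_sequence(conversation)

# The "output" is just more of the same stream. In a multi-agent system it
# can be appended and handed to the next agent as part of *its* context:
conversation.append({"role": "assistant", "content": "Curating what the model sees."})
next_agent_prompt = to_sequence(conversation)
```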

3️⃣ Attention is All You Need (as long as you attend to context)

In classic NLP, there are two types of embedding models: those that consider context and those that do not.

Def Context: the position of words within a sentence, sentences within a paragraph, paragraphs within a chapter, and so on ...

The classic example of an embedding model without context is a Bag of Words model. The now-iconic example of a model that considers context is Word2Vec, which learns each word's embedding from the company it keeps. A simple Multi-Layer Perceptron (MLP), or feedforward neural network, can also track context. Attention, as in the Transformer, is particularly good at tracking context, even when it is subtle and spread across large swaths of text.

Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information. ~ Radford et al., Improving Language Understanding by Generative Pre-Training (2018)
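To make the context-free vs. context-aware distinction concrete, here is a toy comparison: a Bag of Words representation throws away word order (context), while even a trivial position-aware representation preserves it:

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Context-free: word counts only; order is discarded."""
    return Counter(text.lower().split())

def with_positions(text: str) -> list:
    """A minimal context-aware representation: (position, word) pairs."""
    return list(enumerate(text.lower().split()))

# Identical bags, opposite meanings:
assert bag_of_words("dog bites man") == bag_of_words("man bites dog")

# Position-aware representations tell them apart:
assert with_positions("dog bites man") != with_positions("man bites dog")
```

Attention goes far beyond `(position, word)` pairs, of course, but the principle is the same: the representation must carry information about where words sit relative to one another.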

4️⃣ Managing Context is harder with Agents

Enhancing search and retrieval with tool use, "remembering" things (a.k.a. Agent Memory), and sharing conversation threads across multiple agents all make for a more complex space of context to engineer.

“Agents often engage in conversations spanning hundreds of turns, requiring careful context management strategies.” ~ Anthropic, How we built our multi-agent research system

Managing conversation threads, shared or unshared, across multiple agents or entire agent teams, each with different permissions on different context, is the kind of work an AI Engineer must do today.

To optimize the context of a multi-agent system, a lot more engineering is needed than the simple In-Context Learning we get out of the box when we pull LLMs off the shelf.
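For instance, one common context-management strategy for those hundreds-of-turns conversations is compaction: keep as many recent turns as fit in a token budget and replace older ones with a summary. A minimal sketch, using a naive word count in place of a real tokenizer and a placeholder in place of an LLM-written summary:

```python
def count_tokens(text: str) -> int:
    # Naive stand-in for a real tokenizer.
    return len(text.split())

def compact(history: list, budget: int) -> list:
    """Keep the most recent turns that fit the budget; summarize the rest."""
    kept, used = [], 0
    for turn in reversed(history):
        cost = count_tokens(turn)
        if used + cost > budget:
            break
        kept.insert(0, turn)
        used += cost
    dropped = len(history) - len(kept)
    if dropped:
        # A real system would ask an LLM to write this summary.
        kept.insert(0, f"[summary of {dropped} earlier turns]")
    return kept

history = [f"turn {i}: " + "word " * 50 for i in range(100)]
window = compact(history, budget=200)
```

Real systems layer more on top: per-agent memory stores, permissioned sharing of threads between agents, and tool results that must be pruned or cached; but every one of those mechanisms is ultimately deciding which tokens make it into which context window.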

It's no longer just the position of one word relative to another. It's more like ... well, the positions of a whole bunch of words relative to one another, in the form of conversations, over time, and in sync with each other.

So, I guess, it's all context in the end; there's just more engineering now, like classic engineering stuff.

As @dexhorthy put it:

Agents are software.

PS ... here's a fun quote to end on:

"As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture." ~ Anthropic, Introducing the Model Context Protocol

Conclusion

1️⃣ Everything is context engineering

2️⃣ There is no in and out, there is only context

3️⃣ Attention is All You Need (as long as you attend to context)

4️⃣ Managing Context is harder with Agents

AI Engineering today is (mostly) Context Engineering.

“Context engineering … is effectively the #1 job of engineers building AI agents.” ~ Cognition, Don't Build Multi-Agents
“[Context engineering is the] … delicate art and science of filling the context window with just the right information for the next step.” ~ LangChain, Context Engineering for Agents
You shall know an LLM application (or agent) by the context it keeps. ~ AI Makerspace

The AI Engineering Bootcamp, Cohort 8

Want to learn more about how to build, ship, and share production LLM applications that leverage the patterns of RAG, agents, and context engineering, and that use tools from the modern LLM app stack like LangChain and MCP?

43 seats remain available in Cohort 8 of The AI Engineering Bootcamp!

Apply, complete The AI Engineer Challenge in < 72 hours, and enroll before August 31st to save $500 before the price increases!

Cheers,

Dr. Greg
Co-Founder & CEO