
What is Context Engineering in AI?

3 min read · Sep 25, 2025

A few months ago, Shopify’s CEO Tobi Lütke tweeted about “context engineering” — a concept that’s reshaping how we think about AI interactions. We’re all familiar with prompt engineering, where crafting the right input determines your AI output quality. But what happens when your perfectly designed prompt still misses the mark because the AI lacks crucial background information?

Think about virtual customer support agents. Even with well-crafted prompts, they can struggle to provide relevant responses without understanding user intentions or previous support interactions. This is where context engineering comes in.

Here’s how prompt and context engineering complement each other:


Prompt Engineering
  - Simple tasks: Quick wins with well-crafted, one-off instructions
  - Complex tasks: Limited effectiveness without context; struggles with nuanced requests

Context Engineering
  - Simple tasks: Often unnecessary; adds complexity to straightforward requests
  - Complex tasks: Good for requests that require situational or specific domain awareness

Building Context for AI

Context engineering means considering the full picture around a user query: prior interactions, relevant data points, and the specific outcomes you’re trying to achieve. The information fed to an AI model needs careful curation to ensure it’s highly relevant for delivering desired results.

So how does AI actually get this context? It comes down to three key approaches:

RAG (Retrieval-Augmented Generation) — RAG improves generative AI by retrieving relevant external data from open sources (like the internet) or closed systems (like internal databases). This grounds AI responses in reliable sources, ensuring the LLM connects accurate information to user queries and explains its relevance.
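
Here's a minimal sketch of that flow. The toy retrieve() function below just scores documents by shared keywords; in a real system this step would be a vector store or search index, and the assembled prompt would then be sent to your LLM of choice.

```python
# Minimal RAG flow: retrieve the most relevant snippets, then ground the prompt in them.
# retrieve() is a toy keyword scorer standing in for a real vector store or search index.

DOCUMENTS = [
    "Refunds are processed within 5 business days of approval.",
    "Premium subscribers can contact support via live chat 24/7.",
    "Password resets expire after 30 minutes for security reasons.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Score documents by how many query words they share, return the best matches."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_rag_prompt(query: str) -> str:
    """Inject retrieved snippets into the prompt so the model answers from known sources."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(query, DOCUMENTS))
    return (
        "Answer the customer using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_rag_prompt("How long do refunds take?"))
# The resulting prompt (retrieved context + question) is what gets sent to the LLM.
```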

Fine-tuning — This involves further training a pre-trained LLM on smaller, specialised datasets to adapt it for specific tasks or domains. Think legal document summarisation or medical diagnosis support — the model learns the nuances of that particular field.
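
As a rough illustration, preparing the data for such a run can look like the sketch below. The JSONL chat-messages schema is an assumption based on the format several hosted fine-tuning services accept; check your provider's documentation for the exact fields.

```python
import json

# Sketch of preparing a small domain-specific dataset for fine-tuning
# (here: legal document summarisation).

examples = [
    {
        "messages": [
            {"role": "system", "content": "You summarise legal clauses in plain English."},
            {"role": "user", "content": "The lessee shall indemnify the lessor against all claims..."},
            {"role": "assistant", "content": "The tenant agrees to cover the landlord's losses if..."},
        ]
    },
    # ...hundreds more examples covering the nuances of the target domain
]

with open("legal_summaries.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# This file would then be uploaded to a fine-tuning job, producing a model
# adapted to the domain without changing how you prompt it at runtime.
```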

Prompt engineering — The process of designing, refining and optimising prompts to improve AI responses without modifying the underlying model. This remains crucial but works best when combined with proper context.
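
A small, hypothetical before/after makes the idea concrete: same request, but the refined version pins down the role, the output format and the length.

```python
# Two versions of the same request; prompt engineering is the iteration from the first to the second.

vague_prompt = "Summarise this support ticket."

refined_prompt = (
    "You are a support triage assistant.\n"
    "Summarise the ticket below in at most 3 bullet points, "
    "then label its urgency as LOW, MEDIUM or HIGH.\n\n"
    "Ticket:\n{ticket_text}"
)

# The refined version fixes the role, format and length, but it still knows nothing
# about the customer; that gap is what context engineering fills.
ticket = "Customer cannot log in after the latest update and has a demo in two hours."
print(refined_prompt.format(ticket_text=ticket))
```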

The real magic happens when these approaches work together. Context engineering isn’t about replacing prompt engineering — it’s about giving your prompts the background they need to be truly effective.
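
To make that concrete, here is a rough sketch of a single prompt that carries both the instructions and the curated context. The hard-coded history and facts are stand-ins for whatever your session store and retrieval pipeline would supply.

```python
# A well-engineered prompt template plus curated context:
# conversation history and retrieved facts, assembled into one request.

PROMPT_TEMPLATE = (
    "You are a customer support agent. Use the conversation history and the "
    "retrieved facts to answer. If you are unsure, ask a clarifying question.\n\n"
    "Conversation history:\n{history}\n\n"
    "Retrieved facts:\n{facts}\n\n"
    "Customer: {question}"
)

history = "Customer: My refund hasn't arrived.\nAgent: I can see it was approved on Monday."
facts = "- Refunds are processed within 5 business days of approval."
question = "So when should I expect the money?"

final_prompt = PROMPT_TEMPLATE.format(history=history, facts=facts, question=question)
print(final_prompt)  # This combined prompt is what actually gets sent to the model.
```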

Main learning point: Context engineering isn’t about replacing prompt engineering — it’s about giving AI the background knowledge it needs to deliver truly relevant responses. The real breakthrough happens when you combine well-crafted prompts with the right background information.

Related links for further learning:

  1. https://praella.com/hi/blogs/shopify-news/the-art-of-context-engineering-revolutionizing-ai-interactions-for-optimal-performance
  2. https://departmentofproduct.substack.com/p/context-engineering-for-ai-agents
  3. https://simonwillison.net/2023/Feb/21/in-defense-of-prompt-engineering/
  4. https://www.news.aakashg.com/p/rag-vs-fine-tuning-vs-prompt-engineering
  5. https://www.datacamp.com/tutorial/fine-tuning-large-language-models
  6. https://docs.aws.amazon.com/bedrock/latest/userguide/custom-model-fine-tuning.html


Written by MAA1

Product person, author of "My Product Management Toolkit" and “Managing Product = Managing Tension” — see https://bit.ly/3gH2dOD.
