Top tips: RAG isn’t the problem, context is. Here are 3 fixes.
Top Tips is a weekly column where we highlight what’s trending in the tech world and list ways to explore these trends. This week, we’ll be talking about how we can improve our retrieval-augmented generation (RAG) systems using contextual engineering.

Prompt engineering has gained a lot of attention in the past year, but it's time to move toward a richer approach that shapes how AI results are produced. Data now drives many business decisions, and rightfully so. Techies have been working on ways to improve AI predictions and recommendations, and contextual engineering is one of them. This week, we explore how RAG systems can be improved using contextual engineering.
Engineer the context
By now, you may have found that AI is not as smart as you thought it would be. Current GenAI tools optimize their results according to your prompt, which is what makes prompt engineering necessary. With contextual engineering, you go a step further and optimize everything that is fed into the LLM, not just the prompt, for better results.
This could mean adding context to the prompt such as source, date, recency, and more, depending on the use case of the LLM.
For example, let’s say a user is talking to a chatbot on a health insurance page and wants to know the list of hospitals their insurance covers. The LLM should retrieve relevant and up-to-date details from the database. Contextual engineering, with the right back-end structure, would ensure that the right information is provided, irrespective of whether the user provides the right context.
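To make this concrete, here is a minimal sketch of what "engineering the context" can look like on the back end: the retrieved records are stamped with their source and last-updated date before being folded into the prompt, so the model answers from current, attributable data even if the user's question carries no context of its own. The record fields (`text`, `source`, `updated`) and the prompt wording are illustrative assumptions, not from any specific framework.

```python
from datetime import date

def build_prompt(question: str, retrieved: list[dict], today: date) -> str:
    """Assemble a context-engineered prompt from retrieved records.

    Each record is assumed to carry 'text', 'source', and 'updated'
    fields; the names are illustrative placeholders.
    """
    context_lines = [
        f"- {r['text']} (source: {r['source']}, last updated: {r['updated']})"
        for r in retrieved
    ]
    return (
        f"Today's date: {today.isoformat()}\n"
        "Use only the context below to answer, and cite the source for each fact.\n"
        "Context:\n" + "\n".join(context_lines) + "\n\n"
        f"Question: {question}"
    )

# Hypothetical retrieval result from the insurer's hospital database
records = [
    {"text": "City Hospital is in-network.",
     "source": "provider_list.pdf", "updated": "2025-01-10"},
]
prompt = build_prompt("Which hospitals does my plan cover?",
                      records, date(2025, 3, 1))
```

The string returned here would then be sent to whatever LLM powers the chatbot; the point is that the date, sources, and grounding instructions travel with every request automatically.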
Feed summaries into the LLM
Feeding a summary into the LLM is another way of optimizing results. Instead of handing it raw chunks of data, you can use contextual engineering to feed it summaries, which improves both speed and accuracy.
For example, if a user wants to know whether their insurance requires a deductible, the LLM doesn't have to scan the entire 60-page policy document to find out. Feeding the system summarized data returns results faster and improves accuracy.
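As a rough sketch of the idea: in production you would typically have an LLM write the summaries, but the condensation step itself can be shown with a simple keyword-based stand-in that keeps only the sentences relevant to the user's question. Everything here, including the sample policy text, is a made-up illustration.

```python
def summarize_chunk(chunk: str, keywords: set[str]) -> str:
    """Keep only sentences mentioning a keyword -- a crude stand-in
    for an LLM-generated summary of a document chunk."""
    sentences = [s.strip() for s in chunk.split(".") if s.strip()]
    kept = [s for s in sentences
            if any(k.lower() in s.lower() for k in keywords)]
    return ". ".join(kept) + "." if kept else ""

# A 60-page policy would be split into many chunks like this one
policy_chunk = (
    "This plan covers preventive care at no cost. "
    "The annual deductible is $500 for individuals. "
    "Claims must be filed within 90 days."
)
summary = summarize_chunk(policy_chunk, {"deductible"})
# Only the deductible sentence survives, so the LLM reads one line
# instead of the whole chunk.
```

The same pattern scales up: summarize each chunk once at ingestion time, store the summaries alongside the originals, and retrieve the compact versions at query time.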
Add negative context
This is an underused strategy when it comes to LLMs. Feeding the LLM instructions on what it shouldn’t do improves results.
For example, prompts including instructions like, "Do not retrieve data from policies older than 2024" work well and reduce the chances of receiving outdated information. This can be applied to LLMs across many use cases and can be extremely helpful in fetching the right information.
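Negative context works best when it is enforced in two places at once: filter the stale records out of the retrieved set, and also state the restriction in the prompt so the model knows the rule. The sketch below assumes each record carries a `year` field; that field name, like the sample data, is hypothetical.

```python
def apply_negative_context(chunks: list[dict], min_year: int) -> tuple[list[dict], str]:
    """Drop records older than min_year and return a matching
    negative instruction to prepend to the prompt."""
    kept = [c for c in chunks if c["year"] >= min_year]
    instruction = f"Do not use or cite policy documents dated before {min_year}."
    return kept, instruction

# Hypothetical retrieval results spanning two policy years
chunks = [
    {"text": "2023 policy: deductible is $750.", "year": 2023},
    {"text": "2024 policy: deductible is $500.", "year": 2024},
]
kept, rule = apply_negative_context(chunks, 2024)
# kept now holds only the 2024 record, and `rule` carries the
# "do not" instruction for the prompt.
```

Filtering at retrieval time is the hard guarantee; the prompt instruction is the backstop in case something old slips through.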
A final word
We are constantly finding ways to optimize LLMs for faster and better results, and using them well in our businesses lets us leverage available data and make informed decisions. It's time we did this through contextual engineering.
Contextual engineering is a powerful progression from prompt engineering, and incorporating it into our RAG systems is a smart move.