Making LLM context actually work for you
What you feed your AI tools matters - here's how AI uses context
Understanding how "context" works in large language models (LLMs) will have a massive impact on what you get out of AI tools.
Coding assistants like Cursor are built to manage context behind the scenes so you can concentrate on coding. This is the reason they're so much more useful than just using a chat tool like ChatGPT.
Mastering context means getting way better code suggestions with less guesswork. In today's deep dive, I'll get into:
Understanding context in LLMs
Token limits and context windows
How LLMs use context you provide
How tools like Cursor inject context on your behalf
Managing and reducing irrelevant context
Understanding context in LLMs
In the world of LLMs, context is what the model "sees" before producing a response. Think of it as the model's working memory or even a scratch pad. Unlike a human programmer, an AI doesn't truly remember past conversations unless you include them again; it only knows what you feed it right now (plus whatever it learned during training). In practical terms, context includes things like:
The conversation history (previous questions and answers).
Your current prompt or instructions (what you're asking it to do).
Any code snippets, error messages, or file content you provide as reference.
LLMs process this input text in pieces called tokens, which are basically chunks of words or characters. They predict results token-by-token based on the patterns in the context. The context is the AI's short-term memory (like RAM), separate from its long-term training data. Provide a clear and relevant context, and the model can give a focused, accurate answer. Provide a poor or vague context, and the model might get confused or start making things up.
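To make the "short-term memory" idea concrete, here's a minimal sketch of how a fixed-size context window limits what the model sees. Real LLMs use subword tokenizers (such as BPE), not whitespace splitting, so the token counts here are only a rough approximation; the function names are my own invention for illustration.

```python
def naive_tokenize(text: str) -> list[str]:
    """Very rough token count: split on whitespace.

    Real tokenizers (e.g. BPE) split words into subword pieces,
    so actual counts will differ.
    """
    return text.split()


def fit_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent messages that fit in the context window."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = len(naive_tokenize(msg))
        if used + cost > max_tokens:
            break  # older messages fall out of the window
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order


history = [
    "User: explain this stack trace",
    "Assistant: the error is a KeyError in parse()",
    "User: how do I fix it?",
]

# With a tiny 12-token window, only the newest message survives.
print(fit_to_window(history, max_tokens=12))
```

This is why an assistant can "forget" the start of a long conversation: once the window fills up, the oldest messages are simply no longer part of what the model sees.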
Types of Context (Intent vs. State): It helps to know there are generally two kinds of info you give to an AI:
Intent context - Your instructions or goal. This is prescriptive. For example, telling the model what you want, like "Explain why this function is slow," or a system message like "You are an expert Python assistant." This sets the high-level direction.
State context - The current state of the world or problem. This is descriptive background info, like code files, stack traces, or configuration data you supply. It tells the model what exists right now (the code or error it should consider).
Omit the state (e.g. forget to show the code or error), and the AI may hallucinate a solution using generic knowledge. Omit the intent (don't tell it clearly what you need), and it won't know what to do with the information.
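One way to picture the intent/state split is how a prompt might be assembled before being sent to a model. The sketch below uses the common system/user chat-message convention; the `build_prompt` helper and the exact field names are illustrative assumptions, not a specific provider's API.

```python
def build_prompt(intent: str, state: str) -> list[dict]:
    """Pair intent (what to do) with state (what exists) in one prompt.

    Hypothetical helper: the {"role": ..., "content": ...} shape mirrors
    common chat APIs but is not tied to any particular one.
    """
    return [
        # Intent: prescriptive instructions that set the direction.
        {"role": "system", "content": intent},
        # State: descriptive facts the model should actually consider.
        {"role": "user", "content": f"Here is the relevant code:\n{state}"},
    ]


messages = build_prompt(
    intent="You are an expert Python assistant. Explain why this function is slow.",
    state="def dupes(xs):\n    return [x for x in xs if xs.count(x) > 1]",
)

for m in messages:
    print(m["role"], "->", m["content"][:40])
```

With both pieces present, the model has a clear goal and the concrete code to ground its answer in; drop either one and you get the failure modes described above.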