Context Decoded: Explaining the "Trillion Dollar Opportunity"

We recently hosted a webinar on one of the most important problems in enterprise AI: context. VCs have called it the trillion-dollar problem. Here's what we covered.
What Is Context and Why Does It Matter?
When we first started using LLMs, they were toys. We'd test jokes, rewrite emails in the style of famous authors, see what weird things they'd say. That era is over. Today, conversations with AI are tasks. Real work. "Turn this broadcast script into a web article." "Draft an email to Sarah to go tomorrow morning." "What's the status of Project Alpha?"
The problem: these tasks don't happen in isolation. They happen inside layers of context. There's the individual user making the request. The team they're on. The project they're working on. The company's standards. The market they operate in. All of these layers influence how a task should be completed.
When you ask a colleague to "draft an email to Sarah to go tomorrow morning," they know who Sarah is, what format she prefers, what time zone she's in, and what "tomorrow morning" means in your company's culture. When you ask an AI the same thing, you get: "I'd love to help! Can you tell me who Sarah is? What's the email about? When exactly should it be sent?"
This gap between what you say and what the AI needs to know is context. And no matter how powerful foundation models become, they will never read your mind. They can't know your company's deal structures, your team's project management conventions, or that Sarah likes bullet points and hates long intros. That information has to come from somewhere.
Current Approaches: Memory and Context Graphs
The market has coalesced around two main approaches.
Memory is what you've probably encountered in ChatGPT. The model stores facts about you: "Mike prefers his coffee this way." It learns from your conversations and tries to remember preferences over time.
Context graphs take a different approach. They try to read everything happening across your organization and build a complex web of facts and relationships. "Sachin is the CEO of elvex. He has authority over these decisions. This account is connected to that opportunity."
Both approaches emerged for logical reasons. Memory mimics how humans work. Context graphs attempt to capture the full picture of organizational knowledge.
The Problems with These Approaches
Memory fails at the organizational level. Current implementations are individual only. There's no group memory, no team memory, no company memory. And even if there were, how would they combine? Does a team preference override an individual preference? When? The rules aren't clear because no one has figured them out.
Memory also has limited user control. You might be able to see what's been stored and edit it, but you rarely know what's being added or why. And it misses a ton of information. If memory only forms from your direct conversations with the AI, it can't capture the work happening in Slack threads, Zoom calls, or the decision your VP made last quarter about how deals should be structured.
Context graphs fail at transparency. They're almost entirely opaque: you have no visibility into how the system is building up its knowledge or making inferences, and users have almost no control over it. And while context graphs capture more information than memory does, they still miss anything that happens outside the source systems they integrate with.
Most context solutions work under the hood. elvex works in the dashboard, with you. Context should be a human asset first.
How We're Approaching Context at elvex
We're taking a different approach, and we're currently prototyping what we think is a better way.
Our core belief: context should be a human asset first. That means it needs to be transparent, simple, editable, and consistent.
Transparent: You can see exactly what context the AI is using at any time. Company context, personal context, project context. It's all visible in the interface.
Simple: We treat context as documents. You don't need to understand graph databases or memory architectures. You read a document. You edit a document. That's it.
Editable: If you don't want the agent to pay attention to something, you change the file. If you know how deals should be structured at your company, you write it down; the system doesn't have to infer it from breadcrumbs scattered across your tech stack. Most importantly, you don't need to be technical to understand how any of this works or to make it do what you want.
Consistent: The main thing here is reliability. The system should do what you expect it to do, and being able to understand and control what's going on is a huge part of that.
We're building a structured opinion on how context layers interact. When you're working on a specific project, the system goes deeper on that project's context. When you're doing general work, it pulls from company and personal context. The rules are explicit, not hidden.
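To make the idea concrete, here is a minimal sketch of what explicit layering rules could look like. This is an illustration only; the function and layer names are hypothetical, not elvex's actual implementation. The point is that each layer is just a document, and the rule for when project context is included is written out, not inferred.

```python
# Hypothetical sketch of explicit context layering -- illustration only,
# not elvex's actual implementation. Each layer is just a document (a
# string), and the rules for combining them are visible, not hidden.

def assemble_context(company: str, personal: str, project: str = "") -> str:
    """Concatenate context documents in a fixed, explicit order."""
    layers = [("Company context", company), ("Personal context", personal)]
    if project:
        # When you're working on a specific project, go deeper on it;
        # for general work, only company and personal context apply.
        layers.append(("Project context", project))
    return "\n\n".join(f"## {name}\n{doc}" for name, doc in layers)

ctx = assemble_context(
    company="Deals are structured as annual contracts.",
    personal="Prefers bullet points over long intros.",
    project="Project Alpha: launch draft due Friday.",
)
print(ctx)
```

Because the whole stack is a readable string of documents, "what context is the AI using right now" has a literal, inspectable answer.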
The prototypes we're working through let users see their context stack at any time. Company-level context shows who's editing it and what's inside. Personal context shows what the system has learned about you. Project context tracks what you're working on, who's involved, and what tools and conversations are relevant.
We're also prototyping context-building flows where the system interviews you, extracts context from documents you provide, confirms it got things right, and shows you exactly where it saved that information. No black boxes.
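A flow like that can be sketched in a few lines. Everything here is a stand-in: `extract_facts` is a trivial stub where a real system would call a model, and the file name and confirmation step are hypothetical. What the sketch shows is the shape of the loop: extract, confirm with the user, then record exactly where each fact was saved.

```python
# Hypothetical sketch of a transparent context-building flow --
# not elvex's actual code. extract_facts stands in for a model call.

def extract_facts(document: str) -> list[str]:
    """Stub: pull candidate context facts out of a provided document."""
    return [line.strip() for line in document.splitlines() if line.strip()]

def build_context(document: str, confirm) -> dict[str, list[str]]:
    """Extract facts, ask the user to confirm each one, and record
    exactly which file each confirmed fact was saved to."""
    saved = {"personal.md": []}  # hypothetical destination file
    for fact in extract_facts(document):
        if confirm(fact):  # the user approves or rejects every fact
            saved["personal.md"].append(fact)
    return saved

result = build_context(
    "Sarah prefers bullet points.\nDeals close at quarter end.",
    confirm=lambda fact: "Sarah" in fact,  # stand-in for a user prompt
)
print(result)  # the user sees exactly where each fact landed
```

The return value doubles as the audit trail: nothing enters the context store without passing through the user's confirmation, and the mapping from fact to file is explicit.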
This is a different mental model. Memory approaches are bottom-up: they germinate only from conversations. Context graph approaches are integration-down: they look at software and infer from usage patterns. We're trying to do both, but through documents that are visible and editable, not systems that are opaque and automatic.
We don't have all the answers yet. But we believe the path forward requires giving humans control over the context that shapes how AI behaves on their behalf.


