Legibility is Leverage: Why Your Data Structure is Blocking AI
The current narrative around AI adoption in the enterprise is narrowly focused on model capability. Executives and engineers alike are asking, “Is the model smart enough to do this job?” But this question overlooks the fundamental reality of how work actually happens in most organizations. While current models are perfectly capable of acting as helpful chatbots, the requirements shift dramatically when we move to agentic use cases, where AI is expected to actually execute work, not just discuss it.
The failure mode of AI agents today isn’t a lack of intelligence; it is a lack of legibility. Most enterprise agents are “sophisticated amnesiacs”—capable reasoning engines dropped into an environment designed exclusively for human eyes, where the data they need to function is hidden, scattered, or entirely undefined.
The Sophisticated Amnesiac
An agent without access to proper state information is as helpless as a chatbot where every conversation starts from zero. It doesn’t matter how high the model’s IQ is; if it cannot read the history of the project or the current status of a task, it is forced to hallucinate or restart. State is the difference between a chat session and actual work.
To unlock the promise of autonomous agents, businesses must stop obsessing over the IQ of the AI and start fixing the environment it operates in.
The Chaos of Undefined Work
The first barrier to agent adoption is the chaos of undefined processes. In many organizations, critical business logic is not written down in any system of record. It is encoded in tribal knowledge—the “Ask Sarah” phenomenon.
When an agent attempts to execute a workflow, it inevitably hits a decision point where the available data contradicts itself. The real rule turns out to be something like: “Actually, Finance owns that part, but only on Tuesdays, and you have to ask Sarah.” Because this logic exists only in human brains and not in a documented artifact, the agent is forced to halt. It cannot query a rule that was never written down, so instead of finishing the job, it must pause and wait for human feedback.
However, relying on constant human intervention defeats the purpose of an autonomous agent. Even if a human unblocks the process, the agent faces a second barrier: a lack of rigorous completion criteria. In software engineering, “done” is binary: the code compiles, and the tests pass. In business operations, “done” is often a vibe—a subjective feeling that “it looks good.” An agent cannot validate a vibe. Without objective, testable criteria for success (a “Published” status in the CMS, a generated invoice number, or a passed compliance checklist), an agent cannot self-correct; it must simply guess and hope the human approves.
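To make this concrete, here is a minimal sketch of what machine-checkable completion criteria might look like. The `WorkItem` fields are hypothetical placeholders for whatever your actual systems of record expose:

```python
from dataclasses import dataclass, field

@dataclass
class WorkItem:
    """Hypothetical snapshot of one piece of work, assembled from your systems of record."""
    cms_status: str                       # e.g. "Draft", "In Review", "Published"
    invoice_number: str | None = None     # set once billing has generated an invoice
    compliance_checks: dict[str, bool] = field(default_factory=dict)

def is_done(item: WorkItem) -> tuple[bool, list[str]]:
    """Return (done, unmet_criteria) so an agent can self-correct instead of guessing."""
    unmet = []
    if item.cms_status != "Published":
        unmet.append(f"CMS status is '{item.cms_status}', expected 'Published'")
    if not item.invoice_number:
        unmet.append("no invoice number has been generated")
    failed = [name for name, ok in item.compliance_checks.items() if not ok]
    if failed:
        unmet.append("compliance checks failed: " + ", ".join(failed))
    return (not unmet, unmet)
```

A check like this gives an agent the same binary signal a failing test suite gives a software engineer: not a vibe, but a list of exactly what is still missing.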
We are trying to automate workflows that were never actually defined. We have scattered our truth across email threads, mental models, and disconnected admin panels, creating an environment of disorder that no amount of prompt engineering can fix.
Text is the Universal Interface
Undefined processes are only half the problem; the way we present data imposes a second, massive “tax” on AI implementation. For decades, we have optimized business software for human convenience, building layers of abstraction (dashboards, portals, CMSs) to hide the messy underlying data.
While these abstractions make software accessible to humans, they render it opaque to AI. In a typical business tool, the state of work is hidden behind a click-path. A document in “Draft Mode” might look slightly grayed out to a human, but to an agent, that state is often a hidden variable buried behind a complex DOM structure.
Agents do not thrive in these opaque environments. They require Text. Text is the universal interface for intelligence. Unlike a GUI where state is inferred, agents need formats like raw code, Markdown, JSON, and logs—places where the state is explicit, changes are trackable (diff-able), and actions are reversible. When work is trapped in a GUI, the agent has to “guess” the state. When work is stored as a text artifact, the agent can “read” it with perfect clarity.
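As a small illustration of the difference, consider a hypothetical project-state artifact kept as plain JSON next to the work itself (the field names here are illustrative, not a standard schema):

```python
import json

# A hypothetical project-state artifact, the kind of file you might keep in version
# control next to the work itself. Field names are illustrative, not a standard schema.
artifact = """
{
  "item": "q3-pricing-page",
  "status": "draft",
  "owner": "finance",
  "blocked_on": ["legal_review"],
  "definition_of_done": ["cms_status == published", "invoice_number present"]
}
"""

state = json.loads(artifact)
print(state["status"])       # "draft" -- explicit, not inferred from a grayed-out widget
print(state["blocked_on"])   # ["legal_review"] -- the agent knows exactly why it must wait
```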
To bridge this gap, Tools become critical. Agents can be equipped with specific tools to bypass the GUI wall, directly querying the databases and APIs that drive your KPIs and dashboards. By exposing these data sources via tools, you allow the agent to ignore the pixel-perfect dashboard and instead read the raw numbers that act as the source of truth.
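A minimal sketch of what such a tool can look like, assuming a hypothetical SQLite database `metrics.db` with a `daily_signups` table; in practice it would point at your warehouse or an internal API instead:

```python
import sqlite3

def query_daily_signups(db_path: str, since: str) -> list[tuple[str, int]]:
    """Tool: return (date, signups) rows on or after an ISO date.

    Handed to the agent as a callable tool, this bypasses the dashboard entirely
    and reads the same numbers the dashboard is rendered from.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT date, signups FROM daily_signups WHERE date >= ? ORDER BY date",
            (since,),
        ).fetchall()
    finally:
        conn.close()
```

However the tool is wired up (function calling, an MCP server, or a bespoke harness), the important property is the same: the agent reads raw rows, not pixels.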
Crucially, agents aren’t just passive consumers of this data; they can generate their own software tools to manipulate it. Once an agent has access to raw data via a tool, it can write ad-hoc Python scripts to perform complex financial modeling, generate custom visualizations from logs, or run statistical analyses that would be impossible in a standard dashboard. This is the ultimate leverage of a text-based interface: the agent doesn’t just read the data; it can write the code to understand it.
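For example, given raw rows from a tool like the one above, an agent might write a throwaway script along these lines (the data is inlined here purely for illustration):

```python
import statistics

# The kind of throwaway analysis an agent might write once raw numbers are in hand.
# In practice these rows would come from a tool call rather than being hard-coded.
daily_signups = [
    ("2024-06-03", 412), ("2024-06-04", 398), ("2024-06-05", 441),
    ("2024-06-06", 127), ("2024-06-07", 405), ("2024-06-08", 389),
]

counts = [c for _, c in daily_signups]
mean, stdev = statistics.mean(counts), statistics.stdev(counts)

# Flag days more than two standard deviations below the mean -- an ad-hoc question
# no pre-built dashboard widget was designed to answer.
anomalies = [(day, c) for day, c in daily_signups if c < mean - 2 * stdev]
print(f"mean={mean:.0f}, stdev={stdev:.0f}, anomalous days={anomalies}")
```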
The Virtuous Cycle of Structure
There is an unexpected side effect to reorganizing data for agents: it drastically improves the human experience.
When an organization is forced to define its “primitives”—to clearly document the definition of done, to create a single text-based source of truth for project status, to build tools for querying data, and to implement traceability logs—the chaos dissipates for people, too.
- State becomes visible: Everyone knows exactly where a project stands without scheduling a status meeting.
- Traceability becomes standard: Mistakes are easily identified and reverted (see the log sketch after this list), reducing the anxiety around making high-stakes decisions or launching campaigns.
- Onboarding becomes trivial: New employees (and new agents) can read the documentation rather than needing to shadow “Sarah” for three weeks.
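A traceability log does not need to be elaborate. One minimal sketch, assuming an append-only JSON-lines file and illustrative field names:

```python
import json, datetime

# Append-only JSON-lines log: every action an agent (or human) takes is recorded with
# enough context to audit or revert it. Field names are illustrative, not a standard.
def log_action(log_path: str, actor: str, action: str, target: str, before, after) -> None:
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,        # e.g. "agent:billing-bot" or "human:sarah"
        "action": action,      # e.g. "update_status", "send_invoice"
        "target": target,      # the artifact or record that changed
        "before": before,      # prior value, so the change can be reverted
        "after": after,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_action("actions.jsonl", "agent:billing-bot", "update_status",
           "campaign_q3/state.json", before="draft", after="published")
```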
Fortunately, the burden of creating this documentation has never been lower. Ironically, the very LLMs that need this structure can help you build it—interviewing team members, drafting procedure documents, and synthesizing scattered notes into cohesive protocols.
This shift toward text and artifacts changes the responsibility map for the entire organization. Technical staff must architect the “bridges” that make this work legible—setting up pipelines that convert messy Office documents into agent-readable formats and building the specific tools agents use to query internal data. In parallel, business leaders must adopt the discipline to treat these written records as the single source of truth.
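One small example of such a bridge, assuming the python-docx package and glossing over the tables, tracked changes, and embedded objects a real pipeline would also need to handle:

```python
# Flatten a .docx into plain Markdown so an agent can read, diff, and cite it.
from docx import Document

def docx_to_markdown(path: str) -> str:
    doc = Document(path)
    lines = []
    for para in doc.paragraphs:
        text = para.text.strip()
        if not text:
            continue
        style = para.style.name or ""
        if style.startswith("Heading"):
            # "Heading 1" -> "# ", "Heading 2" -> "## ", and so on.
            suffix = style.split()[-1]
            level = int(suffix) if suffix.isdigit() else 1
            lines.append("#" * level + " " + text)
        else:
            lines.append(text)
    return "\n\n".join(lines)
```

The output is deliberately boring: headings and paragraphs in plain text that an agent can read without guessing.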
If your business processes rely on human intuition and hidden states, your agents will remain expensive, error-prone toys. But if you can move your work into “artifacts”—whether that is native Markdown or Office documents rigorously processed into plain text—you create the foundation necessary for agents to function.
You cannot automate a mess. The companies that win in the agentic era won’t be the ones with the most powerful models; they will be the ones that have disciplined their organizations enough to write the work down.