Context-Aware Brain: Why Generic AI Failed Me

March 02, 2026 • 11 min read • by renerocksai

(Author’s Note: This is part two of my series on OMS. In Part 1, I defined the “Fluid Workflow” for tracking my work. Now, I explore the Intelligence Layer.)

Loneliness of the Long-Distance Coder

Being a “One Man Show” has a hidden cost that no one talks about: Intellectual Isolation.

It’s not about social interaction; I have friends for that. It’s about the crushing weight of decision fatigue. There is no Senior Engineer to verify if my architecture will melt down in six months. There is no Project Manager to tap me on the shoulder and say, “Hey, we promised this by Tuesday.” I am the only brain in the room, and sometimes, that brain gets tired.

When AI chatbots like ChatGPT and Claude arrived, I felt an immense wave of relief. Finally, I had a Rubber Duck that talked back. I could bounce ideas, debug errors, and brainstorm features at 2 AM.

But the honeymoon phase ended quickly. The problem wasn’t intelligence; it was Amnesia.

The Context Dance

Don’t get me wrong, the tools are getting better. Modern coding agents like Claude Code or Cursor have largely solved the Codebase Context problem. They can read my current files; I don’t have to copy-paste main.py anymore.

But they still lack Historical Context.

They don’t know why I chose this architecture three months ago. They don’t know that I solved a nearly identical problem on a different client project last year. They see the code as it is now, but they are blind to the journey of how it got there.

I found myself stuck in a “Groundhog Day” loop, constantly explaining the past. “Remember, we tried that library last time and it failed because…” The friction was killing the flow.

Concept: AI with Long-Term Memory

The solution wasn’t a better model. It was better memory.

I realized I was sitting on a goldmine: the Work Journal I built in Part 1. Because I had been “Logging” my work rather than just checking off boxes, I had built a structured database of my history where Activities represent the current context and Logs represent the historical context—across all my projects.

In OMS Next, my AI isn’t just a text generator. It is an Agent with Read-Access. It has “Tools” (Function Calling) that let it query my work history just like I do. I don’t paste context. The AI retrieves it.
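
To make the "Agent with Read-Access" idea concrete, here is a minimal sketch of the pattern, using the OpenAI function-calling format for the tool schema. The tool name (`query_logs`), the fields, and the stand-in data are illustrative assumptions, not the real OMS API:

```python
# Hypothetical sketch: the model is given tool schemas; when it emits a
# tool call, the backend runs the matching query against the work journal.
# The tool name and fields are illustrative, not the actual OMS internals.
import json

# Tool schema advertised to the model (OpenAI function-calling format).
TOOLS = [{
    "type": "function",
    "name": "query_logs",
    "description": "Search past work-journal log entries.",
    "parameters": {
        "type": "object",
        "properties": {
            "project": {"type": "string", "description": "Project slug, e.g. techlab.homepage"},
            "search": {"type": "string", "description": "Free-text search term"},
        },
        "required": ["search"],
    },
}]

# Stand-in for the real journal database.
FAKE_LOGS = [
    {"project": "ecommerce-api", "text": "Fixed Alembic multiple-heads error via 'alembic merge heads'"},
    {"project": "techlab.homepage", "text": "Refactored hero section CSS"},
]

def query_logs(search, project=None):
    """Run the query the model requested against local history."""
    hits = [log for log in FAKE_LOGS if search.lower() in log["text"].lower()]
    if project:
        hits = [log for log in hits if log["project"] == project]
    return hits

def handle_tool_call(name, arguments):
    """Dispatch a model-emitted tool call; return a JSON result string
    that gets fed back into the conversation."""
    if name == "query_logs":
        return json.dumps(query_logs(**json.loads(arguments)))
    raise ValueError(f"unknown tool: {name}")

# A tool call as the model would emit it:
result = handle_tool_call("query_logs", '{"search": "alembic"}')
```

The key design point: the AI never receives my whole journal. It receives a capability, and retrieves only what the current question needs.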

Building the Interface: More Than Just a Chatbot

The CLI wasn’t enough for the big picture. I needed an interface that felt like a natural extension of my thought process. I built a dedicated “AI” tab in the frontend that connects to the backend via OpenAI’s new Responses API (the real-time streaming one).

The UI mimics a standard chat interface but with superpowers:

  1. Streaming Markdown: As the AI thinks, it streams the response token-by-token. Because my system is built on Markdown, the chat renders tables, code blocks, and bold text in real-time. It feels alive.
  2. Context-Aware Sidebar: It keeps a history of our conversations. I can jump back to a chat from last Tuesday about the “API Refactor” without losing context.
  3. Transparent Thinking: When the AI decides to look something up, the UI shows it: “Querying activities for project techlab.homepage…” or “Looking up logs from last week…”. It makes the “thinking” process transparent.
  4. Web Search: Crucially, the AI isn’t limited to my internal data. It has a web_search tool. If I ask “How do I fix this specific Pydantic error?”, it can search the live web for the solution and combine that external knowledge with my internal code context. This has far-reaching implications: it becomes a bridge between my private knowledge base and the world’s collective intelligence.
  5. Export & Portability: Every single AI response, or even an entire conversation, can be downloaded instantly in Markdown format. This ensures that the insights and generated content are never locked inside the tool, maintaining full control and portability of my intellectual output.
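
The streaming part of the list above boils down to a simple accumulate-and-rerender loop. This sketch mimics the Responses API's `response.output_text.delta` events in simplified form; the event dicts here are hand-built stand-ins for the real stream:

```python
# Minimal sketch of the streaming pattern the UI relies on: the backend
# relays text deltas as they arrive, and the frontend re-renders the
# accumulated Markdown on every chunk. Event shape is simplified.
def stream_to_markdown(events):
    """Accumulate text deltas into a growing Markdown buffer, yielding
    each intermediate state (what the UI would re-render)."""
    buffer = ""
    for event in events:
        if event["type"] == "response.output_text.delta":
            buffer += event["delta"]
            yield buffer  # frontend re-renders the Markdown here

# Simulated token stream:
events = [
    {"type": "response.output_text.delta", "delta": "## Stat"},
    {"type": "response.output_text.delta", "delta": "us\n- shipped "},
    {"type": "response.output_text.delta", "delta": "landing page"},
]
states = list(stream_to_markdown(events))
final = states[-1]  # the complete Markdown document
```

Re-rendering the full buffer on every delta is what makes tables and code blocks "pop into shape" mid-stream instead of appearing only at the end.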

Command Center

I also built a slash-command system to trigger common workflows instantly. Typing /help reveals the capabilities available out of the box:

  • /briefing: A comprehensive summary of recent work and priorities.
  • /standup: Generates a status report (what I did, what I’m doing, blockers).
  • /find : Deep semantic search across all my work history.
  • /lastweek, /yesterday: Quick temporal summaries.
  • /overdue: Checks for missed deadlines or stale items.
  • /stats: Visualizes my productivity patterns.
  • /project : Focuses the context on a specific project.
  • /urgent: Filters for immediate fires.

This turns the chat into a command line for my business logic.
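
Under the hood, a slash-command layer like this can be little more than a prompt-template lookup. The command names below match the article; the templates themselves are my own illustrative guesses, not the real OMS prompts:

```python
# Hypothetical sketch of the slash-command layer: each command expands
# into a full prompt before it is sent to the model. Templates are
# illustrative; the real ones would reference the agent's tools.
SLASH_COMMANDS = {
    "/standup":  "Summarize my logs from the last 24h as: done / doing / blockers.",
    "/overdue":  "List activities past their due date or stale for >14 days.",
    "/briefing": "Give a strategic summary of recent work and current priorities.",
}

def expand_command(user_input):
    """Turn '/standup' (or '/briefing for oms') into the prompt sent to
    the model. Unknown input passes through as a normal chat message."""
    command, _, argument = user_input.partition(" ")
    template = SLASH_COMMANDS.get(command)
    if template is None:
        return user_input                    # plain chat message
    return f"{template} {argument}".strip()  # append any argument

prompt = expand_command("/standup")
```

Because the expansion happens before the model sees anything, adding a new workflow is a one-line dictionary entry, not a new feature.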

Use Case 1: The One-Click Briefing (Instant Context)

The most frequent way I interact with the AI isn’t even typing a question. It’s clicking the “Get Briefing” button.

I usually have multiple projects active at once—a client backend, an open-source library, and a writing project. In the old world, switching between these mental states took immense effort.

In OMS, I click “Get Briefing.”

[NOT TODO: Insert Screenshot: The ‘Get Briefing’ AI response showing a strategic summary]

The system grabs the current tasks and recent logs across all selected contexts, injects them into the AI, and generates a unified strategic summary:

Over the last weeks you’ve made big strides turning OMS + FJ into a true “Strategic AI Business Advisor.” On the OMS side you built out the AI chat with OpenAI Responses + tools, added rich slash commands (/standup, /overdue, /priority, /economics, etc.), and wired in project summaries/descriptions so the assistant has real strategic context. You also fixed key UX and reliability issues (PWA versioning, auto-refresh, dark mode, mobile layout, markdown bugs, API auth/timezone issues), making OMS feel more like a polished product than an internal tool. In parallel, you delivered the full three-layer integration vision: FJ now exposes a secure REST API, supports bank CSV import, reconciliation (including multi-invoice payments), a Banking web UI with search and proper formatting, and even a balance endpoint that feeds new bank-aware AI tools like burn rate and runway.

Your focus has been very clearly on “closing the loop” between work, billing, and cash. Recent work logs show a tight sequence: FJ API + auth, OMS-FJ AI tools, aggregate financial tools, then bank import, reconciliation, web banking, and finally bank-aware AI tooling in OMS. At the same time you’ve progressed client projects (…) and kept operational/techlab matters moving (infrastructure fixes, router, etc.). The pattern is: deep investment into your own strategic infrastructure while still shipping on external projects and talks (tb.1000X, tb.tigervibes-2 findings).

Priority items to review right now

  • List with links to activities
  • List with links to activities
  • List with links to activities

Next steps

In the very short term, clear the urgent/time-sensitive items: finalize XXX, then lock in the YYY slides so your upcoming event presence is stress-free. Next, move ZZZ “field-ready” by executing the test/deploy checklist and aligning with … In parallel, pick one infrastructure hardening task (e.g., S3 + DB-backup PR) to close the loop on reliability. Finally, schedule focused time to … finish an article.

It doesn’t just list tasks; it synthesizes them into a battle plan. It downloads the state of the business into my brain in 10 seconds. I am back in flow immediately.

Use Case 2: Intelligent Rubber Duck (Problem Solving)

Take the classic “Intelligent Rubber Duck” moment. I hit a nasty Alembic migration sync error. In the old world, I’d be stuck googling generic error messages or wading through Stack Overflow threads from 2018.

Now, I simply ask my AI, “How did I fix that migration sync error last time?”

[NOT TODO: Insert Screenshot: Chat interaction showing the AI recalling a specific fix from past logs]

The AI queries my Logs, finds the entry from four months ago, and tells me exactly what happened: I encountered this on the ecommerce-api project, identified it as a multiple head revision issue, and fixed it by running alembic merge heads. This is the difference between “Search,” which finds a file, and “Intelligence,” which finds an answer. The AI acts as an extension of my own memory.

The Virtuous Cycle: AI Writing for AI

You might be wondering: “Who has time to write such detailed logs?”

The answer is: I don’t. But my AI agents do.

I spend my life in the terminal, usually deep in Neovim and tmux. These days, I almost always have a side pane open with a coding agent like Claude Code running. It’s not because I want it to write the code for me at all times—often I don’t let it touch the files at all. It’s because I need to reason about what I’m doing.

The interaction is less like prompting a bot and more like live-streaming my work to a silent partner. I explain my intent, I paste in the ugly error logs, and we debate the logic. “If we move this state here, won’t it cause a re-render loop?” “Yes, but if we memoize it…”

I’m not writing documentation; I’m just working. I’m treating the agent as a super-powered Rubber Duck. But because this entire dialogue happens inside the agent’s context window, I am inadvertently creating a perfect, high-fidelity record of the decision-making process.

When the feature is done, or the bug is squashed, I don’t have to switch context to write a report. I simply type into that same side pane: “Log this.”

The agent looks back at our conversation—the logic we debated, the tests we ran, the files we changed—and synthesizes it into a structured command:

oms log "Refactored auth middleware to handle race condition" --body "Detailed explanation of the mutex lock implementation..."

I don’t have to break my flow to document. The system captures the intellectual exhaust of the work automatically. This creates a virtuous cycle: I use AI to generate the work, and the AI captures the context of that work, making the next AI session smarter. My history becomes rich and searchable with zero extra effort.

Use Case 3: Technical Decision Support (Strategic Choice)

It’s not just about fixing bugs; it’s about avoiding bad decisions based on faulty memory.

I was recently spinning up a new project and wondering if I should use the fastapi-users library again. My gut feeling said, “Yeah, I used it last year, it was fine.” But when I asked the AI to summarize my experience, it surfaced the cold, hard truth I had conveniently forgotten.

It reminded me that I spent 12 hours debugging custom user models and logged three separate entries regarding high frustration with the documentation, ultimately concluding it was “too rigid for custom auth.” Based on this reality check, I decided not to use it. The AI saved me 12 hours of future pain by simply remembering the past better than I did.

Use Case 4: Automated Standup

Standups are usually performance theater for managers. But for a solo founder, a Standup is a Sanity Check. It forces me to confront reality: What did I actually do yesterday? Am I lying to myself about my progress? But writing a report for myself felt silly, so I skipped it, and then I drifted.

I automated the ritual. I type /standup. The AI looks at my Logs from the last 24 hours and my Urgent Activities for today. It synthesizes a narrative, highlighting that yesterday I shipped the new landing page CSS but spent three hours fighting a CORS blocker, and today my focus is the urgent database migration. It mirrors my work back to me, keeping me honest without the drudgery.

[NOT TODO: Insert Screenshot: The /standup output showing a clean daily summary]

Control over Context

Let’s address the elephant in the room: I am sending my work journal to OpenAI.

For some, that’s a dealbreaker. For me, it’s a trade-off I accept, provided I have control.

The “Privacy” here isn’t about air-gapping; it’s about Agency. In a standard ChatGPT session, you often paste huge dumps of context blindly. In OMS, I use the Lenses I described in Part 1 to curate exactly what the AI sees. If I’m working on techlab.homepage, I enable that project lens. The AI sees that context and nothing else. I don’t accidentally leak my private journal entries or my other client’s confidential roadmap because the system respects the boundaries I set.

Furthermore, my laptop is powerful enough to run models like gpt-oss-120B locally, via LMStudio and Ollama. I plan to implement a “model switcher” soon and test various open-source models that are good with tool calls. This will provide a truly privacy-sensitive option, allowing me to process highly confidential context entirely offline when needed.
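
Because both LM Studio and Ollama expose OpenAI-compatible endpoints, the planned model switcher can mostly be a configuration concern: same client code, different base URL. A minimal sketch, assuming the common default ports and example model names (the model names are placeholders, not a recommendation):

```python
# Sketch of a provider switcher for OpenAI-compatible servers. LM Studio
# defaults to port 1234, Ollama to 11434; both ignore the API key but
# OpenAI-style clients still require one to be set.
PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1", "model": "gpt-4.1"},
    "lmstudio": {"base_url": "http://localhost:1234/v1",  "model": "gpt-oss-120b"},
    "ollama":   {"base_url": "http://localhost:11434/v1", "model": "llama3.1"},
}

def client_config(provider, api_key=None):
    """Resolve the kwargs for an OpenAI-compatible client."""
    cfg = PROVIDERS[provider]
    return {
        "base_url": cfg["base_url"],
        "api_key": api_key or "not-needed-locally",
        "model": cfg["model"],
    }

cfg = client_config("lmstudio")
# e.g. OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
```

The one real caveat: not every open-source model handles tool calls reliably, which is exactly why the switcher needs to be testable per model.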

Conclusion: From Chatbot to Co-Founder

This integration changed the dynamic completely. By connecting the AI to the full state of my work—both the historical Logs and the active Activities—I gave it visibility into the entire trajectory. It sees not just where I’ve been, but where I’m trying to go.

This transforms the system from a passive archive into an active partner. I stopped being alone in the void and gained a collaborator that sees my entire journey—past, present, and future—and never sleeps, never forgets, and knows exactly what I was doing yesterday.


In Part 3, I will connect the final dots. I have Work (Part 1) and Intelligence (Part 2). Now I need Reality. I will show how I connected the system to my Bank Account and Invoices to build my “Strategic CFO.”

