Building Moats in the Age of AI
Beyond Big Models: The Three Pillars of Sustainable AI Advantage
Everyone's worried that OpenAI or Anthropic will crush their AI startup with their next release. Each new model launch triggers the same social media hysteria: "This changes everything!"
But does it really?
The most successful AI products today are still straightforward chatbots. The developer ecosystem, with its integrated development environments and agents, offers a glimpse of what's coming for everyone else – and reveals something crucial about building lasting advantages in AI.
The giants of AI are building increasingly powerful thinking engines. These models excel at processing information and generating responses – ask them about cooking, the universe, or complex concepts, and they'll impress you. The common belief is that as these models get cheaper and more powerful, they'll eventually do everything.
But raw thinking power is just one piece of the puzzle.
Enter agents – the bridge between our human-only past and an AI-driven future. These agents need three critical capabilities: access to relevant information, the right tools for the job, and understanding of human context and intent. They are how we "package" our tasks and intent for that thinking power. Currently, we're using PhD-level intelligence as a universal hammer for everything. It's like using a supercomputer to run a calculator – possible, but absurdly inefficient.
The future of AI isn't about having the biggest model. It's about specialization, context, and understanding how humans actually work. Here's why.
The Three Pillars of AI Moats
1. What to Think About: The Power of Context
While AI models are great at thinking, they need something to think about. In the real world, we always operate within specific contexts – analysing company reports, reviewing legal documents, managing customer relationships, making strategic plans, conducting market research. There is always a context.
You've probably experienced this limitation firsthand: asking a question in a chat interface, realising it can't answer, then copying and pasting relevant information so it can reason about real things. This process is remarkably inefficient – like working with paper spreadsheets instead of Excel. Just as Excel revolutionised both calculation and data storage, we need better systems that incorporate AI.
Retrieval Augmented Generation (RAG) represents the first step in this direction, retrieving relevant information from databases before generating answers. But current solutions are primitive, treating text as random chunks to be split and retrieved. More sophisticated approaches are emerging that create information graphs, enabling precise document retrieval through understanding how different pieces of information connect.
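The retrieve-then-generate pattern can be sketched in a few lines. In this toy version, word-overlap (Jaccard) scoring stands in for a real embedding model, and the sample documents are invented for illustration:

```python
# Minimal sketch of the RAG pattern: score documents against the query,
# keep the best matches, and prepend them to the prompt before generating.
# Word-overlap (Jaccard) scoring stands in for a real embedding model.

def score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank all documents by relevance to the query; keep the top k.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "Acme Corp sells industrial robots to car factories.",
    "Our refund policy allows returns within 30 days.",
    "Acme Corp revenue grew 40 percent year over year.",
]
context = retrieve("How is Acme Corp performing financially", docs)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The "graph" approaches the paragraph above mentions replace the flat `score` function with traversal over typed links between documents, but the interface – query in, relevant context out – stays the same.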
Building such systems requires deep domain understanding. It's like creating a detailed map of information and its interconnections. Companies are already building moats this way – Exa.ai is mapping the internet for AI consumption, while we at Starwatcher.io are building knowledge graphs of companies and investors. The key isn't just collecting data; it's making it meaningfully accessible for AI systems.
And no, long context windows don't solve this on their own. For the engine to operate efficiently, it needs a map of the content – not just more room to hold it.
2. How to Think: Capturing Decision Intelligence
The most overlooked aspect of AI development is understanding and systematizing how humans make decisions. We take our decision-making for granted – it's intuitive, based on years of experience and countless subtle factors.
Every company has its unique DNA – whether they're serving food, investing in startups, or teaching math to grown-ups. It's reflected in their processes, priorities, and perspectives. Anyone who's managed a team knows you can't just expect people to "do the right thing" without guidance. The same applies to AI – without an explicit agenda, it can't align with your values, strategy, or vision.
Our agenda and accumulated experience seems obvious to us, but machines have no access to this implicit knowledge. They can't pick up on informal cues or unwritten rules. We have to make everything explicit, either by showing examples or allowing AI to monitor specific activities.
Think about all the factors that influence your decisions: Are you an environmentalist or libertarian? Detail-oriented or big-picture focused? Soft-spoken or direct? What patterns emerge from your past decisions in particular contexts? These characteristics shape how decisions are made, yet they're rarely documented explicitly. AI agents will serve as observers, documenting workflows and building comprehensive models of how different people approach similar problems – capturing not just what decisions are made, but why.
An interesting project called Boardy takes this literally: it asks you questions to understand your agenda and background.
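One way to make an agenda machine-readable is to write it down as an explicit profile and serialise it into every prompt. A rough sketch – the `Agenda` fields and the example rules below are hypothetical, not any real product's schema:

```python
# Sketch of making implicit decision-making explicit: an "agenda" profile
# that is serialized into every agent prompt, so the model sees the rules
# a new employee would otherwise have to infer from informal cues.
from dataclasses import dataclass, field

@dataclass
class Agenda:
    values: list[str] = field(default_factory=list)      # what you optimise for
    style: str = "direct"                                # soft-spoken vs direct
    heuristics: list[str] = field(default_factory=list)  # unwritten rules, written down

    def to_prompt(self) -> str:
        # Flatten the profile into plain text an LLM can condition on.
        lines = ["Follow this decision profile:"]
        lines += [f"- Value: {v}" for v in self.values]
        lines.append(f"- Communication style: {self.style}")
        lines += [f"- Rule: {h}" for h in self.heuristics]
        return "\n".join(lines)

agenda = Agenda(
    values=["long-term customer trust over short-term revenue"],
    heuristics=["never discount more than 10% without approval"],
)
system_prompt = agenda.to_prompt()
```

The point isn't the data structure – it's that every informal cue captured this way is one less thing the AI has to guess.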
3. Division of Knowledge Work: The Rise of Specialized AI
Just as Adam Smith described the efficiency gains from the division of labor in manufacturing, we're seeing a similar revolution in knowledge work. Complex tasks can be broken down into smaller, highly specialised components handled by purpose-built AI models.
These specialised models offer compelling advantages:
Lower computing costs
Faster response times
More consistent results
Better accuracy within their domain
Consider a model trained specifically to identify business models from company descriptions. It doesn't need GPT-4's broad knowledge – it just needs to excel at one specific task.
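A sketch of what that narrow component's interface might look like. Keyword rules stand in here for a small fine-tuned model, and the categories and keywords are illustrative:

```python
# Sketch of a specialised component: classify a company's business model
# from its description. A production version would be a small fine-tuned
# model; keyword matching stands in to show the shape of the interface.

BUSINESS_MODELS = {
    "saas": ["subscription", "per seat", "monthly plan"],
    "marketplace": ["buyers and sellers", "commission", "take rate"],
    "advertising": ["free for users", "advertisers", "ad-supported"],
}

def classify_business_model(description: str) -> str:
    text = description.lower()
    # Count keyword hits per category and pick the best-scoring one.
    scores = {
        model: sum(kw in text for kw in keywords)
        for model, keywords in BUSINESS_MODELS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_business_model(
    "We charge a monthly subscription per seat for our analytics tool."
))  # → saas
```

A component like this is cheap, fast, and auditable – exactly the trade the bullet list above describes.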
"I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times."
- Bruce Lee
Big models can serve as teachers to smaller models: a PhD-level model trains a small model on one specific, well-defined task. This is called model distillation, and many suspect DeepSeek did this with OpenAI's models.
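Conceptually, distillation looks like this: the teacher supervises, the student fits. In this toy sketch the "teacher" is a stand-in function (in practice, an expensive large-model API call) and the "student" is a word-label lookup rather than a real fine-tuned model:

```python
# Toy sketch of distillation: a large "teacher" labels raw examples,
# and a small "student" is fit on those labels.

def teacher(text: str) -> str:
    # Stand-in for an expensive large-model call.
    return "positive" if "great" in text or "love" in text else "negative"

def train_student(examples: list[str]) -> dict[str, str]:
    # The "student" is deliberately tiny: a lookup of teacher labels keyed
    # by words - a crude stand-in for fine-tuning a small model.
    word_labels: dict[str, str] = {}
    for text in examples:
        label = teacher(text)  # distillation step: the teacher supervises
        for word in text.lower().split():
            word_labels[word] = label
    return word_labels

def student_predict(word_labels: dict[str, str], text: str) -> str:
    # Majority vote over known words; default to "negative" if none match.
    votes = [word_labels[w] for w in text.lower().split() if w in word_labels]
    return max(set(votes), key=votes.count) if votes else "negative"

corpus = ["great product, love it", "terrible support", "love the new update"]
student = train_student(corpus)
```

Once trained, the student answers without ever calling the teacher again – that's where the cost and latency savings come from.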
Building Your Moat
Big models are becoming commoditised quickly. The real moat isn't in the model itself – it's in how you apply it and shape it for a specific problem.
Building sustainable competitive advantage in the AI era requires answering three critical questions:
What specific domain do you understand deeply?
What decision processes in your industry could be systematised?
What specialised tasks could be optimised with focused AI models?
Essentially, it comes down to building integrated, unified user experiences – like Excel, where numbers live in one place and we don't think of the tool and the storage as separate things. The model should fade into the background – users shouldn't have to think about whether they're using GPT-4, Claude, or a specialised model. They should just get their work done.
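That "invisible model" idea can be sketched as a simple router: callers describe the task, and the system picks the model. The model names and routing rules below are made up for illustration:

```python
# Sketch of model routing behind a single interface: the caller asks for
# work to be done and never names a model. Routes are illustrative
# assumptions, not a real product's configuration.

ROUTES = {
    "classify": "small-specialised-model",
    "summarise": "mid-size-general-model",
    "reason": "frontier-model",
}

def run_task(task_type: str, payload: str) -> str:
    # Unknown task types fall back to the most capable (and costly) model.
    model = ROUTES.get(task_type, "frontier-model")
    # In a real system this would dispatch to the chosen model's API.
    return f"[{model}] handled: {payload}"

result = run_task("classify", "Acme Corp sells robots")
```

Swapping a model in `ROUTES` changes nothing for the user – which is the whole point.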
The next wave of successful AI companies won't win by having the biggest models. I think this fight is becoming irrelevant for most companies. They'll win by building systems where AI is invisible yet indispensable – where users focus on their work, not the technology behind it. As Alex Albert from Anthropic points out, this is like building new factories from the ground up rather than adding robots to old assembly lines.
This aligns perfectly with Stanford's insight that "Large Language Models Get the Hype, but Compound Systems Are the Future of AI." The winners won't be the companies with the most powerful AI – they'll be the ones who build the most thoughtfully integrated systems that just work.
In short: forget the models. Solve problems with the right tools.
p.s. At Starwatcher.io we are mapping companies, capturing context and building agents on top. We are raising. Let us know if you are interested.
Ernest