The Real AI Bottleneck Isn't the Model. It's How We Enter Data.
Mitch Metz
I recorded a quick video walking through these ideas. If you prefer watching over reading, here's the breakdown.
I'm working on a CRM overhaul for a client right now. You can probably relate to the SMB CRM journey... years of customer data, scattered notes, half-filled fields, duplicate contacts...
And it hit me: this whole problem is built on a paradigm that AI is about to disrupt.
We've been hand-drawing pixels when AI can generate the whole image.
What I mean by that
Think about how most CRM data gets created. Someone has a sales call, then manually adds info, moves the prospect along to the right stage, adds the tags... They try to remember what the prospect said about budget. They copy-paste an email address. They forget to log the follow-up.
This made sense when humans were the only ones reading the data. You enter what a human needs to retrieve later.
But AI can work with vastly more information, and different kinds of information. Voice recordings. Full conversation transcripts. Unstructured notes. The AI doesn't need you to distill a 30-minute call into bullet points and tags. It can read the whole thing and pull out what matters.
The constraint isn't AI capability anymore. It's that our tools and processes haven't caught up.
The 80% sitting unused
Here's a stat that puts this in perspective: unstructured data makes up 80-90% of enterprise information, and less than 1% of it is being used for AI.
That's the semantic data I'm talking about. Voice notes, SOPs, random ideas, sales conversations. All the stuff that's too messy for traditional databases but contains most of the actual insight.
The companies that figure out how to organize and contextualize this data are going to have a serious advantage. Not because they have better AI, but because they're feeding their AI better information.
Context engineering, not prompt engineering
Gartner has started using the term "context engineering" for this. The idea is that the breakthrough isn't writing better prompts. It's designing systems that give AI the right data and context so it understands what you need without manual intervention.
By 2027, Gartner predicts organizations will use small, task-specific AI models three times more than general-purpose LLMs. Why? Because contextualized solutions work better than generic ones.
What AI-first data looks like
Back to that CRM overhaul. The old approach was structured fields: name, email, budget range, project type. Manual entry. Human readable.
An AI-first approach looks different:
Memory systems. The AI reads past conversations and maintains context across interactions. Instead of pulling up all 50 previous messages, it pulls up synthesized notes about this customer. Their stated budget. What they actually seem willing to spend. Concerns they mentioned three calls ago.
Self-updating context. The AI writes to the database, not just reads from it. It's constantly updating its own information about each client based on new interactions.
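To make those two pieces concrete, here's a minimal sketch of a memory store the AI both reads from and writes back to. Everything in it is hypothetical: `summarize_interaction` stands in for whatever model call you'd actually use to distill a transcript, and the schema is just one way to hold per-customer notes.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical stand-in for a model call that distills a raw transcript
# plus the existing notes into an updated synthesis (stated budget,
# concerns, follow-ups, and so on).
def summarize_interaction(transcript: str, prior_notes: str) -> str:
    # In practice: call your LLM of choice with both inputs and ask for
    # a rewritten set of notes. This placeholder just appends a stub line.
    return prior_notes + f"\n- {transcript[:80]}..."

class CustomerMemory:
    """Per-customer notes that the AI both reads and writes."""

    def __init__(self, path: str = "crm_memory.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory ("
            " customer_id TEXT PRIMARY KEY,"
            " notes TEXT NOT NULL,"
            " updated_at TEXT NOT NULL)"
        )

    def recall(self, customer_id: str) -> str:
        row = self.db.execute(
            "SELECT notes FROM memory WHERE customer_id = ?", (customer_id,)
        ).fetchone()
        return row[0] if row else ""

    def log_interaction(self, customer_id: str, transcript: str) -> None:
        # Self-updating context: each new interaction rewrites the notes,
        # so the next read starts from the synthesis, not 50 raw messages.
        updated = summarize_interaction(transcript, self.recall(customer_id))
        self.db.execute(
            "INSERT INTO memory (customer_id, notes, updated_at) VALUES (?, ?, ?) "
            "ON CONFLICT(customer_id) DO UPDATE SET notes = excluded.notes, "
            "updated_at = excluded.updated_at",
            (customer_id, updated, datetime.now(timezone.utc).isoformat()),
        )
        self.db.commit()
```

The point is the shape of the loop: raw interaction comes in, the synthesized notes get rewritten, and every future read starts from the synthesis instead of the raw history.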
Semantic search. Instead of rigid categories, the AI can search for meaning. "Show me everyone who mentioned kitchen renovations and seemed hesitant about budget" becomes a real query.
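And a rough sketch of the semantic-search piece, assuming you have an embedding model available. The `embed` function below is a placeholder, not a real model; swap in whichever one you use and the ranking logic stays the same.

```python
import numpy as np

# Placeholder: in practice this calls an embedding model. The only
# assumption is that it returns a fixed-length vector per text.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def semantic_search(query: str, notes: dict[str, str], top_k: int = 5):
    """Rank customer notes by meaning, not by keyword match."""
    q = embed(query)
    q /= np.linalg.norm(q)
    scored = []
    for customer_id, text in notes.items():
        v = embed(text)
        score = float(np.dot(q, v / np.linalg.norm(v)))  # cosine similarity
        scored.append((score, customer_id))
    return sorted(scored, reverse=True)[:top_k]

# "Show me everyone who mentioned kitchen renovations and seemed hesitant
# about budget" becomes something like:
# semantic_search("kitchen renovation, hesitant about budget", all_customer_notes)
```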
Enterprises using these AI memory systems report 3x higher adoption rates and 2.5x better task completion accuracy compared to traditional AI implementations.
Delegation is scary
Pre-AI, delegation often felt like a tradeoff. You could scale, or you could maintain quality. A salesperson managing 10 accounts could give each one real attention. A salesperson managing 100 accounts had to triage.
Now, with the right data infrastructure, you can delegate in a way that actually serves the client. The AI remembers what a great employee would remember. It catches what a great employee would catch. It doesn't forget that someone mentioned their anniversary is coming up, or that they had concerns about timeline.
McKinsey describes retrieval-augmented generation (RAG) as harnessing the strengths of both humans and machines. That's the right framing. It's not replacement. It's amplification.
Trust and security concerns don't make AI optional
Construction, the industry I work in, is relationship-heavy. People hire builders they trust. There are real concerns about AI-assisted memory, data security, and what happens when systems get it wrong.
Those concerns are valid. They need solutions. But they don't make AI optional.
Every business will need to adapt. 40% of enterprises are expected to double their investment in semantic infrastructure by 2026. The question isn't whether to adopt these systems. It's whether we build them in ways that actually serve people better, with the right error control, interpretability, and oversight baked in.
What this means practically
The constraint isn't whether AI can do the work. It's whether you have systems that give AI what it needs to do the work well.
That means:
- Moving away from manual data entry toward automatic capture (a rough sketch follows this list)
- Building infrastructure for unstructured data, not just structured fields
- Thinking about AI memory as a core system, not an add-on feature
- Designing for semantic search, not just keyword lookup
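To make the first of those concrete, here's a rough sketch of automatic capture: the raw transcript comes in, a model-backed extractor derives the structured fields, and both get kept. The extractor here is hypothetical and returns placeholders; the shape of the pipeline is what matters, not these names.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical extractor: in practice a model call that reads the raw
# transcript and returns the fields a salesperson used to type by hand.
def extract_fields(transcript: str) -> dict:
    return {
        "stated_budget": None,
        "stage": "discovery",
        "concerns": [],
        "follow_ups": [],
    }

@dataclass
class CapturedInteraction:
    customer_id: str
    transcript: str   # keep the raw, unstructured source
    extracted: dict   # structured view, generated rather than typed

def capture_call(customer_id: str, transcript: str) -> CapturedInteraction:
    """Automatic capture: no one fills in fields by hand; the system
    derives them from the conversation and keeps the transcript too."""
    return CapturedInteraction(
        customer_id=customer_id,
        transcript=transcript,
        extracted=extract_fields(transcript),
    )

if __name__ == "__main__":
    record = capture_call("cust-042", "...thirty-minute call transcript...")
    print(json.dumps(asdict(record), indent=2))
```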
The model is already there. The tools and the marketplace haven't caught up yet.
So we can be the companies that do it first, and end up helping far more people because of it.
