Only 0.6%.

That's the percentage of a client's support cases with enough resolution detail to actually learn from. Out of 12,500 cases in the ticketing system, 69 contained information you could use to train the next person.

The system knew what happened. It had no idea how it was diagnosed.

If you manage a revenue team, the same ratio probably applies to your CRM. I sat with a CRO last week and did a live deal inspection on one of his best rep's biggest opportunities. We pulled everything from Salesforce into an AI prompt. The AI came back with: critical event unknown, decision process not visible, champion not tagged, close date slipped four times. The CRO's reaction: "We don't have any of these fields – they're buried under billing instructions."

His best rep knew the champion's name, the internal politics, the real timeline. None of it was in the system. The intelligence lived in the rep's head and was scattered across email threads.

That's the same 0.6% problem in a different system. And until recently, there was no scalable way to get at it.

But with AI, that has changed.

Where the real knowledge lives

The client is a B2B technology company with a 50+ person support operation. Their ticketing system is ServiceNow – a serious platform with built-in AI features designed to auto-generate knowledge articles, suggest resolutions to agents during live cases, and power virtual agents for customer self-service.

All of those features need one thing to work: detailed resolution narratives in the ticket record.

The data didn't exist. Not because the team wasn't solving problems β€” they were solving them constantly. But the resolution field would say "rebooted" or "back online" or "self-healed within SLA." That's enough to close the ticket. It's not enough to teach the next agent what to do when the same issue comes back.

So where was the knowledge?

In a Microsoft Teams group chat. Two years of it.

When a support agent hit something they couldn't figure out, they posted in the team chat. A senior engineer – usually the same person – would respond with diagnostic steps, workarounds, or escalation paths. The problem got solved. The chat scrolled away. No ticket updated. No article written. No record that anyone could search later.

We analyzed one month of that chat: 927 messages. Of those, 81 (8.7%) contained real troubleshooting exchanges – the kind of problem-and-resolution content that belongs in a knowledge base. The other 846 were operational coordination: shift handoffs, ticket assignments, status updates, team banter.

From those 81 messages, we extracted 7 distinct topic clusters and drafted 8 structured knowledge base articles – formatted for direct import into ServiceNow.

One month. One data source. Eight articles that didn't exist before.

If that month is representative, the full two-year chat archive likely contains material for 100–200 structured KB articles. A complete, searchable knowledge base that currently exists only as scroll-back.

The sources hiding in plain sight

This client's support chat is just one example. Every customer-facing team generates unstructured data that carries intelligence your structured systems miss. The sources differ by function, but the pattern is the same: the system of record captures the transaction, and the reasoning that produced the outcome lives somewhere else.

For support teams: Ticketing systems capture volume, categories, and assignment. The diagnostic reasoning lives in internal chats, email threads between agents and engineers, and the unstructured notes agents type into free-text fields that nobody mines.

For sales teams: CRMs capture deal stages, close dates, and revenue. The actual buying signals – objections, competitor mentions, champion dynamics, pricing sensitivity, why a deal stalled – live in call recordings, email threads with prospects, and the free-text fields buried under billing instructions that nobody scrolls down to read. Even when qualification fields exist, reps don't fill them out because the CRM layout prioritizes operational data over deal intelligence. The information that predicts deal outcomes is almost never in the fields that reports pull from.

For customer success: Health scores and renewal dates sit in your CS platform. The context that explains them – adoption blockers, stakeholder changes, expansion signals, the reason someone is actually at risk – lives in QBR recordings, onboarding session transcripts, and the Slack messages between your CSM and the customer.

For marketing: Campaign metrics tell you what performed. The voice of customer – the actual language buyers use to describe their pain, the way they frame value to their leadership, the words that make them stop scrolling – lives in sales call transcripts, customer interviews, and support conversations.

Industry estimates put unstructured data at 80–90% of everything an enterprise generates. IBM's CEO recently said 99% of enterprise data is currently unavailable to AI, with the majority of it unstructured. Microsoft's Satya Nadella put it more directly: "The world is just too messy for SQL."

That's the core tension. Structured databases require that you know what to ask before you ask it. You need pre-defined schemas, clean categories, standardized fields. Most business knowledge doesn't work that way. Contracts have custom clauses. Emails carry intent. Customer calls carry emotion. A resolution that says "rebooted" means something completely different depending on what got rebooted and why.

Why the gap isn't a people problem

The 0.6% resolution detail rate at this client wasn't a performance failure. It was a structural reality.

Their VP of operations put it plainly: agents write "self-healed" or "rebooted" because the ticket needs to close, not because they're lazy. But did you reboot the application? The computer? The network switch? What diagnostic steps came before the reboot? That level of detail takes time to document, and there's no incentive in the moment.

In 26% of human-handled support cases, agents actually wrote a richer narrative in the Status field than in the Resolution field – the right information in the wrong place. ServiceNow's KB generator looks at the Resolution field. The useful content was sitting one field over. A process gap, not a knowledge gap. Addressable.

The same dynamic shows up everywhere:

A sales manager has 8 reps making 25 calls a week each. That's 200 conversations. Reviewing them all at 2x speed would take 75 hours. The manager has maybe 4 hours a week for coaching. So the intelligence in those calls – the objection patterns, the deal risks, the coaching opportunities – goes unreviewed. The CRM gets a stage update. The recording sits in a folder.

A CS manager runs a QBR and the customer signals expansion interest, mentions a new VP who's skeptical, and says the implementation took too long. That intelligence might make it into a health score update – but the specifics that would let someone act on it? They're in the recording and the CSM's memory.

The problem compounds with concentration. At this client, 54% of operational knowledge was concentrated in 5 people. The person every agent turned to for complex issues was the senior vice president of operations – responsible for the entire support organization. He appeared in 16.5% of all chat activity. In the formal ticket system? 36 rows across the entire year. Fourteen assigned cases.

A VP's troubleshooting expertise – years of institutional knowledge about edge cases, device quirks, customer-specific configurations – lived almost entirely outside the record. In a group chat. If he left, the knowledge left with him.

That's a risk you can measure. And it has direct costs.

If that VP spends even 5 hours a week answering questions in chat that a knowledge article could handle, that's an estimated $19,000–$26,000 per year in VP-level compensation absorbed by front-line troubleshooting. Not strategy. Not team development. Not customer relationships. Chat messages about rebooting cameras.
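For a rough sense of where that range comes from, here is the arithmetic as a sketch. The $150K–$210K fully loaded compensation range and the 2,080-hour work year are my assumptions for illustration, not the client's figures:

```python
# Rough sketch of the VP-time cost estimate. The fully loaded
# compensation range and 2,080-hour work year are assumptions
# for illustration, not figures from the client.
HOURS_PER_YEAR = 2_080
CHAT_HOURS_PER_WEEK = 5
WEEKS_PER_YEAR = 52

def chat_time_cost(annual_comp: float) -> float:
    """Annual cost of the VP's time spent troubleshooting in chat."""
    hourly = annual_comp / HOURS_PER_YEAR
    return hourly * CHAT_HOURS_PER_WEEK * WEEKS_PER_YEAR

low = chat_time_cost(150_000)   # $18,750
high = chat_time_cost(210_000)  # $26,250
print(f"${low:,.0f} to ${high:,.0f} per year")
```

Five hours a week is 260 hours a year, exactly one eighth of a 2,080-hour work year, so the annual cost is simply one eighth of fully loaded compensation.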

And that's just his time. It doesn't account for the other four top contributors carrying 37% of the remaining load, the delay cost when agents sit idle waiting for an expert to respond, or the customer experience impact when the VP is in a meeting and the chat goes unanswered for two hours.

That's the employee experience cost: agents who can't resolve issues independently because the knowledge lives in someone else's head. And the customer experience cost: slower resolutions, inconsistent answers, and the risk that a known fix doesn't get applied because the one person who knows it isn't available.

What AI changes (and what it doesn't)

Before LLMs, you could search a chat log for keywords. Find every message that mentioned "camera" or "firmware." That's useful but limited – it tells you which messages contain a word, not what they mean in context.

Now you can ask: What were the recurring diagnostic patterns for camera failures across all of January, and which resolution paths actually worked?

That question was unanswerable 18 months ago. We answered it from one month of chat data.

The AI didn't just find keywords. It identified that two senior engineers recommended contradictory diagnostic paths for the same camera symptom – one pointed to firmware, the other to a USB cable. Both paths were documented in the same draft KB article with a flag for the SVP to validate before publishing.

It flagged a batch defect pattern – the same camera color-path failure recurring across multiple customers and device types – that individual tickets had treated as isolated hardware problems.

It surfaced an unstandardized process: one senior agent had told the team in chat to reproduce and document workflows before opening bug escalations, but that instruction never became formal policy. It lived as a single chat message.

And it identified the 8-case "spinning wheel" pattern – the same connection failure appearing across different rooms at one customer, all on the same day, all assigned to the same agent. That single-day cluster was the strongest pattern in the dataset, and exactly the kind of recurring issue that a KB article would prevent from consuming that much capacity again.

All from one month of a group chat.

What AI doesn't change: It can extract and structure knowledge at speed. It cannot validate whether the resolution is actually correct. The VP made this point directly: "Work with the senior-level folks first. Establish what the closure template should look like. Then we can train the rest of the team."

That's the right instinct. AI handles the extraction and pattern recognition. Humans validate the content and own the quality standard. The expert doesn't stop being valuable – they stop answering the same question for the fifteenth time. Their judgment gets focused on the 20% of cases that genuinely require expertise instead of the 80% that a structured article could handle.

The bottom-line math

The business case here isn't theoretical.

VP and expert time recovered. The VP at $19K–$26K per year is one person. Add the other four top contributors carrying 37% of the load, and you're looking at $75K–$100K+ in senior capacity currently absorbed by repeatable questions. That's not a technology line item. That's people doing work that a searchable knowledge base could handle – freeing them for the strategic work they were hired to do.

Faster resolution times. When agents can search a KB instead of waiting for someone to respond in chat, mean time to resolution drops. For a support operation handling 700+ human cases per year, even a 15–20% improvement compounds into real customer experience gains. Fewer escalations. Fewer "sorry for the delay" messages. More first-contact resolutions.

Reduced knowledge continuity risk. 54% of knowledge concentrated in 5 people is a succession planning problem hiding in an operational process. A structured KB doesn't replace those people – it captures what they know so the organization isn't one resignation away from a capability gap.

Platform ROI activation. This client already paid for ServiceNow's AI capabilities – Now Assist, Virtual Agent, 1-click KB generation. Those features sat idle without structured content to work from. The KB extraction doesn't compete with the platform investment. It's the prerequisite that activates it. Every dollar already spent on the platform was underperforming because the content layer was missing.

These economics apply to every function. A sales team that systematically mines call transcripts can identify coaching opportunities without managers listening to every recording – and the patterns AI finds across 50 calls are patterns no human could spot by sampling three. A CS team that extracts expansion signals from QBR transcripts can prioritize accounts based on what customers actually said, not a health score proxy. A marketing team that analyzes real customer conversations can stop guessing which language resonates and start using the words buyers already use.

And the gains compound. I've modeled this with clients: improving seven conversion stages in a sales process by just 10% each – slightly better prospecting, slightly better qualification, slightly better proposals – can nearly double total revenue. From $1.7M to $3.3M in one model. The coaching insights that produce those improvements come from exactly the kind of unstructured data most teams aren't mining. The intelligence is there. It's just not being captured in a form anyone can act on systematically.
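The compounding claim is straightforward funnel math: seven stages each improved by 10% multiply out to roughly a 1.95x lift. A quick sketch using the $1.7M baseline from the model:

```python
# Seven funnel stages, each converting 10% better, compound
# multiplicatively. The $1.7M baseline is the model figure
# cited above; everything else is arithmetic.
baseline_revenue = 1_700_000
stages = 7
uplift = 1.10  # +10% conversion at each stage

multiplier = uplift ** stages             # 1.1^7, about 1.95
improved = baseline_revenue * multiplier  # about $3.31M

print(f"{multiplier:.2f}x -> ${improved:,.0f}")
```

No single stage improves dramatically; the near-doubling comes entirely from the multiplication across stages.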

Next week

I showed you the problem through one company's support operation. The 0.6% resolution detail rate. The knowledge trapped in scroll-back. The expert whose institutional expertise barely registered in the formal system.

Next week, I'll show you how the same pattern plays out in sales, marketing, and customer success – with specific results from engagements where mining call transcripts and deal conversations produced measurable revenue impact. A mid-market sales team that nearly doubled their conversion rate by analyzing 47 calls. A $175M company where 83% of deal outcomes were predicted by a single factor that existed nowhere in their CRM. And a practical 90-day framework for getting started with whatever data you already have.

The unstructured data thesis isn't coming. It's already producing results for companies that start where they are.

If you're wondering whether your team has the same 0.6% problem – most do, just in different systems – I run a 90-minute diagnostic that maps where your trapped knowledge lives and what it would take to unlock it. Reply if that's worth a conversation.
