This morning I had five "colleagues" working at the same time.

In one window, ChatGPT Codex was chewing through a batch of sales calls – scoring qualification strength, flagging deal risks, mapping MEDDPICC gaps. In another, Claude Co-work was running a lead-gen workflow: scanning for triggers, pulling relevant proof points from my case study library, drafting personalized outreach.

On the side, a browser agent – the kind that can open tabs, click through websites, and interact with web apps on your behalf – was inside my Make.com dashboard, diagnosing why an automation scenario broke and proposing the fix. Meanwhile, a deep-research run was compiling best practices for a new plugin feature Claude just released. In PowerPoint, another agent was updating slides for a presentation I'm giving next week.

I wasn't writing. I wasn't researching. I wasn't building.

I was setting goals, providing direction, approving outputs, and QA'ing.

If you're a sales leader managing a team of 10, 20, 50 people – this is what your reps' workday will look like soon. And your job will be teaching them to manage it well.

I've been building with these tools daily for over two years – early enough that Google recognized me as an early adopter of their AI products, and Anthropic invited me into a small group of consulting power users. I'm in the top 5% of ChatGPT usage globally. I mention this because it means I've had more reps than most, and the patterns are getting clear.

I'm sharing this for two reasons. First, because I'm living a version of the near future – and I want to show you what your workday is going to look like sooner than you think. Agents aren't a roadmap item anymore. They're shipping in every major platform. The pressure on leaders to figure this out – and on individual contributors to stay relevant – is real and accelerating.

Second, because the question I hear most from sales leaders is understandable – but incomplete.

"What tool should I buy? What's the best prompt? What's the right workflow?"

Those are valid questions. But they skip the layer that actually determines results. Getting the most out of AI is a management problem – and the tools and prompts are secondary.

Congratulations: You Just Hired a Team of Harvard Interns

The analogy I use in every training I run:

You just hired an intern from Harvard. Brilliant. High IQ. Trained on a massive slice of public knowledge. Knows a staggering amount about a staggering number of topics.

The problem? They don't know a thing about your business. They don't know your customers, your positioning, your sales process, your voice, or how you like to work. They've never seen your CRM. They don't know which competitors you worry about or which claims you can't make.

They have world knowledge. They have zero your-world knowledge.

The managers who invest the most in onboarding that intern – clear expectations, structured context, tight feedback, gradually increasing responsibility – are the ones who get extraordinary output. The managers who hand them a vague assignment and say "make it good" get back something that looks polished and is confidently wrong. Polished and wrong is the default failure mode.

That's exactly how AI works. The management skills transfer almost 1:1. Clear expectations, structured onboarding, tight feedback loops, checklists, escalation rules, gradually increasing autonomy – these are the things that determine whether your AI outputs are useful or embarrassing.

One more thing that makes this durable, not just a 2026 take: every new model release – GPT-5, Claude 4, Gemini Ultra, whatever comes next – raises the intern's IQ. They get smarter. They're more likely to get things right the first time with fewer explanations. That's real, and it will make parts of this easier over time.

But a higher IQ will never remove the requirement for your intern to understand your specific business, your specific customers, your positioning, your voice, and how you like to work. That context doesn't come with the model. It comes from you. The onboarding, the briefing, the feedback loops – that's permanent. The intern gets smarter, but the management stays essential.

As a former OpenAI and Meta engineer wrote recently: "It currently feels like managing a team of barely-competent interns. Soon, it will be akin to leading a group of high achievers, each of whom is more capable, faster, and smarter than you." The IQ goes up. The management requirement stays.

Managing AI agents well requires the same skills as managing people well. That's the thesis behind everything I'm about to share. The technology is new. The management discipline is decades old. And the gap between "AI disappointment" and "AI that actually works" is almost entirely a management gap.

The Weird Part

These agents don't need constant keystrokes. They work autonomously for 10, 20, sometimes 30+ minutes at a stretch. They check in when they need a decision – a permission, a login, a judgment call on direction. Then they go back to work.

Which means my day has become a series of approval queues and check-ins. Brief a task. Context-switch. Get pinged. Review. Approve or correct. Move to the next one.

It's productive. It's also cognitively strange.

I have ADHD. Parallel work is both my superpower and my kryptonite. I'm comfortable juggling threads – maybe more comfortable than most people. But I also forget agents are running. I'll scroll past a tab and think, "Oh right, that's been waiting on me for 20 minutes." The output is sitting there, ready for review, and I didn't even notice.

That's not an ADHD problem. That's a human problem. Agents just made it everyone's problem. I've had 30+ years of practice noticing when my attention drifts. Most people haven't. And agents don't ping you when you forget about them.

That's the part nobody talks about when they hype AI productivity. The bottleneck isn't effort anymore. It's attention.

The Automation That Broke (And What It Taught Me)

Here's a story that keeps me honest.

I built a task-management system – an agent that scans my meeting transcripts in Google Drive, extracts action items, assigns them to me or my team, writes them into a shared Google Sheet, and generates a standup dashboard. It runs three times a day. It even suggests who I should delegate tasks to, with reasons.

When it works, follow-through becomes almost automatic. Commitments from client calls don't get lost. My team can filter by their name and see exactly what's on their plate.

When it broke – and it did break – it didn't fail in some dramatic, sci-fi way. A timestamp detail drifted. A connector permission lapsed. The system quietly stopped extracting tasks, and I didn't notice for two days because nothing looked wrong on the surface.

That's the real risk of agent-powered work. The AI doesn't go rogue. It goes silent – and you don't catch it because you assumed it was handling things.

So I fixed it the same way you'd fix any operational gap: better management. Run summaries after every scan. Duplicate detection before writing to the sheet. A single scheduler account so two instances don't collide. A rule that I check the dashboard before every internal meeting – not after. And a simple heartbeat check: if task extraction hits zero two runs in a row, alert me.

Same stuff you'd do if an intern stopped showing up to work and nobody noticed.
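For anyone building a similar pipeline, the heartbeat and duplicate checks above are a few lines of code. This is a minimal sketch with hypothetical field names (`owner`, `task`); in practice you'd wire the alert to email or Slack.

```python
# Sketch of the "heartbeat" and duplicate-detection checks described above.
# Field names and thresholds are illustrative, not from a specific library.

def should_alert(task_counts_per_run, window=2):
    """True if the last `window` runs each extracted zero tasks."""
    recent = task_counts_per_run[-window:]
    return len(recent) == window and all(count == 0 for count in recent)

def dedupe_tasks(existing_rows, new_tasks):
    """Drop tasks already present in the sheet before writing new rows."""
    seen = {(row["owner"], row["task"]) for row in existing_rows}
    return [t for t in new_tasks if (t["owner"], t["task"]) not in seen]
```

Run `should_alert` after every scan with the task counts from recent runs: one empty run might be a quiet day; two in a row means the system has probably gone silent and a human should look.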

What I Actually Do All Day Now

My workday has three modes:

Briefing. Before any agent touches a task, I write what I call a Manager Brief: what "done" looks like, what context to use, what constraints to follow, what format to produce. Six lines, maybe seven. It takes 2-3 minutes. It replaces 45 minutes of doing the work myself.

QA. Every output gets a 90-second check: Does this pass a sanity scan? Are claims cited or flagged? Does it match our voice? Would I be embarrassed if a client saw this? Score it A, B, or C. If it's not an A on accuracy, it doesn't ship.

Playbook updates. Every correction becomes a rule. Every good output becomes a template. The playbook compounds. Yesterday's fix is tomorrow's default.

That's it. Brief, QA, update. Repeat.

What This Does To Results (Real Numbers)

I ran my call-analysis workflow on a set of enterprise sales calls recently. The agent scored each call against MEDDPICC, cited evidence from the transcript, identified gaps, and produced coaching recommendations.

Across 10 calls spanning 7 deals, the average score was 19.2 out of 40 (48%). When I ran a stricter pass – scoring only what had direct buyer-quote evidence – the effective average dropped to 31.3%, with only 1 of 7 deals reaching "Qualified" status.

That's a diagnostic. It tells you exactly where qualification is breaking down, which gaps are systemic vs. deal-specific, and what questions to ask on the next call. It took about 10 minutes of my time. The agent did the analysis; I did the QA and the coaching interpretation.

A year ago, that analysis would've taken me a full day per client. Now it's a workflow I run before lunch.

The Future Isn't a Billion-Dollar One-Person Company

You've heard the hype. "AI will create the first one-person billion-dollar company." I don't buy it. The coordination costs alone make that unrealistic for anything beyond a niche software product.

But what I do see clearly: a small team with an agent crew can operate at a level that used to require a much larger org. The agents aren't perfect – but the humans become better managers, and better managers get compounding returns from every system they build.

Jason Lemkin at SaaStr proved this publicly. After his last salespeople quit, he replaced a 10-person team with 20 AI agents and 1.2 humans – and maintained the same performance for an 8-figure business. The remaining human's job? Management overhead: prompt refinement, daily monitoring, quality control. Sound familiar?

The shift is simple:

From: doing tasks (research, notes, drafts, updates, admin).

To: supervising tasks (approve, correct, escalate, and reuse what works).

I've started calling this the "deal conductor" model. You're not playing every instrument. You're running the orchestra – making sure each section comes in at the right time, at the right quality, in the right key. The agents are the sections. You're the conductor.

Vercel's COO coined the term "agent manager" to describe this exact shift. They trained an AI agent on their best sales rep and restructured a 10-person team around one top performer plus agents. Her line: "The future is you might graduate from college and you're a manager now. We're all going to have to learn to delegate, to break down tasks." That future is already here for the people paying attention.

And like any conductor, your value comes from how well you direct the ensemble – a management skill, through and through.

This All Comes Back To Management

I said it at the top: managing AI agents requires the same skills as managing people. Every section of this piece proves it.

The broken automation? That's what happens when you don't have check-in cadence and monitoring – same as an intern who stops doing work and nobody notices.

The Manager Brief? That's a delegation contract. Good managers have been writing those for decades.

The QA loop? That's a 1:1 review meeting, compressed to 90 seconds.

The playbook that compounds? That's institutional knowledge – the same thing great sales orgs build when they document what top performers do differently.

The AI is a brilliant intern. Harvard brain, zero context. It will produce impressive-looking work that's confidently wrong if you don't give it guardrails, examples, constraints, and a clear definition of "done."

Most people are disappointed in their AI outputs and think they need better prompts. They don't. They need better management.

One Template You Can Use Today

If you take nothing else from this, take the Manager Brief. I call it that deliberately – because the moment you stop thinking of it as a "prompt" and start thinking of it as a delegation, your outputs improve.

In management terms, the Brief serves two purposes:

Task delegation – every time you ask an AI to do something specific (research an account, score a call, draft an email), you're giving an assignment. The Brief is what a good manager writes before handing that assignment to a direct report. Without it, you're saying "make it good" and hoping for the best.

Agent onboarding – when you build a CustomGPT, a Claude Project, a Gemini Gem, or any persistent agent, you're onboarding a new hire. You're defining their role, what they know, what they have access to, how they should work, and what "good" looks like by default. The Brief becomes the standing instructions – the intern's handbook.

Same management discipline, two scales. One is a single assignment. The other is a job description.

The 6-line Manager Brief:

  1. Outcome: What does "done" look like? For whom?

  2. Context: ICP, deal stage, product, voice, constraints.

  3. Inputs: What docs/notes/data should it treat as source of truth?

  4. Checkpoints: Plan first, then draft. Where do you want to review?

  5. Acceptance criteria: What rubric will you score it against?

  6. Output format: Template, length, structure, examples of good and bad.

Takes 2-3 minutes to fill out for a single task. Takes maybe 20 minutes to build into a persistent agent. Either way, this is the management move that changes output quality more than any tool selection or prompting trick.
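To make the template concrete, here's a hypothetical filled-in Brief for a call-scoring task (details invented for illustration):

```
Outcome: MEDDPICC score (0-40) for yesterday's discovery call, for my own deal review.
Context: Mid-market SaaS ICP, stage 2 of our 5-stage process, consultative voice; never invent quotes.
Inputs: The attached transcript only; treat it as the single source of truth.
Checkpoints: Show me your scoring plan before writing the full analysis.
Acceptance criteria: Every score backed by a direct buyer quote; unsupported items marked "no evidence."
Output format: One table (criterion, score, evidence, gap), then 3 coaching bullets. Max 1 page.
```

Notice that every line constrains the output the way a good delegation would: what done means, what sources count, and where the manager gets to inspect before the work is finished.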

Next Time: The Playbook

In Part 2, I'll share the full operating system – the QA loop, the rubrics, the feedback language, and two complete workflows you can copy:

  1. Account research β†’ personalized outreach (with the brief, rubric, and checklist)

  2. Call analysis β†’ MEDDPICC scoring β†’ deal risk map (with the scoring template and coaching output)

Plus the "attention management" layer I use to keep parallel agents from turning into chaos – which, if you've ever had 5 browser tabs doing work without you, you'll appreciate.

If you're already experimenting with agents, reply and tell me what you're building. I'm genuinely curious what workflows people are running.

Victor Adefuye is the founder of Dana Consulting, where he helps B2B sales teams improve productivity through AI adoption, sales methodology, and coaching. He writes the Superintelligent Sales newsletter.
