
When to Use AI vs Ask a Human: A Decision Framework for Managers

Forget whether AI replaces managers. Decide which specific tasks go to AI and which don't. A two-axis framework gives you a clean answer every time.

Last month, a manager I know asked ChatGPT to draft a termination letter. He had a real situation, a real underperformer, a real HR process to run. He figured: this is what AI is for, right? Save me an hour.

The draft was fine. Professionally worded, legally cautious, emotionally neutral. He made a few edits and sent it to HR for review. HR pushed back. The letter was technically correct but completely wrong in tone for this specific person, on this specific team, after this specific 18-month working relationship. He rewrote it from scratch. Hour saved: zero. Hours spent questioning whether to use AI again: ongoing.

That is the question most managers are silently wrestling with right now. Not whether to use AI. Not whether AI is smart enough. But which specific tasks in their job should actually go to AI, and which should not.

You need a framework. Here is one.

The real question is not “can AI do this?”

The hype cycle frames the question as capability: can AI write this email, generate this plan, analyze this data? The answer to “can” is almost always yes, which makes “can” a useless filter. A Ferrari can go 180 mph. That does not mean you should drive it to the grocery store.

The better question: should I use AI for this? And the answer comes from looking at two dimensions most managers never separate.

The 2×2: reversibility × context dependency

Every task you do as a manager sits somewhere on two axes.

Axis 1: Reversibility. If the output is wrong, how cheaply can it be undone? Sending a typo in a Slack message is highly reversible (you send a correction, done). Firing the wrong person is not reversible (legal, emotional, team, personal). Most managerial work is reversible in a few minutes of rewriting. Some of it is irreversible across years.

Axis 2: Context dependency. Does this task require specific knowledge of your team, your company’s politics, your specific people, or the history of a specific relationship? Writing a summary of a generic article has low context dependency. Giving feedback to a specific person after a specific incident has extremely high context dependency.

Plot those two axes and you get four quadrants, each with a clear rule.

Quadrant 1: Reversible + Low context → AI is a genuine leverage play

Examples: Summarizing an article, translating a document, brainstorming ideas, drafting a first version of a team update, generating interview questions, writing a meeting agenda from bullet points, reformatting a messy document.

For this quadrant, AI is legitimately a 3-5x productivity multiplier. You can review the output quickly, fix anything that’s off, and ship it. If it’s wrong, you just rewrite it, and the cost is minutes, not relationships. This is where almost every manager should be using AI aggressively.

Our guide on how to use AI as a new manager covers specific workflows in this quadrant with copy-paste prompts.

Quadrant 2: Reversible + High context → AI drafts, you heavily edit

Examples: Drafting a specific piece of feedback, writing a 1-on-1 agenda for a specific person, preparing talking points for a tough conversation, writing a performance review first draft, crafting a Slack message for a sensitive situation.

AI can save you time here, but only if you treat the output as raw material, not a finished product. Your job: read what AI produced, notice where it got the context wrong, rewrite the parts that matter, keep the parts that do not. Most managers who burn out on AI do it in this quadrant by shipping the AI output directly. The person reading it will not say “this sounds like AI.” They will say “this feels generic. Did you even think about me?” And that is worse than no feedback at all.

A practical rule: for Quadrant 2 work, AI should reduce your effort by 40-50%, not 90%. If it feels like AI is doing 90% of the work, you are probably shipping something that lands wrong. Our article your AI isn’t underperforming, you’re undermanaging it goes deeper on the editing discipline this quadrant requires.

Quadrant 3: Irreversible + Low context → Ask a human expert

Examples: Legal questions about an employment decision, tax implications of a compensation change, compliance around a specific regulation, medical leave questions, immigration status for a hire.

AI will give you an answer. The answer will sound confident. The answer may be wrong in a way you cannot evaluate because you do not have the expertise to evaluate it. And the cost of being wrong is not “rewrite the paragraph.” It is “attend a deposition” or “pay a fine” or “explain to your VP why you followed ChatGPT’s advice instead of asking HR.”

For Quadrant 3 work, the expert is cheaper than the mistake. Always. Ask HR, ask legal, ask your compliance contact. If you do not have one, find one. AI should not be making decisions where the downside includes the word “liability.”

Quadrant 4: Irreversible + High context → Human only. No exceptions.

Examples: Firing someone, promoting someone, denying a raise, telling someone they will not be considered for a role, breaking trust-sensitive news, handling a harassment complaint, giving end-of-year compensation news, rejecting a candidate after final round.

This quadrant is sacred. Not because AI “can’t” help. Because the act of doing these things IS the management job. Delegating them to AI is not efficiency. It is refusing to do the part of the work that requires a human being accountable to another human being.

A firing letter you generated with AI, even if you edited it heavily, is a firing letter you did not actually write. The person reading it will feel it. They will not know how, but they will know. That is because the craft of finding the right words for this specific person, in this specific moment, after this specific history, IS the work. Outsourcing it is not faster. It is incomplete.

Our article on how to tell an employee their work isn’t good enough covers why the hard conversations have to come from a human voice to work at all.

The second test: who owns the outcome?

The 2×2 covers 90% of the decisions. For the edge cases, use a second test: who is accountable if this goes wrong?

If the answer is “me” and the stakes are high, then the work is human work. AI can assist the preparation, but the output needs your fingerprints all over it. You cannot put accountability on something that cannot be held accountable.

A subtle corollary: anything that appears in a written record (performance reviews, formal feedback, HR documentation, promotion justifications) is output you may need to defend six months or two years from now. If future-you will need to explain the specific wording, present-you should not have outsourced the wording. AI drafts are fine. AI outputs are not.

Specific manager tasks, mapped

To make this concrete, here is how common manager tasks land across the quadrants:

Quadrant 1 (AI-heavy): Meeting agendas, first-draft updates to the team, summarizing long documents, generating interview questions, formatting your 30-60-90 plan, brainstorming off-sites, outlining strategy docs.

Quadrant 2 (AI drafts, you rewrite): First draft of performance reviews, first draft of 1-on-1 prep notes, first draft of feedback messages, first draft of difficult Slack conversations, first draft of a conflict mediation script.

Quadrant 3 (ask an expert, not AI): Legal implications of a performance issue, tax questions on comp changes, compliance questions, immigration for hires, medical leave policy specifics.

Quadrant 4 (human only): The actual firing conversation, the actual promotion call, the actual salary denial, the actual harassment complaint handling, the actual “you didn’t get the role” email to a final-round candidate.
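If you like your frameworks explicit, the mapping above reduces to a tiny decision function. This is an illustrative sketch only; the function name and the advice strings are ours, condensed from the quadrant rules in this article, not any real tool.

```python
# Illustrative sketch: the 2x2 framework (reversibility x context
# dependency) expressed as a decision function. Advice strings are
# condensed from the quadrant rules described in the article.

def ai_or_human(reversible: bool, high_context: bool) -> str:
    """Map a task's position on the two axes to the quadrant rule."""
    if reversible and not high_context:
        return "Q1: use AI aggressively; review quickly and ship"
    if reversible and high_context:
        return "Q2: AI drafts, you heavily edit (aim for 40-50% savings, not 90%)"
    if not reversible and not high_context:
        return "Q3: ask a human expert (HR, legal, compliance)"
    return "Q4: human only, no exceptions"

# Example: drafting feedback for a specific person is reversible
# but context-heavy, so it lands in Quadrant 2.
print(ai_or_human(reversible=True, high_context=True))
```

The useful part is not the code, of course. It is noticing that two yes/no questions, asked honestly, settle almost every "should I use AI for this?" debate before it starts.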

Three rules to operationalize this

  1. Never let AI produce anything that will live in someone’s personnel file without your heavy editing. If a sentence would appear in a performance review, a PIP document, a promotion packet, or a termination letter, you wrote that sentence. AI helped you outline. You wrote the final version.

  2. For Quadrant 4 work, never let AI anywhere near the actual moment. You can use AI to practice the conversation (ask it to role-play the employee, to stress-test your wording). You cannot use AI to send the actual message, write the actual letter, or script the actual conversation. The person on the other side deserves the version of you that did the thinking.

  3. When you are unsure, ask this one question: “if this goes wrong, who gets the call?” If the answer is “I do,” the work is yours. AI is a tool you use, not a decision-maker you defer to.

The top 5 books on AI for managers covers deeper frameworks from Mollick, Woods, and others on how the capability-vs-should question keeps coming up across industries.

The close

Your team will notice whether you are using AI as leverage or using it as avoidance. The tool is the same. The signal is completely different. Leverage looks like “my boss got back to me with a clear, specific, well-thought-through response faster than I expected.” Avoidance looks like “this feels generic. She didn’t actually read what I wrote. She didn’t actually think about me.”

The difference is not the tool. It is whether you stayed in the loop for the parts that required you. AI handles the brainstorming. AI handles the formatting. AI handles the typing. You handle the thinking, the judgment, the presence, and the accountability. That is the whole deal.

Get that right, and AI makes you dramatically more effective. Get it wrong, and AI makes you dramatically more generic. Nobody wants a generic manager.

Your team does not want less of you. They want the version of you that got the leverage without skipping the thinking.

Free · Weekly · 52 Weeks

Become a Better Manager in One Hour a Week

Join 52 Weeks to Better Manager — a free year-long program that gives you one focused lesson per week. Start at Week 1, finish as a confident leader.

Learn more about the program