A colleague recently replied to a long, LLM-polished technical message with two words: "AI;DR."
It's a riff on TL;DR, and it's the most honest feedback I've seen on a growing team habit: pasting raw AI output into cross-team chat. The joke landed because everyone recognized the shape of the thing. If a new piece of slang forms organically around a behavior, the behavior is already a problem.
This isn't an anti-AI post. I use AI every day. This is about a specific failure mode: AI that makes the writer's job easier by making the reader's job harder.
Two shapes of the problem
1. AI replacing "I'm stuck"
A developer hits a 401 on an internal reporting API. Instead of pinging the owner with "I'm getting 401 on every call, here's the error - any idea?", they paste an 800-word AI-generated root cause analysis into the channel:
Every call to the reporting-internal API returns HTTP 401 at the gateway, even though the OAuth handshake succeeds.

Layer analysis:
- OAuth / token issuance - OK. The token is minted; login completes cleanly.
- API gateway routing - OK. We get a structured JSON error envelope, not a transport-level error.
- Scope enforcement - FAILS. The gateway evaluates the authenticated principal against a scope literally named reporting. The principal does not carry that scope.

What this tells us about the backend design: There is a dedicated scope gating all reporting calls, implying scope enforcement at a gateway middleware layer rather than per-endpoint. Identity resolves correctly, so the bearer token is accepted and mapped to a user - the mapping is just missing the scope grant.

Suggested follow-up: (1) Grant the scope. (2) Document token refresh semantics. (3) Improve the error envelope with a code field. (4) Add a pre-flight /capabilities endpoint. (5) Audit scope granularity - reporting is coarse; reporting:read vs reporting:write would be healthier.
The actual information the owner needed was one line:
Getting 401 on every reporting-internal call - error says "token does not carry the reporting scope." Can you grant it? Username alex.
The AI output wasn't wrong. It was 790 words of scaffolding around a 10-word problem, and the owner had to read all of it to confirm the 10 words were in there somewhere.
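For perspective, the mechanism those 790 words describe fits in a few lines of middleware. A minimal sketch - the names are hypothetical, standing in for whatever the real gateway does:

```python
# Hypothetical gateway scope check - illustrative names, not the actual internal code.
REQUIRED_SCOPE = "reporting"

def enforce_scope(principal: dict) -> tuple[int, dict]:
    """Middleware-style check: reject any principal missing the required scope."""
    if REQUIRED_SCOPE not in principal.get("scopes", []):
        return 401, {"error": f"token does not carry the {REQUIRED_SCOPE} scope"}
    return 200, {"ok": True}

# Alex's situation: identity resolves fine, the scope grant is simply missing.
print(enforce_scope({"sub": "alex", "scopes": ["profile"]}))
# -> (401, {'error': 'token does not carry the reporting scope'})
```

The fix is one scope grant on one principal - which is exactly why the one-line message was the right one.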
2. AI replacing "can you help?"
Worse variant: someone from outside the technical domain pastes a step-by-step "fix" for a problem they don't own.
ISSUE: The billing export uses "delegated tokens" - these are tokens obtained per user via OAuth. My token can only export as me, not as the finance team. The finance service account has never completed the OAuth flow, so there is no token for it.
STRUCTURAL FIX (already deployed):
- Export route -> now uses getServiceAccountId() (impersonation-aware)
- New: Client Credentials Flow added - an app-level token that allows the platform to export on behalf of ANY user in the tenant, without each user completing OAuth individually.
- Fallback chain: per-user token -> legacy token -> client credentials token.
HOWEVER: this only works if the platform app has the correct permissions. In the IdP admin console: App registrations -> select the Billing app -> API permissions -> Add a permission -> Finance Graph -> Application permissions -> search Export.ReadWrite -> Add -> "Grant admin consent" (important!). This is a one-time setup. Once done, the platform can export on behalf of any user.
The reply, from an engineer:
AI;DR.
Now the engineer has to reverse-engineer two things: what the actual issue is, and which parts of the AI's confident prescription are real vs. hallucinated. The claim "already deployed" is doing huge trust-work: was it? By whom? Tested how? The sender didn't know. The AI didn't know either. But the output reads like it does, and that confidence is the problem.
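For the record, the one standard, checkable piece in that paste - the client credentials grant - is an ordinary token request. A generic sketch, with placeholder endpoint, credentials, and scope rather than anyone's real config:

```python
# Generic OAuth2 client-credentials token request (RFC 6749, section 4.4).
# Endpoint, credentials, and scope name below are placeholders, not a real setup.
import requests

resp = requests.post(
    "https://idp.example.com/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
        "scope": "Export.ReadWrite",
    },
    timeout=10,
)
resp.raise_for_status()
app_token = resp.json()["access_token"]  # app-level token; no per-user OAuth flow
```

Everything around it - the fallback chain, the "already deployed," the admin-console click path - is exactly the part the reader has to verify from scratch.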
What the message should have been:
Billing export is failing for the finance team - they can't pull the April file. Can someone take a look? Happy to share my screen.
Why this specifically hurts team communication
The case against pasting AI output into team chat isn't that AI is bad. It's that:
- Length is no longer a signal of effort. A 1000-word message used to mean someone thought hard. Now it might mean they pressed a button. Readers can't tell, so they treat all long messages as suspect - which penalizes people who did think hard.
- Confidence is no longer a signal of expertise. AI writes with the voice of authority by default. A non-technical person pasting AI output sounds like an architect. The reader has to do the expertise check the sender skipped.
- Assumptions get laundered into facts. The AI fills gaps with plausible guesses. Those guesses arrive in your chat looking identical to the parts that are true. Separating them is the reader's job now.
- The bias is toward more output, not better output. AI lowers the cost of writing, not the cost of reading. Team chat runs on a shared attention budget, and unfiltered AI output is a tragedy of the commons.
A simple rule
Before pasting AI output into team chat, ask: would I have written this much if I had to type it?
If no, cut it to what you would have typed. The AI can help you think; it shouldn't be the voice you send on your behalf.
For technical problems specifically: the useful message is almost always a clear description of the problem, not a proposed solution. The investigation belongs to the person who owns the system. Your job is to describe what you observed, not to diagnose it for them.
What to do instead
- Use AI as a personal thinking tool. Draft, rubber-duck, summarize for yourself. Then write the message in your own voice.
- Send the problem, not the hypothesis. "401 on every call" beats "here's my analysis of the backend's scope enforcement layer."
- If the AI output is genuinely the message, say so. "AI helped me draft this - I've flagged the parts I'm unsure about: [...]" is honest. Pasting it unmarked is not.
- Cut 80%. The useful signal is almost always in the first two sentences.
Closing
The pushback is already cultural. "AI;DR" works as a reply because everyone has spent the last few months skimming AI-polished paragraphs for the 10% that's actually relevant. That's the signal.
AI is a power tool. Point it at your own thinking and it makes you sharper. Point it at your teammates' inboxes and it makes you the reason someone replies "AI;DR."
Keep it human, keep it short, keep it yours.