Why We Made Copilot Mandatory (and Would Do It Again)

Everyone said "just try it," so we did. Then we made it non-optional. Not because we love AI, but because we hate wasting smart people’s time on dumb stuff.

Copilot everywhere

The funny thing is, I wasn't even sold on Copilot at first.

I'd used it a few times. Sometimes it felt like magic, sometimes like Clippy with a caffeine addiction. So when someone on the team asked if we should roll it out more widely, I said, "Sure, we'll try it with five people." No fanfare. Just curiosity.

Three months later, it's mandatory across the team. Here's what changed.

We did the rollout in slow motion

  • First 5 people. No process, no hand-holding. Just: turn it on and tell us what breaks.
  • Then 10 more. We started having short calls to share how people were using it. That's when things got interesting.
  • Then everyone. Non-negotiable install. Not because we're control freaks, but because the ROI was already obvious.

This wasn't a vibes-based decision. It was about surface area. More users = more edge cases, more weird workflows, more useful feedback. And we knew Copilot wasn't a personal tool; it was a team accelerant. That only works if everyone's playing.

We didn't just let people use it. We pushed them

This is the part most teams skip.

They assume people will adopt new tools organically. Some will. Most won't. Because they're busy. Or skeptical. Or just never saw a good example.

So we made time for usage syncs. Literally: 30 minutes every Friday. Not to judge, but to learn from each other.

The meeting prompt was simple:

"How are you using Copilot this week?"

And every other week, someone shared something new:

  • "It wrote a Terraform module I couldn't debug for two hours."
  • "I use it just for PR reviews, way faster to spot issues."
  • "I make it write my Dockerfiles. I haven't touched one manually in weeks."
  • "It removes all my console.logs. I love it."

That last one hit me. Removing console.logs. Dumbest possible task, but everyone does it. If Copilot saves you five minutes doing that, great. If it does that 50 times a week, suddenly it's worth $20/month just for being your cleanup crew.
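To be fair, stripping console.logs is so mechanical you could script it without any AI at all; that's exactly why handing it off feels so good. A minimal sketch of the task (a naive, line-based illustration, not our actual workflow):

```python
import re

def strip_console_logs(source: str) -> str:
    """Remove standalone console.log(...) lines from JS/TS source.
    Naive: only catches single-line calls, which is the common case."""
    kept = []
    for line in source.splitlines():
        if re.match(r"\s*console\.log\(.*\);?\s*$", line):
            continue  # drop the debug line entirely
        kept.append(line)
    return "\n".join(kept)

js = """function add(a, b) {
  console.log("debug:", a, b);
  return a + b;
}"""
print(strip_console_logs(js))
```

The point isn't the script; it's that a five-minute chore done 50 times a week adds up, whoever (or whatever) does it.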

Everyone used it differently, and that was the point

A few patterns showed up:

  • Some devs used it as autocomplete on steroids.
  • Others used it to write tests, fix bugs, generate boilerplate.
  • A few just wanted help with commit messages and .gitignore files.
  • And one person used it for documentation and literally nothing else.

One dev hated inline suggestions but liked its problem explanations. Another used it for DevOps. Another only for reviewing PRs.

Which is exactly why pushing it team-wide worked: no two people used it the same way. It became a mirror for how they work.

So? Did it help?

Yeah. Objectively.

We ran a rough productivity comparison (time-to-merge, small-task turnaround) and saw an average 20% increase in output. Some people had less. Some more. But as a team? Twenty. Percent.

For $20/month. Per dev.

That's probably the best ROI we've gotten on a tool, ever.
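Back-of-the-envelope, the math is hard to argue with. The dev cost below is purely an illustrative assumption, not a figure from our books; only the 20% gain and the $20 seat price come from above:

```python
# Illustrative ROI math. monthly_dev_cost is an assumed placeholder,
# NOT a real figure; swap in your own fully-loaded cost per dev.
monthly_dev_cost = 8_000     # USD, assumed for illustration
productivity_gain = 0.20     # the ~20% output increase we measured
license_cost = 20            # Copilot per-seat price, USD/month

value_created = monthly_dev_cost * productivity_gain
roi_multiple = value_created / license_cost
print(f"${value_created:.0f} of extra output for ${license_cost}/mo "
      f"-> {roi_multiple:.0f}x return")
```

Even if the real gain were a quarter of what we measured, the multiple would still be comfortably double-digit.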

But it's not perfect

Let me be clear: Copilot is annoying sometimes.

  • VSCode crashes. Context resets. Suggestions are dumb.
  • Some people still prefer ChatGPT for project-specific stuff.
  • A few devs feel it slows their machine or makes irrelevant comments in PRs.

That's fine. You don't need 100% buy-in. You need 60% adoption and a culture of sharing the wins. The rest follows.

Now we're trying Amazon Q

Not because Copilot failed, but because we want to see what better tooling looks like when it understands infra. Especially for Terraform, CI/CD, that whole swamp of config hell.

We'll probably do the same thing: test it, sync at regular intervals, push learnings. Maybe it sticks, maybe it doesn't.

But I'll say this:

We no longer ask "should we try this AI tool?"
We ask: "how fast can we learn what it's good for?"
