Microsoft is rolling out a new feature in its Viva Insights platform that lets managers see how extensively their teams are using Microsoft Copilot. The new Copilot Benchmarks allow organizations to compare adoption levels internally (between departments or regions) and externally—against industry peers. Metrics include the percentage of active Copilot users, adoption across apps (Word, Excel, Teams, Outlook), return user rates, and comparisons by role, region, or manager.
Microsoft says external benchmark data is aggregated from at least 20 companies to preserve privacy, so no single organization can be identified. Yet the company’s messaging hints that Copilot usage could soon become a new KPI for employee performance.
How Do Copilot Benchmarks in Viva Insights Work?
The feature appears in the Copilot Dashboard within Viva Insights, offering organizations insights such as:
- Percentage of active Copilot users
- Adoption rates across Microsoft 365 apps (Word, Outlook, Teams, etc.)
- Return user indicators (how regularly employees use AI)
- Comparisons by role, region, and manager type
External benchmarks rely on aggregated data from at least 20 companies to ensure anonymity.
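Internally, metrics like these reduce to simple aggregations over per-user activity logs. As a rough illustration (the log format, thresholds, and function names below are hypothetical, not Microsoft's actual schema), an active-user rate and a return-user rate might be computed like this:

```python
from datetime import date

# Hypothetical per-user Copilot activity log:
# user -> set of days with at least one Copilot action
activity = {
    "alice": {date(2025, 10, 1), date(2025, 10, 8), date(2025, 10, 15)},
    "bob":   {date(2025, 10, 3)},
    "carol": set(),  # licensed but never used Copilot
}

def active_user_rate(activity):
    """Share of licensed users with any Copilot activity in the period."""
    active = sum(1 for days in activity.values() if days)
    return active / len(activity)

def return_user_rate(activity, min_weeks=2):
    """Share of active users who used Copilot in at least `min_weeks` distinct ISO weeks."""
    active = [days for days in activity.values() if days]
    returning = sum(
        1 for days in active
        if len({d.isocalendar()[:2] for d in days}) >= min_weeks
    )
    return returning / len(active) if active else 0.0

print(active_user_rate(activity))  # 2 of 3 licensed users were active -> 0.666...
print(return_user_rate(activity))  # alice used it across 3 weeks, bob in 1 -> 0.5
```

Note that both numbers say only *that* Copilot was used, not whether it helped, which is the core criticism discussed below.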
At the same time, Microsoft is expanding Copilot Analytics, another tool aimed at linking Copilot adoption metrics with business outcomes—such as sales, marketing, or finance KPIs.
Goals and Challenges – Meaningful KPI or Pressure Tool?
Microsoft’s intent seems to be to make AI adoption itself a measure of efficiency.
Critics warn, however, that this approach risks abuse, pressure, and misinterpretation.
Tracking who “uses AI” and who doesn’t could become a trap if disconnected from quality, context, and actual results.
Recent studies also question whether AI tools truly improve productivity.
A controlled study by METR (Model Evaluation & Threat Research) examined 16 experienced open-source developers performing 246 real tasks, with and without AI assistance. Although participants reported feeling faster when using AI, their actual task completion time increased by 19% on average.
Titled “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity,” the study suggests that AI may create an illusion of efficiency, as users spend extra time verifying, correcting, or interpreting AI outputs.
Other research points to a growing flood of low-value content—dubbed “workslop”—well-formatted but shallow material generated by AI.
Not Just Slower – Also Less Creative
Concerns about AI’s impact on quality extend beyond IT.
A 2025 study by Stanford University and BetterUp Labs found that 40% of U.S. employees encountered “workslop” — an overload of low-quality AI-generated content.
While quick to produce, such content often requires human revision, costing organizations an average of $186 per employee per month.
In practice, this means companies may be paying for visible AI activity, not tangible business benefits.
How Can Organizations Avoid the Pitfalls?
1. Add Quality and Context to Metrics
- Combine quantitative data (usage frequency) with employee feedback.
- Correlate AI metrics with real business KPIs, not just tool usage.
2. Use Benchmarks Wisely
- Peer comparisons can be motivating, but shouldn’t become rigid standards.
- Maintain privacy and anonymity when presenting results.
3. Build a Culture of Conscious AI Use
- Teach teams when AI helps and when oversight is needed.
- Highlight the risks of “workslop” and encourage proper AI documentation.
4. Test and Iterate Deployment
- Start with small teams and compare outcomes.
- Track the operational costs of corrections and review processes.
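Those operational costs can be estimated with straightforward arithmetic. The inputs below (hours of rework, blended hourly rate) are purely illustrative assumptions, but the exercise shows how quickly per-employee revision time approaches figures like the ~$186/month reported in the workslop study:

```python
def monthly_correction_cost(revision_hours_per_employee, hourly_rate, headcount):
    """Rough monthly cost of reworking low-quality AI output (illustrative only)."""
    return revision_hours_per_employee * hourly_rate * headcount

# Assumption: ~4 hours of rework per employee per month at a $46 blended rate
per_employee = monthly_correction_cost(4, 46, 1)
print(per_employee)  # 184

# Scaled to a 500-person organization
print(monthly_correction_cost(4, 46, 500))
```

Tracking the real inputs (actual revision hours, actual rates) during a small pilot makes the hidden cost of "visible AI activity" concrete before a wider rollout.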
Conclusion
The new Copilot Benchmarks in Viva Insights mark another step in Microsoft’s effort to embed AI deeper into corporate culture.
The tool lets managers track AI adoption within their organizations and benchmark it externally.
But without deeper analysis of quality and context, raw numbers can be misleading.
Studies show that AI doesn’t always speed up work—and can even slow it down for experienced professionals.
The “workslop” phenomenon reminds us that output quality matters more than the mere presence of AI.
As companies adopt tools like Copilot, they must look beyond usage metrics and focus on measurable benefits.
Otherwise, AI risks becoming just another KPI—one that looks good on dashboards but adds little real value.

