“Just use AI” is a management failure – how AI can help neurodiverse teams collaborate more effectively

Published on 20 March 2026. Written by Jay Spence and Lisa Colledge.


Moving from AI adoption to AI maturity: using meeting analytics to strengthen clarity, alignment, and shared understanding

A neuro-inspired AI note-taker as a case study in how teams can use meetings to better effect

Summary

Artificial intelligence (AI) adoption often fails to deliver as expected not because the technology is flawed, but because it exposes weaknesses in work design that were already present. When expectations are unclear and teams are overloaded, employees use AI to try to fill the gap, producing high-volume, low-value output sometimes described as “workslop.” This creates a cycle: vague goals lead to AI-generated noise, which further depletes the cognitive capacity of the team.

Meetings are one of the most visible pressure points in this cycle. First-generation AI meeting tools help by recording and summarizing what was said, reducing the burden on memory. But improving outcomes requires more than capturing content. It also requires building a common understanding of how discussions should be interpreted.

In this article, we use Evro – a second-generation AI meeting platform developed by one of us (Jay) – as a case study to illustrate how newer tools can support process-level clarity by making ambiguity explicit and strengthening shared understanding.

 

AI as a system signal: why so many leaders feel “something is off”

Across sectors, leaders are being pushed to accelerate AI adoption – often by boards or executive teams without a shared definition of what good AI use actually looks like. The result is a pattern many organizations recognize: pressure to use AI everywhere, combined with very little clarity about where AI creates value, how quality should be judged, or what success means.

Recent research captures this tension clearly. One company president described being told to drive AI usage “everywhere every day,” while having no concrete way to judge whether teams were using it successfully. An executive vice president at another organization went further, noting that quantity of AI use was being prized over quality or effectiveness, a dynamic so frustrating that it made them consider leaving altogether [1].

This is not a failure of ambition or intent. Most organizations are still operating with leadership systems and cultural norms that evolved for stability, predictability, and linear execution. But today’s environment looks very different. Work increasingly depends on coordination across roles, disciplines and cultures, while accounting for different cognitive and personality styles – and all under constant cost pressure and uncertainty.

When AI is introduced into this context without structure, it does not resolve the mismatch. It magnifies it by accelerating the strain between how work has traditionally been designed and what modern work actually demands.

 

Why “just use AI” increases distance between people: what the evidence shows

Research into so-called “AI workslop” helps explain why many AI rollouts feel disappointing in practice. The findings are consistent: when organizations encourage AI use without task-level guidance, quality standards, or psychological safety, output degrades rather than improves.

In a large survey of full-time employees, 41% reported being encouraged to use AI without clear instructions on how to apply it to their work. More than half admitted to sending workslop that they knew was low-quality or unhelpful, largely as a way to cope with overload and ambiguity [1].

Crucially, the authors conclude this is not a technology failure. It is a management and work design failure.

Workslop thrives under three conditions:

  • Unclear expectations.

  • Depleted cognitive capacity.

  • Low trust or safety to ask for clarification.

When guidance is absent from work design, people with a high tolerance for ambiguity tend to move faster. Others pay a much higher “translation tax”, which is the hidden effort of interpreting implicit information about what is really expected, what matters most, and how their contribution will be judged.

This uneven cognitive load has a much larger negative impact on:

  • Anyone working across cultures or disciplines.

  • Chronically overloaded managers.

  • Employees in remote or hybrid environments.

  • People navigating menopause, fatigue, or brain fog.

  • Neurodivergent employees, such as those with ADHD, autism, or dyslexia.

 

As one senior leader put it when reflecting on AI adoption in a stretched organization:

“When teams are already less connected and less inclusive, AI does not fix that. It amplifies it. People who can easily assert themselves move ahead; others are left further behind” (personal communication to LC).

 

Meetings: where the cost shows up fastest and the opportunity is clearest

Meetings are where the impact of poor AI design becomes visible very quickly. They absorb a significant proportion of organizational time and energy, and they already place heavy demands on attention, memory, and interpretation.

Data from Calendly’s State of Meetings report [2] shows the scale of the issue. Most workers now spend three or more hours per week in meetings, 46% are in three or more meetings every day, and 69% report working outside normal hours to make up for time spent in meetings.

At the same time, the characteristics of ineffective meetings are strikingly consistent:

  • No clear agenda.

  • Missing or misaligned attendees.

  • No follow-up notes or clear ownership of actions.

As a result, meetings often overload working memory, rely heavily on inference and recall, and reward speed, confidence, and verbal dominance.

When AI is layered onto this environment without design intent, it tends to multiply ambiguity rather than reduce it, thus producing more text, more follow-ups, and more cognitive overhead.

One further signal matters here. Younger generations report liking meetings more than their senior colleagues do, and they are generally more enthusiastic about AI [2]. Without redesign, this combination risks increasing overall meeting volume over time, along with the stress and fragmentation that follow.

 

The hidden tax of implicit norms: why one meeting can result in multiple interpretations

Successful organizational meetings usually rely on the assumption that every participant shares the same interpretive lens. This “implicit norm” assumes that people can intuit the unspoken details of a conversation:

  • Priority: which points really matter.

  • Quality: what a “good meeting” produces.

  • Continuity: how decisions will travel and be actioned after the meeting ends.

When that assumption fails, the costs are predictable:

  • Misalignment: difficulty inferring priorities.

  • Inaction: ambiguity around who owns which task or next step.

  • Fatigue: cognitive overload from trying to track fast-moving, unstructured discussion.

  • Fragmentation: a reliance on individual memory and personal notes rather than shared artifacts.

For example, in a typical fast-moving meeting, a person whose processing style aligns with the dominant culture will often grasp the key actions and pick up on subtle tensions. Someone with a different perspective, however, may spend additional time after the meeting using AI to “decode” the notes and still miss critical points that were never stated explicitly.

AI does not remove this ambiguity by default. In many cases, it encodes and amplifies the confusion, reflecting the dominant communication styles and reinforcing existing power dynamics. If “what good looks like” is not made explicit, AI will simply mirror the same interpretive biases already embedded in the system.

 

Freedom within a framework: a practical design move

So how do we draw these threads together – AI as an amplifier of weaknesses in systems, meetings as a major driver of inefficiency, and implicit norms that affect everyone at some point?

 

The core question is not: which AI meeting tool should we use?

It is: what decisions and clarifications should a meeting reliably produce, and how do we support that if each person will view the meeting from their own perspective and biases?

For the purposes of this article, we will keep the focus deliberately narrow and look at the meeting itself, rather than what happens before or after.

 

A useful meeting reliably generates:

  • Decisions.

  • Owned next steps.

  • Shared understanding.

This requires explicit answers to:

  • What must be made explicit rather than inferred?

  • What do high-quality outcomes and actions look like?

  • What shared record of truth can everyone reliably refer back to, instead of relying on individual memory or notes?

Within that clear frame, teams benefit from freedom:

  • Different preparation styles.

  • Different processing speeds.

  • Different ways of using AI support.

 

This is what we mean by Freedom within a Framework. We define what must be true for the meeting to be effective – clear decisions, explicit ownership, and a shared record of understanding – and give individuals the freedom to contribute, process, and prepare in ways that align with their cognitive style.

 

Clarity is non-negotiable; sameness is not.

AI tools can enable this. Culture is what makes it sustainable.

 

AI communication analytics: a case illustration

The challenge many teams face in meetings is not motivation, but execution friction:

  • Blurred or competing priorities.

  • Ambiguous ownership and timing.

  • Multiple parallel action threads created through live discussion.

  • Plans that remain abstract, forcing reactive execution afterwards.

  • An inflated sense of what can realistically be delivered before the next check-in.

Used deliberately, AI meeting support can reduce this load on individual working memory by:

  • Making priorities and dependencies explicit.

  • Reducing reliance on individual note-taking and recall.

  • Converting discussion into a clear, sequenced set of actions.

  • Creating a single shared record of decisions and next steps that the team can return to.

First-generation AI note-takers such as Otter, Fireflies and Fathom have gained massive popularity for solving the problem of what was said. They focus on automated meeting transcription with AI-generated summaries and actions.

Second-generation AI note-takers such as Evro take this further by adding tools that build common understanding of how it was said. This includes making explicit any items that may have been expressed confusingly.

At the end of each meeting, Evro generates an overview of what was communicated clearly and what was ambiguous and needs to be clarified. It analyzes communication patterns to report on the subtleties that could easily be missed or misinterpreted.

First-generation AI note-takers reduce the cognitive load of needing to remember or capture everything that was said. Second-generation note-takers further reduce cognitive load by non-judgmentally:

  • Detecting communication misalignment in real time.

  • Reducing bias from individual interpretations of meeting content.

  • Converting hidden or subtle communications into clear, objective communication analytics.

  • Inviting users to learn how to improve their future communication.

Tools like Evro illustrate how AI can be used not to speed meetings up, but to optimize what comes out of them, reducing cognitive overhead in ambiguous situations like meetings where misalignment is easy to miss and costly to correct.

 

Reframing AI maturity: from adoption to work design

AI maturity is not measured by how often tools are used. It shows up in the alignment between organizational intent, cultural norms, and the realities of different cognitive styles and personality types at work.

Proactive design for complex, changing, high pressure environments is about accepting that we are not all the same, and bridging the perspective gaps caused by our individual lenses.

AI is revealing where work design needs to change. Meetings make that especially visible and offer a practical opportunity to start fixing it.

Continue the conversation with the authors

If you are exploring how to improve collaboration in environments of complexity, pressure and change, we welcome the conversation.

Dr Jay Spence is the founder of Evro, an AI meeting note-taker designed to improve communication in teams. www.evro.ai

Dr Lisa Colledge is a consultant who uses neuro-inspiration to design systems that serve everyone better in environments of complexity, pressure, and change. Her work focuses on building performance infrastructure that enables teams with diverse cognitive styles to collaborate effectively and release their collective potential.

Explore how Lisa approaches neuro-inspired team and leadership design on the Services page.

References

  1. Kate Niederhoffer, Alexi Robichaux and Jeffrey T. Hancock (2026) Why People Create AI “Workslop” – and How to Stop It. Harvard Business Review.

  2. Calendly (2024) The State of Meetings, annual report.
