Why Friction Matters More Than Speed: AI Automation Risks and a Frictions Matrix for Leaders

Why friction matters more than you think — and what AI automation is risking

One sleepless night, a seemingly frivolous question took hold: how fast does a match need to be struck to ignite? The small chase that followed — calls to a match manufacturer, a chemistry professor and a thermodynamics expert — turned into a probing exercise. The manufacturer couldn’t give a simple speed; the chemist pointed to the physics; and Erich Müller, a thermodynamics professor at Imperial College, reframed the problem as one of “minimum ignition energy” — a tiny threshold on the order of 0.2 millijoules. The exact number mattered less than the practice of inquiry: the reframing, the expert detours and the patient curiosity.

That ritual of friction — a pause, a detour, a careful question — is under pressure. Modern tech culture prizes seamlessness: instant answers from assistants such as Anthropic’s Claude or OpenAI’s ChatGPT, AI agents that automate workflows, and AI automation that removes every delay. For businesses, speed yields clear benefits. But smoothing every bump risks erasing the spaces where meaning, judgement and serendipity emerge.

Quick read — 90 seconds

Not all friction is waste. Use AI to remove drudgery, not to replace judgement. Apply a simple Frictions Matrix: automate low-ambiguity/low-impact tasks; keep humans for high-ambiguity/high-impact work; hybridize the rest. Measure beyond speed — track customer satisfaction, human override rates and long-term retention. Start with a 30/90/180-day plan: audit, pilot, scale with governance.

Why “friction” is not the enemy

Friction is the lived texture of work: the awkward question that surfaces hidden needs in a sales call, the pause a surgeon takes before a complex cut, the messy debugging session that reveals deeper system weaknesses. These moments are not inefficiencies to be eliminated; they are the training ground for tacit knowledge — what developers, clinicians and salespeople learn by doing and by feeling.

Language models and AI agents excel at pattern-matching: they predict likely continuations based on data. That’s immensely useful for drafting emails, triaging support tickets or surfacing relevant documents. But pattern-matching is not the same as embodied understanding. A model can simulate concern in a reply; it cannot inhabit the context that produced the concern.

“Pattern-matching systems produce convincing outputs but lack the embodied experience required for genuine meaning.”

When AI automation helps — and when it hurts

Use cases where AI for business shines:

  • Automating repetitive tasks (invoice processing, data entry).
  • AI agents for lead qualification that surface promising prospects for sales reps.
  • Drafting first-pass content, summarizing long reports, and extracting entities from documents.

Where friction matters and automation risks damage:

  • High-stakes decisions (medical triage, defence applications) where split-second automation can bypass ethical judgement.
  • Customer journeys where serendipity and trust-building matter; over-optimized scripts can kill rapport.
  • Information ecosystems: as AI-generated content grows, models risk retraining on their own synthetic outputs, diluting human-authored signal.

Public sentiment already reflects unease. Surveys from organizations such as Pew Research show widespread skepticism outside tech hubs, and commentators in the AI-and-climate community have flagged dystopian metaphors comparing models’ training energy to human food consumption. The more we treat adoption as a KPI rather than a careful choice, the more cultural pushback and regulatory scrutiny we can expect.

A practical framework: the Frictions Matrix

Leaders need a simple, repeatable rule-of-thumb. The Frictions Matrix scores every process on two axes: task ambiguity (how unclear the correct answer is) and consequence impact (how costly a mistake would be).

  • Low ambiguity / Low impact: Automate. Let AI agents handle routine queries and low-risk optimizations.
  • Low ambiguity / High impact: Automate with human verification. AI can draft or pre-filter, humans approve.
  • High ambiguity / Low impact: Human-led with AI assistance. Keep humans in the loop to capture nuance and learning.
  • High ambiguity / High impact: Human-first. Preserve friction. Use AI only for administrative aid, not final decisions.

Example: a customer-support team uses AI to auto-respond to password resets (low ambiguity/low impact), routes complex complaints to senior agents with AI-suggested summaries (high ambiguity: human-led with AI assistance), and reserves escalation decisions about customer retention offers for experienced managers (high ambiguity/high impact).
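The quadrant logic above can be sketched as a small scoring function. This is an illustrative sketch, not a prescribed implementation: the 1–5 scales, the threshold, and the process names are assumptions for demonstration.

```python
# Threshold on an assumed 1-5 scoring scale; scores of 3+ count as "high".
HIGH = 3

def frictions_quadrant(ambiguity: int, impact: int) -> str:
    """Map a process's ambiguity and impact scores to a Frictions Matrix quadrant."""
    if ambiguity < HIGH and impact < HIGH:
        return "Automate"
    if ambiguity < HIGH:
        return "Automate with human verification"
    if impact < HIGH:
        return "Human-led with AI assistance"
    return "Human-first"

# Hypothetical scores for a few support processes (made up for illustration).
processes = {
    "password reset": (1, 1),
    "refund approval": (2, 4),
    "retention offer escalation": (5, 5),
}
for name, (amb, imp) in processes.items():
    print(f"{name}: {frictions_quadrant(amb, imp)}")
```

In practice, scores would come from the top-20 process audit described below, with ambiguity and impact rated by the people who own each process.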

6-step checklist for C-suite leaders

  1. Audit your top 20 processes — map ambiguity and impact, then place each process on the Frictions Matrix.
  2. Set boundaries — define what AI agents may do autonomously and where human sign-off is required.
  3. Measure beyond speed — track CSAT/NPS, human override rate, false positive/negative rates, and long-term churn tied to automated interactions.
  4. Protect data quality — deploy provenance tracking, watermarking, and mixed human-curated datasets to prevent AI-generated feedback loops.
  5. Audit and iterate — schedule periodic reviews of AI decisions and retraining datasets; require explainability for high-impact models.
  6. Communicate norms — tell employees and customers where AI is used and what recourse exists; transparency builds trust.

Metrics that matter (not just velocity)

  • Customer satisfaction (CSAT / NPS) — before and after automation rollouts.
  • Human override rate — percent of AI decisions corrected or escalated by humans.
  • Error rates — false positives/negatives in automated decisions with tolerance thresholds.
  • Retention / churn — long-term impact of automated interactions on customer relationships.
  • Provenance score — percentage of training data with verifiable human origin.
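Two of these metrics — human override rate and provenance score — reduce to simple ratios. A minimal sketch, assuming decision counts are logged and each training record carries an `origin` field (both are assumptions about your data model, not a standard):

```python
def human_override_rate(total_ai_decisions: int, overridden: int) -> float:
    """Fraction of AI decisions that humans corrected or escalated."""
    return overridden / total_ai_decisions if total_ai_decisions else 0.0

def provenance_score(records: list[dict]) -> float:
    """Share of training records with verifiable human origin (assumed 'origin' field)."""
    if not records:
        return 0.0
    human = sum(1 for r in records if r.get("origin") == "human-verified")
    return human / len(records)

# Example: 30 of 200 automated decisions were overridden -> 15% override rate.
print(human_override_rate(200, 30))
```

A rising override rate is a useful early warning that a process sits in the wrong quadrant of the matrix.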

30/90/180 day action plan

  • 30 days: Run the top-20 audit; map processes on the Frictions Matrix; pick 1–2 pilots (one low-risk automation, one hybrid human-in-the-loop).
  • 90 days: Execute pilots with clear KPIs; implement provenance tagging for content and datasets; set human-in-the-loop workflows for high-impact tasks.
  • 180 days: Review pilot outcomes; scale successful patterns; establish governance (audit cadence, explainability requirements, and escalation rules).

Short case examples

Customer support: A retailer automated returns processing with an AI agent that handled routine cases end-to-end, while anything flagged as “ambiguous” — incomplete photos, conflicting policies — routed to senior agents with AI-generated summaries. Result: faster resolutions on simple cases, fewer escalations, and improved agent learning from the ambiguous cases.

Sales enablement: An enterprise deployed AI for lead scoring (AI for sales) to prioritize outreach. Instead of fully replacing reps, the system suggested touchpoints and surfaced anomalies; reps treated those leads as conversation starters rather than scripts. Outcome: higher-qualified opportunities and an uptick in deals where rep intuition intersected with AI suggestions.

Guarding against the ouroboros of synthetic signal

As AI-generated content proliferates, models risk learning from their own outputs. That ouroboros — synthetic data retraining synthetic models — will degrade the quality of knowledge unless organizations take steps:

  • Maintain human-verified datasets for core models.
  • Watermark or tag synthetic content so downstream systems can discount it.
  • Schedule data refreshes with human-in-the-loop validation to break feedback loops.

“Speeding up decisions with AI risks turning reflection into a losing proposition when every moment is optimized.”

Ethics, defence and life-or-death decisions

Putting AI agents into life-or-death roles demands layered oversight. Delegating split-second choices to automation in defence or emergency medicine raises both ethical and practical challenges: machines can process signals faster than humans, but they cannot assume moral responsibility or weigh ambiguous values in context. Where consequence impact is high, preserve human judgement, enforce explainability and create formal accountability mechanisms.

Final thought and next step

Friction is not a failure mode to be scrubbed away; it’s often the price of depth, trust and moral discernment. AI for business and ChatGPT-style assistants are powerful tools — use them to remove drudgery, not to replace the messy work that makes organizations resilient and humane.

If you’d like practical help, I can deliver either a one-page Frictions Matrix and a 90-day pilot plan for your executive team, or a LinkedIn-ready summary with pull quotes and a board-slide headline. Which would you prefer?