Beyond “Vibes”: How Change Leaders Can Measure, Test, and Shift Culture for Innovation
- Yumi

- Jan 21
- 12 min read
Updated: Jan 29
“Culture eats strategy for breakfast” gets repeated so often it has become a permission slip to stay vague. In real transformations (AI rollouts, new operating models, post-reorg integration), culture is not a mood. It is measurable behavior: how decisions route, who gets believed, how quickly help arrives, whether uncertainty triggers learning or fear, and whether cross-silo coordination is resilient.
If you lead change, the fastest way to stop arguing about “culture” is to instrument it.
In my paper, The Organizational Intelligence Loop (OIL): A Socio-Technical Framework for Adaptive Enterprise AI Adoption, I frame culture as designable infrastructure, especially when you treat People Analytics (PA) and Organizational Network Analysis (ONA) as a continuously updated signal layer rather than an annual report or a one-off org study (People Layer framing, Section 2.2; PDF p.53).
This post is written for change management leaders and HR or People Analytics leaders who want to move from culture talk to repeatable, measurable change.
Culture is not a vibe. It is measurable behavior.

A simple way to instrument culture: Passive ONA + Active ONA
Organizational Network Analysis (ONA) gives you a practical map of how work actually moves, not how the org chart claims it moves. The core idea is simple: enterprise telemetry and human meaning are not the same thing, so you need two complementary signal types to measure culture during change.
Passive ONA (what people do)
Passive ONA comes from behavioral traces: meeting co-attendance, shared collaboration spaces, responsiveness rhythms, workflow participation, and cross-team touchpoints. It scales well and shows what the organization is actually doing over time. This is where you can quickly see where cross-silo bridges exist (or don’t), which teams are isolated clusters, and who is overloaded as the “router” for everything.
Active ONA (what people mean)
Active ONA comes from lightweight, intentional input: who do you trust, who do you go to for help, who do you think is an expert, who do you rely on when things go wrong. This captures meaning that telemetry cannot reliably infer. The distinction matters because adoption drivers like trust, perceived expertise, and advice-seeking willingness cannot be assumed from interaction traces alone.
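As a sketch, the two signal types can be treated as simple edge sets and compared directly. All names and ties below are invented for illustration, not data from the paper:

```python
# Hypothetical edge lists: passive ties from collaboration telemetry,
# active ties from a short pulse ("Who do you go to for help?").
passive = {("ana", "bo"), ("bo", "cy"), ("cy", "dee"), ("ana", "cy")}
active = {("ana", "bo"), ("cy", "dee"), ("bo", "fay")}

def undirected(edges):
    """Normalize ties so (a, b) and (b, a) count as the same edge."""
    return {tuple(sorted(e)) for e in edges}

p, a = undirected(passive), undirected(active)

# Interaction without confirmed trust: opportunity, not relationship.
unconfirmed = p - a
# Trust ties with no passive trace: meaning that telemetry alone misses.
invisible_trust = a - p

print(sorted(unconfirmed))      # [('ana', 'cy'), ('bo', 'cy')]
print(sorted(invisible_trust))  # [('bo', 'fay')]
```

The two residuals are the point: neither signal type alone tells you where trust actually lives.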
Why pulse surveys beat annual surveys during change
If you are leading transformation, you do not need a long annual survey to tell you what already happened. You need fast feedback. Short pulse surveys validate what passive telemetry cannot, and they surface where nonresponse is concentrated. Nonresponse is not noise—uneven response often correlates with workload, trust, and psychological safety. The goal is not to over-survey people; it is to translate traces into decision-grade constructs.
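The nonresponse-concentration check is a small aggregation. A minimal sketch with an invented roster and response set:

```python
from collections import Counter

# Hypothetical pulse round: team membership and who actually responded.
team_of = {"ana": "data", "bo": "data", "cy": "ops", "dee": "ops", "eli": "ops"}
responded = {"ana", "bo", "cy"}

sent = Counter(team_of.values())
answered = Counter(team_of[p] for p in responded)

# Nonresponse rate per team. Concentrated silence is a signal to
# investigate (workload, trust, safety), never a score to punish.
nonresponse = {t: 1 - answered.get(t, 0) / n for t, n in sent.items()}
print(nonresponse)  # ops is mostly silent while data fully responded
```
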
How to measure culture with People Analytics without turning it into bureaucracy
Measure trust density, not just activity. High activity can reflect busyness; trust density separates interaction opportunity from trust-confirmed relational reality.
Use practical probes to reveal where inertia lives. Deploy a structured probe (introductions, buddy programs, lightweight pairing, opt-in expertise matching) and measure behavioral follow-through and drop-off.
Link people metrics to business outcomes. When you connect indicators like connector health and trust density to outcomes like onboarding speed, execution latency, and escalation load, People Analytics becomes decision support instead of just reporting.
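One plausible way to operationalize trust density is the share of passive interaction ties that are also confirmed by an active pulse. The sketch below makes that assumption with invented data; the construct in the paper may be defined differently and should be calibrated per organization:

```python
def trust_density(passive_edges, active_edges):
    """Share of observed interaction ties that are also trust-confirmed.
    High activity with low trust density suggests busyness, not cohesion."""
    p = {tuple(sorted(e)) for e in passive_edges}
    a = {tuple(sorted(e)) for e in active_edges}
    return len(p & a) / len(p) if p else 0.0

# Hypothetical team: four interaction ties, only one confirmed by the pulse.
interactions = [("ana", "bo"), ("bo", "cy"), ("cy", "dee"), ("ana", "dee")]
trusted = [("bo", "ana")]
print(trust_density(interactions, trusted))  # 0.25
```
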
Paper map (where this is developed in OIL): This section draws primarily from the People Layer framing (Section 2.2, p.53), the measurement/construct translation logic (Part 4.3, p.22–23), the signal taxonomy (Appendix A3, p.79–80), and the supporting program/case evidence (Appendix case study, p.73–77).
The two engines that decide whether change spreads or stalls
1) Network maturity (the structural engine)
Network maturity is the organization’s hard wiring. It reflects whether there are bridging ties and credible connectors who carry new practices across silos. In the paper, I use constructs like connector density and connector health to explain why some rollouts spread naturally while others require sustained campaigning (for example, PDF p.24, plus later synthesis).
If adoption depends on a few overloaded go-to people, the system is structurally fragile. You cannot communicate your way out of that.
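A rough overload check, assuming you can count bridging (cross-team) ties per person; the roster and ties here are invented:

```python
from collections import Counter

# Hypothetical roster and observed ties; only cross-team ties are bridges.
team = {"ana": "data", "bo": "data", "cy": "ops", "dee": "ops", "eli": "sales"}
ties = [("ana", "cy"), ("ana", "dee"), ("ana", "eli"), ("bo", "cy")]

bridge_load = Counter()
for u, v in ties:
    if team[u] != team[v]:
        bridge_load[u] += 1
        bridge_load[v] += 1

# Share of all bridging load carried by the single busiest connector.
total = sum(bridge_load.values())
top, load = bridge_load.most_common(1)[0]
print(top, round(load / total, 3))  # one person carries the largest share
```

When one node's share dominates, the rollout depends on that person's bandwidth, which is the structural fragility described above.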
2) Psychological safety (the behavioral multiplier)
2.1 Psychological safety is what happens when the system encounters ambiguity. Do people experiment, ask questions, and recover, or do they escalate, freeze, and avoid risk? In the OIL framing, psychological safety is not only a culture-survey outcome. It shows up in how teams respond to small failures and uncertainty across real adoption trajectories.
Even strong networks underperform when fear dominates. Even imperfect tooling succeeds when teams treat friction as normal learning.
2.2 A practical test: watch what happens after a small glitch

A practical way to measure culture is to watch how the organization reacts to minor anomalies.
In one case comparison in my paper, the same small timing irregularity in an introduction/connection workflow produced radically different outcomes:
Organization A: a small delay triggered urgent escalation and fear of consequences. Adoption required prolonged internal campaigning.
Organization B: the same anomaly was treated as background noise. Participation reached near-universal levels quickly with minimal promotion.
The takeaway is not that one organization was “nicer.” The point is that network maturity and psychological safety behave like system properties. They predict whether your rollout will look brittle or adaptive.
Paper map (where this is supported in OIL): Appendix case comparison (p.73–77), with the broader measurement framing and construct translation developed in Part 4.3.
How to use this for change programs (without creating bureaucracy)
Here is the lightweight playbook:
Pick one initiative: AI rollout, new operating rhythm, or post-reorg integration. Define 2–3 adoption behaviors you actually care about (e.g., activation, repeat usage, cross-team follow-through).
Instrument the system with two inputs
Passive map: who actually interacts across boundaries (where bridges exist, where silos persist).
One short pulse: who is trusted, who unblocks, who is credible in this specific context.
Design interventions based on what you see
If bridges are missing, build them intentionally (pairing, introductions, structured cross-team routes).
If a few connectors are overloaded, redistribute load (secondary champions, alternate routing paths).
If trust is low, seed change through credible nodes, not titles.
If fear dominates, design safer recovery loops (small pilots, visible norms, rapid iteration).
Re-measure in short cycles. Culture shifts faster than annual surveys can detect. Use quick refresh loops so you can adjust interventions while the change is still happening.
How this maps to classic change management frameworks
ADKAR / Prosci: your adoption behaviors are your Ability + Reinforcement indicators; the pulse validates where Desire/Confidence/Clarity are missing; interventions target barriers and reinforcement in specific teams.
Kotter’s 8 Steps: ONA helps identify the real guiding coalition (credible messengers), exposes where you need to remove obstacles, and makes short-term wins measurable by tracking actual spread.
Agile / Lean change (PDCA): “instrument → intervene → re-measure” is a PDCA loop, keeping change adaptive rather than relying on a one-time plan and annual surveys.
Behavioral science / nudge approaches: pairing, routing tweaks, and small recovery loops are behavior-level interventions that shift defaults and reduce friction, instead of relying on speeches.
Two reminders most frameworks under-emphasize
If the product/workflow is bad, motivation won’t save it. Many change frameworks assume the “solution” is good and focus on buy-in. In practice, if the tool is slow, confusing, misaligned with workflows, or creates risk, people will rationally avoid it—no matter how strong the messaging is. Instrumentation helps here: low repeat usage and drop-off patterns often signal product/workflow friction, not “resistance.”
In low-trust or low-psychological-safety environments, people avoid visibility. Even if everything else is well-designed, people may be afraid to ask questions, try something new, or admit confusion because it draws attention to them. What looks like “apathy” can be self-protection. That is why measuring trust and safety signals matters: it tells you when the correct intervention is safer learning loops and credible peer pathways, not louder top-down motivation.
How ONA helps in those situations
1) If the product/workflow is bad
What helps
Separate “change resistance” from “product friction.” If you see decent first-time trials but low repeat usage, that’s usually friction, not mindset.
Fix the top 1–2 workflow breaks first (time cost, extra steps, unclear value, risk/approval burden). Until those are fixed, change messaging is noise.
Make the product safer to try: low-stakes sandbox, reversible actions, minimal visibility, clear “this won’t be held against you” language.
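The “decent first-time trials but low repeat usage” check above can be computed from a plain usage log. A hypothetical sketch:

```python
# Hypothetical usage log for a new tool: (person, week) activity events.
events = [("ana", 1), ("ana", 2), ("bo", 1), ("cy", 1), ("dee", 1), ("dee", 3)]

weeks_active = {}
for person, week in events:
    weeks_active.setdefault(person, set()).add(week)

tried = set(weeks_active)
repeated = {p for p, ws in weeks_active.items() if len(ws) > 1}

# Healthy trial with weak repeat usually points at workflow friction,
# not mindset or "resistance".
print(len(tried), len(repeated), len(repeated) / len(tried))
```
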
How ONA helps
Find where the product actually works (clusters/roles with higher repeat usage) and treat those teams as “proof nodes.” You learn what value is real.
Detect structural adoption failure: if the tool requires cross-team cooperation but the network is siloed, the product will “feel bad” even if it’s good.
Route product feedback through credible connectors instead of only formal channels, so iteration is faster and higher quality.
Practical ONA move: overlay usage + repeat usage onto the network map. You’ll see whether drop-off concentrates in specific clusters, boundary-crossing steps, or around a few overloaded routers.
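The overlay move above can be approximated by grouping repeat usage by network cluster. A sketch with invented cluster assignments:

```python
from collections import defaultdict

# Hypothetical overlay: each person's network cluster, plus whether they
# kept using the tool after first trial.
cluster = {"ana": "A", "bo": "A", "cy": "B", "dee": "B", "eli": "B"}
repeat_user = {"ana": True, "bo": True, "cy": False, "dee": False, "eli": True}

by_cluster = defaultdict(list)
for person, c in cluster.items():
    by_cluster[c].append(repeat_user[person])

# Repeat-usage rate per cluster: drop-off concentrated in one cluster points
# at local workflow breaks or boundary friction, not company-wide resistance.
rates = {c: sum(flags) / len(flags) for c, flags in by_cluster.items()}
print(rates)
```
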
2) If trust / psychological safety is low
What helps
Reduce visibility risk. Make participation private or anonymous; avoid public leaderboards; don’t frame non-usage as “resistance.”
Start with safe pockets. Pilot in teams with healthier trust and connector structure; prove value before expanding.
Use peer pathways, not sponsor pressure. Trusted peers feel safer than top-down mandates.
Design recovery loops. Normalize small failures, shorten time-to-help, and make escalation supportive rather than punitive.
How ONA helps (and what it cannot do alone)
ONA helps you see the shape of fear:
Identify trusted nodes (via a short pulse: who do you trust / who unblocks you) and use them as local carriers.
Locate fear hotspots indirectly (concentrated nonresponse, missing help-seeking ties, extreme escalation centralization).
Design safer diffusion paths (route enablement through trusted ties instead of forcing cross-silo visibility too early).
Practical move: ask “Who do you go to when things go wrong?” and compare it to passive collaboration ties; big gaps often indicate hidden fear.
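That comparison is a set difference between stated help ties and observed collaboration ties. A sketch with invented responses:

```python
# Hypothetical answers to "Who do you go to when things go wrong?"
# stored as (asker, helper) pairs.
help_ties = {("cy", "ana"), ("dee", "ana"), ("eli", "bo")}
# Passive collaboration ties observed in telemetry.
passive = {("ana", "cy"), ("bo", "eli"), ("cy", "dee")}

def undirected(edges):
    """Treat ties as undirected so (a, b) and (b, a) match."""
    return {tuple(sorted(e)) for e in edges}

# Help relationships with no passive trace: people name a lifeline they
# rarely interact with visibly, a possible hidden-fear signal.
hidden = undirected(help_ties) - undirected(passive)
print(sorted(hidden))  # [('ana', 'dee')]
```
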
But in low-safety cultures, the “why” matters as much as the “where.” You usually need to do two additional things:
(1) Identify the causes: leaders + incidents + incentives
Low safety is often rational. It forms after specific incidents (public blame, punishment for surfacing risk, political retaliation, failed reorgs) and then gets reinforced by incentives (only outcomes matter, no tolerance for learning, status games). The intervention is not only measurement—it is accountability.
(2) Confirm leadership willingness to change behavior
This is the hard gate. If the leaders who created or reinforce fear are not willing to change—how they react to bad news, how they handle mistakes, what they reward—then “culture work” becomes cosmetic. In that scenario:
treat the initiative as risk management (minimize exposure, keep pilots private, avoid forcing public participation)
or accept that the organization is intentionally optimizing for a different culture
If leadership won’t change, what are the real options?
Hire for resilience in that environment (people who are comfortable operating under high scrutiny / low forgiveness) if the company explicitly chooses that culture and the work requires it.
But be clear: this does not “fix” psychological safety. It is a deliberate trade-off, and it can cap learning speed, innovation, and early-signal reporting.
Bottom line: ONA is valuable because it tells you where to intervene and who can carry change safely. But restoring psychological safety typically requires changing the behavior of the people who control consequences. If they are not willing, the most honest move is to design around that reality—either by limiting visibility risk or by staffing for the culture the organization is actually choosing.
Why this matters more in the GenAI era
Enterprise GenAI is forcing a culture reckoning. Many “AI adoption” failures are actually coordination failures in disguise: unclear ownership, invisible expertise, fear-driven escalation, and informal influence that does not match the org chart. When those dynamics are invisible, AI rollouts get misdiagnosed as “change resistance,” and the typical fix becomes more comms, more training, more pressure.
The more durable approach is to treat the People Layer as measurable infrastructure. ONA turns these hidden dynamics into something you can observe, quantify, and design around. When you combine passive network signals (how work actually routes) with lightweight active validation (who is trusted, who unblocks, who holds decision-rights), you get decision-grade inputs for both change programs and enterprise AI systems—so adoption can be engineered rather than hoped for.

How to shift culture: intervention design, not speeches
Culture shifts when routing behavior shifts. That means identifying credible messengers and real decision pathways so change moves through how work actually travels, not only through the org chart. Durable adoption also requires intervention pathways: small nudges and workflow adjustments tied to measurable friction. If you want change to survive reorg cycles and sponsor churn, you need a small set of leading indicators (e.g., trust density, connector health, activation speed) plus a repeatable playbook (routing, peer enablement, cadence shifts).
A practical note for HR and change leaders
Even in high-velocity organizations, HR and transformation leaders often hesitate to “wake sleeping lions.” Sometimes it is political. Often it is simply resource constraints: limited budget, limited air cover, and too many competing priorities.
That caution is rational. The goal of measurement is not to provoke controversy. It is to reduce avoidable friction and improve decision quality on outcomes the business already values: onboarding speed, cross-team execution, AI rollout adoption, and coordination during reorganizations or integrations. In the AI era, HR’s leverage increases when People Analytics and people-side insights are positioned as business infrastructure—something that helps leaders answer: who should own this, who will unblock this, where will adoption stall, and what is a safe intervention.
Practical closing for change leaders
If you want repeatable adoption, stop treating culture as a vibe. Measure network maturity and psychological safety as system properties. Combine passive ONA with lightweight active validation through pulse surveys. Translate signals into decision-grade constructs. Tie those constructs to business outcomes so you can predict and intervene—not just report.
Risks and guardrails when you take culture “Beyond Vibes”
A measurement led approach in People Analytics and digital transformation can be powerful, but it has predictable failure modes. The goal is not to avoid measurement. The goal is to measure in a way that improves trust, decision quality, and outcomes instead of creating performative behavior or backlash.
A baseline principle: do not use a single signal to judge performance. People data is contextual. For example, “not always online” does not equal “slacking.” Presence, meeting load, response time, and collaboration traces vary by role, time zone, deep work needs, caregiving, and project phase. If leaders treat one metric as a moral judgment, the system will learn to game the metric and you will lose the truth.
1) The observer effect in People Analytics
When people know a behavior is being tracked, they change it.
Risk: you get performative culture. Dashboards look healthy while psychological safety and decision quality do not improve. People optimize for what is measured, such as more meetings or more visible activity, instead of what matters, such as faster resolution and better learning.
Guardrails: measure outcomes and resilience, not just activity. Validate with multiple signals, such as passive traces plus lightweight pulses plus qualitative spot checks. Treat metrics as diagnostics, not scorecards tied to individual evaluation.
2) Over engineering the human element
There is a temptation to treat culture exactly like a software deployment.
Risk: too many experiments, too many changes, and too much measurement creates change fatigue. Constant testing can feel like instability, especially while people are already learning new tools and processes.
Guardrails: keep a clear measurement budget with a small number of indicators and a stable cadence. Change one thing at a time and hold it long enough to learn. If psychological safety is low, reduce visibility risk and increase support before you scale.
3) Data privacy and trust erosion
Moving beyond vibes can require collaboration telemetry.
Risk: if it feels like surveillance, measurement destroys trust. People self censor, avoid smart risks, and disengage, which undermines innovation.
Guardrails: be explicit about purpose, boundaries, and governance. Clarify what is collected, what is not, how data is aggregated, who can access it, and what it will never be used for. Default to team level insights rather than individual monitoring whenever possible.
4) Correlation versus causation
Culture and business outcomes often move together, but causality is hard.
Risk: leaders may credit a culture intervention for an improvement that actually came from market factors, a leadership change, or a specific high performing hire. That can lead to scaling the wrong fix.
Guardrails: use comparison thinking. Look at before and after, and compare similar teams that did not receive the intervention. Track leading indicators that logically connect to the outcome, such as unblock time, escalation load, and cross team completion, not only top line KPIs.
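The comparison thinking above can be made concrete with a simple difference-in-differences estimate against a similar team that received no intervention. The numbers below are invented:

```python
# Hypothetical weekly unblock times (hours) before/after an intervention,
# for a treated team and a comparable team that got no intervention.
treated_before, treated_after = [10, 12, 11], [7, 6, 8]
control_before, control_after = [10, 11, 10], [9, 10, 9]

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: change in treated minus change in control,
# a rough guard against crediting the intervention for market-wide drift.
effect = (mean(treated_after) - mean(treated_before)) - (
    mean(control_after) - mean(control_before)
)
print(round(effect, 2))  # hours of unblock time attributable beyond the trend
```

This is comparison thinking, not a causal proof; confounders like team composition still apply.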
5) The local versus global paradox
Culture is rarely uniform across a global organization.
Risk: an approach that works in one team or region can fail in another. Centralized playbooks can become one size fits all solutions that do not fit locally.
Guardrails: measure and intervene at the team or network cluster level, not at “the company” level. Keep global principles and governance consistent, but allow local adaptation in interventions and messaging.
6) Simplifying complex dynamics into KPIs
Measurement can create a false sense of certainty.
Risk: leaders over index on what is easy to count and miss what matters, such as shared history, informal influence, and subtle warning signs that a transformation is in trouble.
Guardrails: treat metrics as alerts, not truth. When a signal moves, ask what story explains it before acting. Maintain a qualitative layer such as listening loops, structured interviews, and narrative capture. The goal is better decisions, not automated decisions.
Notes and disclaimer
One important clarification on intent.
I developed this “Beyond Vibes” approach and the ONA plus signal stack primarily to make people and behavior data usable as an enterprise GenAI context layer, so AI systems can operate with real organizational reality: who owns what, who is trusted for what, how work routes, and how knowledge moves. By default, this is not an employee performance evaluation framework. It is a knowledge management and change enablement framework.
That said, I am not naive about where the industry is heading. As AI capabilities advance, more organizations will try to repurpose these signals for HR and talent decisions. If that happens, the bar for governance has to go up, not down. The only responsible path is strong privacy protections, clear boundaries on use, bias and misuse safeguards, and an explicit norm that these signals are meant to improve system design and coordination, not become a new surveillance layer or a shortcut for judging individuals.
The examples and constructs in this post (e.g., connector density, trust density, activation speed) are based on observational evidence from LEAD deployments and related research, and are intended as design heuristics and leading indicators rather than finalized diagnostic standards. They should be validated and calibrated within each organization’s context before being used for high‑stakes decisions. All uses of ONA and people analytics should operate under clear privacy, consent, and governance boundaries, with signals aggregated and role‑appropriate so individuals are not unfairly exposed or surveilled.


