People Analytics, ONA, and the Missing “People-Side KM” Layer for Enterprise GenAI
Yumi · Jan 20 · Updated: Jan 23
In my paper, “The Organizational Intelligence Loop (OIL): A Socio-Technical Framework for Adaptive Enterprise AI Adoption,” I argue that the real unlock for enterprise GenAI is to stop treating Organizational Network Analysis (ONA) as a one-off diagnostic and instead engineer it as infrastructure: a privacy-aware, continuously updated people-side knowledge layer that can be governed, queried, and used at runtime for adoption, routing, and decision quality.
ONA is one form of people analytics (PA), but it answers a different class of questions than traditional HR reporting.

When most teams say “people analytics,” they mean reporting: attrition, engagement, performance, DEI, workforce planning. That work matters, but it rarely answers the questions enterprise GenAI keeps running into: who actually knows what, who people trust, how decisions really move, and how work truly routes.
In the OIL framing, People Analytics (especially ONA) becomes the foundation of people-side knowledge management: a measurable, governable organizational context layer enterprise AI systems can use safely under real constraints.
If you’re reading this from a change management angle, I wrote a companion post on how these same people-side signals let leaders measure culture and predict adoption, not just build better GenAI: Beyond “Vibes”: How Change Leaders Can Measure, Test, and Shift Culture for Innovation
1) Semantic KM: the content layer (necessary, but incomplete)

For enterprise AI to work, we absolutely need the foundations of traditional Knowledge Management.
This includes the Semantic Layer: the taxonomies, ontologies, and knowledge graphs that define what a concept is (Section 2.3). These systems provide the "truth" of the documents—metadata, provenance, and definitions.
However, a traditional ontology is static. It knows that "Project Alpha" is a "Strategic Initiative," but it does not know that Sarah is the only person anyone trusts to make decisions about Project Alpha. Without this social layer, GenAI fails at "routing"—it generates content but cannot navigate the organization’s political and trust structures.
To fix this, we need to pair the Semantic Knowledge Graph (content) with a Behavioral Knowledge Graph (people).
Traditional KM builds the semantic layer; OIL adds the people-side layer; together they become enterprise AI context.
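To make that pairing concrete, here is a minimal sketch in Python (the node and relation names are mine, purely for illustration) of holding the two graphs side by side so a system can look up the people-side context attached to any semantic concept:

```python
# Minimal sketch: a semantic graph for content and a behavioral graph for
# people, kept side by side. Node and relation names are hypothetical.
import networkx as nx

# Semantic layer: what a concept is and how documents relate to it.
semantic = nx.DiGraph()
semantic.add_edge("Project Alpha", "Strategic Initiative", relation="is_a")
semantic.add_edge("Quarterly Product Roadmap", "Project Alpha", relation="describes")

# Behavioral layer: who owns, trusts, and routes around that same content.
behavioral = nx.DiGraph()
behavioral.add_edge("Sarah", "Project Alpha", relation="trusted_decision_maker")
behavioral.add_edge("Priya", "Quarterly Product Roadmap", relation="accountable_lead")

def org_context(concept: str) -> list[tuple[str, str]]:
    """Return the people-side ties that touch a semantic concept."""
    return [(person, data["relation"])
            for person, target, data in behavioral.edges(data=True)
            if target == concept]

print(org_context("Project Alpha"))  # [('Sarah', 'trusted_decision_maker')]
```

The point is not the library; it is that the behavioral graph is a first-class, queryable structure rather than tribal knowledge.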
2) People Analytics and ONA: the people-side KM layer

Passive ONA vs Active ONA (two signal classes)
To build this Behavioral Knowledge Graph, we must distinguish between two fundamentally different types of signals, as detailed in Section 2.2 of the paper:
Passive ONA (Telemetry):
These are structural traces derived from digital exhaust—meeting co-attendance, Slack responsiveness, file collaboration, and workflow time stamping. Passive ONA measures the opportunity for interaction and the structure of work.
Active ONA (Meaning):
These are relational signals collected via lightweight prompts or surveys—confirmed trust ties, perceived expertise, and psychological safety. Active ONA measures the quality and human meaning of the connection.
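One way to keep that distinction honest in a pipeline is to model the two classes as separate types, so telemetry can never be silently substituted for confirmed meaning. A minimal sketch (the field names are mine, not from the paper):

```python
# Hypothetical types separating the two ONA signal classes.
from dataclasses import dataclass

@dataclass
class PassiveSignal:        # structural traces from digital exhaust
    source: str             # e.g. "slack", "calendar", "file_collab"
    interaction_count: int  # measures opportunity, not quality

@dataclass
class ActiveSignal:         # relational signals from lightweight prompts
    kind: str               # e.g. "trust", "perceived_expertise", "safety"
    confirmed: bool         # explicitly confirmed by a person
```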
The Translation Gap
A critical error in modern analytics is assuming that high telemetry (Passive) equals high trust (Active). I call this the “Translation Gap” (Section 4.3). Just because two people communicate frequently (a high Passive signal) does not mean they influence each other; they might simply be stuck in a dysfunctional workflow loop.
The mathematics of trust: trust density
In the OIL paper, trust density is used as a simple way to turn “trust in the network” into something a system can measure and reason about. Instead of only counting how many interactions people have, it asks: among all the relationships where people work together, how many are actually trusted?
A plain version of the formula is:

trust_density = (confirmed trust ties) / (total interaction opportunities)
Example:
If a team has 1,000 interaction opportunities but only 10 trust-confirmed ties, then:

trust_density = 10 / 1000 = 0.01
This means the team looks very connected but has low confirmed trust. An AI system should treat this as “high activity, low trust”: it should be cautious about routing critical decisions or relying on informal advice from that cluster without additional checks.
By contrast, when trust_density is high, the system can safely lean more on that group’s internal routing and peer recommendations, because the relationships being used are not just frequent, they are also trusted.
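As a worked sketch of the numbers above (the function name and zero-guard are mine):

```python
# Trust density: share of interaction opportunities backed by confirmed trust.
def trust_density(confirmed_trust_ties: int, interaction_opportunities: int) -> float:
    if interaction_opportunities == 0:
        return 0.0  # no relationships to evaluate
    return confirmed_trust_ties / interaction_opportunities

# The team from the example: highly connected, but little confirmed trust.
print(trust_density(10, 1000))  # 0.01 -> treat as "high activity, low trust"
```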
3) Production reality: nonresponse is a signal
Traditional survey-based ONA often breaks in real organizations because it treats nonresponse (people not filling out the survey) as “missing data.” In the OIL framework, nonresponse is treated as a behavioral signal that needs interpretation, not something to ignore.
If an AI system or analytics workflow relies on voluntary input, it has to account for the fact that people are busy, overloaded, or cautious. A nonresponse-aware approach looks at silence in context, for example:
High passive load + nonresponse: The person may be overwhelmed or over-committed. The system should protect their time and route optional requests away from them where possible.
Low passive load + nonresponse: The person may be disconnected from the network, skeptical of the change, or lacking psychological safety. The system should treat this as a potential engagement or adoption risk.
By treating silence as data, the system stops waiting for “perfect” response rates and starts making more robust, conservative decisions with the data it actually has. These patterns still need to be interpreted case by case and calibrated to each organization’s context, but they provide a practical starting point for using nonresponse as a meaningful signal instead of a blind spot.
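As a sketch of that branching logic (the threshold and labels are illustrative; as noted above, real deployments need per-organization calibration):

```python
# Hypothetical nonresponse interpretation: silence is read in the context
# of passive load rather than discarded as missing data.
def interpret_nonresponse(responded: bool, passive_load: float,
                          high_load: float = 0.8) -> str:
    if responded:
        return "use the response as an Active signal"
    if passive_load >= high_load:
        return "likely overloaded: protect time, route optional asks away"
    return "possible disengagement: flag as an adoption/safety risk"

print(interpret_nonresponse(responded=False, passive_load=0.9))
```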
4) From standard Graph RAG to “behavioral Graph RAG”
Graph Retrieval-Augmented Generation (Graph RAG) is already an established technique: it allows AI to retrieve related concepts (“Project Alpha” is related to “Q4 Strategy”) rather than just matching keywords. But standard Graph RAG usually stops at the semantic layer; it knows how documents relate, but not how people relate.
The OIL framework extends this by feeding the system a Behavioral Knowledge Graph (introduced in Section 4.3 of the paper). This allows the AI to retrieve not just the content, but the lived reality surrounding it.
Standard Graph RAG: Retrieves the “Quarterly Product Roadmap” because it semantically matches the user’s question about upcoming features.
OIL-based behavioral Graph RAG: Retrieves the roadmap, but also checks the behavioral/organizational graph to answer two questions before responding:
Passive signal (permissions and role): “This user is a frontline support rep; historically, people in this role only see customer-facing milestones, not internal deprecation plans or M&A-related changes.”
Active signal (ownership and authority): “For this roadmap, Priya is the accountable product lead, and the VP of Product is the only role authorized to communicate certain shifts (for example, sunset decisions and strategic bets).”
The result:
Instead of dumping the full roadmap, the GenAI generates a socially and politically aware answer, such as:
“Here are the customer-facing features currently planned for the next quarter that are relevant to your support queue. Some internal roadmap details are restricted to product leadership. For clarification or exceptions, Priya (Product Lead) is the designated contact for this area.”
This is the same idea Box CEO Aaron Levie keeps pointing to: it’s not just “can the model answer the question?” but “should this person be able to ask this question and see this level of detail?” The shift from semantic Graph RAG to behavioral Graph RAG brings role, ownership, trust, and communication authority into the retrieval and answer-generation path.
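To make the gating step concrete, here is a minimal, self-contained sketch (all data, names, and roles are hypothetical): semantic retrieval finds the roadmap, then the behavioral checks decide what this role may see and who the designated contact is.

```python
# Hypothetical behavioral Graph RAG gating: filter retrieved content by
# role (Passive signal), then attach ownership/authority (Active signal).
ROADMAP = [
    {"item": "Customer-facing feature X", "audience": "all"},
    {"item": "Internal deprecation plan", "audience": "product_leadership"},
]
BEHAVIORAL = {"Quarterly Product Roadmap":
              {"accountable_lead": "Priya", "comms_authority": "VP of Product"}}

def answer(user_role: str, doc: str = "Quarterly Product Roadmap") -> str:
    visible = [r["item"] for r in ROADMAP
               if r["audience"] in ("all", user_role)]    # role-based filter
    contact = BEHAVIORAL[doc]["accountable_lead"]         # ownership lookup
    return (f"Planned items you can see: {', '.join(visible)}. "
            f"Some details are restricted; contact {contact} for exceptions.")

print(answer("frontline_support"))
```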
What this adds up to
The contribution is not “ONA + GenAI.” It is making semantic KM plus people‑side KM usable as enterprise AI infrastructure.
Semantic KM foundations (taxonomy, ontology) are necessary, but not sufficient.
Passive and Active ONA must be treated as distinct so we do not confuse structural opportunity with relational meaning.
The Translation Gap must be handled through derived constructs, such as Trust Density, to make signals machine-readable.
The Result: Two connected graphs available at runtime—a knowledge graph on the information side and an organizational graph on the people side.
Full paper: https://doi.org/10.7916/9da8-c532
For the change-leader view of this, including how to tie these indicators to adoption speed and business outcomes, see Beyond “Vibes”: How Change Leaders Can Measure, Test, and Shift Culture for Innovation