
The Organizational Intelligence Loop (OIL), a framework for adaptive enterprise AI

  • Writer: Yumi
  • Jan 19
  • 6 min read

Updated: Jan 20

For the past 7 years, through multiple pivots and roughly 3 years of preparation before our present roadmap, I have been building my startup, LEAD, around one belief: the hardest enterprise problems are not about models or search. They are about strategy and execution, which depend on how people actually coordinate, trust, and transfer knowledge. A big part of my work has been people analytics, especially Organizational Network Analysis (ONA): using collaboration and workflow telemetry to map expertise, knowledge flow, and informal influence at scale, and turning those signals into measurable inputs for decision-making and system design. If you’re specifically interested in people analytics or ONA, I wrote a deeper-dive companion post here.


Over the past year, I put that work into an academic frame at Columbia University’s Information and Knowledge Strategy (IKNS) program. Under the supervision of Katrina Pugh, Ph.D., a leading expert in knowledge strategy and organizational networks, I distilled that experience into a new paper:

“The Organizational Intelligence Loop (OIL): A Socio-Technical Framework for Adaptive Enterprise AI Adoption.”

(You can download it from Columbia Academic Commons: https://doi.org/10.7916/9da8-c532)



Why this paper exists

Enterprise GenAI is shifting from chat assistants to agentic workflows. Even with strong models, RAG, and semantic layers, outcomes in production are still uneven.

In practice, enterprise AI often fails because it lacks organizational context at runtime, even when the knowledge base is carefully governed.


The paper reframes this as an infrastructure and architecture problem rather than a tuning or UX problem. It lays out an approach that architects, platform teams, and engineering leaders can build toward, especially if you are:

• Designing enterprise search or GenAI assistants

• Orchestrating agentic workflows

• Leading adoption and governance in a large organization



The core gap: context is more than documents

Most stacks implicitly define “context” as documents plus metadata and semantics.

In production, enterprise AI also needs governed signals about:

• Expertise and tacit ownership: who actually knows and maintains what

• Permissions and decision rights: who is allowed to see, decide, and act

• Trust and informal influence: who people follow and how work really routes

• Workflow reality: dependencies, bottlenecks, and handoffs that make something truly actionable


Without this, agents do not just hallucinate facts. They also hallucinate authority, routing, and exposure. Answers can look correct while being misaligned with how the organization actually works.
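
To make the gap concrete, here is a minimal sketch of what a retrieved "context bundle" could look like once these signals travel alongside documents. The types and field names are my own illustrative assumptions, not an API from the paper:

```python
from dataclasses import dataclass, field

# Illustrative only: these field names are assumptions, not an API from the paper.

@dataclass
class DocumentChunk:
    doc_id: str
    text: str
    relevance: float  # semantic similarity score from standard retrieval

@dataclass
class OrgContext:
    # Expertise and tacit ownership: who actually knows and maintains this
    experts: list[str] = field(default_factory=list)
    # Permissions and decision rights: who may see, decide, and act
    allowed_roles: list[str] = field(default_factory=list)
    decision_owner: str | None = None
    # Trust and informal influence: how work really routes
    trusted_reviewers: list[str] = field(default_factory=list)
    # Workflow reality: dependencies and handoffs that make an answer actionable
    upstream_dependencies: list[str] = field(default_factory=list)

@dataclass
class ContextBundle:
    chunks: list[DocumentChunk]
    org: OrgContext

# An agent consuming a ContextBundle can check allowed_roles before exposing
# content, and route follow-ups to decision_owner instead of guessing.
```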



What you get from reading it

This is not a generic “adoption” memo. It turns organizational theory and knowledge-management research into two main outcomes: how to drive adoption and how to design enterprise GenAI.


1. Adoption patterns and metrics

OIL for Enterprise GenAI adoption in one illustration

Concrete ways to use OIL to make adoption measurable and repeatable.

For adoption and transformation leaders, this includes:

• Patterns for instrumenting uptake, trust, and behavior change beyond “monthly active users” (a minimal sketch follows this list).

• Ways to tie network maturity, connector health, and psychological safety to rollout strategies and intervention choices.

• How to use the Behavioral Knowledge Graph, Influence Architecture, and OIL loop to see where adoption is stuck (content, trust, routing, or process) and design targeted interventions instead of generic training.
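
As a rough illustration of the first point, here is a small Python sketch of adoption signals that go past monthly active users. The event names and metric definitions are illustrative assumptions, not instrumentation defined in the paper:

```python
from collections import Counter
from datetime import datetime

# Hypothetical event log: (user, event_type, timestamp). The event names
# are illustrative assumptions, not metrics defined in the paper.
events = [
    ("ana", "query", datetime(2025, 1, 6)),
    ("ana", "accepted_answer", datetime(2025, 1, 6)),
    ("ben", "query", datetime(2025, 1, 7)),
    ("ben", "escalated_to_human", datetime(2025, 1, 7)),
    ("ana", "query", datetime(2025, 1, 13)),
]

def adoption_signals(events):
    """Separate raw usage from trust and behavior change, beyond MAU."""
    users = {u for u, _, _ in events}
    per_type = Counter(e for _, e, _ in events)
    queries = max(per_type["query"], 1)
    return {
        "active_users": len(users),                            # the usual MAU-style count
        "trust_rate": per_type["accepted_answer"] / queries,   # answers users act on
        "escalation_rate": per_type["escalated_to_human"] / queries,  # routing friction
        "repeat_users": sum(                                   # behavior change: users who return
            1 for u in users
            if sum(1 for uu, e, _ in events if uu == u and e == "query") > 1
        ),
    }

print(adoption_signals(events))
```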


2. Enterprise GenAI design patterns

How enterprise AI should be designed

Reference patterns for designing policy-aware, organization-aware GenAI systems.

For architects and engineering leaders, this covers:

• How to wire the organizational layer into LLMs, RAG pipelines, and agents at runtime, including context retrieval, routing, and escalation (a sketch follows this list).

• How to design feedback and guardrails so model behavior, organizational context, and human workflows can be tuned together over time.
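
To make the runtime wiring concrete, here is a minimal sketch of an answer path where the organizational layer sits beside retrieval. Every name here (retrieve_chunks, org_layer.context_for, and so on) is a hypothetical stand-in, not an interface from the paper or any specific library:

```python
# Minimal sketch, under assumed interfaces: retrieve_chunks, org_layer, and
# llm are hypothetical stand-ins passed in by the caller.

def answer(question: str, user: str, retrieve_chunks, org_layer, llm):
    chunks = retrieve_chunks(question)              # standard semantic retrieval
    org = org_layer.context_for(question, user)     # expertise, permissions, routing

    # Permission check before any content reaches the model or the user.
    visible = [c for c in chunks if org.can_view(user, c.doc_id)]
    if not visible:
        return {"route": org.escalation_target(question),
                "reason": "no permitted sources; escalate to a human owner"}

    # Ground the model in both documents and organizational context.
    prompt = (
        f"Question: {question}\n"
        f"Sources: {[c.text for c in visible]}\n"
        f"Owner of this topic: {org.decision_owner}\n"
    )
    draft = llm(prompt)

    # Low-confidence or high-stakes answers route to a trusted reviewer
    # instead of going straight back to the user.
    if org.requires_review(question):
        return {"route": org.trusted_reviewer(question), "draft": draft}
    return {"answer": draft, "owner": org.decision_owner}
```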


Within that, the paper gives three concrete building blocks:

2.1 Behavioral Knowledge Graph

A privacy-preserving graph built from collaboration and work-pattern signals. For engineers, this looks like:

• A data model and service that exposes expertise visibility, knowledge flow, and operational bottlenecks as queryable signals (see the sketch below).

• Integration patterns using existing telemetry (calendar, messaging, presence, workflow tools) without breaking privacy boundaries.
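
As a toy illustration of such a service, here is a sketch using networkx. The nodes, edges, and attributes are my own assumptions; in the paper's framing the graph is built from aggregated, privacy-preserving signals, never raw message content:

```python
import networkx as nx

# Toy behavioral knowledge graph: edges carry aggregated interaction weights
# from collaboration telemetry (assumed shape, illustrative only).
g = nx.Graph()
g.add_edge("ana", "ben", weight=12, channel="code-review")
g.add_edge("ben", "chen", weight=7, channel="incident-response")
g.add_edge("ana", "dana", weight=3, channel="onboarding")
g.nodes["ana"]["expertise"] = {"payments", "compliance"}
g.nodes["chen"]["expertise"] = {"payments"}

def experts_for(topic: str) -> list[str]:
    """Expertise visibility: who is associated with a topic."""
    return [n for n, d in g.nodes(data=True) if topic in d.get("expertise", set())]

def bottlenecks(top_k: int = 2) -> list[str]:
    """Operational bottlenecks: people many knowledge paths run through."""
    central = nx.betweenness_centrality(g)
    return sorted(central, key=central.get, reverse=True)[:top_k]

print(experts_for("payments"))   # ['ana', 'chen']
print(bottlenecks())             # ['ana', 'ben']: the connectors paths run through
```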


2.2 Influence Architecture

A way to represent informal influence and trust pathways so routing, recommendations, and interventions follow how decisions and change actually propagate, not just the org chart. In system terms:

• A layer that can inform ranking, routing, escalation, notification targeting, and change-management scenarios.

• Signals that can be joined with content-level retrieval to decide not only “what to answer” but also “who to involve and where to route.”
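
A minimal sketch of that join, with made-up scores and a hand-picked blend weight, might look like this:

```python
# Illustrative join of content relevance with influence/trust signals.
# Scores and weights are assumptions, not calibrated values from the paper.

influence = {"ana": 0.9, "ben": 0.6, "chen": 0.4}   # informal influence scores

candidates = [
    {"doc_id": "runbook-7", "relevance": 0.82, "owner": "chen"},
    {"doc_id": "wiki-12",   "relevance": 0.78, "owner": "ana"},
]

def rank(candidates, w_content=0.7, w_influence=0.3):
    """Blend semantic relevance with the owner's trust pathway."""
    return sorted(
        candidates,
        key=lambda c: w_content * c["relevance"]
                      + w_influence * influence.get(c["owner"], 0.0),
        reverse=True,
    )

best = rank(candidates)[0]
# Decide not only "what to answer" but "who to involve and where to route".
print("answer from:", best["doc_id"], "| involve:", best["owner"])
```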


2.3 The OIL Framework

A continuous learning loop across four layers:

• People: expertise visibility, trust pathways, influence bottlenecks

• Information: ownership, definitions, provenance, governance foundations

• Process: workflow telemetry, how coordination and decisions actually happen

• Agentic AI Design: policy-aware, permission-aligned context and intervention pathways so humans and agents can adapt safely over time
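
Read as code, one pass of the loop might look like the sketch below. The layer names follow the paper; the observe and intervene callbacks are hypothetical:

```python
# One OIL cycle as a sketch: observe each layer, turn friction into a
# targeted intervention, and feed the updated state into the next cycle.

LAYERS = ["people", "information", "process", "agentic_ai_design"]

def run_oil_cycle(state: dict, observe, intervene) -> dict:
    """observe(layer, state) -> signal dict; intervene(layer, signal, state) -> state."""
    signals = {layer: observe(layer, state) for layer in LAYERS}
    for layer, signal in signals.items():
        # Friction in any layer becomes a targeted intervention,
        # not generic training.
        if signal.get("friction"):
            state = intervene(layer, signal, state)
    return state
```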


For architects, OIL can be read as an organizational-layer reference model. It clarifies:

• What new services or data products you may need (for example, behavioral graph, influence service, coordination telemetry store).

• How those services feed into LLMs, RAG pipelines, and agents at runtime.

• Where to place guardrails and governance hooks so AI behavior can be monitored and adjusted over time.
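
As one example of the last point, here is a minimal sketch of a governance hook that sits between an agent and any side effect. The policy table and action names are illustrative assumptions:

```python
# Sketch of a guardrail hook: every agent action passes a policy check
# before it can take effect. Policy entries are illustrative assumptions.

POLICY = {
    "share_document": {"requires_permission": True, "log": True},
    "notify_user":    {"requires_permission": False, "log": True},
}

def guarded(action: str, actor: str, has_permission, audit_log) -> bool:
    """Allow, block, and log an agent action according to policy."""
    rule = POLICY.get(action)
    if rule is None:
        raise ValueError(f"no policy for action: {action}")
    if rule["requires_permission"] and not has_permission(actor, action):
        audit_log(f"BLOCKED {actor} -> {action}")
        return False
    if rule["log"]:
        audit_log(f"ALLOWED {actor} -> {action}")
    return True
```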


The goal is to give engineering, product, data, and KM leaders a concrete organizational layer they can design and instrument, not only a conceptual model.



Evidence base

The framework is grounded in:

• Practitioner interviews

• Systematic analysis of public industry commentary

• Large-scale enterprise behavioral and collaboration metadata

The data spans more than 2,000 organizational deployments, with a 2020–2025 subset of about 1,500 organizations, including:

• 4M calendar events

• 300M presence signals

• 4M channel memberships

• 1M peer links

• 60K surveys

• 160K anonymized profiles


Two brief case studies show, among other things:

• How tacit knowledge pathways depend on early peer connectors and collapse with attrition

• How network maturity and psychological safety shape activation speed, beyond product features or training



Startup context and guardrails

I started LEAD in 2020, before GenAI went mainstream, with a systems view: enterprise execution depends on observable coordination patterns, perceived expertise, and trust.

Today we operate two products:

• LEAD.bot, which supports knowledge-sharing programs and onboarding in mid to large enterprises

• Sunrize, which helps teams coordinate meetings using presence signals


We operate with SOC 2-aligned security and privacy controls and process customer data only under explicit agreements and safeguards. That operational reality strongly influences how the OIL framework handles governance, privacy, and permissions.



Acknowledgments and cited leaders

This work was developed as an independent study in Columbia IKNS, advised by Katrina (Kate) Pugh, Ph.D., and influenced by discussions around KMWorld and other practitioner communities.


I am grateful to those who helped sharpen the thinking, including Mihai Pohonţu, Nabeel Ahmad, Ed Hoffman, Eve Porter-Zuckerman, Christina Libs, and several others who preferred not to be named.


The paper also engages with and cites public thinking from leaders such as:

Aaron Levie (CEO, Box)

Adam Grant (Professor, Wharton School)

Aditya Challapally (Researcher, MIT / NANDA Lab)

Al Adamsen (Founder & Director, Future of Work Project)

Alexis Fink (VP, People Analytics, Meta)

Andrew Ng (Co-founder, Google Brain and Coursera)

Cole Napper (Head of Workforce Analytics, Lightcast)

Dawn Klinghoffer (VP, HR Business Insights, Microsoft)

Helen Bevan (Chief Transformation Officer, NHS Horizons)

Jensen Huang (CEO, NVIDIA)



If you only remember one thing

Enterprise GenAI needs more than semantic retrieval. It needs an organizational learning loop: governed context that can be retrieved at runtime, plus intervention pathways that convert friction into measurable behavior change. Only then can AI performance and adoption improve together over time.


If you are building enterprise search, GenAI assistants, or agentic workflows, or leading adoption in a large organization, I am open to conversations and collaboration.



Disclaimer: 

This paper describes a reference architecture and design primitives, not a turnkey “rebuild everything” program. In practice, OIL is meant to be implemented incrementally: start with a narrow, high-value workflow, add only the minimum organizational signals needed under clear governance, and expand as you see measurable improvements in routing, safety, and adoption. The people-side layer is not employee surveillance and should never be used for performance policing. OIL assumes privacy-preserving design, explicit agreements, strict access controls, and purpose limitation; the goal is safer coordination and better knowledge flow for enterprise AI through small, governed steps and transparent evaluation.





