Decision Science

Waking up.

I think we are waking up. From my limited perch, it appears sudden clarity is emerging in businesses. These past months I have been privileged to work with clients who get it. The rebels deep within their walls are being heeded, funded, and presented with the intractable questions. Having tried everything in the MBA kit bag, they are willing to consider almost anything. One caveat before I go further: Chief Financial Officers still set the de facto business strategy for too many companies. The urgency behind the upcoming Transition Economics gathering in St. Louis has not dissipated. When considering this piece, I hesitated often because I do not want to come across as Utopian. Nevertheless, in my narrow field of knowledge management, for specific firms, something interesting is happening.

'Our market/employees/customers are changing in ways we cannot accommodate with our current processes, technologies and strategy.'

They are realizing that humans are not resources, information is not knowledge, and processes are fundamentally flawed guesses about the future. Just as problematic, and difficult to uproot: their information technology has been solving the wrong problem.

We have crafted entire industries and technologies to 'manage *' where * equals 'knowledge,' 'information,' 'content,' 'innovation,' etc. We dabbled in something we called "decision support systems" for generations, up until today, without understanding a critical question: how are decisions made?

We decided people must be as rational as machines; whether we said that openly (outside of the dismal science) I do not know. But that must have been the conviction behind things like taxonomies for 'browsing,' forcing new information into buckets that define how our company sees its world. These are useful mechanisms for jobs where checklists are the best decision support tool. The last time I developed a taxonomy, it was for an HR department that wanted to ensure their far-flung workforce applied established policies consistently. In my view, this was a reasonable use for a taxonomy (although best integrated with tools for folksonomy). However, taxonomies are misleading for the circumstances in which humans face most problems. "Fixing search" or "refreshing the taxonomy" fails to address the core problem.

My thesis is this: we have spent generations developing information technology tools that address little more than how machines talk with other machines. As for how humans use information to make decisions - this was left to "change management," or waved off as "cultural barriers." The presumption: Craft the human to fit the machine, and value will ensue.

There is a tangential issue here, one I raise often in offline conversations. For most (not all) Chief Information Officers (CIOs), the measure of success is highly reliable and secure systems. Uptime and availability matter most of all, and monetary rewards accompany an uninterrupted experience. Put another way - CIOs would score "blue" on their performance evaluations if they killed all their users. Humans are the messy actors in an otherwise soaring career choice. Consider how technology procurement and policy are influenced by this simple truth.

Consider the mantra for information management, sometimes blindly asserted as the goal for knowledge management: "delivering the right information to the right people at the right time." This noble-sounding vision has launched a thousand portals - and is wrong in every dimension. To borrow from a long-ago colleague: it is "spherically incorrect." The underlying presumption is one of prediction. That the designers and developers of an information system somehow can know what the right information is for any given stakeholder, in any possible circumstance - is true only for the simplest of problem sets. "OK, Glass - how do I fix a flat tire?"

Now consider how decisions are made: individually, driven by experience, emotion, unconscious biases, filters, and mood; in a group, driven by social dynamics, agendas, fear, reputation, consensus, et al. For any interesting problem set, the best tool for decision-making is the right conversation at the right time with the right people. This is true for classrooms, National Security Council meetings, design studios, auto showrooms, diplomacy, boardrooms, and lovers. And everything in between.

I believe we are starting to wake up to this understanding. What works everywhere else in our lives just may be crazy enough to work in our businesses. It may be worth a shot.

The Death of Mystery?

Actor Bruce Dern was on a show recently where he mused about his first days in Hollywood rubbing shoulders with the giants of the entertainment industry. “They were larger than life,” he told the host, “because no one knew what they were doing after school.” He finished by offering: “now everyone knows what happens after school, and there is no more mystery.”

The end of mystery is one outcome in these early days of the ‘social era,’ or whatever we end up calling this time. The examples are all around us:

  • Russia claims hundreds of thousands flee from Ukraine, while social media points us to a webcam that purports to show a quiet border crossing.
  • A Congressman’s private extra-marital flirtations are a mis-click away from becoming global broadcasts.
  • Young entertainers behave like young people - and ‘news sites’ thrive like parasitical sucker fish on the visual evidence of their exploits.

This goes further, however. For every news event, social media offers reassurances that our gut reaction to the news is justified. News feeds are tailored to the items that attract us, and our nascent opinions are reinforced quickly by our Facebook and Twitter feeds. So quickly, in fact, that often our views are shaped before we can imagine. Before we can ponder what events mean, if anything.

And this may be the tragedy. With news media (in countries like the USA and China, based on personal observation) positioned to tell us what we should think about events over reporting the events, it is a simple exercise to believe the “analysis” over new information. Research shows that once we hold a position on a topic, new information that conflicts with that position is not welcomed, but questioned. The more new information argues against our position, the more entrenched our opinions become.

What happens when we form opinions quickly, edging out the imagination that is a natural response to partial information? When we receive confident “analysis” that supports what we wish to believe about an event, at almost the same time we learn about that event, we skip past the part where we struggle to make sense of the new circumstance. We miss the opportunity for novel thought.

Russian troops move into Crimea; so what does that mean? See if you can find yourself in this list:

  • This is obviously a result of President Obama’s weakness on the world stage
  • This is obviously a reasonable response to protect the ethnic Russians in Crimea, who are distressed that their democratically-elected leadership was forced from office
  • This is obviously Putin, still distressed by the end of the Soviet Union, reinforcing a “near abroad” doctrine (Russia is allowed to intervene in internal affairs of its immediate neighbors - a doctrine compared to the US historical stance towards Latin America, and discussed within political science as the phenomenon of “geographic fatalism” from the perspective of the target nations.)

And so on. Within hours, blogs were written to explain the events, including one creative author who noticed the February date these events share with the 1933 Reichstag fire, and we begin to filter and form our opinions - based not on imagination and our own experience, but on the opinions made available to us. While blogs take hours in some cases, Twitter can be counted on for immediate reactions.

What happens when we forget to imagine? What happens to novel solutions, to that overused “innovation” word? If we are training ourselves to allow others to imagine for us, usually people whose world view already matches ours, what becomes of our ability to learn or debate or be civil to those who disagree?

In fact, mystery is not dead. Whatever early confidence we develop, whatever appears certain to us, the situation itself remains uncertain. Our opinions do not change facts. Mystery is alive and well; what may have eroded is our ability to revel in it. To consider, learn, and experiment with novel ideas. Our ability to envision is missing at many levels. We need it back.

Trusting the Terrain

This picture does not indicate a non-stop train.  One would think, if one did not speak Dutch, that you boarded the train at Tilburg, and de-trained at Den Haag Centraal.  (My actual origin point was Eindhoven a few hours earlier, but that’s part of the story.)

To expand. If one grew up on New York’s Long Island Railroad - where, if you boarded a train bound for a certain station, barring disaster that train would pull into the promised station - then one would bring that unarticulated experience and associated expectations to Holland.  (If you boarded west of Babylon, you knew you would always change trains there for points east, as the track electrification did not begin until Babylon.)

Where, say, one boards a train in Eindhoven that glides in almost silently under a station sign that reads:  Den Haag Centraal.  One then settles in the foyer area of the car, content to not wrestle luggage into a seat.  Out of sight of car notification signs and not speaking the language of the occasional announcement?  Sure.  But that sign said Den Haag Centraal, so the only mistake would be getting off this particular train.

The email load was significant, the book was interesting, I was buried in it.  And never noticed the mass exodus when we reached Rotterdam.  Nor the prolonged wait before the train started moving again.

In the opposite direction.

Wait, you say.  How on earth did I not notice it was now moving in the wrong direction?  I don’t fully know the answer, but as there is no American equivalent to the smoothness of European trains, I can honestly say I was not terribly aware of any movement.  And the book was interesting.  And I was on the right train - remember that sign in Eindhoven?  And the app seemed to be reinforcing that the Intercity would “Richting” Den Haag Centraal at 14:37.  (Only in writing this do I check the translation of Richting, which feels like a verb, but really only means ‘direction.’)

All the indicators upon which I relied for train travel - all the more important since I do not speak the language - told me I was on my way to my preferred destination.  Despite a host of clues screaming that I was simply wrong.  After two hours aboard the Intercity, I finally checked my position on my maps application, trying to see where I was.  Only then did I notice the dot was moving not only away from Den Haag, but was already nearing a return to Tilburg (one station short of my origin point of Eindhoven).  On the ‘return’ journey (the source for my screen capture image), I paid attention at each stop and finally noticed the exodus at Rotterdam, along with the station sign above the train that no longer promised Den Haag.  I even had to (gasp) ask a passerby for the number of the transfer platform.

Recently, minutes from U.S. Federal Reserve meetings were released that give us more detail into the conversations inside the Fed as the 2007 recession began to take hold.  There were minority views trying to explain they were living in an outlier scenario, but these were outvoted repeatedly.  From the NYT article: “The Fed’s understanding of the crisis…was clouded by its reliance on indicators that tend to miss sharp changes in conditions.”  Clouded by its indicators.

If you think my idiocy on the Rotterdam platform was remarkable, consider how the Fed officials continued to fret about inflation as the market and housing prices crashed and unemployment began to rise.  When we are predicting, the failure to question our indicators could just mean it takes us five hours to traverse a two-hour journey - or it could mean we miss a catastrophe unfolding around us.

In the movie Gravity, the stranded astronaut believes she has sufficient resources aboard a Soyuz capsule, until she senses a change in the environment that causes her to tap on an analog gauge (an old pilot’s trick, as vital needles tended to freeze unhelpfully in obsolete positions).  As she taps, the needle falls to a more correct reading, ostensibly nearer to zero.

In these three stories - only the fictional character reacts appropriately by first questioning her indicators.  I had forgotten (and the fine economists on the Fed never learned) an old lesson I first read regarding an admonition in the Swedish Army Manual:  “Where the terrain and the map disagree, trust the terrain.”  Even when riding the famously efficient and wonderful trains of Europe, we should be careful to not become clouded by our indicators.

Just imagine how we need to think differently in less structured endeavors.


I enjoyed a pleasant email exchange recently with someone who referenced an earlier (infamous?) blog posting regarding what I witnessed as the death of Knowledge Management in the U.S. Department of Defense.  Without rehashing that work, I was interested to see that the post was circulating again.  I'm happy to revisit what I saw in 2009, and welcome any opportunity to update that observation.  Within the email exchange, I was asked a question - what do I see as the difference between Information Management and Knowledge Management?  I thought I would share that answer here, offering it up to the gods of Google, in case I need it again someday.

The difference between IM and KM is the difference between a recipe and a chef, a map of London and a London cabbie, a book and its author.  Information is in the technology domain, and I include books (themselves a technology) in that description.  Digitizing, subjecting to semantic analysis, etc., are things we do to information.  It is folly to ever call it knowledge, because knowledge is the domain of the brain.  Knowledge is an emergent property of a decision maker - the experiential, emotional framing of our mental patterns applied to circumstances and events.  It propels us through decision and action, and is utterly individual, intimate, and impossible to decompose because of the nature of cognitive processing.  Of course, I speak here of individual knowledge.

First principles: don't lose sight of how we process our world.

The difficulty is applying this understanding to organizational knowledge.  Knowledge exists only in the brain, but organizations have a shared understanding (referred to as 'knowledge') as well - humans gathered in groups fit themselves into artificial decision constructs ("collaboration," "consensus") in order to leverage their collective individual knowledge to make decisions for the group.  My approach is to draw on cognitive science, organizational theory, and information science to find ways to improve group behaviors.

Are These Data?

A few years ago, I answered the phone.  I’ve since learned my lesson and silenced the landline.  When someone leaves a message there now, the tiny blue light flickers forlornly until I log on to the interwebs to listen and laugh at the voice mail.  The particularly entertaining ones I forward to my wife’s email for her bemusement.  But on this day, I answered the phone.  On the other end I found an individual conducting a survey on behalf of Freddie Mac and Fannie Mae.  For reasons I can neither recall nor fathom, I listened and agreed to participate.  Once told of the subject, I told the person that I had no connection to or experience with these organizations.  It turned out, that did not matter.  She continued to ask me questions about the firms (whose names she read in toto for each question for the next ten minutes), probing all around my completely vacant perception of them.  I wondered aloud how useful this information was, and briefly considered making up outrages or plaudits just to make her day more interesting.

Today, there are news stories about these firms’ attempts to improve their branding and message.  I suspect my interview was part of that effort, and no doubt it was rolled up and considered insight into the public mind.  Some unnamed (and named) consultants made serious coin analyzing these results and suggesting ideas to improve the numbers.

How does my experience resemble political polls, which today make up approximately 67% of all news stories?  (Statistics are fun to make up - try it yourself!)  How do people respond to questions about how they will vote in a little less than a year?  How many of them take that call as seriously as I took my Fannie Mae / Freddie Mac survey?  How is it that so many people still use landline phones, apparently the only method by which these survey firms reach people?

A student of mine opined recently on the qualitative method by declaring it inferior, only useful for setting up hypotheses for the more grownup quantitative methods.  These quantitative methods often feature scientific polls with established margins of error.  Far better, the thinking goes, to consider the aggregate of poll results, carefully diced and analyzed, over the anecdotes and full narrative of experience.  Such is the domain of the soft sciences.  Where “data” relies on those people who are eager to give honest answers to a stranger interrupting their day with a ten-minute questionnaire.
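Those established margins of error, for what it's worth, quantify only sampling noise - how much the numbers would bounce around if you re-ran the poll with a different random thousand people.  They say nothing about whether a bored respondent invented opinions to liven up the call.  A minimal sketch of the standard calculation (the function name and sample figures are mine, for illustration):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a sampled proportion.

    p: observed proportion (0.5 means 50% answered yes)
    n: number of respondents
    z: z-score for the confidence level (1.96 for roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll: 1,000 respondents, an even split
print(f"+/- {margin_of_error(0.5, 1000) * 100:.1f} points")  # roughly +/- 3.1 points
```

Note what the formula contains: p and n, nothing else.  The honesty, attention, or comprehension of the person answering the landline appears nowhere in it.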

I don’t mean to completely impugn the survey method.  I just wonder how much of what passes for ‘data’ should be taken with a few grains of your favorite seasoning.  Layering time-honored mathematical models on top of an individual’s representation of their thoughts and intentions does not, it turns out, improve the quality of that information.

Avoiding the Hook

On occasion, I am honored to present a three-hour course on decision science as part of a regular seminar for senior feds who are in important jobs.  I once heard a comedian remark that absolutely nothing is worth doing for more than two hours, but while the gentleman obviously is not a football fan - in general I have to agree.  I always approach these speaking engagements with some trepidation, knowing how little I enjoy sitting through multi-hour training sessions or other Festivals of Talking Heads.  One of the compelling things about the Ignite series is the fact that speakers have to be off the stage in five minutes.  TED talks are worthwhile partly because their speakers take up no more than twenty minutes of your time.

Plenty has been written about PowerPoint etiquette, and how some styles actually inhibit retention.  This comes about when you put a lot of text on a slide, and then compound the injury by reading the text to your audience.  This almost guarantees low retention, as the brains in your audience do not know whether to focus on reading or listening.  More often than not, they tune out.

Relying on the good work of Garr Reynolds (“Presentation Zen”) and Nancy Duarte (“Slide:ology”), as well as other research on how brains behave, I try to follow a few rules when I can.  One is to surprise the audience every ten minutes or so - although I can’t promise I always succeed at this one.  The other is to use eye-catching photos and very little text.  My presentation at this seminar consists, for the most part, of embedded videos (it’s always nice to give your audience a break from you) and slides that are mainly a photo with a pithy phrase.  I don’t even read the phrase on each one, preferring to tell a story or anecdote that demonstrates the point of the slide - or sometimes offering the dry theory with a pointed reference: “Emotion plays a central role in decision-making; when we ask an expert to relate the decision logic they used in a specific situation, they lie.  They don’t mean to - they can’t help it, because so much of their personal decision process is unknown to them.”

What drives me to write this on a rainy football Sunday?

Well, I wanted to share with you the result of an experiment I ran this past week.  Mindful of the retention theory, I chose to demonstrate it in practice.  Since I didn’t think of it until the morning of the presentation, I went without a net.  At the end of the three-hour presentation, I showed photos from the course without any text.  One at a time, five in all. “Tell me what you learned while this slide was up.”

During the breaks, a few students asked if there was a reason for the strange approach to PowerPoint (I didn’t have the heart to tell them it was Keynote).  I had set this up perfectly, and the disappointment would be crushing if it failed.  I dreaded silence, blank looks.

The class knew every slide.  By the third one, they were answering in unison.  This wasn’t just the eager students at the front of the class; every one of the 20 or so in the audience could speak to the message given on slides they had seen once, briefly, and then not again for over two hours.

I had a conversation last week with someone on Facebook who argued for the ‘standardized’ project brief format.  We all know this one.  The position was that every project used a standard brief format, the information was on the slides, and the briefing team did not spend excessive time creating unique content.  I sympathize with this approach, but cannot escape the fact that my little experiment demonstrated the theory.  If you think the ‘creative’ approach to slide-ware is not worth your time, so be it.  But if you are briefing people with some interest in having them retain your information, I dare you to repeat my experiment.  Be careful if you do, however.  Now that I’ve seen this work in person, it’s going to be hard to go back to boring my listeners.


Photo from Rob Lee’s collection on Flickr.

Summering from Behind

Some time ago, some media sources characterized the U.S. Administration's military involvement in Libya and Syria as 'leading from behind.'  I heard this phrase and thought:  "interesting, they're taking a nuanced and shared approach to a conflict where our national security interests may be threatened but not clear."  Having been honored to spend a good chunk of my career around national security policy analysts and leaders, I consider nuance to be a useful tool in a president's utility belt. Far more heard the phrase and thought:  Since when does America fight from the back of the pack?  Leading from behind makes no sense!  The mental image was a platoon where the leader is marching behind his troops, or placing a steering wheel in the rear of the car.  The metaphor was jarring and we stopped listening to one another.  Not only did the phrase fail to trigger upsetting mental images for me, I failed completely to appreciate how many people would respond to the strategy.  Having immersed myself in the implications of complexity in policy analysis for several years now, I no longer hear things the same way as before.

I am Beltway.

This is a town that lives on the shared metaphor - we declare war on drugs, war on poverty, and consider the energy crisis the 'moral equivalent of war.'  For a President to fight an actual war in a way that sounds 'unAmerican' violated a shared metaphor for many of us.

What's the role of metaphor in our understanding?  Lakoff & Johnson claim that understanding 'takes place in terms of entire domains of experience and not in terms of isolated concepts.'  We cannot separate our understanding from context, and our context is extraordinarily personal.  You can try to influence how someone understands your message, but you cannot enforce the metaphor they use to understand it.  Nevertheless, you should at least be aware of how your words may trigger a metaphor broadly shared everywhere in the nation - except inside BeltwayTown.

I've been away from blogging for most of 2011's summer.  A summer that found Beltway Town struggling to place their policy objectives into metaphors that would stir the voter - or at least the voters who are called by pollsters.  We heard of hostage-taking, credit card limits, and blank checks.  Marketing and politics seek to establish shared metaphors in order to persuade.  Some - yours truly included - decry the language and wonder why we cannot just agree on data used for our self-governance experiment, but this leaves the metaphor-fit exercise to the individual voter.  It is inevitable that as our politics become increasingly divisive (a regular campaign season event), the effort to enforce and influence a shared metaphor will increase as well.

The effort to navigate personal metaphor is an individual one, and requires intention.  The effort to avoid triggering unintended and unflattering metaphors requires understanding on all sides.  More to the point, understanding requires continued conversations with those who do not share your viewpoint.  Challenge your metaphors by conversing with those with whom you disagree - lest your personal context obscure truth.


Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. Chicago, IL: The University of Chicago Press.

Job-Killing Processes

I’ve been wrestling with a thought lately - if organizations are complex systems, and complex systems are continuously self-organizing, then why do we believe formal processes make these complex systems more efficient?  Worse, when an organization is in need, why do we engage in process improvement - when what may be needed is process reduction or elimination?  This is not the first paragraph to question process improvement; this is not some original Eureka moment.  This is a personal journey, and the enormity of the mistake is beyond what I had considered previously.  Friends, more erudite than I, have used similar words before - but for some reason I’m realizing, only recently, a simple truth: the implications of this baseless faith in the machine-based approach to management and the firm are global and profound.

A process-heavy enterprise isn’t cold and impersonal - because humans are still warm and social.  Instead, a process-heavy enterprise creates the need for larger social networks.  Formal processes do not capture the natural evolving paths people take to confront their tasks.  In response, people do what is natural, they use their social network to navigate the workplace - looking inward to find others who have succeeded despite the process.  We know that excessive time spent focused inward leads to burned-out employees, who must work the “second eight” to comply with organizational reporting and the like.  On a larger scale, this wasted effort presents - at the limit - an opportunity cost for the enterprise as a whole.  Perhaps the path to efficiency is to set the conditions for processes to emerge at the point of need, rather than Six Sigma-ing the (majority of) tasks that require creativity and agility.

In the famous early mistakes in business process re-engineering, managers believed once their processes were “streamlined” and “documented” (and embedded in enterprise software tools), they could realize savings by reducing the number of humans.  For routinized tasks, this may be a reasonable assumption - however, what percentage of your workday is routine?  Look to your own environment - do you rely on your social network to find the informal work-arounds for corporate process?  When faced with a challenging problem, do you find solace in the documented process?

Work to Rule.  In labor relations, there is a term called “work to rule.”  Simply stated, this means that union workers have a negotiating tool that enables them to paralyze an enterprise - by doing only what is considered ‘by the book.’  No creativity, no work-arounds, no focus on task accomplishment - just fealty to the process.  Consider this message: the way to crash some enterprises is to do exactly what is expected by procedure manuals and process charts.

Business Development.  In one company, I observed a set process for preparing contract proposals, with clear roles, authorities, assignments, formats, and process steps.  Chokepoints were established along the way, when a “pencils down” would precede a murder board review to assess the quality of the proposal against the procurement specifications.  These comments were returned to the writing team, who would tackle their task anew.  The information technology consisted of shared folders, and the writers laboring over each section would be required to post their documents in the appropriate folder at the required hour.  The work was intense and draining, writers were often unaware of each other’s work, and the review team invariably excoriated the team for the lack of a “single voice” or “storyline.”

In another company, the proposal response was visible at all times to the entire proposal team.  In a shared online space, the sections were worked in parallel, each writer able to observe the other’s ongoing work.  The team met daily to talk through issues, but kept in touch throughout the process through instant messaging and email. There were roles and authorities, assignments and formats here as well - but the process was determined by the writing team, and emerged and adapted based on the demands of the work and the schedule.  As the storyline evolved transparently, there were fewer surprises, people were able to lend value across the work throughout - and the end product was coherent and compelling.  This without a review team’s intervention.

Software Development. In software development, Agile methods are triumphing over waterfall or other linear methods - users are happier because their approach to their work changes as they learn what is possible from the technology solution.  The human and the software evolve together.  The old approach was to gather what people thought they needed, build the software according to specifications, and then train the humans to operate the solution.  There may be a correlation between how much training is needed and how disconnected the solution is from how people work.  When software methods allow the humans and technology to co-evolve, when humans are co-designing the solution during “development” - we seem to have happier humans.

The thoughts bouncing in my head now are:  what needs to be in place to allow for emergent processes? Formal process has a small place - compliance processes dictated by, for example, government regulation come to mind.  However, value-creating processes must emerge from the interaction of the work and the humans.  They cannot be formalized absent the humans or the situational context - if they are, then humans will circumvent them, creating a more inefficient enterprise... or follow them to the letter, and destroy value.  In a real sense, process improvement should be replaced by process enablement.  Let the approach to work emerge from the situational context.

All Social is Learning

I’ve been reflecting lately on my brief sojourn into education reform prior to returning to “the world.”  I learned several things there, including the idea that how brains work and how people interact represent new fields of study to the Field of Education.  (With apologies to any of my new Ed friends - please correct me if I heard wrong!)  Yeah, I was appalled too.  Turns out it’s called “The Learning Sciences,” and while I don’t know when it started to gain traction, people in education only somewhat recently started to compare the education system we have with the stuff we’re learning from cognitive science, sociology, etc.  Pretty exciting stuff, and I can’t help but compare this welcome attention to interdisciplinary studies to the breakthrough in economics when - RECENTLY - leading economists began to realize that people are messy and don’t have consistent utility functions.  (In both cases, the system failures become a tad obvious using this lens.)

So the world is changing. All around us. One meme making the rounds in education, attributed to The Learning Sciences, is: “All learning is social!” As someone put it this weekend on Twitter: the learning that isn’t social isn’t worth our time studying. This remains controversial - what about human instinct, core behaviors, the idea that some of our personality traits may be inherited? Surely these aren’t learned! But then we read that an infant, long before she can understand a language, can discern WHICH language is spoken by her tiny tribe. And before she understands that she belongs to the same animal group as her parents and siblings, she can discern individual faces among primates. Once she learns that she is one of the naked apes, the individuality among chimpanzee faces becomes invisible to her, as it is to us.

Ponder that one for a minute.  Heady stuff.

This weekend, I was struck by a logic stick. If all learning is social, is all social learning? We know this does not automatically follow - we learned that in the intro to Logic, Sets and Numbers (an actual college course I took in the 70’s). But when we engage in a social setting, online or offline, are we ever not learning? Let’s add a third statement: we are constantly learning. Even while we sleep, some research indicates, the brain assembles and makes sense of what it experienced that day. There isn’t a time when our brains aren’t rewiring themselves based on input from our environment.
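The converse trap from Logic class is easy to demonstrate with a toy model. This is purely illustrative - the events and their tags are invented - but it shows how "all learning is social" (learning implies social) can hold while its converse, "all social is learning," fails:

```python
# Toy universe of events, each tagged as learning and/or social.
# The events and tags are invented for illustration only.
events = [
    {"name": "first bike ride",     "learning": True,  "social": True},
    {"name": "squeaky stair",       "learning": True,  "social": True},
    {"name": "elevator small talk", "learning": False, "social": True},
]

# "All learning is social": every learning event is also social.
# In this toy universe, the implication holds.
all_learning_is_social = all(e["social"] for e in events if e["learning"])

# The converse, "all social is learning": every social event is learning.
# It fails here - the implication does not reverse automatically.
all_social_is_learning = all(e["learning"] for e in events if e["social"])
```

Whether "elevator small talk" really involves no learning is exactly the question the post goes on to argue; the point here is only that the second claim needs its own justification.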

We learn something from every experience. If events occur as predicted, we reinforce that cognitive pattern for the next use (naturally, we can learn the wrong things here). If they do not, we reconsider our pattern-assessment logic. We descend the stairs at 3 am differently once we learn that the fourth step from the landing now squeaks - and will subsequently avoid that step in another’s home without thinking.

So we’re constantly learning, and all learning is social.  (Is it?  We learned that squeaky stair avoidance thing on our own, didn’t we?  Hint:  No.)

Enter social media!  What is your social media strategy?  Does that question even make sense anymore?  Or should we ask now:  What is your learning strategy, and what role is played therein by social media, happy hours, phone calls, email, downtime, etc.?  If all social is learning, shouldn’t any associated strategy for socializing tools be focused there?


24 Nov: Update, thanks to the great comments I'm getting here. Here is another great resource exploring the notion that all learning is social, and questioning the value of corporate training methods as a result:

Free Yourselves from the Tyranny of the Document Metaphor!

(My title comes from a former colleague who buried this bon mot in a client deliverable - if she wishes me to name her, I shall. Else, know this headline gem is just something I wish I'd written.) I interjected myself into a listserv conversation last week, stating: “Documents present a barrier to knowledge - we need to move beyond the document metaphor if we're trying to cultivate knowledge.”

I was asked to explain myself, as this is considered by some a contrarian view. I first waited a few days while those more eloquent took up the cause - but here is how I responded this morning. I believe a reasonable response is to roll one’s eyes at such talk - I don’t offer a useful alternative to documents (yet), so why attend? Simple: I am trying to shake us free from the belief that improving documents will somehow improve knowledge flows and understanding. If you've already begun focusing on enabling conversations rather than uploading more documents to your portal - you have the message.

One friend offered that documents are not barriers but constraints. Here is where I part company: the document may be intended as a constraining frame, but when so much of the 'system' is omitted, this framing becomes cropping (as in image cropping). Constraint becomes distortion. The brain itself tells us why documents are cropped images of knowledge, not sufficient frames.

The brain knows spatial and temporal patterns, and predicts patterns in its environment. Language shapes expected patterns, and predisposes the brain to predict in certain ways. The marvelous thing here is that our media distinctions - images, sound, written language, spoken language, emotion, physical response - are blended in memory. More precisely, these memories are not stored blended; they are blended at the point of recall. What is stored are fragments - all knowledge is fragmented until the point of use. An author uses her knowledge to create a document, which - if well crafted, discovered, and interpreted well - will form one input for the learner.

For documents ranging from this morning’s email to early religious texts, the context lost between author and reader is significant and meaningful. Even the term ‘context’ seems to me a false reference to content metadata. For the brain, context is content. This is why we know more than we can say, and we say more than we can write down. (Polanyi, Snowden.)

{ The photo below is of neolithic 'art' from Newgrange in Ireland's Boyne Valley. The meaning of these carvings is utterly absent now, as eons washed away all metadata, culture and context. }

But more than this, our brains make use of our bodies in ways we are only beginning to understand. The Bride and I sat sipping wine on the deck last night, during a difficult conversation. At one point, her reassuring squeeze on my forearm conveyed a silent message that got me thinking about haptic memory, pattern expectations, and the “non-verbal” communication that characterizes some of this transfer. (I compared this favorably to the times she kicks me under a dinner table; the forearm message was much clearer - or perhaps I was “listening” this time.)

Research into everything from micro-expressions to mirror neurons shows us that face-to-face conversation is the richest knowledge transfer experience. Given the flow of information, both conscious and not, during a conversation, the notion that a document can capture the richness of this flow is laughable. For simple problems, documents can be sufficient (my most recent data point being the bookcase I successfully assembled from instructions penned in China - all the more remarkable if you know how useless I am at such tasks).

The reason I say documents are a barrier, then, comes from their omission of so much context/content - but also from our mistaken confidence in their ability to transfer knowledge of any depth. So long as we believe improving document structures or access will increase knowledge transfer, we will continue to erect barriers to true knowledge transfer and maintain the high error rate we all swim through each day.

You Don’t Know What You Think You Know.

Remember the first time you rode a bike without help? When the steadying hand came off the seat, or your training wheels were unscrewed and set aside for a future toddler? Remember what you were wearing? For me, it was a tweed suit, with shorts and a cap. And the hand coming off the seat belonged to a sibling who eased me down a driveway and into the street unattended. The bike was a black Schwinn RollFast. Or so I remember. The tweed-suit memory may originate from a picture taken sometime during that period, and “my brother pushed me into traffic” is an oft-told story that garners the desired comic effect. I know the bike is the right memory, as I have corroborating evidence. The rest is suspect.


I’m afraid I just played a dirty trick on you. If you did call to mind your first bike ride just now, you are now re-creating the memory as you ‘re-store’ it. Your memories are not movies in a vault that you can watch without disturbing the film itself. Instead, you interact with long-term memory, and what is then ‘stored’ is the memory as you recalled it, not necessarily the ‘truth of what happened.’ Error is magnified and becomes embedded. I may have just mucked with a precious memory of your childhood. Sorry about that.

Our personal long-term memories are recreated when we recall them, often imperfectly.  This all comes to mind as a friend attends a week-long training course in Six Sigma (don’t get me started), and after I was honored to observe a security training course a few weeks ago.  (It is always an honor to sit among the heroes working in the intelligence or warfighting community - these occasions help me remember why I am obsessed with public sector problems.)

What are the implications for training, then, if long-term memories are subjected to this imperfect storage method - and are often triggered by seemingly unrelated stimuli?  (If I smell clove, I am back at a Thanksgiving table, immersed in those memories.)  How do we truly provide “training” that will be remembered, hopefully with some degree of accuracy, long after the PowerPoint dims?  How do we brief colleagues and supervisors without putting them into a poorly lit coma?

For my part, I use methods informed by people like Garr Reynolds, Nancy Duarte, and John Medina.  For the small group who sat through my Ignite DC talk in February, the charming fellow in the picture above makes them think of “high school diploma.”  I used the auto-associative function of the neocortex to embed the notion that what we hand high school graduates is less than attractive as they proceed to tackle college and life.

I could have used a simple PowerPoint slide with terrifying statistics to make the same point, but it turns out that pairing an image with a simple accompanying message is a better way to cement an idea. Each of my slides consists of an image with very few words, since forcing someone to read your slide as you talk ensures they absorb little. Reading the slides aloud to your audience reduces the absorption rate to near zero.

In education, the field of ‘learning sciences’ is tackling (finally) the problem of education/training with an eye to how the brain actually works.  Perhaps it’s time to bring the ‘learning sciences’ to bear for corporate/agency training.  Perhaps your slides need to be crafted recognizing that your audience is not bored by you, but by a delivery method that ensures inattention.

Realizing we don’t have a wetware version of SharePoint in our skulls is the first step towards crafting training, briefings and conversations that will resonate, excite, and cause our colleagues to store the information more completely. What they do with that information, as they call it to mind over time, is utterly out of your hands. And theirs.

Duarte, N. (2008). slide:ology: The Art and Science of Creating Great Presentations. Sebastopol, CA: O'Reilly Media.

Fauconnier, G., & Turner, M. (2002). The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. New York, NY: Basic Books, Perseus Books Group.

Hawkins, J., & Blakeslee, S. (2004). On Intelligence. New York, NY: Henry Holt and Company, LLC.

Medina, J. (2008). Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School. Seattle, WA: Pear Press.

Reynolds, G. (2008). Presentation Zen: Simple Ideas on Presentation Design and Delivery. Berkeley, CA: New Riders.

Raising the Dial Tone

About 2,000 years ago, the way to communicate across distance - if you had means - was to employ a human messenger. Lacking that, you might use smoke or fire relays along specific "lines of communication." About 100 years ago, the rule for residential use in the pre-WWII U.S. was party lines, nicely captured in this article. The phone in this image was designed by someone who never considered that a user would need, or be able, to "dial their own number." Instead, you would pick up the phone and hear a voice.

Following WWII, trunk lines, switches, and accepted protocols for area codes eliminated the need for operators to complete most calls: their job became more sophisticated than manually making connections (disclaimer: both my mother and grandmother worked as telephone operators in pre-war Manhattan). The user interface disappeared and the professionals evolved.

Their old job was replaced by a dial tone and phones that let you enter your own numbers. The numbers were nationally standardized, such that you could theoretically dial any phone in the country. You still needed an operator for overseas calls, but eventually even this requirement disappeared as other countries signed onto the protocols and became accessible.

What is the state of today's dial tone? Where do we still need human assistance to connect? Is the assistance available? How often do we give up, failing to reach our party?

Last week the Bride tried her hand at buying health insurance online and came away a little older. We wanted to use AARP, as they resold an Aetna product. She signed in to AARP, authenticated there, and was sent to an Aetna link. At that link, we found that our Google Chrome browser was not supported.

And here is where our dial tone broke. 

She opened a different browser and pasted in the current link. The problem: we've lost the 'breadcrumb,' and now Aetna thinks we're coming to them directly. No discount from AARP. Worse, she has now 'created' her account - not associated with AARP - and cannot undo this without speaking to an Aetna representative.

We appear to live in a "thin client" world, but in fact this presumes we all have supported browsers, broadband access, Adobe products (sorry, iPhone users), etc.

Our interface today continues to confound, even as we extend the form and nature of our interactions. It's as if we were sold a new "phone" every year or so, warned that the previous model would somehow let robbers into our homes - except they now steal our very identities rather than our jewelry.  


Each new "phone" would have new features for richer connections, but mysteriously wouldn't connect us to certain numbers.

As we add browsers, Macromedia, QuickTime, and Windows Media - and update each based on vendor production schedules and security breaches - are we making it more or less difficult to establish a global dial tone?

Are we converging or diverging?  Perhaps both at once - at least it can seem that way.  As our browser experience becomes more complex, our sharing of fragments - our chatter - becomes simpler.

This is what social media means to me. It raises the dial tone. I can reach/search/listen to a global conversation. People can engage using their cell phones, any browser, a myriad of apps designed against an open API, etc.

As of this writing, Twitter has achieved a party line for millions. Someone asked yesterday, "Does anyone know the username for the owners of Twitter?" Others chimed in immediately to offer assistance, and it became obvious to me that no operator is needed to help us connect using this particular dial tone.

A Brief Meeting with My Enterprise Commensal Bacteria

Enjoyed a rather remarkable conversation yesterday. A gentleman associated with an enterprise social software firm put a question out into the ether regarding adoption of such products. To be specific, he used Twitter to pose the question. The "tweet" was then visible to anyone who had already signed up to follow his musings, and to anyone who searched for key terms contained in his message. (More interesting still, you can establish an RSS feed so that when anyone tweets using keywords you care about, you get an alert.) This gentleman is on the list of people I follow, and I saw the question. Paraphrasing: if we deploy enterprise social software, are we establishing another stovepipe?
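The keyword-alert mechanism mentioned in the parenthetical can be sketched in a few lines. This is not Twitter's actual implementation - the feed snippet and item titles below are invented - but it shows the core idea: parse an RSS feed and surface only the items matching keywords you care about.

```python
import xml.etree.ElementTree as ET

# A sample feed snippet standing in for a real keyword-search RSS feed.
# The items and titles here are invented for illustration.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>Deploying enterprise social software next quarter</title></item>
  <item><title>Great coffee this morning</title></item>
  <item><title>Are E2.0 silos just new stovepipes?</title></item>
</channel></rss>"""

def matching_items(feed_xml, keywords):
    """Return titles of feed items mentioning any of the given keywords."""
    root = ET.fromstring(feed_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        if any(k.lower() in title.lower() for k in keywords):
            hits.append(title)
    return hits

alerts = matching_items(SAMPLE_FEED, ["enterprise social software", "stovepipe"])
```

In practice you would fetch the feed URL on a schedule and notify on new matches; the filtering step is all that is shown here.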

I could not resist, and charged in with my response.

"EXACTLY why I've been vapor-locked over the adoption of enterprise social software."

He responded.

"Still major benefits from siloed E2.0, but how to connect it more broadly?"

And then something curious happened.  Another person, who follows my messages, chimed in.

"My issue is that enterprises think, in regards to social software, that their problems are somehow different or distinct."

At one point, specific questions were posed and direct, thoughtful answers provided.

"web 2.0 silos. Thinking along 2 lines: (1) They're not connected to anything internally. (2) Many employees not on the sites"


"(1) They CAN be connected to sites internally (most of them have public APIs & services)" and "(2)The emergent and open nature of Web 2.0 software allows for employees who need the information to join the site as needed."

From there, the three of us had a conversation that touched on the need for corporate information preservation in the face of litigation, the complex nature of enterprises, and finally the notion that enterprises need to comprehend their role in their own value networks. While connecting people and information within the enterprise is essential, connecting to information generated by your suppliers, customers, partners, competition, etc., is also vital for keeping aware of trends/changes/risks/opportunities.

All of this reminded me of a recent NYT article that discussed commensal bacteria:

"Since humans depend on their microbiome for various essential services, including digestion, a person should really be considered a superorganism, microbiologists assert, consisting of his or her own cells and those of all the commensal bacteria. The bacterial cells also outnumber human cells by 10 to 1, meaning that if cells could vote, people would be a minority in their own body."

There is no question where my body ends and these bacteria begin, but is it useful to enforce the distinction? Similarly, is it useful to establish information systems that exclude the people who help us do our jobs - but who are not employed by our firm? Understanding how to connect to and collaborate with these colleagues and potential colleagues may be as important as coordinating internally with fellow employees.

All in all, this was a very successful meeting.  Three professionals, from a total of two firms, came together to check assumptions and learn from one another.  We used a Web 2.0 tool outside our firewalls, and there is even a record of our conversation - searchable from any browser.  It took up very little time, as we focused on common questions and ideas.  (There was no status report or financial impact statement on the agenda.)  One of our number had never before interacted with the other two - yet the meeting only contained people interested in the topic.

Oh, and I believe there were others in the meeting, having sidebar conversations as well.  As they could see "our" conversation, they likely offered their own perspectives privately.

If only there were a catchy name for the infrastructure and culture that allowed us to come together like this.