Category Archives: Knowledge

Positioning — what is this thing called KM?

One of the most fruitless recurring activities in the knowledge management world is the irregular call to ‘define KM.’ I have touched on this here before, but today is slightly different.

Study at Calke Abbey

Matt Moore has produced an intriguing list of things people call knowledge management. It includes an eclectic mixture of things that don’t always sit well together. However, there are some themes. The descriptions in the predominant group refer to knowledge in some way (typically attached to words like transfer, sharing, retention, exchange, development or enablement). The next largest group is ‘social’, closely followed by ‘information’ and ‘learning’, then ‘collaboration’, ‘best practice’ and ‘innovation’.

Overall, the list strikes me as being very dependent on organisational context — a business that depends heavily on marketing products to customers is more likely to react well to “Multi-channel, digital information sharing strategy and customer intelligence capability” than to “Knowledge and Process Management.” And KM teams may change their focus over time — either as a reaction to changes in the wider business or to take advantage of new tools and technologies that might be seen to further the cause. This would explain the proliferation of ‘social’ titles, for example.

Coincidentally, last week Nick Milton provided an interesting template to help organisations thinking about their approach to knowledge management. His blog post, “Knowledge of process, knowledge of product, knowledge of customer” suggests that organisations locate themselves between three extremes of a triad — process, product, customer — depending on where their efforts are (or should be) focussed.

I like Nick’s approach (even though he underplays the extent to which law firms need to be aware of client needs), but it proceeds on an assumption that organisations are self-aware enough to say honestly where they are. In my experience, few have that awareness. Instead they hoodwink themselves with aspirational assertions about their goals and the value they provide. One of the things that Cognitive Edge techniques can do for a firm is to help them avoid entrained patterns of thinking and unlock the real business culture and aspirations driving knowledge needs.

Once it is understood what (and who) the business is for, and where it is heading, the choice of knowledge activities (and perhaps their name) will flow from that.

If you are interested in exploring these techniques, I can help — get in touch.

Be irrational about irrationality

Given my focus here on challenging traditional assumptions about knowledge and the law, it would be negligent of me not to draw attention to a concise Scientific American blog post from last month that points up a key flaw in much popular writing about the psychology of decision-making.

The shortcomings of our rationality have been thoroughly exposed to the lay audience. But there’s a peculiar inconsistency about this trend. People seem to absorb these books uncritically, ironically falling prey to some of the very biases they should be on the lookout for: incomplete information and seductive stories. That is, when people learn about how we irrationally jump to conclusions they form new opinions about how the brain works from the little information they recently acquired. They jump to conclusions about how the brain jumps to conclusions and fit their newfound knowledge into a larger story that romantically and naively describes personal enlightenment.

This is not a new problem, but it is enhanced by the proliferation of this kind of literature, and the way that the message of these books is amplified by blogs and tweets. I confess to being part of this chorus, so this is a conscious effort to help myself avoid being sucked into unwavering belief.

Ultimately, we need to remember what philosophers get right. Listen and read carefully; logically analyze arguments; try to avoid jumping to conclusions; don’t rely on stories too much. The Greek playwright Euripides was right: Question everything, learn something, answer nothing.

Knowledge and information are different (no doubt about that)

In one of those internet coincidences, I have encountered (or re-encountered in some instances) a number of assertions today that we need to distinguish knowledge management and information management. Largely for my own benefit I have synthesised these in the following post.

David Gurteen’s regular newsletter contained the first pointer, to a blog post by Stephen Bounds.

I don’t agree that Information Management should be primarily backwards looking. The use of BI tools like Cognos et al are squarely IM but they are just as useful for forecasting as analysis. More generally, effective IM should always be done with a view to enabling KM process improvements.

I define the difference in this way: Knowledge Management is practised through activities that support better decision-making. IM is practised by improving the systems that store, capture, transmit etc information.

In this sense, a librarian neatly captures both sides of the coin. The act of building and making a library catalogue available is covered by IM. But the transaction by which a person can approach a librarian and leave with a relevant set of data to make a better decision is covered by KM.

Stephen’s post builds on a comment he made to a blog post of Nick Milton’s, in which Nick gives vent to a self-confessed rant:

If, as many people claim, Knowledge Management is “getting the right information to the right people at the right time” then what on earth do they think Information Management is?

Management of X is not concerned with delivery of Y.

Interestingly, although I have had similar experiences to Nick’s of people muddling knowledge and information, many of the links from the linked Google search use the quoted phrase to highlight the same error. One of the clearest of those rejections is that provided by Joe Firestone in one of a series of posts exploring US Governmental Knowledge Management.

If to do KM, we must understand problem seeking, recognition, and formulation, and knowledge production (problem solving), in order to know what is “knowledge,” and what is “just information,” then why not simply recognize that a First generation KM program based on “Getting the right knowledge . . . “ is not a clean alternative that allows one to forget about problems, problem solving, and innovation, but that since it also requires knowledge of these things, we may as well pursue a version of Second Generation KM that seeks to enhance not only “Getting the right knowledge . . . “, but also how we make that “right knowledge,” in the first place.

And as long as we’re at it, let’s also make that distinction between “doing” and “managing” that is at the very basis of the field of Management, and say KM is not primarily about Knowledge Managers “making knowledge” or “Getting the right knowledge to the right person at the right time,” but rather is primarily about enhancing the ways in which knowledge workers do these things. If we do that, we in KM won’t be stepping all over the turf of other managers, who, from a point of view distinguishing managing “knowledge processing,” from “doing knowledge processing,” are some of the primary knowledge workers part of whose job it is to actually make and integrate knowledge into organizations.

Independently, and most freshly, John Bordeaux has revisited an aspect of his critique of KM in the US Department of Defense. Specifically: what is the difference between Information Management and Knowledge Management? His answer:

The difference between IM and KM is the difference between a recipe and a chef, a map of London and a London cabbie, a book and its author.  Information is in technology domain, and I include books (themselves a technology) in that description.  Digitizing, subjecting to semantic analysis, etc., are things we do to information.  It is folly to ever call it knowledge, because that is the domain of the brain.  And knowledge is an emergent property of a decision maker – experiential, emotional framing of our mental patterns applied to circumstance and events. It propels us through decision and action, and is utterly individual, intimate and impossible to decompose because of the nature of cognitive processing.  Of course, I speak here of individual knowledge.

John’s position is especially interesting for his assertion that knowledge is distinct from information in part because of its location. If I understand him correctly, once knowledge is captured, stored, or manipulated outside the brain, it ceases to be knowledge — it is information.

This makes sense to me, but it is at odds (I think) with Joe Firestone’s position, as expressed in a paper elsewhere: “My Road to Knowledge Management through Data Warehousing” (pdf).

[T]he desire to get beyond “arid IT-based” concerns and to take the human-side of decision support into account, is about a view of KM that sees knowledge as subjective and personal in character, largely “tacit” or “implicit”, and as distinct from codified expressions, which are really not knowledge, but only information. Knowledge is frequently viewed as “justified true belief” in this approach, a definition that has been the dominant one in philosophy since Plato, but which has been under vigorous attack since at least the 1930s. People who take this road to KM, view it as primarily an applied social science discipline, whose role is to “enable” better knowledge creation and sharing by facilitating the “conversion” of tacit and implicit knowledge to codified expressions.

The problem with this road to KM is that (a) in viewing knowledge as “justified true belief” it makes it dependent on the “knower” and therefore basically subjective. And (b) in restricting knowledge to beliefs in the mind, it neglects the role of management in providing a framework of rules and technology for testing and evaluating codified expressions or knowledge claims and thereby creating a basis for producing objective knowledge. In a number of other places, I’ve specified two types of knowledge found in organizations: surviving beliefs and surviving knowledge claims. In restricting attention to facilitating expressing surviving beliefs alone, this road to KM misses one of its major objectives: to enhance Knowledge Production and, in this way, indirectly improve the quality of surviving knowledge claims used in future decisions.

I am not sure that I understand Joe’s position completely, especially as his comprehension of the philosophical foundations far exceeds mine. However, the final sentence of the first paragraph above appears not to fit John Bordeaux’s position, although I think the first part of the paragraph does fit. I also struggle with the second paragraph. Even if one can separate knowledge from the ‘knower’, there remains the possibility that what is known depends on the context. As Nick Milton puts it in a comment on his original post:

I could give you a whole stack of information about the rocks below the North Sea – seismic sections, maps, core samples – but could you make an effective decision about where to site an oil well?

I think this comes down to a practical problem. Capturing what is known in an objective sense would require a correlative capture of enough context to make it comprehensible to anyone at any point in the future. How much effort would that take, and at what point would it be more economical just to ask the relevant person (or even to start again from scratch)?

Asking better questions, getting better insight

Over the past few months I have been using a model that Nick Milton shared on his blog, to help people understand that the knowledge activities they have traditionally espoused only tell half the story.

I have reservations about the tacit/explicit distinction, but that is irrelevant for now. The key thing for me is that there is a clear and meaningful difference between systems and tools that push knowledge to people and the activities that develop people’s ability to pull knowledge at the moment of need.

In another post, Nick describes advising an organisation which had over-emphasised the push side of the table. I think many law firms are in this position now. We have developed vast banks of precedents, practice notes, process guides, checklists and so on; and we have encouraged in our lawyers a dependency on these things. To a point, this is all good. These tools help people to dispose efficiently of the work that should not require great thought. But what about those areas where great thought is required? How do we build people’s capability to get to the insight and expertise that will help them solve the trickier problems that clients bring?

We can throw technology at the problem again — search engines will allow people to draw on the vast pool of work that has already been done. Sometimes that will disclose a really useful document that contains just the right information to help the lawyer arrive at a suitable answer. More often, though, it will produce either nothing at all or many documents, none of which actually helps directly. Those many documents may, however, help to identify the right people to ask for help.

So it comes back to asking. Nick Milton has made this point in a couple of posts on his blog this week. The more recent post, “Asking in KM, when and how?”, identifies a number of situations in which asking might be institutionalised: communities of practice, after action reviews, and retrospects; but it doesn’t get to the heart of the question. What does good asking look like?

Fortunately, help is at hand. (The topic must be in the air at the moment for some reason.) Ron Ashkenas, in an HBR blog post, “When the Help You Get Isn’t Helpful“, explores what happens when someone shares their knowledge in a way that is actually useless.

Consider John, an account executive who is contemplating how to expand into a new market segment — one that is wrought with regulatory challenges. With a puzzled look on his face, he walks past Samantha, who asks, “Are you okay?” John responds, “Not really, I’m trying to figure out how to gain access for more of our products into Latin America.” Samantha immediately runs to her office and returns with a 100-page analytical report detailing the region. She then spends the next ten minutes going over a how-to guide on conducting market research. Out of respect to Samantha, John patiently listens. But despite her good intentions, Samantha’s input is counterproductive. John might have benefited from Samantha’s time if she had focused on solving his regulatory conundrum. Instead, John walks away feeling even more frustrated and perplexed.

What happened here? John presented Samantha with a problem, and she offered help. I suspect this kind of unfocused response is common. I know I have been guilty of it in the past, and I suspect I will be again in the future. The difficulty is that people are actually very poor at asking questions. Why that might be is a conundrum for a different time. Fortunately, Ron Ashkenas has some guidance to get better at asking.

Target your requests. Instead of asking whoever is available, intentionally target certain individuals. Create a list of people who have access to resources, information, and relevant experience about your problem. Expand your list to include friends and colleagues who tend to challenge the norm and see the world differently. Make a point of including people who are likely to have useful views but you might hesitate to approach because you think they are too busy or wouldn’t be interested.

Frame your question. Before asking for input, figure out what you really need: What kind of advice are you looking for? What information would be useful? Are there gaps in your thinking? Then consider how to frame your question so that you solicit the right advice.

Redirect the conversation. If the person offering advice jumps to conclusions, be prepared to redirect them. Most people will not be offended if you politely refocus them. For instance, had John interrupted Samantha’s lecture on market research by saying, “The issue isn’t our understanding of the market, it’s how to deal with the area’s regulatory restrictions. That’s where I could use some help,” Samantha could have spent the next ten minutes firing off some useful ideas.

This doesn’t feel like rocket science. Frequently, however, I see people asking quite open-ended questions in the hope that something useful will pop up. I suspect that what actually happens is that those with the knowledge to assist don’t answer precisely because the question is too vague. Yet again, the key to a good outcome here is the same as it is in many other contexts. Careful preparation and clarity of scope will generate the answer you need. (It is also important to be comfortable with the possibility that there is no answer. If you are precise and clear, the fact that no answer is forthcoming is much more likely to be an accurate reflection of there being no answer available at all.)

I think this is an iterative process:

  • Work out exactly what you need to know. What is the gap in your understanding that needs to be filled in order to resolve the issue raised by your client?
  • Who is the best person to answer that question? Do you know that person already, or will you need to seek advice from others? Plan how to ask the right question to identify that person.

Repeat until satisfied…

Knowledge ‘what’? And why?

One of the great things about the platform I use to host this blog is that Akismet, the tool it uses to block spam comments, is really effective. A result of this is that I have never shut off comments on my older posts. If I had, I would never have seen a thought-provoking comment by Madhukar Kalsapura on a post of mine from 2008:

I simply use this: “Knowledge management is about what to DO when you don’t Know”.

Over the past few weeks, I have been contemplating this comment. I think it has much to commend it, but it also raises a slight terminological problem.

What do we do when our knowledge runs out? I don’t think we do ‘knowledge management’ as individuals. We certainly aim to develop, deepen, extend, broaden or redirect our knowledge — this is learning. Also, we don’t necessarily go to the same places for that learning every time. Organisations might aim to do things to facilitate that learning process, and we might call this ‘knowledge management’, but I am less and less certain that this is a sensible phrase.

Separately, I was reading Knoco’s recent newsletter (pdf), and found an article building on Yasmin Fodil’s experiences observing knowledge and learning at NASA’s Goddard Space Flight Center, which she reported on her blog (and cross-posted). In the blog post there are some useful diagrams summarising the people-centric approach used at Goddard; the whole piece is well worth reading (and following the links to the Goddard material itself). Knoco took one of those diagrams and embellished it. I have embedded that one below (click to see the original).

This table made me reflect on my own knowledge and learning behaviours, as well as those I see around me. In the column headed “How can I learn it?” there are certainly some tools and techniques that benefit from external (call it ‘KM’) input, but the starting point (learning from one’s own experiences) depends on individual commitment.

I found it a bit more useful to show these tiers of learning as concentric circles:

I think this makes two things clearer.

Firstly, what I myself know contributes to the knowledge of my network, which in turn is part of the wider firm’s knowledge (although people’s personal networks usually include participants from outside their own organisation, or their immediate working group, I am ignoring that for simplicity here). When individuals have good personal knowledge practices (even if it is just making good notes that can be easily accessed and used in one-to-one conversations), their wider contribution is almost inevitably higher quality — to the benefit of those around them.

The second thing is that the further sources of knowledge and learning are from the individual, the more help they will need to make the most of them. I think that’s what we mean when we talk about knowledge management. But it isn’t so much ‘management’ as facilitation of knowledge. (And I am not crazy about ‘facilitation’ either. Alternative suggestions welcome.)

As a result of this cogitation, I have amended the description of KM in the comment I quoted above.

  • For everyone, knowledge development is about what to do when you don’t know.
    • When you don’t know, you need to ask: from whom can I learn? When you see people around you who appear not to know, you need to ask: how can we learn together? or what can they learn from me?
  • For the firm, facilitating knowledge development is about creating the best environment to encourage effective learning and knowledge sharing.
    • This virtuous circle of knowledge exchange and learning helps to create a more agile organisation primed to respond creatively and innovatively to client demand, legal change, and market shifts.

That last justification of our knowledge activities is one that often crops up. That better use of knowledge promotes innovation goes without saying, doesn’t it? But if that were the only reason organisations did ‘KM’, then why do all that traditional stuff around ‘best practices’, standard documents, house style, taxonomies and so on?

The commonly-stated problem with all that boring stuff (and I have been as guilty as anyone else of such comments) is that it just crystallises past practices: if we do what we have always done, we will just get the results we always got. But sometimes that consistency and predictability is exactly what we (and our clients) want.

We (and of course, I really mean ‘I’) need to be careful not to jettison the baby with the dirty bathwater. Organisational knowledge activities (building on good individual behaviours) do contribute to innovation and creativity, but they also ensure consistency, improved quality and risk-avoidance in the boring old stuff as well. The challenge is to steer a sensible course between the two — to do the things that will achieve both aims, even when they appear to conflict.

What do we do with knowledge?

Every now and then, I discover a new way in which my assumptions about things are challenged. Today’s challenge comes in part from the excellent commentary on my last post (which has been so popular that yesterday quickly became the busiest day ever here). I am used to discussions about the definition or usage of ‘knowledge management’, but I thought ‘knowledge sharing’ was less controversial. How wrong can one be?

Table at Plas Mawr, Conwy

The first challenge comes from Richard Veryard. His comment pointed to a more expansive blog post, “When does Communication count as Knowledge Sharing?” Richard is concerned that the baggage carried by the word ‘sharing’ can be counter-productive in the knowledge context.

In many contexts, the word “sharing” has become an annoying and patronizing synonym for “disclosure”. In nursery school we are encouraged to share the biscuits and the paints; in therapy groups we are encouraged to “share our pain”, and in the touchy-feely enterprise we are supposed to “share” our expertise by registering our knowledge on some stupid knowledge management system.

But it’s not sharing (defined by Wikipedia as “the joint use of a resource or space”). It’s just communication.

I agree that if people construe sharing as a one-way process, it is communication. (Or, more accurately, ‘telling’, since effective communication requires a listener to do more than hear what is said.) In a discussion in the comments to Richard’s post, Patrick Lambe defends his use of ‘sharing’ and Richard suggests that knowledge ‘transfer’ more accurately describes what is happening. I also commented on the post, along the following lines.

I can see a distinction between ‘sharing’ and ‘transfer’, which might be relevant. To talk of transferring knowledge suggests to me (a) that there is a knower and an inquirer and that those roles are rarely swapped, and (b) that there needs to be a knowledge object to be transferred. (As Richard puts it, “a stupid knowledge management system” is probably the receptacle for that object.)

As Patrick’s blog post and longer article make clear, the idea of the knowledge object is seriously flawed. Equally, the direction in which knowledge flows probably varies from time to time. For me, this fluidity (combined with the intangible nature of what is conveyed in these knowledge generation processes) makes me comfortable with the notion of ‘sharing’ (even given Richard’s playgroup example).

In fact, I might put it more strongly. The kind of sharing and complex knowledge generation that Patrick describes should be an organisational aspiration (not at all like ‘sharing pain’), while exchange or transfer of knowledge objects into a largely lifeless repository should be deprecated.

I think Richard’s response to that comment suggests that we are on the point of reaching agreement:

I am very happy with the notion of shared knowledge generation – for example, sitting down and sharing the analysis and interpretation of something or other. I am also happy with the idea of some collaborative process in which each participant contributes some knowledge – like everyone bringing some food to a shared picnic. But that’s not the prevailing use of the word “sharing” in the KM world.

This was a really interesting conversation, and I felt that between us we reached some kind of consensus — if what is happening with knowledge is genuinely collaborative, jointly creating an outcome that advances the organisation, then some kind of sharing must be going on. If not, we probably have some kind of unequal transfer, producing little of lasting value.

Coincidentally, I was pointed to a really interesting discussion on LinkedIn today. (Generally, I have been deeply unimpressed with LinkedIn discussions, so this was a bit of a surprise.) The question at the start of the discussion was “If the term ‘KM’ could get a do-over what would you call the discipline?” There are currently 218 responses, some of which range into other interesting areas. One of those areas was an exchange between Nick Milton and John Tropea.

Nick responded to another participant who mentioned that her organisation had started talking about ‘knowledge sharing’ rather than ‘knowledge management’.

Many people do this, but I would just like to point out that there is a real risk here – that sharing (“push”) is done at the expense of seeking (“pull”). The risk is you create supply, with no demand.

See here for more detail:

The blog post at the end of that link is probably even more emphatic (I will come back to it later on). John had a different view:

Nick you say “sharing (“push”) is done at the expense of seeking (“pull”). The risk is you create supply, with no demand.”

This is true if sharing is based on conscription, or not within an ecosystem (sorry can’t think of a more appropriate word)…this is the non-interactive document-centric warehousing approach.

But what about blogging experiences and asking questions in a social network, this is more on demand rather than just-in-case…I think this has more of an equilibrium or yin and yang of share and seek.

People blog an experience as it happens which has good content recall, and has no agenda but just sharing the raw experience. Others may learn, converse, share context, etc…and unintentionally new information can be created. This is a knowledge creation system, it’s alive and is more effective than a supply-side approach of shelving information objects…and then saying we are doing KM…to me KM is in the interactions. We must create an online environment that mimics how we naturally behave offline, and I think social computing is close to this.

Nick’s response was interesting:

John – “But what about blogging experiences and asking questions in a social network, this is more on demand rather than just-in-case”

Asking questions in a network, yes (though if I were after business answers, I would ask in a business network rather than a social network). That’s a clear example of Pull.

Blogging, no, I have to disagree with you here. I am sorry – blogging is classic Push. It’s classic “just in case” someone should want to read it. Nobody “demands” that you blog about something. You are not writing your blog because you know there is someone out there who is waiting to hear from you – you write your blog firstly for yourself, and secondly “just in case” others will be interested.

Blogging is supply-side, and it’s creating stuff to be stored. OK, it is stored somewhere it can be interacted with, and there is a motivation with blogging which is absent with (say) populating an Intranet, but it is still classic supply-side Push. Also it is voluntary push. The people who blog (and I include myself in this) are the ones who want to be heard, and that’s not always the same as “the ones who need to be heard”. Knowledge often resides in the quietest people.

This exchange puts me in a quandary. I respect both Nick and John, but they appear to be at loggerheads here. Can they both be right? On the one hand, Nick’s characterisation of supply-side knowledge pushing as something to be avoided is, I think, correct. However, as I have written before, in many organisations (such as law firms), it is not always possible to know what might be useful in the future. My experience with formal knowledge capture suggests that when they set out to think about it, many people (and firms) actually rate the wrong things as important for the future. They tend to concentrate on things that are already being stored by other people (copies of journal articles or case reports), or things that are intimately linked to a context that is ephemeral. Often the information stored is fairly sketchy. One of the justifications for these failings is the avoidance of ‘information overload’. This is the worst kind of just-in-case knowledge, as Nick puts it.

I think there is a difference, though, when one looks at social tools like blogging. As Nick and John probably agree, keeping a blog is an excellent tool for personal development. The question is whether it is more than that. I think it is. I don’t blog here, nor encourage the same kind of activity at work, on the basis that someone might find the content useful in the future. I do it, and encourage it, because the activity itself is useful in this moment. It is neither just-in-case nor just-in-time: it just is.

In the last couple of paragraphs, I was pretty careless with my use of the words ‘information’ and ‘knowledge’. That was deliberate. The fact is that much of what we call KM is, in fact, merely manipulation of information. What social tools bring us (along with a more faceted view of their users) are really interesting ways of exposing people’s working processes. As we learnt from Nonaka all those years ago, there is little better for learning and development of knowledge than close observation of people at work. (Joining in is certainly better, but not always possible.) What we may not know is where those observations might lead, or when they might become useful. Which brings me to Nick’s blog post.

We hear a lot about “knowledge sharing”. Many of the knowledge management strategies I am asked to review, for example, talk about “creating a culture of knowledge sharing”.

I think this misses the point. As I said in my post about Push and Pull, there is no point in creating a culture of sharing, if you have no culture of re-use. Pull is a far more powerful driver for Knowledge Management than Push, and I would always look to create a culture of knowledge seeking before creating a culture of knowledge sharing.

Nick’s point about knowledge seeking is well made, and chimes with Patrick Lambe’s words that I quoted last time:

We do have an evolved mechanism for achieving such deep knowledge results: this is the performance you can expect from a well-networked person who can sustain relatively close relationships with friends, colleagues and peers, and can perform as well as request deep knowledge services of this kind.

Requesting, seeking, performing: all these are aspects of sharing. Like Richard Veryard’s “traditional KM” Nick characterises sharing as a one-way process, but that is not right — that is the way it has come to be interpreted. Sharing must be a two-way process: it needs someone to ask as well as someone who answers, and those roles might change from day to day. However, Nick’s point about re-use is a really interesting one.

I suggested above that some firms’ KM systems might contain material that was ultimately useless. More precisely, I think uselessness arises at the point where re-use becomes impossible because the material we need to use is more flawed than not. These flaws might arise because of the age of the material, combined with its precise linkage with a specific person, client, subject and so on. Lawyers understand this perfectly — it is the same process we use to decide whether a case is a useful precedent or not. Proximity in time, matter or context contributes significantly to this assessment. However, an old case on a very different question of law in a very different commercial context is not necessarily useless.

One of the areas of law I spent some time researching was the question of Crown privilege. A key case in that area involved the deportation of a Zairean national in 1990. In the arguments before the House of Lords, the law dating back to the English Civil War was challenged by reference to cases on subjects as varied as EC regulation of fisheries and potato marketing. That those cases might have been re-used in such a way could not have been predicted when they were decided or reported.

In many contexts, then, re-use is not as clear-cut an issue as it may appear at first. My suspicion is that organisations that rely especially heavily on personal, unique knowledge (or intellectual capital) should be a lot more relaxed about this than Nick suggests. His view may be more relevant in organisations where repetitive processes generate much more value.

On the just-in-case problem, I think social tools are significantly different from vast information repositories. As Clay Shirky has said, what we think is information overload is actually filter failure. Where we rely solely on controlled vocabularies and classification systems, our capability to filter and search effectively runs out much sooner than it does when we can add personalised tags, comments, trackbacks, knowledge about the author from other sources, and so on. Whereas repositories usually strip context from the information they contain, blogs and other social tools bring their context with them. And, crucially, that context keeps growing.

Which brings me, finally, back to my last post. One of the other trackbacks was from another blog asking the question “What is knowledge sharing?” It also picks up on Patrick’s article, and highlights the humanity of knowledge generation.

…we need to think laterally about what we consider to constitute knowledge sharing. This morning I met some friends in an art gallery and, over coffee, we swapped anecdotes, experiences, gripes, ideas and several instances of ‘did you hear about?’ or ‘have you seen?’… I’m not sure any of us would have described the encounter as knowledge exchange but I came away with answers to work-related questions, a personal introduction to a new contact and the germ of a new idea. The meet up was organised informally through several social networks.

The key thing in all of this, for me, is that whether we talk of knowledge sharing, transfer, or management, it only has value if it can result in action: new knowledge generation; new products; ideas; thoughts. But I think that action is more likely if we are open-minded about where it might arise. If we try to predict where it may arise, and from which interactions it might come, I think it is most likely that no useful action or value will result in the long term.

Walking into knowledge

Until this weekend, I didn’t know of Rory Stewart. Now that I do, I am not sure whether to admire him or not. His political alignment and social background are poles apart from mine. His lifetime of achievement (at the tender age of 37) makes me jealous. But I love the way he works.

Mellor Church

Stewart is, at the time of writing, the Conservative prospective Parliamentary candidate for Penrith and the Border. However, at least one commentator believes that he has a more significant political future ahead of him.

You heard it here first – Rory Stewart will become prime minister of Great Britain.

I think this is a long shot. However, Stewart’s record so far suggests that it is not impossible.

After a privileged upbringing (Dragon School, Eton and Balliol), he served briefly as an officer in the Black Watch, joined the Foreign Office, and in 2003 was appointed Deputy Governor of an Iraqi province by the Coalition Provisional Authority. By the age of 31, he had been appointed OBE for his work in Iraq. In 2004, he became a Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School. In 2006 he was appointed by Prince Charles to run the Turquoise Mountain Foundation — an organisation working on the regeneration of an area of the Afghan capital Kabul. Most recently, he was appointed Ryan Family Professor of the Practice of Human Rights and Director of the Carr Center for Human Rights Policy.

So far Rory Stewart looks like a typical member of the new Establishment. But buried in this list of achievements is a rather unusual preference for personal learning. Rory Stewart walks. Between 2000 and 2002 he walked a total of 6000 miles through Iran, Pakistan, India and into Nepal, and then back across Afghanistan. In the process he emulated his boyhood hero, T.E. Lawrence, living with and learning from the people whose land he traversed. As a consequence, he has a view of our involvement in Afghanistan that is somewhat at odds with the political establishment. Writing in The New York Review of Books, Stewart suggests that President Obama needs to reduce rather than increase troop numbers.

A more realistic, affordable, and therefore sustainable presence would not make Afghanistan stable or predictable. It would be merely a small if necessary part of an Afghan political strategy. The US and its allies would only moderate, influence, and fund a strategy shaped and led by Afghans themselves. The aim would be to knit together different Afghan interests and allegiances sensitively enough to avoid alienating independent local groups, consistently enough to regain their trust, and robustly enough to restore the security and justice that Afghans demand and deserve from a national government.

What would this look like in practice? Probably a mess. It might involve a tricky coalition of people we refer to, respectively, as Islamists, progressive civil society, terrorists, warlords, learned technocrats, and village chiefs. Under a notionally democratic constitutional structure, it could be a rickety experiment with systems that might, like Afghanistan’s neighbors, include strong elements of religious or military rule. There is no way to predict what the Taliban might become or what authority a national government in Kabul could regain. Civil war would remain a possibility. But an intelligent, long-term, and tolerant partnership with the United States could reduce the likelihood of civil war and increase the likelihood of a political settlement. This is hardly the stuff of sound bites and political slogans. But it would be better for everyone than boom and bust, surge and flight. With the right patient leadership, a political strategy could leave Afghanistan in twenty years’ time more prosperous, stable, and humane than it is today. That would be excellent for Afghans and good for the world.

He made a similar argument in the London Review of Books.

After seven years of refinement, the policy seems so buoyed by illusions, caulked in ambiguous language and encrusted with moral claims, analogies and political theories that it can seem futile to present an alternative. It is particularly difficult to argue not for a total withdrawal but for a more cautious approach. The best Afghan policy would be to reduce the number of foreign troops from the current level of 90,000 to far fewer – perhaps 20,000. In that case, two distinct objectives would remain for the international community: development and counter-terrorism. Neither would amount to the building of an Afghan state. If the West believed it essential to exclude al-Qaida from Afghanistan, then they could do it with special forces. (They have done it successfully since 2001 and could continue indefinitely, though the result has only been to move bin Laden across the border.) At the same time the West should provide generous development assistance – not only to keep consent for the counter-terrorism operations, but as an end in itself.

A reduction in troop numbers and a turn away from state-building should not mean total withdrawal: good projects could continue to be undertaken in electricity, water, irrigation, health, education, agriculture, rural development and in other areas favoured by development agencies. We should not control and cannot predict the future of Afghanistan. It may in the future become more violent, or find a decentralised equilibrium or a new national unity, but if its communities continue to want to work with us, we can, over 30 years, encourage the more positive trends in Afghan society and help to contain the more negative.

Stewart’s perspective, which does not fit any simplistic model (whether pro or anti involvement in Afghanistan), is not the kind that arises from traditional learning processes. As such, it feels more like the product of the kind of sensemaking approach suggested by the Cynefin framework as a response to complex scenarios. He is using a similar approach to find out more about the constituency he will seek to represent in the next Parliament. Walking around the largest and most sparsely populated constituency in England is, for him, the best way to make sense of what is going on.

Walking has given me more than I hoped: living in Cumbrian homes and experiencing the great distances between communities. It allows me to learn from a hundred people I might never have encountered by car. But it has not provided neat solutions. It is easy to see they should have listened to the gritter driver about his truck — but I’ve found out that the government has spent three times as much on upgrading a mile-long footpath as on the entire affordable housing for the district. This is not just about an individual’s decisions, it is about budget lines and regulation insurance and a whole way of looking at the world. I realise that to change government needs not just cutting regulations or giving parishes control of money, but also shifting an entire public culture over decades.

It will be interesting to see how well this works for Rory Stewart, and whether it really makes him fit for high office. There is a real possibility that his very different approach to knowledge and learning might make it hard for him to be accepted within the traditional systems of British government and politics.

Whatever comes to pass for Rory Stewart, I think there is a wider point for knowledge and learning within organisations. Getting out into the organisational community and listening to people’s stories, worries, concerns, interests, views is likely to have more of an impact than reading case-studies, theories, position papers or the like. I read something else today that makes a similar point. That’s another blog post.

Learning from failure or success

In a round-up following KM Australia back in August, Shawn Callahan challenged the notion that we learn best from failure. I think he has a point — the important thing is learning, not failure.

Harris Hawk missing the quarry

Here’s Shawn’s critique.

During the conference I heard some speakers recount the meme, “we learn best from failure.” I’m not sure this is entirely true. Anecdotally I remember distantly when I read about the Ritz Carlton approach to conveying values using stories and I’m now delivering a similar approach to a client on the topic of innovation. Here I’ve learned from a good practice. As Bob Dickman once told me, “you remember what you feel.” I can imagine memory being a key first step to learning. And some research shows it’s more complex than just learning from failure. Take this example. The researchers take two groups who have never done ten pin bowling and get them bowling for a couple of hours. Then one group is taken aside and coached on what they were doing wrong and how they could improve. The other group merely watches an edited video of what they were doing right. The second group did better than the first. However there was no difference with experienced groups.

I wish I could access the linked study — Shawn’s summary and the abstract sound very interesting. Here’s the abstract.

On the basis of laboratory research on self-regulation, it was hypothesized that positive self-monitoring, more than negative self-monitoring or comparison and control procedures, would improve the bowling averages of unskilled league bowlers (N =60). Conversely, negative self-monitoring was expected to produce the best outcome for relatively skillful league bowlers (N =67). In partial support of these hypotheses, positive self-monitors significantly improved their bowling averages from the 90-game baseline to the 9- to 15-game postintervention assessment (X improvement = 11 pins) more than all other groups of low-skilled bowlers; higher skilled bowlers’ groups did not change differentially. In conjunction with other findings in cognitive behavior therapy and sports psychology, the implications of these results for delineating the circumstances under which positive self-monitoring facilitates self-regulation are discussed.

Based on these summaries, I would draw a slightly different conclusion from Shawn’s. I think there is a difference between learning as a novice and learning when experienced. Similarly, the things that we learn range from the simple to the complex. (Has anyone applied the Cynefin framework to learning processes? My instinct suggests that learning must run out when we get to the chaotic or disordered domains. I think we can only learn when there is a possibility of repeatability, which is clearly the case in the simple and complicated domains, and may be a factor in moving situations from the complex to one of the other domains.)

The example Dave Snowden gives of learning from failure is actually a distinction between learning from being told and learning by experience.

Tolerated failure imprints learning better than success. When my young son burnt his finger on a match he learnt more about the dangers of fire than any amount of parental instruction could provide. All human cultures have developed forms that allow stories of failure to spread without attribution of blame. Avoidance of failure has greater evolutionary advantage than imitation of success. It follows that attempting to impose best practice systems is flying in the face of over a hundred thousand years of evolution that says it is a bad thing.

In the burnt finger scenario, success (not touching a burning match) is equivalent to lack of experience. Clearly learning from a lack of experience will be less effective than learning from (even a painful) experience. By contrast, the bowling example provides people with a new experience (bowling) and then gives them an opportunity to contemplate their performance (which was almost certainly poor). However, whatever the state of their performance, it is clear what the object of the activity is and therefore ‘success’ can be easily defined — ensure that this heavy ball leaves your hand in such a way that it knocks down as many pins as possible by the time it reaches the far end of the lane. As the natural tendency of learners at early stages in the learning process is to concentrate on the negative aspects of their performance (I can’t throw the ball hard enough to get to the end of the lane, or it keeps going in the gutter), it is understandable that a learning strategy which focuses on success could have better results than one that merely explains why the bad things happen.

In the bowling experiment, no difference was found between the negative and positive approaches when experienced bowlers were studied. All this suggests to me is that we need more work in this area, especially considering learning in the complicated or complex domains. Even for experienced bowlers, the set of variables that affect the passage of a bowling ball from one end of the lane to the other is a predictable one. There is not just one cause and effect, but the laws of physics dictate that the relationships between all the causes should have predictable outcomes. By contrast, much of what interests us with regard to knowledge and learning in organisational environments does not depend on simple causal relationships.

In those complicated or complex organisational situations, I think we can learn more from our own failures than other people’s successes (which I think is the point that Dave Snowden is making). I think Shawn is also right to suggest that we can learn from our own successes too. However, that can only be the case if we take the time to analyse exactly what was the cause of the success. So we need a commitment to learning (which brings us back to deliberate practice, amongst other things) and we need the insight into our actions and activities that allows us to analyse them effectively. I think the will to learn is often present, but insight is often missing when we consider successful initiatives, possibly because the greater distance between cause and effect means that we cannot be confident that success is a product of any given cause. On the other hand, it is usually easier to identify causes of failure, and the process of failure also provides an incentive to work out what went wrong.

As for the quality of the lessons learned from failure or success, I am doubtful that any firm conclusion could be drawn that as a general rule we learn better from failure or from success. However, as we become more experienced and when we deal with fewer simple situations, we will inevitably learn more from failure than success — we will have more experience of failure than success, and other people’s successes are of limited or no value. So, although we can learn from our successes, my guess is that more of our learning flows from failure.

It feels like there is more research to do into these questions.

Storing our future knowledge?

Over the summer, I read a couple of blog posts about knowledge storage that I marked to come back and comment on. Separately, Mary Abraham and Greg Lambert have suggested a fairly traditional approach to selection of key knowledge for storage and later access.

Dover Castle

First, Greg issued a clarion call for selectivity in information storage:

Knowledge Management should not be based on a “cast a wide net” approach to the information that flows in and out of our firms. In fact, most information should be ephemeral in nature; addressing only the specific need of the moment and not be thought of as a permanent addition to the knowledge of the firm. When we try to capture everything, we end up capturing nothing. In the end we end up losing the important pieces of knowledge because they are buried in a mountain of useless data filed under the topic of “CYA”.

I had to Google “CYA”. And thereby hangs a lesson. How can we know when we make a decision about recording the present for posterity that the things we choose will be (a) comprehensible to those who come after us and (b) meet their as yet unknowable needs?

For centuries, the study of history relied on official records and was therefore a story of kings and queens, emperors and presidents, politicians and popes. The things that were left behind — castles, cathedrals, palaces and monuments as well as documents — actually provided us with only slender insight into the real lives of the majority of people who lived at any given point in time. Only when archaeologists and social historians started to untangle more trivial artefacts like potsherds, clay pipes, bone pits and everyday documents like manorial rolls, diaries, or graffiti were we given a more rounded picture of the world of our predecessors. At the time, those things were ephemeral — not created for posterity. The lesson we always forget to learn is that we don’t get to write our history — the future does.

Because Google has access to a vast mass of ephemera, I was able to learn what “CYA” means. In Greg’s context, it is the stuff we think we might need to keep to protect ourselves — it is an information security blanket.

Mary Abraham picked up the thread by addressing the Google question:

Folks who drink the super search kool-aid will say that the cost of saving and searching data is becoming increasingly trivial, so why spend any time at all trying to weed the collection?  Rather, save it all and then try Filtering on the Way Out.  On the other hand, look at the search engine so many of us envy — Google.  It indexes and searches enormous amounts of data, but even Google doesn’t try to do it all.  Google doesn’t tackle the Deep Web.

So why are we trying to do it all?

That’s a good question, and one that Greg challenged as well. I want to come to that, but first the Deep Web issue needs to be dealt with.

As I understand it, the problem for Google is that many useful web resources are stored in ways that exclude it — in databases, behind paywalls, or by using robots.txt files. That may be a problem on the public web, but it shouldn’t be in the enterprise context. By definition, a properly set up enterprise search engine is able to get access to anything that the user can see. If there is material in a subscription service like Westlaw or Lexis Nexis, then searches can be federated so that the result set includes links into those services as well as a firm’s own know-how. Alternatively, a firm or search provider can make special arrangements to index content through a paywall. There simply should not be a Deep Web problem in the enterprise context.
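
(As an aside, and purely by way of illustration rather than anything drawn from Greg’s or Mary’s posts: a robots.txt file is just a plain-text list of directives that well-behaved crawlers read before indexing a site. A site that wanted to keep all of its content out of public search engines could publish something as minimal as this:

    User-agent: *
    Disallow: /

Inside a firm there is no equivalent barrier: an enterprise search engine indexes with the organisation’s own permissions, so anything excluded is excluded by deliberate choice rather than by the technology.)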

But what of the main issue — by storing too much, we lose our ability to find what is important? I think Greg and Mary are right to challenge the “store everything” model. There is much that is truly ephemeral — the e-mail that simply says “Thanks” or the doodles from that boring meeting. The problem with those, though, is not that we keep them, but that we created them in the first place. If the meeting was that boring, should the doodler not have gone and done something else instead? Isn’t there a better way of showing appreciation than sending an e-mail (especially if it was a reply-to-all)? I think that is the bit that is broken. Some other things are ephemeral even though they do need to be captured formally. Once an expenses claim has been paid, and the taxman is satisfied, there is little need to keep the claim forms available for searching. (Although there may be other reasons why they should not be discarded completely.)

However, I am still concerned that we cannot know what will be useful in the future, or why it might have a use. At the heart of an organisation like a law firm there are two strands of information/knowledge. The first is a body of technical material. Some of this is universally available (even if not comprehensible) — statutes, cases, codes, textbooks, journal articles: documents created externally that we all have to understand. Some is specific to the firm — standard documents, briefing notes, drafting guides: our internal know-how. I think this is the material that Greg and Mary are concerned with. And they are right that we should be critical about the potential immensity of these resources. Does that new journal article say anything new? Is that textbook worth the space that it takes on our shelves? Is our know-how really unique to us, or is it just a reflection of market practice? These are all crucial questions. However, almost by definition, as soon as we fix this material in some form it is of mainly historical interest — it is dying information. The older it gets, the less value it will have for our practice and our clients.

The other strand is intangible, amorphous, constantly shifting. It is the living knowledge embodied in our people, their relationships with each other and our clients, and their reactions to formal information. That changing body is not just responsible for the knowledge of the firm, but also for its direction and focus. At any time, it is the people and their connections that actually define the firm and its strategic preoccupations. In particular, what our clients want will drive our future knowledge needs. If we could predict what our future clients’ commercial concerns and drivers will be, then we could confidently know what we should store, and what to discard. I don’t think I can do that. As a result, we need to retain access to more than might seem useful today.

Patrick Lambe catches this tension neatly in his post “The War Between Awareness and Memory.” I looked at that in my last post (five weeks ago — August really isn’t conducive to blogging). As I was writing this one, I recalled words I last read thirty years ago. This is how John Dos Passos caught the same mood in the closing words of the eponymous prose poem that opens the single volume edition of his great novel U.S.A.

It was not in the long walks through jostling crowds at night that he was less alone, or in the training camp at Allentown, …

but in his mother’s words telling about longago, in his father’s telling about when I was a boy, in the kidding stories of uncles, …

it was in the speech that clung to the ears, the link that tingled in the blood; U.S.A.

U.S.A. is the slice of a continent. U.S.A. is a group of holding companies, some aggregations of trade unions, a set of laws bound in calf, a radio network, a chain of moving picture theatres, a column of stock quotations rubbed out and written in by a Western Union boy on a blackboard, a publiclibrary full of old newspapers and dogeared historybooks with protests scrawled on the margins in pencil. U.S.A. is the world’s greatest rivervalley fringed with mountains and hills. U.S.A. is a set of bigmouthed officials with too many bankaccounts. U.S.A. is a lot of men buried in their uniforms at Arlington Cemetery. U.S.A. is the letters at the end of an address when you are away from home. But mostly U.S.A. is the speech of the people.

Sometimes questions about the “laws bound in calf” or “dogeared historybooks” are less important, maybe even a distraction. The real life and future knowledge of the firm is the speech of the people. That cannot be reconstructed. We need to be aware of all the ways in which we can preserve and retain access to it, for use when a client comes up with a new conundrum for us to help them resolve.

Now and then

A couple of days ago, Patrick Lambe posted a really thoughtful piece considering the implications of heightened awareness from the new generation of social software tools as opposed to the traditional virtues of long-term information storage and access. If you haven’t read it, do so now. (Come back when you have finished.)

Laid down

The essence of Patrick’s piece is that when we focus our attention on the here and now (through Twitter or enterprise micro-blogging, for example), we forget to pay attention to the historically valuable information that has been archived away. This is not a problem with technology. He points to interesting research on academics’ use of electronic resources and their citation patterns.

How would online access influence knowledge discovery and use? One of his hypotheses was that “online provision increases the distinct number of articles cited and decreases the citation concentration for recent articles, but hastens convergence to canonical classics in the more distant past.”

In fact, the opposite effect was observed.

As deeper backfiles became available, more recent articles were referenced; as more articles became available, fewer were cited and citations became more concentrated within fewer articles. These changes likely mean that the shift from browsing in print to searching online facilitates avoidance of older and less relevant literature. Moreover, hyperlinking through an online archive puts experts in touch with consensus about what is the most important prior work—what work is broadly discussed and referenced. … If online researchers can more easily find prevailing opinion, they are more likely to follow it, leading to more citations referencing fewer articles. … By enabling scientists to quickly reach and converge with prevailing opinion, electronic journals hasten scientific consensus. But haste may cost more than the subscription to an online archive: Findings and ideas that do not become consensus quickly will be forgotten quickly.

Now this thinning out of long term memory (and the side effect of instant forgettability for recent work that does not attract fast consensus) is observed here in the relatively slow moving field of scholarly research. But I think there’s already evidence (and Scoble seems to sense this) that exactly the same effects occur when people and organisations in general get too-fast and too-easy access to other people’s views and ideas. It’s a psychosocial thing. We can see this in the fascination with ecologies of attention, from Tom Davenport to Chris Ward to Seth Godin. We can also see it in the poverty of attention that enterprise 2.0 pundits give to long term organisational memory and recordkeeping, in the longer term memory lapses in organisations that I have blogged about here in the past few weeks…
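
To make the notion of “citation concentration” concrete, here is a small sketch (in Python, with invented numbers) comparing two hypothetical citation distributions: one where citations are spread across many articles, and one where they pile onto a few winners. Only the shape of the calculation is meant to be instructive; the figures have nothing to do with the actual study.

```python
# A toy illustration of "citation concentration": what share of all citations
# goes to the most-cited tenth of articles? The two distributions below are
# invented purely to show the idea; the real research worked from large
# collections of actual citation records.

def top_decile_share(citation_counts):
    """Share of total citations captured by the top 10% most-cited articles."""
    counts = sorted(citation_counts, reverse=True)
    top_n = max(1, len(counts) // 10)
    return sum(counts[:top_n]) / sum(counts)

# Broad reading: citations spread across many articles.
print_era = [12, 10, 9, 9, 8, 8, 7, 7, 6, 6, 5, 5, 5, 4, 4, 3, 3, 2, 2, 1]

# Online searching: citations pile onto a handful of "winners".
online_era = [40, 25, 15, 8, 5, 4, 3, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

print(f"print era:  top decile takes {top_decile_share(print_era):.0%} of citations")
print(f"online era: top decile takes {top_decile_share(online_era):.0%} of citations")
```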

Jack Vinson adds another perspective on this behaviour in a post responding to Patrick’s.

I see another distinction here.  The “newer” technologies are generally about user-engagement and creation, whereas the “slower” methods are more focused on control and management activities much more so than the creation.  Seen in this light, these technologies and processes spring from the situation where writing things down was a time-consuming process.  You wanted to have it right, if you went to that much effort.  Unfortunately, the phrase “Document management is where knowledge goes to die” springs to mind.

In knowledge management, we are trying to combine the interesting knowledge that flows between people in natural conversation as well as the “hard knowledge” of documented and proven ideas and concepts.  KM has shown that technology just can’t do everything (yet?) that humans can do.  As Patrick says, technology has been a huge distraction to knowledge management.

I think Jack’s last comment is essential. What we do is a balance between the current flow and the frozen past. What I find fascinating is that until now we have had few tools to help us with the flow, whereas the databases, archives, taxonomies and repositories of traditional KM and information management have dominated the field. I think Patrick sounds an important warning bell. We should not ignore it. But our reaction shouldn’t be to reverse away from the interesting opportunities that new technologies offer.

It’s a question (yet again) of focus. Patrick opens his post with a complaint of Robert Scoble’s.

On April 19th, 2009 I asked about Mountain Bikes once on Twitter. Hundreds of people answered on both Twitter and FriendFeed. On Twitter? Try to bundle up all the answers and post them here in my comments. You can’t. They are effectively gone forever. All that knowledge is inaccessible. Yes, the FriendFeed thread remains, but it only contains answers that were done on FriendFeed and in that thread. There were others, but those other answers are now gone and can’t be found.

Yes, Twitter’s policy of deleting old tweets is poor, but even if they archived everything, the value of that archive would be minimal. Much of what I see on Twitter is related to the here and now. It is the ideal place to ask the question, “I’m looking at buying a mountain bike. For $1,000 to $1,500 what would you recommend?” That was Scoble’s question, and it is time-bound. Cycle manufacturers change their offering on a seasonal and annual basis. The cost of those cycles also changes regularly. The answer to that question would be different in six months’ time. Why worry about storing that in an archive?

Knowledge in law firms is a curious blend of the old and the new. Sometimes the law that we deal with dates back hundreds of years. It is often essential to know how a concept has been developed over an extended period by the courts. The answer to the question “what is the current position on limitations of liability in long-term IT contracts?” is a combination of historic research going back to cases from previous centuries and up-to-the-minute insight from last week’s negotiations on a major outsourcing project for a client. It is a real combination of archived information and current knowledge. We have databases and law books to help us with the archived information. What we have been lacking until recently is an effective way of making sure that everyone has access to the current thinking. As firms become bigger and more scattered (across the globe, in some cases), making people aware of what is happening across the firm has become increasingly difficult.

Patrick’s conclusion is characteristically well expressed.

So while at the level of technology adoption and use, there is evidence that a rush toward the fast and easy end of the spectrum places heavy stresses on collective memory and reflection, at the same time, interstitial knowledge can also maintain and connect the knowledge that makes up memory. Bipolarity simply doesn’t work. We have to figure out how to see and manage our tools and our activities to satisfy a balance of knowledge needs across the entire spectrum, and take a debate about technology and turn it into a dialogue about practices. We need to return balance to the force.

That balance must be at the heart of all that we do. And the point of balance will depend very much on the demands of our businesses as well as our interest in shiny new toys. Patrick is right to draw our attention to the risks attendant on current awareness, but memory isn’t necessarily all it is cracked up to be. We should apply the same critical eye to everything that comes before us — how does this information (or class of information) help me with the problems that I need to solve? The answer will depend heavily on your organisational needs.