Category Archives: Technology


People, technology, location: where should law firms’ money go?

Experimental archaeology is a favourite way for TV documentaries to bring the past to life for the casual viewer. The BBC is currently showing a new series looking at the construction of a 13th century castle, together with various other related mediaeval activities. In last night’s episode, the team made a crossbow and its bolt using only techniques and materials available in the 1200s. In the commentary, they kept returning to the fact that a crossbow allowed armies to be effective with much less training (one presenter gleefully told how Richard I had been killed in 1199 by a crossbow fired by a mere boy).

Horse and cattle trough in Smithfield

By concentrating on the saving in training that the crossbow offered over the longbow, the programme missed an opportunity to make a wider point about the impact of technology on warfare — it changes the way money is spent.

If your army depends on archers wielding longbows, you need to invest heavily in training (and in the peripheral expense of keeping archers in training). However, a longbow was probably slightly cheaper to produce than a crossbow, which needed components wrought from iron (not a cheap resource by comparison with wood at the time). So, as crossbows became more widely used, it is likely that the budget for archers shifted from supporting training to paying for the new technology.

This is part of a pattern over hundreds of years. The wars of the twentieth century were probably the last in which the size of an army or navy played a part in determining the outcome (directly or indirectly). Modern warfare is waged with small numbers of people and huge amounts of costly technology. (Sadly, the impact on humanity is just as devastating — possibly more so, as civilians become harder to distinguish from combatants.)

Longbows and crossbows are still capable of causing death and serious injury. Nothing has made them less deadly, but their power is meaningless in the context of modern warfare.

Law firms are in a similar position today.

For the past few centuries, the primary cost for a firm has been lawyers and their training and upkeep. In more recent times, larger firms have also spent large sums on acquiring and maintaining high-quality offices in expensive business locations. As salaries and rents increase, clients have ultimately had to bear the cost in higher fees.

Technologies that are now readily available (or in development) are starting to eat away at the traditional model. They cost more than the basic IT tools already in use (typically email and document processing and management). It is a reasonable assumption that a firm of the future will spend a much higher proportion of its budget on technology than it did in the past, and a lower proportion on people (some of their functions being done automatically) and offices (as people work much more flexibly, using mobile devices and in non-firm locations).

So it is interesting to read in the Legal Support Network’s Legal IT Landscapes 2015 report that the top 100 UK firms are not investing heavily in their IT:

Our results show that top 100 firms spend on average 4.1% of revenue on IT (there were some that spent 8-10%, so you can imagine the other numbers). Though this metric isn’t one I’d use alone, and it puts law firms squarely alongside other professional services businesses (according to Gartner), many would say that legal businesses should be spending more, to innovate and build competitiveness. Let me put that 4.1% figure in context, too: education, media and entertainment, and banking and financial services all spend more – banking’s spend on IT as a percentage of revenue is 6.3%.

The top 100 firms are facing competition from a range of new entrants. Some are offering technology directly to legal consumers. Some have created businesses based on a different model — different in terms of office use, people employed, and technology developed. They are almost certainly all spending significantly more than 4.1% of revenue on technology — some may surpass 10%.

The future of law will involve more technology. There is no doubt about that. Some artisan law firms will continue to exist, but the bulk of legal work will be done in businesses founded on technology platforms that go beyond word processing and email. Interestingly, many firms are working alongside the new entrants in the development of new ways of working. As Simon Wardley points out in a post describing the commoditisation of part of the entertainment media, doing so betrays a completely inadequate strategy.

Such a deliberate move by a commissioning company – the chess equivalent of Fool’s mate – should never work but it does, in industry after industry. Yes, I am saying that companies often support the commoditisation of an underlying component or constraint without realising this will reduce barriers of entry into their field and ultimately commoditise them. Companies seem to act thinking of the short term with no understanding of the impacts to themselves.

Most of the problem appears to be that companies cannot see the environment (i.e. they have no map) and aren’t used to any actual form of strategic play. To be honest, this is like stealing candy from a baby except the candy is worth millions or billions. What is really frightening, is it takes a couple of hours to map out and work out such a play. There is no way on earth you should be able to get away with this and I’m afraid it gets worse.

So, incumbent law firms should be investing more in technology, but they should also do so more strategically — armed with a really good understanding of the terrain they are fighting on. If they don’t, they become as useful as the water-trough pictured at the top of this post — fit only for decoration.


A tale of two peelers: getting the tools right

Our household batterie de cuisine covers most normal eventualities, with plenty of pots, pans and utensils. We even have three corkscrews, which will be useful if there is ever a vinous emergency. One duplication is particularly interesting, and provides a metaphor for the knowledge and collaboration tools provided by law firms or other organisations.

We have two peelers.

I am sure this isn’t surprising in itself (after all, we have three corkscrews). However, the reason we have two peelers is interesting. My wife and I have strongly held and divergent views on the utility of each peeler. She hates the one I prefer, and I cannot use her favourite to peel effectively.

So we both use different tools to produce the same outcome — peeled vegetables. Such a clarity of outcome is not always possible in complex organisations, but I think it is worth striving for. Without it, one can easily be sidetracked into shiny new toys whose purpose is not really clear.

Having settled on a desired outcome, one needs to work out how best to achieve it. In our household there was no consensus on this. Fortunately, peelers are inexpensive enough to be able to acquire different types to satisfy everyone.

Even in more expensive situations, I think it is important to do everything possible to meet different needs when adopting new organisational tools and processes. When I look at firms that have invested significant amounts in knowledge or collaboration tools that are rarely used, the cause is usually either a poorly defined outcome (what is this thing for, and does the average employee care about it?) or a failure to understand how people work and how the new system might enhance that.

This was highlighted (again) by a tweet from today’s Enterprise 2.0 Summit in London.

‘Small pieces loosely joined’ was at the heart of many early uses of social tools within organisations. It is an approach that allows people to choose the tool that fits them and their desired outcome best. When the organisation chooses which outcomes to favour, and implements a one-size-fits-all tool, it is almost inevitable that half or more of the people who would have used it are put off by something that doesn’t work for them. As a result, it is much less likely that the desired outcome can actually be delivered.

It is still possible for organisations to find the right tools for people to use — big platforms are not the only approach. If you are interested in giving your people the peelers that they will use, I can help — please get in touch.


Legal technology, practice, theory and justice

Like all other areas of life and work, the law has been changed immeasurably by technology. This will doubtless continue, but I am unconvinced by the most excited advocates of legal technology.

The impact of technology has been felt at a variety of levels. The last 35-40 years have changed the way practitioners approach all aspects of their work. Likewise, the changes wrought by technology on personal, social and commercial behaviour and activities have driven changes in the law itself.

Clouds round the tower of law

These trends will doubtless continue, but predicting the actual changes that they will bring is a fool’s errand.

I recently wrote an article in Legal IT Today, arguing that the most extreme predictions of the capability of legal artificial intelligence would struggle to match the abductive reasoning inherent in creative legal work. In addition to that argument, I am less confident than some about the limits of technological development, I suspect that the economics of legal IT are not straightforward, and I have a deeper concern that there is little engagement between the legal IT community and generations of legal philosophy.

Limits of technology

One of the touchstones of technological future-gazing (in any field, not just the law) is a reference to Moore’s Law. I am less certain than the futurologists that we should expect capacity to go on doubling for ever. If nothing else, exponential growth cannot continue indefinitely.

…in the real world, any simple model that shows a continuing increase will run into a real physical limit. And if it is an exponentially increasing curve that we are forecasting, that limit is going to come sooner rather than later.

What could stop computing power from increasing exponentially? A range of things — the size of the components on a chip may have a natural limit, or the materials that are used could start to become scarce.
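To get a feel for how quickly an exponential curve meets a physical floor, here is a back-of-the-envelope sketch in Python. The starting node size and the atomic diameter are rough assumptions of mine, not figures from any of the sources above:

```python
# Rough illustration, not a prediction: if chip feature sizes kept
# halving every two years, how long until they reached the scale of
# a single silicon atom? All figures are approximate assumptions.
feature_nm = 22.0        # a mainstream process node around 2014 (assumed)
atom_nm = 0.2            # rough diameter of a silicon atom
years_per_halving = 2    # the popular reading of Moore's Law

years = 0
while feature_nm > atom_nm:
    feature_nm /= 2
    years += years_per_halving

print(f"Atomic floor reached after roughly {years} years")
# prints: Atomic floor reached after roughly 14 years
```

Vary the assumptions as much as you like; the answer stays within a decade or two, which is the point about exponentials.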

More interestingly, from the perspective of legal business, the undoubted growth of technology over recent years has not necessarily produced efficiencies in the law, if we use lawyer busyness as a proxy for efficiency. There are far more people employed in the law now than 40 years ago, and they appear to work longer hours. Improved computing capability has produced all sorts of new problems that demand novel business practices to resolve them (one of these being knowledge management).

Nonetheless, it is still possible that future developments will actually be capable of taking on significant aspects of work that is currently done by people. The past is not necessarily a good predictor of the future.

The business challenge

There is currently a lot of interest in the possibility that IBM’s Watson will introduce a new era of legal expert systems. Earlier this month Paul Lippe and Daniel Martin Katz provided “10 predictions about how IBM’s Watson will impact the legal profession” in the ABA Journal. Bruce MacEwen has also asked “Watson, I Presume?” However, one thing that marks out any reference to Watson in the law is a complete absence of hard data.

The Watson team have helpfully provided a press release summarising the systems currently available or under development. Looking at these, a couple of things strike me. The most obvious is that there are none in the law. There are medical and veterinary applications, and some in retail and travel planning. There are applications that enhance existing IT capability (typically in the area of search and retrieval). But there are none in the law. The generic applications could certainly be used to enhance legal IT, but there is no indication of how effective they might be compared to existing tools. And, most crucially, it is unclear how costly Watson solutions might be. That is where legal IT often struggles.

The business economics of legal technology can be difficult. Medical and veterinary systems have a huge scale advantage — human or animal physiology changes little across the globe, and pharmaceutical effectiveness does not depend significantly on where drugs are administered. By contrast, legal and political systems differ hugely, so that ready-made legal technology often needs to be tailored to fit different jurisdictions. Law firms tend to be small compared to some other areas of professional services and the demands of ethical and professional rules often restrict sharing of information. Those constraints can mean that it is hard for all but the largest firms with considerable volumes of appropriate types of work to justify investment in the most highly-developed forms of technology. As a consequence, I suspect few legal IT providers will be tempted to pursue Watson or similar developments until they can be convinced that a market exists for them.

Technology, justice and legal theory

My Legal IT piece was a response to an article by David Halliwell. His piece started with a reference to an aspect of Ronald Dworkin’s legal philosophy. Mine was similarly rooted in theory. This marks them out from most of the articles I have read on the future of legal IT. Given the long history of association between legal theory and academic study of IT in the law (exemplified by Richard Susskind’s early work on the use of expert systems in the law), it is disappointing to see so little critical thought about the impact of technology in the law.

As I read them, most disquisitions on legal IT are based on simple legal positivism — the law is presented as a set of rules that can be manipulated in an almost mechanical way to produce a result. By contrast, there is a deeper critique of concepts like big data in wider social discourse. A good example is provided in an essay by Moritz Hardt, “How big data is unfair”:

I’d like to refute the claim that “machine learning is fair by default”. I don’t mean to suggest that machine learning is inevitably unfair, but rather that there are powerful forces that can render decision making that depends on learning algorithms unfair. Any claim of fair decision making that does not address the technical issues that I’m about to discuss should strike you as dubious.

Hardt focuses on machine learning, but his point is true of any algorithm and probably more generally of any technology tending towards artificial intelligence. Any data set, any process defined to be applied to that data, any apparently neutral ‘thinking’ system will have inherent prejudices. Those prejudices may be innocuous or trivial, but they may not be. Ignoring the possibility that they exist runs a risk of unfairness, as Hardt puts it. In the law, unfairness manifests itself as injustice.
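Hardt’s point can be reproduced in miniature. The sketch below is entirely synthetic (the groups, scores and bias are invented for illustration and owe nothing to his examples): a single decision threshold is tuned on pooled data dominated by a majority group, and the resulting ‘neutral’ rule serves the minority group noticeably worse.

```python
import random

random.seed(0)

def sample(group, n):
    """Hypothetical scores: in the minority group the same underlying
    quality maps to lower scores (an assumed measurement bias)."""
    shift = 0.0 if group == "majority" else -1.0
    data = []
    for _ in range(n):
        good = random.random() < 0.5                        # true label
        score = random.gauss(1.0 if good else -1.0, 1.0) + shift
        data.append((score, good))
    return data

majority = sample("majority", 9000)
minority = sample("minority", 1000)
pooled = majority + minority

def accuracy(data, t):
    return sum((score > t) == good for score, good in data) / len(data)

# One global threshold, tuned to maximise accuracy on the pooled data,
# which the majority group dominates by sheer numbers.
best = max((t / 10.0 for t in range(-30, 31)),
           key=lambda t: accuracy(pooled, t))

print(f"threshold {best:+.1f}: "
      f"majority accuracy {accuracy(majority, best):.0%}, "
      f"minority accuracy {accuracy(minority, best):.0%}")
# The 'fair by default' rule is markedly less accurate for the minority.
```

Nothing in the code is malicious; the unfairness comes entirely from the data and from the decision to optimise a single aggregate number.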

What concerns me is that there doesn’t appear to be a lively debate about the risk of injustice in the way legal IT might develop in the future (not to mention the use of technology with a legal impact in other areas of society). Do we have a modern equivalent of the debate between Lon Fuller and H.L.A. Hart? I am not as close to legal theory as I used to be, so it may already have taken place. If not, are we happy for the legal positivists to win this one by default? (I am not sure that I am.)


The limits of technology and law

One of the first law lectures I attended, over 30 years ago, was given by Avrom Sherr. As we all settled ourselves, full of our importance as future lawyers, Avrom walked into the lecture theatre and lay on his back on the desk at the front of the hall. The hubbub subsided and there was a moment of uncertainty (embarrassment even) before he got to his feet to start the lecture.

The point of this act of theatre, we were informed, was as follows.

In previous centuries, medical students were taught from cadavers. As a consequence, everything they learned was pathology. More recently, medical science had caught up with the idea that most people were actually healthy and that there was probably more to be understood about the workings of healthy bodies than diseased and dead ones; certainly as much that would be useful to those charged with the care of the living.

Legal studies, Avrom argued, had a similar problem. By studying the pathology of the law (as found in centuries of case law), the real life of the law was lost. His impersonation of a cadaver was intended to remind us that although dissection of cases (like anatomy lessons) had a purpose in learning about law, we should not forget that the vast majority of legal actions (making contracts, administrative decision-making, and so on) would never even be the subject of litigation, let alone a reported case.

I was reminded of this experience, and the valuable lesson, by a short article in The Lawyer by Peter Kalis (chairman and global managing partner, K&L Gates), “Lawyers as robotic bores? It’s not the English way”. Mr Kalis will be writing a series of articles, and this one sets out his stall.

In future columns I’ll supply some thoughts on our evolving industry. In this inaugural venture, however, I wish to acknowledge my debt to the English legal tradition. In other words, I come in peace.

He singles out three legal academics whose work influenced him whilst at Oxford 40 years ago: HLA Hart, Sir Otto Kahn-Freund, and Mark Freedland.

Why do Professors Hart, Kahn-Freund and Freedland matter here? Their careers nicely illustrate that law is about ideas and the ability to express them, whether in service to clients or to scholarship.

In future columns you’ll see me challenge those who regard lawyers and their firms as anachronistic and those who would reduce us to automata and algorithms. It will be my way of saying thanks to Professors Hart, Kahn-Freund and Freedland, among so many others on your side of the pond.

That description of the purpose of law — “ideas and the ability to express them” — resonates with my experience as a raw undergraduate. After four years of study at Warwick, it was clearer than ever to me that law doesn’t exist to give opportunities to judges and law reporters. As an academic discipline it can be a kind of applied social science — a combination of psychology, ethnology, economics and politics — that may help to describe how social and individual relationships might work out in the presence or absence of power. Unlike many of those other disciplines, law also has a practical life outside the academy. Its practitioners have the privilege of being able to help mediate in those relationships — supporting or opposing power as necessary.

Over the past few years, I have kept coming back to this point about relationships in my work and on this blog. I am more sure than ever that good law, sensitively practised, depends on an understanding of the people involved. That understanding requires the kind of insight into human relationships, desires and needs, power structures, that I suspect most people develop unconsciously.

Critically, though, technology struggles with this aspect of law as lived. It sometimes appears that the most vocal technology advocates forget this. As news this week about the Turing test shows, it is too easy to be blinded by overblown claims of what computers can do. The reality is usually much more limited. In this context, also, we need to know whether a piece of legal technology deals with a pathological legal problem or the real human issue that underlies the call to law. If it doesn’t look to the latter, then it will be of severely limited use. That is not to say it will be useless, just good for some things only.


Hiding behind technology: what kind of a job is that?

I think our relationship with technology is detracting from our capacity to work effectively. In order to change this, we need to reassert what it is that we actually do when we come to work.

One of the staples of TV drama is the workplace; another is espionage. The BBC is currently showing a short series, The Hour, in which those elements are combined with a touch of social and political comment in a not-so-distant historical setting — a BBC current affairs programme (The Hour of the title) in 1956, as Hungary is invaded by the Soviet Union and Egypt precipitates the Suez crisis. It isn’t the best thing the BBC have done — Mad Men beats it for verisimilitude, nothing can touch Tinker, Tailor, Soldier, Spy for British cold war espionage drama, and at least one person who was in TV in the 1950s is adamant that its representation of news broadcasting is a long way from the reality. That said, it is relaxing summer viewing.

One of the things that struck me, watching the most recent episode, is that everyone is intimately engaged with the objects of their work. Cameramen wield cameras; editors cut film; reporters write words (with a pen or pencil) on paper. And they do one thing at a time. During the episode, the producer of The Hour is admonished by her superior (for having an affair with the programme’s presenter). As she enters the room, he is making notes in a file. He closes it while berating her for putting the programme, and her career, at risk. When finished, he returns to his paperwork. There is no doubt at any point during this sequence as to his focus, his priorities or his wider role.

I think we have lost that clarity. As I look around me, in all sorts of workplaces, there is little or no distinction in the environments inhabited by people who actually do very varied jobs. Generally, it looks like we all work with computers. People sit with a large flat surface in front of them, which is dominated by a box filled with electronics, umbilically attached via (in my case) ten cables to a variety of other bits of electronics, to power and to a wider network. One or two of those other pieces of hardware are really intrusive. The screens we work at (I have two) are our windows into the material that we produce — documents, emails, spreadsheets — to the information we consume, and to our connections with other people. Physically, though, they do nothing for our human connections. In my case, the screens sit between me and the rest of the occupants of my working room. We all sit in a group, facing each other, but our screens act as a barrier between our working environments. When we converse, we have to crane round the barriers, and we are easily distracted from the conversation by things that happen on the screens.

But if you asked the average law firm employee (whether a lawyer or not) what they do every day, very few would respond that they work with computers. They would speak in terms of managing teams, delivering quality advice to clients, supporting the wider business with training, information or know-how. Some of our IT colleagues might agree that they do work with computers, but some would claim instead that their role is to enhance the firm’s effectiveness and that computers are just the tools by which that is achieved. That is consistent with research conducted by Andrew McAfee, for example. At an organisational level, then, technology improves performance. However, it is also well-observed that many forms of technology, inappropriately used, can distract people and reduce their personal effectiveness. That is manifest in complaints about information overload, email management, social media at work, and so on.

The problem is that, through this box and its two big screens, I have access to absolutely everything I need — the software tools, the online information, the worldwide contacts — to do my job. Unfortunately, because everything is in the same place, it is hard to create clear boundaries between all these things. Outlook is open, so I see when email arrives even though I am working on a document. When I am focusing on an email on one project, it sits next to one on a different topic, so it is practically impossible not to skip to that topic before I am ready. We can discipline ourselves, but that makes the work harder, and so we become less effective.

In some organisations, the technology is configured to provide access just to the tools people need. This is typically the case in call centre environments, for example. I think this only really works when people are working through clearly defined processes. As soon as a degree of creativity is required, or where the information needs of a role are emergent, bounded technology starts to fail.

Instead, I think each of us needs to understand exactly what we need from the technology, to create a clear path to that, and to take steps to exclude the less relevant stuff.

My current role requires me to take responsibility for a group of people who have not previously thought of themselves as a single team. I shouldn’t do that from a desk which is at a significant distance from many of them. The technology may fool me into thinking that I am bridging that distance by sending emails and writing documents, but I am sure that isn’t really the case. We have technology to allow me to divest myself of the big box and its screens. I am seriously considering doing just that — doing my job, rather than working with computers.


The nature of the firm, and why it matters

Jordan Furlong’s justified question, “Why do law firms exist?”, is relevant not just to partners (or potential investors in firms). Those who support the core functions of the firm also need to be aware of its implications. I’ll come back to Jordan’s question, but first I want to reflect on something else.

Thanks to the generosity of Headshift, I was able to attend the Dachis Group’s London Social Business Summit at the end of March. One of the most interesting sessions that day was the presentation by Dave Gray of XPLANE. Dave outlined his current thinking about the nature of the company, which can be found summarised in the initial post on his new site, The Connected Company.

Dave is concerned about the short life span of the average company:

In a recent talk, John Hagel pointed out that the average life expectancy of a company in the S&P 500 has dropped precipitously, from 75 years (in 1937) to 15 years in a more recent study. Why is the life expectancy of a company so low? And why is it dropping?

He is also worried about their productivity:

A recent analysis in the CYBEA Journal looked at profit-per-employee at 475 of the S&P 500, and the results were astounding: As you triple the number of employees, their productivity drops by half (Chart here).

This “3/2 law” of employee productivity, along with the death rate for large companies, is pretty scary stuff. Surely we can do better?

I believe we can. The secret, I think, lies in understanding the nature of large, complex systems, and letting go of some of our traditional notions of how companies function.

The largest complex system that still seems to work is the city.

Cities aren’t just complex and difficult to control. They are also more productive than their corporate counterparts. In fact, the rules governing city productivity stand in stark contrast to the ominous “3/2 rule” that applies to companies. As companies add people, productivity shrinks. But as cities add people, productivity actually grows.

A study by the Federal Reserve Bank of Philadelphia found that as the working population in a given area doubles, productivity (measured in this case by the rate of invention) goes up by 20%. This finding is borne out by study after study. If you’re interested in going deeper, take a look at this recent New York Times article: A Physicist Solves the City.
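The gap between the two scaling laws is easy to make concrete. This small Python sketch uses only the two data points quoted above (triple the headcount, halve per-employee productivity; double the population, gain roughly 20% per head), so the exponents are crude extrapolations rather than anything from the underlying studies:

```python
import math

# '3/2 law': tripling headcount halves per-employee productivity,
# i.e. output per head ~ N ** a with a = log(1/2) / log(3).
company_exp = math.log(0.5) / math.log(3)   # about -0.63

# Cities: doubling population lifts per-capita invention by ~20%,
# i.e. output per head ~ N ** b with b = log(1.2) / log(2).
city_exp = math.log(1.2) / math.log(2)      # about +0.26

print(" people  company/head  city/head")
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"{n:>7} {n ** company_exp:13.3f} {n ** city_exp:10.2f}")
# Per-head output, relative to a lone individual, shrinks steadily with
# company size but grows steadily with city size.
```

Extrapolating two data points this far is obviously heroic, but it shows why Dave Gray treats the difference between companies and cities as structural rather than incidental.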

Drawing on a study of long-lived successful companies commissioned by Shell Oil, Dave spots three characteristics of those companies also shared by cities:

Ecosystems: Long-lived companies were decentralized. They tolerated “eccentric activities at the margins.” They were very active in partnerships and joint ventures. The boundaries of the company were less clearly delineated, and local groups had more autonomy over their decisions, than you would expect in the typical global corporation.

Strong identity: Although the organization was loosely controlled, long-lived companies were connected by a strong, shared culture. Everyone in the company understood the company’s values. These companies tended to promote from within in order to keep that culture strong. Cities also share this common identity: think of the difference between a New Yorker and a Los Angelino, or a Parisian, for example.

Active listening: Long-lived companies had their eyes and ears focused on the world around them and were constantly seeking opportunities. Because of their decentralized nature and strong shared culture, it was easier for them to spot opportunities in the changing world and act, proactively and decisively, to capitalize on them.

The whole post is worth reading and reflecting on. Dave’s prescription for success, for companies to be more like cities, is to shun divisional structures, and to build on networks and connections instead. This has been refined in a more recent post into a ‘podular’ system.

A pod is a small, autonomous unit that is enabled and empowered to deliver the things that customers value.

By value, I mean anything that’s a part of a service that delivers value, even though the customer may not see it. For example, in a construction firm, the activities valued by customers are those that are directly related to building. The accounting department of a construction firm is not part of the value delivery system, it’s a support team. But in an accounting firm, any activity related to accounting is part of the customer value delivery system.

There’s a reason that pods need to focus on value-creating activities rather than support activities. Support activities might need to be organized differently.

This idea appears to be closely related to Steve Denning’s notion of Radical Management, as described in his latest book. It also reflects the way that some professional service firms organise themselves. That’s what brings us back to Jordan Furlong’s question.

Why do law firms exist? Or, more properly, why should law firms continue to exist? (One important reason why they exist is that their history brought us to this point. What might happen to them in the future is actually more interesting.)

Jordan’s post starts with Ronald Coase, but also points to a number of ways in which law firms might not meet Coase’s standards.

Companies exist, therefore, because they:

  • reduce transaction costs,
  • build valuable culture,
  • organize production,
  • assemble collective knowledge, and
  • spur innovation.

So now let’s take a look at law firms. I don’t think it would be too huge a liberty to state that as a general rule, law firms:

  • develop relatively weak and fragmented cultures,
  • manage production and process indifferently,
  • assign and perform work inefficiently,
  • share knowledge haphazardly and grudgingly, and
  • display almost no interest in innovation.

That’s an inventory of defects that would make Ronald Coase wonder exactly what it is that keeps law firms together as commercial entities.

Worse than that, Jordan points to a range of recent commentaries suggesting that things aren’t getting any better. I think he is correct. In fact, it is interesting to note that John Roberts spotted the germ of the problem in his 2004 book, The Modern Firm.

Many authors, including Ronald Coase and Herbert Simon, have identified the essential nature of the firm as the reliance on hierarchic, authority relations to replace the inherent equality among participants that marks market dealings. When you join a firm, you accept the right of the executives and their delegates to direct your behaviour, at least over a more-or-less commonly understood range of activities. …

Others … have challenged this view. They argue that any appearance of authority in the firm is illusory. For them, the relationship between employer and employee is completely parallel to that between customer and butcher. In each case, the buyer (of labor services or meat) can tell the seller what is wanted on a particular day, and the seller can acquiesce and be paid, or refuse and be fired. For these scholars, the firm is simply “a nexus of contracts” — a particularly dense collection of the sort of arrangements that characterise markets.

While there are several objections to this argument, we focus on one. It is that, when a customer “fires” a butcher, the butcher keeps the inventory, tools, shop, and other customers she had previously. When an employee leaves a firm, in contrast, she is typically denied access to the firm’s resources. The employee cannot conduct business using the firm’s name; she cannot use its machinery or patents; and she probably has limited access to the people and networks in the firm, certainly for commercial purposes and perhaps even socially. (The Modern Firm, pp.103-4)

The benefits Roberts identifies are almost always missing in a law firm. The firm’s name may be less significant than the lawyer’s, and there is little in the way of machinery or patents. In the seven years since the book was published, access to networks and people has become infinitely more straightforward, thanks to developments in social software and similar technologies.

Joining Roberts’s insights with those of Dave Gray and Jordan Furlong, I think it is likely that we will see much more fluid structures in law firms in coming years. Dave Gray’s podular arrangement need not be restricted to one organisation — what is to stop clients creating their own pods for specific projects, drawing together the good lawyers from a variety of firms? Could the panel arrangement now commonly in use by larger companies be a Trojan horse to allow them to pick off key lawyers whenever they need them? Technology is only going to make that easier.

So that leaves the support functions. In Dave Gray’s podular model, support is provided by a backbone, or platform.

Podular platform

For a podular system to work, cultural and technical standards are imperative. This means that a pod’s autonomy does not extend to choices in shared standards and protocols. This kind of system needs a strong platform that clearly articulates those standards and provides a mechanism for evolving them when necessary.

For small and large companies alike, the most advantageous standards are those that are most widely adopted, because those standards will allow you to plug in more easily to the big wide world – and the big wide world always offers more functionality, better and more cheaply than you can build it yourself. Platform architecture is about coordination and consistency, so the best way to organize it may not be podular. When it comes to language, protocols, culture and values, you don’t want variability, you want consistency. Shared values is one of the best ways to ensure consistent behavior when you lack a formal hierarchy. Consistency in standards is an absolute requirement if you want to enable autonomous units.

Interestingly, there is often little variation between different law firms in terms of their technical standards. In some practice areas, these are dictated by external agencies (courts, industry associations, etc.), whilst in others they converge because of intervention by common suppliers (in the UK, many firms use know-how and precedents provided by PLC) or simply the fact that in order to do their job lawyers have to share their basic knowledge (first-draft documents often effectively disclose a firm’s precedents to their competitors). It is a small step to a more generally accepted foundation for legal work.

Will clients push for this? Would they benefit from some form of crowd-sourced backbone to support lawyers working for them in a podular fashion? Time will tell, but don’t wait for the train to leave the station before you decide to board it.

Now and then

A couple of days ago, Patrick Lambe posted a really thoughtful piece considering the implications of heightened awareness from the new generation of social software tools as opposed to the traditional virtues of long-term information storage and access. If you haven’t read it, do so now. (Come back when you have finished.)

Laid down

The essence of Patrick’s piece is that when we focus our attention on the here and now (through Twitter or enterprise micro-blogging, for example), we forget to pay attention to the historically valuable information that has been archived away. This is not a problem with the technology itself. He points to interesting research on academics’ use of electronic resources and their citation patterns.

How would online access influence knowledge discovery and use? One of the researcher’s hypotheses was that “online provision increases the distinct number of articles cited and decreases the citation concentration for recent articles, but hastens convergence to canonical classics in the more distant past.”

In fact, the opposite effect was observed.

As deeper backfiles became available, more recent articles were referenced; as more articles became available, fewer were cited and citations became more concentrated within fewer articles. These changes likely mean that the shift from browsing in print to searching online facilitates avoidance of older and less relevant literature. Moreover, hyperlinking through an online archive puts experts in touch with consensus about what is the most important prior work—what work is broadly discussed and referenced. … If online researchers can more easily find prevailing opinion, they are more likely to follow it, leading to more citations referencing fewer articles. … By enabling scientists to quickly reach and converge with prevailing opinion, electronic journals hasten scientific consensus. But haste may cost more than the subscription to an online archive: Findings and ideas that do not become consensus quickly will be forgotten quickly.

Now this thinning out of long term memory (and the side effect of instant forgettability for recent work that does not attract fast consensus) is observed here in the relatively slow moving field of scholarly research. But I think there’s already evidence (and Scoble seems to sense this) that exactly the same effects occur when people and organisations in general get too-fast and too-easy access to other people’s views and ideas. It’s a psychosocial thing. We can see this in the fascination with ecologies of attention, from Tom Davenport to Chris Ward to Seth Godin. We can also see it in the poverty of attention that enterprise 2.0 pundits give to long term organisational memory and recordkeeping, in the longer term memory lapses in organisations that I have blogged about here in the past few weeks…

Jack Vinson adds another perspective on this behaviour in a post responding to Patrick’s.

I see another distinction here.  The “newer” technologies are generally about user-engagement and creation, whereas the “slower” methods are more focused on control and management activities much more so than the creation.  Seen in this light, these technologies and processes spring from the situation where writing things down was a time-consuming process.  You wanted to have it right, if you went to that much effort.  Unfortunately, the phrase “Document management is where knowledge goes to die” springs to mind.

In knowledge management, we are trying to combine the interesting knowledge that flows between people in natural conversation as well as the “hard knowledge” of documented and proven ideas and concepts.  KM has shown that technology just can’t do everything (yet?) that humans can do.  As Patrick says, technology has been a huge distraction to knowledge management.

I think Jack’s last comment is essential. What we do is a balance between the current flow and the frozen past. What I find fascinating is that until now we have had few tools to help us with the flow, whereas the databases, archives, taxonomies and repositories of traditional KM and information management have dominated the field. I think Patrick sounds an important warning bell. We should not ignore it. But our reaction shouldn’t be to reverse away from the interesting opportunities that new technologies offer.

It’s a question (yet again) of focus. Patrick opens his post with a complaint of Robert Scoble’s.

On April 19th, 2009 I asked about Mountain Bikes once on Twitter. Hundreds of people answered on both Twitter and FriendFeed. On Twitter? Try to bundle up all the answers and post them here in my comments. You can’t. They are effectively gone forever. All that knowledge is inaccessible. Yes, the FriendFeed thread remains, but it only contains answers that were done on FriendFeed and in that thread. There were others, but those other answers are now gone and can’t be found.

Yes, Twitter’s policy of deleting old tweets is poor, but even if everything were archived the value of that archive would be minimal. Much of what I see on Twitter relates to the here and now. It is the ideal place to ask the question, “I’m looking at buying a mountain bike. For $1,000 to $1,500 what would you recommend?” That was Scoble’s question, and it is time-bound. Cycle manufacturers change their offerings on a seasonal and annual basis. The cost of those cycles also changes regularly. The answer to that question would be different in six months’ time. Why worry about storing that in an archive?

Knowledge in law firms is a curious blend of the old and the new. Sometimes the law that we deal with dates back hundreds of years. It is often essential to know how a concept has been developed over an extended period by the courts. The answer to the question “what is the current position on limitations of liability in long-term IT contracts?” is a combination of historic research going back to cases from previous centuries and up to the minute insight from last week’s negotiations on a major outsourcing project for a client. It is a real combination of archived information and current knowledge. We have databases and law books to help us with the archived information. What we have been lacking up until recently is an effective way of making sure that everyone has access to the current thinking. As firms become bigger and more scattered (across the globe, in some cases) making people aware of what is happening across the firm has become increasingly difficult.

Patrick’s conclusion is characteristically well expressed.

So while at the level of technology adoption and use, there is evidence that a rush toward the fast and easy end of the spectrum places heavy stresses on collective memory and reflection, at the same time, interstitial knowledge can also maintain and connect the knowledge that makes up memory. Bipolarity simply doesn’t work. We have to figure out how to see and manage our tools and our activities to satisfy a balance of knowledge needs across the entire spectrum, and take a debate about technology and turn it into a dialogue about practices. We need to return balance to the force.

That balance must be at the heart of all that we do. And the point of balance will depend very much on the demands of our businesses as well as our interest in shiny new toys. Patrick is right to draw our attention to the risks attendant on current awareness, but memory isn’t necessarily all it is cracked up to be. We should apply the same critical eye to everything that comes before us — how does this information (or class of information) help me with the problems that I need to solve? The answer will depend heavily on your organisational needs.

Back to basics

Recently I have caught up with two Ur-texts that I really should have read before. However, the lessons learned are two-fold: the content (in both cases) is still worthy of note, and one should not judge a work by the way it is used.

Recycling in Volterra

In late 1991, the Harvard Business Review published an article by Ikujiro Nonaka containing some key concepts that would be used and abused in the name of knowledge management for the next 18 years (and probably beyond). In “The Knowledge-Creating Company” (reprinted in 2007) Nonaka described a number of practices used by Japanese companies to use their employees’ and others’ tacit knowledge to create new or improved products.

Nonaka starts where a number of KM vendors still are:

…despite all the talk about “brain-power” and “intellectual capital,” few managers grasp the true nature of the knowledge-creating company — let alone know how to manage it. The reason: they misunderstand what knowledge is and what companies must do to exploit it.

Deeply ingrained in the traditions of Western management, from Frederick Taylor to Herbert Simon, is a view of the organisation as a machine for “information processing.” According to this view, the only useful knowledge is formal and systematic — hard (read: quantifiable) data, codified procedures, universal principles. And the key metrics for measuring the value of new knowledge are similarly hard and quantifiable — increased efficiency, lower costs, improved return on investment.

Nonaka contrasts this with an approach that is exemplified by a number of Japanese companies, where managing the creation of new knowledge drives fast responses to customer needs, the creation of new markets and innovative products, and dominance in emergent technologies. In some respects, what he describes presages what we now call Enterprise 2.0 (although, tellingly, Nonaka never suggests that knowledge creation should involve technology):

Making personal knowledge available to others is the central activity of the knowledge-creating company. It takes place continuously and at all levels of the organization. And … sometimes it can take unexpected forms.

One of those unexpected forms is the development of a bread-making machine by the Matsushita Electric Company. This example of tacit knowledge converted into explicit has become unrecognisable in its repetition in numerous KM articles, fora, courses, and so on. Critically, there is no actual conversion — the tacit knowledge of how to knead bread dough is not captured as an instruction manual for bread making. What actually happens is that the insight gained by the software developer Ikuko Tanaka by observing the work of the head baker at the Osaka International Hotel was converted into a simple improvement in the way that an existing bread maker kneaded dough prior to baking. The expression of this observation was a piece of explicit knowledge — the design of a new bread maker, to be sold as an improved product.

That is where the critical difference lies. To have any value at all in an organisation, people’s tacit knowledge must be able to inform new products, services, or ways of doing business. Until tacit knowledge finds such expression, it is worthless. However, that is not to say that all tacit knowledge must be documented to be useful. That interpretation is a travesty of what Nonaka has to say.

Tacit knowledge is highly personal. It is hard to formalize and, therefore, difficult to communicate to others. Or, in the words of philosopher Michael Polanyi, “We know more than we can tell.” Tacit knowledge is also deeply rooted in action and in an individual’s commitment to a specific context — a craft or profession, a particular technology or product market, or the activities of a work group or team.

Nonaka then explores the interactions between the two aspects of knowledge: tacit-tacit, explicit-explicit, tacit-explicit, and explicit-tacit. From this he posits what is now known as the SECI model. In this original article, he describes four stages: socialisation, articulation, combination and internalisation. Later, “articulation” became “externalisation.” It is this stage where technology vendors, and those who allowed themselves to be led by them, decided that tacit knowledge could somehow be converted into explicit knowledge as a business or technology process divorced from context or commitment. This is in direct contrast to Nonaka’s original position.

Articulation (converting tacit knowledge into explicit knowledge) and internalization (using that explicit knowledge to extend one’s own tacit knowledge base) are the critical steps in this spiral of knowledge. The reason is that both require the active involvement of the self — that is, personal commitment. …

Indeed, because tacit knowledge includes mental models and beliefs in addition to know-how, moving from the tacit to the explicit is really a process of articulating one’s vision of the world — what it is and what it ought to be. When employees invent new knowledge, they are also reinventing themselves, the company, and even the world.

The rest of Nonaka’s article is rarely referred to in the literature. However, it contains some really powerful material about the use of metaphor, analogy and mental models to generate new insights and trigger valuable opportunities to articulate tacit knowledge. He then turns to organisational design and the ways in which one should manage the knowledge-creating company.

The fundamental principle of organizational design at the Japanese companies I have studied is redundancy — the conscious overlapping of company information, business activities, and managerial responsibilities. …

Redundancy is important because it encourages frequent dialogue and communication. This helps create a “common cognitive ground” among employees and thus facilitates the transfer of tacit knowledge. Since members of the organization share overlapping information, they can sense what others are struggling to articulate. Redundancy also spreads new explicit knowledge through the organization so it can be internalized by employees.

This silo-busting approach is also at the heart of what has now become known as Enterprise 2.0 — the use of social software within organisations. What Nonaka described as a natural form for Japanese organisations was difficult for Western companies to emulate. The legacy of Taylorism has proved too hard to shake off, and traditional enterprise technology has not helped.

Which is where we come to the second text: Andrew McAfee’s Spring 2006 article in the MIT Sloan Management Review, “Enterprise 2.0: The Dawn of Emergent Collaboration.” This is where the use of Web 2.0 technologies started to hit the mainstream. In reading it for the first time today — already having an understanding and experience of the use of blogs and wikis in the workplace — it was interesting to see a different, almost historical, perspective. One of the most important things, which we sometimes forget, is McAfee’s starting point. He refers to a study of knowledge workers’ practices by Thomas Davenport.

Most of the information technologies that knowledge workers currently use for communication fall into two categories. The first comprises channels — such as e-mail and person-to-person instant messaging — where digital information can be created and distributed by anyone, but the degree of commonality of this information is low (even if everyone’s e-mail sits on the same server, it’s only viewable by the few people who are part of the thread). The second category includes platforms like intranets, corporate Web sites and information portals. These are, in a way, the opposite of channels in that their content is generated, or at least approved, by a small group, but then is widely visible — production is centralized, and commonality is high.

So, what is the problem with this basic dichotomy?

[Davenport’s survey] shows that channels are used more than platforms, but this is to be expected. Knowledge workers are paid to produce, not to browse the intranet, so it makes sense for them to heavily use the tools that let them generate information. So what’s wrong with the status quo?

One problem is that many users aren’t happy with the channels and platforms available to them. Davenport found that while all knowledge workers surveyed used e-mail, 26% felt it was overused in their organizations, 21% felt overwhelmed by it and 15% felt that it actually diminished their productivity. In a survey by Forrester Research, only 44% of respondents agreed that it was easy to find what they were looking for on their intranet.

A second, more fundamental problem is that current technologies for knowledge workers aren’t doing a good job of capturing their knowledge.

In the practice of doing their jobs, knowledge workers use channels all the time and frequently visit both internal and external platforms (intranet and Internet). The channels, however, can’t be accessed or searched by anyone else, and visits to platforms leave no traces. Furthermore, only a small percentage of most people’s output winds up on a common platform.

So the promise of Enterprise 2.0 is to blend the channel with the platform: to use the content of the communication channel to create (almost without the users knowing it) a content-rich platform. McAfee goes on to describe in more detail how this was achieved within some exemplar organisations — notably Dresdner Kleinwort Wasserstein. He also derives a set of key features (Search, Links, Authorship, Tagging, Extensions and Signals: SLATES) to describe the immanent nature of Enterprise 2.0 applications as distinct from traditional enterprise technology.

What interests me about McAfee’s original article is (a) how little has changed in the intervening three years (thereby undermining the call to the Harvard Business Press to rush his book to press earlier than scheduled), and (b) which of the SLATES elements still persist as critical issues in organisations. Effective search will always be a challenge for organisational information bases — the algorithms that underpin Google are effectively unavailable, and so something else needs to be simulated. Tagging is still clearly at the heart of any worthwhile Enterprise 2.0 implementation, but my experience suggests that users do not grasp its importance at the outset (or even at all). The bit that is often missing is “extensions” — few applications deliver the smartness that McAfee sought.

However, the real challenge is to work out the extent to which organisations have really blurred the channel/platform distinction by using Enterprise 2.0 tools. Two things suggest to me that this will be a slow process: e-mail overload is still a significant complaint; and the 90-9-1 rule of participation inequality seems not to be significantly diluted inside the firewall.

Coincidentally, McAfee has posted on his blog today, asking for suggestions for a new article on Enterprise 2.0, as well as explaining some of the delay with his book.

Between now and the publication date the first chapter of the book, which describes its genesis, goals, and structure, is available for download. I’m also going to write an article about Enterprise 2.0 in Harvard Business Review this fall. While I’ve got you here, let me ask a question: what would you like to have covered in the article?  Which topics related to Enterprise 2.0 should it discuss? Leave a comment, please, and let us know — I’d like to crowdsource the article a bit. And if you have any questions or comments about the book, I’d love to hear them.

I have made my suggestions above, Andy. I’ll comment on your blog as well.

We are all in this together

A couple of links to start with: John Stapp and “Has ‘IT’ Killed ‘KM’?”

Picture credit: Bill McIntyre on Flickr

I don’t have much truck with heroes. Many people do great things, in the public eye and otherwise, and it seems invidious to single certain individuals out mainly because they are better known than others who are equally worthy of credit. However, I make an exception for John Stapp.

Every time you get into a car and put on a seat belt (whether required to by law or not), you owe a debt to Dr Stapp. As a doctor in the US Air Force, he took part in experiments on human deceleration in the late 1940s. During the Second World War it had been assumed that the maximum tolerable human deceleration was 18G (that is, 18 times the force of gravity at sea level), and that death would occur above that level. The Air Force wanted to test whether this was really true, and so a research project was set up. In order to test the hypothesis, an anthropomorphic dummy was to be shot down a test track and abruptly brought to a halt. Measuring equipment would be used to gauge the effect of the deceleration on the dummy. An account of the project is provided in the Annals of Improbable Research. That account indicates that Stapp had little confidence in the dummy.

While the brass assigned a 185-pound, absolutely fearless, incredibly tough, and altogether brainless anthropomorphic dummy — known as Oscar Eightball — to ride the Gee Whiz, David Hill remembers Stapp had other ideas. On his first day on site he announced that he intended to ride the sled so that he could experience the effects of deceleration first-hand. It was a statement that Hill and everyone else found shocking. “We had a lot of experts come out and look at our situation,” he remembers. “And there was a person from M.I.T. who said, if anyone gets 18 Gs, they will break every bone in their body. That was kind of scary.”
But the young doctor had his own theories about the tests and how they ought to be run, and his nearest direct superiors were over 1000 miles away. Stapp’d done his own calculations, using a slide rule and his knowledge of physics and human anatomy, and concluded that the 18 G limit was sheer nonsense. The true figure he felt might be twice that if not more.

In the event, Oscar the dummy was used merely to prove out the test track and the ballistic sled on which the seat was first accelerated and then decelerated. Once that was done, the human testing could start.

Finally in December 1947 after 35 test runs, Stapp got strapped into the steel chariot and took a ride. Only one rocket bottle was fired, producing a mere 10 Gs of force. Stapp called the experience “exhilarating.” Slowly, patiently he increased the number of bottles and the stopping power of the brakes. The danger level grew with each passing test but Stapp was resolute, Hill says, even after suffering some bad injuries. And within a few months, Stapp had not only subjected himself to 18 Gs, but to nearly 35. That was a stunning figure, one that would forever change the design of airplanes and pilot restraints.
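To get a feel for what those figures mean, here is a rough back-of-envelope calculation with assumed numbers (the sled’s actual speeds varied from run to run). Stopping from about 200 mph, roughly 89 m/s, at a constant 35G:

    \[ a = 35g \approx 343\,\mathrm{m/s^2}, \qquad d = \frac{v^2}{2a} = \frac{(89\,\mathrm{m/s})^2}{2 \times 343\,\mathrm{m/s^2}} \approx 11.5\,\mathrm{m}, \qquad t = \frac{v}{a} \approx 0.26\,\mathrm{s} \]

The whole stop is over in about a quarter of a second, within a dozen metres. Little wonder the injury list was so long.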

The initial tests were done with the subject (not always Stapp) facing backwards. Later on, forward-facing tests were done as well. Over the period of the research, Stapp was injured a number of times. Many of these injuries had never been seen before — nobody had been subjected to such extreme forces. Some were more mundane — he broke his wrist twice; on one occasion resetting the fracture himself as he walked back to his office. It is one thing to overcome danger that arises accidentally, quite another to put oneself directly in such extreme situations.

And he did it for the public good.

…while saving the lives of aviators was important, Kilanowski says Stapp realized from the outset that there were other, perhaps even more important aspects to his research. His experiments proved that human beings, if properly restrained and protected, could survive an incredible impact.

Cars at the time were incredibly dangerous places to be. All the padding, crumple zones and other safety features that we now take for granted had yet to be introduced.

Improving automobile safety was something no one in the Air Force was interested in, but Stapp gradually made it his personal crusade. Each and every time he was interviewed about the Gee Whiz, Kilanowski notes, he made sure to steer the conversation towards the less glamorous subject of auto safety and the need for seatbelts. Gradually Stapp began to make a difference. He invited auto makers and university researchers to view his experiments, and started a pioneering series of conferences. He even managed to stage, at Air Force expense, the first ever series of auto crash tests using dummies. When the Pentagon protested, Stapp sent them some statistics he’d managed to dig up. They showed that more Air Force pilots died each year in car wrecks than in plane crashes.

While Stapp didn’t invent the three point auto seatbelt, he helped test and perfect it. Along with a host of other auto safety appliances. And while Ralph Nader took the spotlight when Lyndon Johnson signed the 1966 law that made seatbelts mandatory, Stapp was in the room. It was one of his real moments of glory.

Ultimately, John Stapp is a hero to me because he was true to his convictions — he had a hypothesis and tested it on himself. In the modern business vernacular, he ate his own dogfood. Over and above that, he did it because he could see a real social benefit. His work, and (more importantly) the way he did it, has directly contributed to saving millions of lives over the last 60 years. Those of us who seek to change our environments, whether at work or home, or in wider society, should heed his example. If there are things that might make a difference, we shouldn’t advocate them for others (even dummies) without checking that they work for us.

Now, the other link. Greg Lambert at the 3 Geeks and a Law Blog has extended the critique of IT failing to spot and deal with the current financial crisis by suggesting that KM is equally to blame.

Knowledge Management was originally an idea that came forth in the library field as a way to catalog internal information in a similar way we were cataloging external information. However, because it would be nearly impossible for a librarian to catalog every piece of internal information, KM slowly moved over to the IT structure by attempting to make the creator of the information (that would be the attorney who wrote the document or made the contact) also be the “cataloger” of the information. Processes were created through the use of technology that were supposed to assist them in identifying the correct classification. In my opinion, this type of self-cataloging and attempt at creating an ultra-structured system creates a process that:

  1. is difficult to use;
  2. doesn’t fit the way that lawyers conduct their day-to-day work;
  3. gives a false sense that the knowledge has been captured and can be easily recovered;
  4. leads to user frustration and “work around” methods; and
  5. results in expensive, underutilized software resources.

In a comment on that post, Doug Cornelius says:

I look at KM 1.0 as being centralized and KM 2.0 as being personalized. The mistake with first generation KM and why it failed was that people don’t want to contribute to a centralized system.

We have to be careful, as Bill Ives points out, not to throw out the baby in our enthusiasm to replace the 1.0 bathwater with nice fresh 2.0 bubbles. However, Greg and Doug do have a point. We made a mistake in trying to replicate, in single inanimate repositories, the hundreds or thousands of databases walking round our organisations.

The human being is an incredible thing. It comes with a motive system and an immensely powerful (but probably unstructured) apparatus for data storage, computation and retrieval. Most (probably all) examples of homo sapiens could not reproduce the contents of this apparatus, but they can produce answers to all sorts of questions. The key to successful knowledge activities in an organisation, surely, is to remember that each of these human components adds a bit of extra knowledge value to the whole.

Potentially, then, we are all knowledge heroes. When we experiment with knowledge, the more people who join in, the better the results. And the result here should be, as Greg points out, to “help us face future challenges.” We can only do that by taking advantage of the things that the people around us don’t realise that they know.

It’s mine and I will choose what to do with it

This isn’t a political blog, and it is a coincidence that I came across a couple of things that chime with each other on the same day that the UK government has started to retreat from its enthusiastic promotion of ID cards for all.

The first juicy nugget came from Anne Marie McEwan. In writing about social networking tools and KM, she linked some of the requirements for successful social software adoption (especially the need for open, trusting cultures) to the use of technology for monitoring.

And therein lies a huge problem, in my strong view. Open, trusting, transparent cultures? How many of them have you experienced? That level of monitoring could be seen as a version of Bentham’s Panopticon. Although the research is now quite old, there was a little publicised (in my view) ESRC-funded research project in the UK, The Future of Work, involving 22 universities and carried out over six years. One of the publications from that research was a book, Managing to Change?. The authors note that:

“One area where ICT is rapidly expanding management choices is in monitoring and control systems … monitoring information could connect with other parts of the HRM agenda, if it is made accessible and entrusted to employees for personal feedback and learning. This has certainly not happened yet and the trend towards control without participation is deeply disquieting.

If ICT-based control continues to be seen as a management prerogative, and the monitoring information is not shared with employees, then this is likely to become a divisive and damaging issue.”

On the other hand, the technology in the right hands and cultures creates amazing potential for nurturing knowledge and innovation.

What struck me about this was that (pace Mary Abraham’s concerns about information disclosure) people quite freely disclose all sorts of information about themselves on public social networking sites such as Facebook, LinkedIn and Twitter. Some of this sharing is certainly excessive and ill-advised, but even people who have serious reservations about corporate or governmental use of personal information lose some of their inhibitions.

Why do they do this? In part it may be naïveté, but I think sometimes this sharing is much more knowing than that. What do they know, then? The difference between this voluntary sharing and forced disclosure is the identification of the recipients and (as Anne Marie recognises) trust. Basically, we share with people, not with organisations.

The second thing I found today was much more worrying. The UK Government is developing a new strategy for sharing people’s personal information between different government departments. It starts from a reasonable position:

We have a simple aim. We want everyone who interacts with Government to be able to establish and use their identity in ways which protect them and make their lives easier. Our strategy seeks to deliver five fundamental benefits. In future, everyone should expect to be able to:

  • register their identity once and use it many times to make access to public services safe, easy and convenient;
  • know that public services will only ask them for the minimum necessary information and will do whatever is necessary to keep their identity information safe;
  • see the personal identity information held about them – and correct it if it is wrong;
  • give informed consent to public services using their personal identity information to provide services tailored to their needs; and
  • know that there is effective oversight of how their personal identity information is used.

All well and good so far, but then buried in the strategy document is this statement (on p.11):

When accessing services, individuals should need to provide only a small amount of information to prove that they are who they say they are. In some situations, an individual may only need to use their fingerprint (avoiding the need to provide information such as their address).

But I can change my address (albeit with difficulty). I can never change my fingerprints. And fingerprints are trivially easy to forge. Today alone, I must have left prints on thousands of surfaces. All it takes is for someone to lift one of those, and they would have immediate access to all sorts of services in my name. (An early scene in this video shows it being done.)

What I really want is something like single-use public keys, where the private key remains in my control. And I want to be able to know and control where my information is being used and shared.
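To make that concrete, here is a hypothetical sketch in Python using the cryptography library’s Ed25519 signatures. It illustrates the principle of a single-use credential only; it is not a description of any actual or proposed identity scheme:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # I generate a single-use key pair and hand only the public half
    # to the service when registering for this one transaction.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # To prove it is me, I sign the service's challenge; the private
    # key stays on my device throughout.
    challenge = b"tx-1234: prove you are the registrant"
    signature = private_key.sign(challenge)

    # The service verifies against the public key it holds.
    try:
        public_key.verify(signature, challenge)
        print("identity confirmed for this transaction only")
    except InvalidSignature:
        print("verification failed")

Unlike a fingerprint, a compromised key can simply be discarded and replaced, and the secret itself never leaves my control.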

Going back to KM, this identity crisis is what often concerns people about organisationally forced (or incentivised) knowledge sharing. Once they share, they lose control of the information they provided. They also run the risk that the information will be misused without reference back to them. It isn’t surprising that people react to this kind of KM in the same way that concerned citizens have reacted to identity cards in the UK: rather than No2ID, we have No2KM (stop the database organisation).