The past few weeks we’ve been reading all about tacit knowledge and the benefits it can have. It took me a while to figure out what tacit knowledge was, and – as can be seen by my last post – I’m still learning. This week I was somewhat glad to take a different perspective, because it’s something that’s been itching at the back of my mind: what if what we know is wrong?
Kumar and Chakrabarti (2012) take on a case study of the Challenger disaster, discussing it in terms of bounded knowledge. I won’t throw a bunch of quotes out this time; instead I want to focus on this idea of “bounded knowledge” and, more specifically, how it relates to tacit knowledge. For all the details, I highly recommend reading Kumar and Chakrabarti (2012) – they lay it out in a way I don’t have space for here. For now, I’m going to lay it out in a simplified way that makes sense to me.
Imagine your brain is a large bin. All the knowledge you have is inside this bin, piled in randomly like a tub of legos straight from the store. The lego pieces are color-coded based on some internal scheme, and they are all different sizes and shapes. When you need the knowledge, you pull it out of the bin. You know – tacitly – that all the legos of the same color relate to each other. It’s like breathing: you don’t have to think about it, you just know, your body knows, and it takes no conscious thought. That tacit knowledge puts a filter on your brain so when you’re working on a “blue” project you automatically – like breathing – pull all the “blue” lego pieces out. Related knowledge is pulled to the forefront.
But what if the piece of knowledge that keeps your blue legos together is coded green?
Kumar and Chakrabarti (2012) say that the green lego is outside the bounds of our awareness. Yes, we have the knowledge. But our tacit brains have already dismissed the knowledge as irrelevant or unimportant to the current project. We’ve filtered it out long before we consciously consider the knowledge.
So tacit knowledge – tacit knowing – can sometimes make us blind to things right in front of us. In a stable environment with time for testing and retesting that may not be a huge issue. But – as can be seen with the Challenger example – in other high-risk environments it can be disastrous.
Which moves me pretty quickly to Massingham (2010) and one quote I can’t resist repeating: “the brain does not work in the way decision trees suggest it should” (p. 465). Well, mine doesn’t, so I can certainly agree with that. I went from bounded awareness to legos. But Massingham (2010) seems to be dancing around a similar topic: in a high-risk environment with no clear way of prioritizing work, we automatically create tacit filters that tell us what to do first. These filters can cause seemingly unimportant requests to fall to the bottom of the pile, where they wait longer than they should for a resolution. For an organization with many requests coming in daily, such prioritization methods can stretch delays into weeks or months. By that point, resolving the request may take more effort than it would have when first presented, or the situation may have changed entirely. This can cost the business money, time, or resources – and in high-risk environments could result in even larger disasters. What if an ambulance doesn’t have insulin because checking the supplies was deemed a low-priority task, and they’ve responded to a call involving a diabetic?
So what do we do? If we can unwittingly ignore knowledge, and if the way we prioritize tasks based on the knowledge we have is wrong, is there any way to “fix” things? It seems the problem lies with our tacit knowledge and tacit filters – that is, with knowledge we can’t articulate and may not even be aware we have. I’m going to make another leap here to Huber (1991). While this article is older than the other two, and is focused more on organizational learning, I want to call out a few links between Huber (1991) and potential answers to these questions. One of the components of learning Huber (1991) discusses is “unlearning”. Not to get too Star Wars, but we must “unlearn what [we] have learned” before we can make new filters for our knowledge. With tacit knowledge this is difficult, and requires a good deal of practice. But Yoda was pretty smart – and if Luke can “unlearn” that a spaceship is too heavy for a single man to lift, I’m sure I can “unlearn” a few things as well.
Another idea Huber (1991) brings up is that sometimes organizations have knowledge, but it doesn’t get to the right place (p. 101). It gets back to that lego metaphor again. Department A has red legos and Department B has yellow legos. Department B could really use a red lego, but has no idea that Department A has red legos. And Department A doesn’t know Department B needs red legos, so they never offer to share. It’s bounded awareness on a larger scale: the organization is a single brain, and departmental boundaries – the tacit knowledge that IT information stays with IT, business knowledge stays with the business team, and operational knowledge stays with operations – allow the organization to be aware only of information coded for the current situation, no matter how relevant or important the information held by another department might be.
Huber, G. P. (1991). Organizational learning: The contributing processes and the literatures. Organization Science, 2(1), 88–115. URL: http://www.jstor.org/stable/2634941
Kumar J, A., & Chakrabarti, A. (2012). Bounded awareness and tacit knowledge: Revisiting Challenger disaster. Journal of Knowledge Management, 16(6), 934–949. doi:10.1108/13673271211276209
Massingham, P. (2010). Knowledge risk management: A framework. Journal of Knowledge Management, 14(3), 464–485. doi:10.1108/13673271011050166