Boundaries on Awareness


The past few weeks we’ve been reading all about tacit knowledge and the benefits it can have. It took me a while to figure out what tacit knowledge was, and – as can be seen in my last post – I’m still learning. This week I was somewhat glad to take a different perspective, because it’s something that’s been itching at the back of my mind: what if what we know is wrong?

Kumar and Chakrabarti (2012) take on a case study of the Challenger disaster, discussing it in terms of bounded knowledge. I won’t throw a bunch of quotes out this time; instead I want to focus on this idea of “bounded knowledge” and, more specifically, how it relates to tacit knowledge. For all the details, I highly recommend reading Kumar and Chakrabarti (2012) – they lay it out in a way I don’t have space for. Right now, I’m going to lay it out in a simplified way that makes sense to me.

Imagine your brain is a large bin. All the knowledge you have is inside this bin, piled in randomly like a tub of legos straight from the store. The lego pieces are color-coded based on some internal scheme, and they are all different sizes and shapes. When you need the knowledge, you pull it out of the bin. You know – tacitly – that all the legos of the same color relate to each other. It’s like breathing: you don’t have to think about it; your body just knows. That tacit knowledge puts a filter on your brain, so when you’re working on a “blue” project you automatically – like breathing – pull out all the “blue” lego pieces. Related knowledge is pulled to the forefront.

But what if the piece of knowledge that keeps your blue legos together is coded green?

Kumar and Chakrabarti (2012) say that the green lego is outside the bounds of our awareness. Yes, we have the knowledge. But our tacit brains have already dismissed the knowledge as irrelevant or unimportant to the current project. We’ve filtered it out long before we consciously consider the knowledge.

So tacit knowledge – tacit knowing – can sometimes make us blind to things right in front of us. In a stable environment with time for testing and retesting that may not be a huge issue. But – as can be seen with the Challenger example – in other high-risk environments it can be disastrous.

Which moves me pretty quickly to Massingham (2010) and one quote I can’t resist repeating: “the brain does not work in the way decision trees suggest it should” (p. 465). Well, mine doesn’t, so I can certainly agree with that. I went from bounded awareness to legos. But Massingham (2010) seems to be dancing around a similar topic: in a high-risk environment with no clear way of prioritizing work, we automatically create tacit filters that tell us what to do first. These filters can cause seemingly unimportant requests to fall to the bottom of the pile, where they wait longer than they should for a resolution. For an organization with many requests coming in daily, such prioritization can stretch delays into weeks or months. By that point, resolving the request may take more effort than it would have when first raised, or the situation may have changed entirely. This can cost the business money, time, or resources – and in high-risk environments it could result in even larger disasters. What if an ambulance doesn’t have insulin because checking the supplies was deemed a low-priority task, and the crew responds to a call involving a diabetic patient?

So what do we do? If we can unwittingly ignore knowledge, and if the way we prioritize tasks based on the knowledge we have is wrong, is there any way to “fix” things? It seems the problem lies with our tacit knowledge and tacit filters – that is, with knowledge we can’t articulate and may not even be aware we have. I’m going to make another leap here to Huber (1991). While this article is older than the other two and focuses more on organizational learning, I want to call out a few links between Huber (1991) and potential answers to these questions. One of the components of learning Huber (1991) discusses is “unlearning”. Not to get too Star Wars about it, but we must “unlearn what [we] have learned” before we can build new filters for our knowledge. With tacit knowledge this is difficult, and it requires a good deal of practice. But Yoda was pretty smart – and if Luke can “unlearn” that a spaceship is too heavy for a single man to lift, I’m sure I can “unlearn” a few things as well.

Another idea Huber (1991) brings up is that sometimes organizations have knowledge, but it doesn’t get to the right place (p. 101). It gets back to that lego metaphor again. Department A has red legos and Department B has yellow legos. Department B could really use a red lego, but has no idea that Department A has any. And Department A doesn’t know Department B needs red legos, so they never offer to share. It’s bounded awareness on a larger scale: the organization is a single brain, and departmental boundaries – the tacit knowledge that IT information stays with IT, business knowledge stays with the business team, and operational knowledge stays with operations – allow the organization to be aware only of information coded for a certain situation, no matter how relevant or important information coded for another department might be.



Huber, G. P. (1991). Organizational learning: The contributing processes and the literatures. Organization Science, 2(1), 88–115.

Kumar J, A., & Chakrabarti, A. (2012). Bounded awareness and tacit knowledge: Revisiting Challenger disaster. Journal of Knowledge Management, 16(6), 934–949. doi:10.1108/13673271211276209

Massingham, P. (2010). Knowledge risk management: A framework. Journal of Knowledge Management, 14(3), 464–485. doi:10.1108/13673271011050166


5 thoughts on “Boundaries on Awareness”

  1. Your lego example is phenomenal. It made so much sense and helped me understand tacit knowledge on a level I hadn’t previously. The quip at the end about bounded awareness made me realize how they related.

    “So tacit knowledge – tacit knowing – can sometimes make us blind to things right in front of us.” When you wrote this, it made me stop and think about whether this has ever happened to me. I couldn’t think of an example (maybe because I’m still processing this), but can you? I’m trying to find an example I can relate to (because however much I understand the Challenger disaster, I can’t quite relate to it).


    • I think finding an example in ourselves is truly difficult because those filters are still running; the few I can think of were only made obvious in hindsight. The one example I can think of offhand comes from a movie trope, because I’m a lot less introspective than I’d like to be.

      Imagine you have this friend you like hanging out with. You go to movies with this friend, go to meals with this friend, talk with this friend on the phone, gossip – all the normal things friends do. Sometimes it’s in a group setting and sometimes it’s just the two of you.

      Then, one day, you discover this friend thought the two of you were dating.

      In hindsight the activities you did together could seem date-like, particularly when you were alone. But because you thought of your friend as a friend and not a potential date, it never occurred to you that your friend might see them differently. You missed all the little signs because you weren’t looking for them, and any time you noticed them they didn’t seem important.

      It’s an example that can be related to a little easier than the Challenger disaster. I still think seeing that internal filter is really difficult, and it’s possible it can only be seen in hindsight. The concept is built around the fact that you are not aware of any boundaries on your knowledge at the time it occurs.

