Communicating the Complexity of Design using Cynefin

In this series of articles, I will unpack personal experiences I have had and what I’ve been doing to help alleviate some of the issues I’ve been confronted with.

There is more to “Design” than pretty pictures on a screen. It runs much deeper. It is problem-solving: driving toward a solution that produces an outcome the organization cares about. These outcomes could affect a variety of states like emotion, trust, engagement, or, in my case, learning. Designing is difficult, but not all design problems are created equal.

Over the course of the last two years, Macmillan Learning has strapped a jetpack onto learning science, research, and design, propelling our team’s reach across the organization as we co-build digital products for higher education with the help of some brilliant stakeholders and amazing students and instructors. While this is exciting, I will admit it has come with some lessons that I’ve been chipping away at little by little. How could it not? Design is messy, and as you scale design, it is bound to get even messier.

The “Squiggle of Design”

As organizations start to scale, discussions around process, standardization, accountability, and an onslaught of stakeholder voices start to rise. Predictability is paramount in many people’s eyes, but when you look at the design squiggle above, you ask yourself, “How do you make that feel predictable?” The lack of predictability makes design and research very difficult to plan in a consistent way. Design always brings unexpected turns that we need to lean into, rather than pull away from.

While rational, this can lead to organizational problems and friction across departments. Developers have a plan and are able to predict their velocity. Marketers have a plan. Sales has a plan. Pretty much every stakeholder in the room has a plan, and while the designer may also have a plan, it may feel erratic as they respond to insights in their designerly ways. Our foundational work encompasses the messy side of early research, which can block everyone else in the room from completing their own plans. This leads to comments like “design is a bottleneck.” A lack of trust in design might start to build, causing areas of the organization to go rogue and skip design completely in an effort to get their side of the plan done.

Lack of trust comes from being hurt by an outcome, such as a missed deadline, without feeling like you completely understand why it happened. This brings on questions like “What does design do?” and “What is your process?”, along with a bunch of seemingly obvious questions that are surprisingly difficult to communicate to non-designers. In some cases, this leads to standardizing a process across the organization. You probably end up with some flavor of a human-centered design process like Design Thinking, and a variety of process maps on a slide deck, in an effort to visually articulate what design is in a way that stakeholders can understand. I’ve made several of these slide decks.

Around this point, design risks being seen as a checklist of activities that you go through: interviews, journey maps, sketching, prototyping, user testing, etc., rinse and repeat. Congrats, we have now effectively gained predictability of process, but it doesn’t take long to learn that we still lack predictability of outcome, continue to miss deadlines, and continue to be called a “bottleneck.” Yep, been here too.

As I mentioned earlier, problems come in all shapes and sizes, and standardization, while necessary, by definition risks treating every problem the same. That means there will be times when designers, and everyone around them, do things they know in their gut will lead to no valuable learning, just going through the motions as they check each box, feeling good that they can better communicate their work.

The issue is that if everything goes through a checklist of tasks, and many of those tasks yield minimal to no return on our investment, then the problems of little value will eat up time that could be spent seeking out areas of differentiation and experiences that truly delight our users. Living in this world will surely stifle innovation.

Welcome Cynefin

When working in a large organization, words matter, as demonstrated by the slide decks built to help communicate with the organization. Stakeholders want to be able to report back to their own stakeholders and explain how the project is going. When designers adapt to problems and change the process to fit the need, it may be difficult for non-designers to follow, causing them to feel lost and concerned. There is no reason we shouldn’t be able to get the best of both worlds: give our team the flexibility and permission to adapt, but at the same time communicate the approach and rationale to stakeholders so they feel more connected. Most importantly, this opens up the opportunity to get to our original intention of doing “just enough” research, getting to the solutions that stakeholders truly care about faster and more efficiently.

Cynefin is a framework developed to aid decision-making. It is divided into five domains, as described in the image below: Obvious, Complicated, Complex, Chaotic, and Disorder.

The best way I can explain what each means is to give you some examples of how I think about them through the lens of UX. You can also hear David J. Snowden describe the framework, which he developed in 1999 based on concepts from knowledge management and organizational strategy, in the video below.

Obvious

With an obvious problem, there is one clear solution or interaction that your customer expects. Anything other than that might cause confusion and hurt the experience. You might categorize a login flow as an obvious problem. When designing a login, there is a very high probability that your end result will look something like this.

Your page will likely have a couple of input fields with a button that says “Log In,” a place to register for a new account, and some way of retrieving a password if you forgot it. Anything that drifts too far from this might distract and confuse your audience.

With obvious problems, you sense the situation, categorize the problem, and respond to it. Very little research needs to be done. Just get it done and move on.

This doesn’t mean obvious solutions aren’t tested. I typically sprinkle “obvious” problems into the mix when testing less obvious ones. So if I am testing a complex flow, I will start the prototype from the login screen and not even mention it or spend time trying to gain insights on it, since every minute counts and I have many other, more critical research questions to answer. If something goes horribly wrong, you will likely be able to sense it and respond, moving the problem into the Complicated realm.

From a stakeholder-engagement perspective, more likely than not, I will not waste their time asking questions about obvious problems, because I need them to spend that time helping me or my team with other problems. They, like you, have booked calendars. You want to make sure they feel like they are bringing value every time they meet with you.

This is not to say that login flows should never be innovated on. If the context in which the problem is experienced calls for innovation, or your company can somehow capitalize on creating a better login experience, then login is no longer an obvious problem. The problem shifts from “How do we get our customers authenticated into our site?” to something like “How do we reduce the number of passwords our customers need to remember?” It likely falls in the Complex realm. You see this with authentication solutions from Google and Facebook, and with biometric and facial-recognition features on iOS and Android smartphones. Those organizations have spent a lot of time innovating on login, and are reaping the rewards of the convenience they offer customers who no longer have to remember another password.

Complicated

While we aren’t breaking any new ground with this type of problem, there isn’t one clear pattern to go with like there is with an Obvious problem. A couple of examples might include search or navigation.

The approach we take with a navigation problem requires a deep understanding of the content, its organization, its levels of hierarchy, and a slew of other things. There will likely be some card-sorting activities and testing of various designs, but in the end, the solution will likely fall into one of several buckets. While it is more work, the outcome is still somewhat predictable.

Some additional activities that help speed up complicated problems are audits of existing solutions inside and outside of your industry. A fun and easy way to do that is by facilitating a Lightning Demo, one of the steps of the Google Ventures Design Sprint. Use it to find out what you and your stakeholders like about each solution and how they fit your context, then pick a couple of approaches and prototype with your content to test with your customers.

Complicated problems need dedicated research. While existing patterns may offer a jumping-off point, there is no certainty that you and your team will pick the correct one, and you are bound to be surprised by the results, which will require a fair amount of iteration.

Depending on how complicated the problem is, I tend to favor the RITE (Rapid Iterative Testing and Evaluation) method for early stages of complicated problem-solving, and slowly move toward a more traditional approach in later iterations if the problem requires a high level of certainty that we get it right.

If you want to keep some rigor and save some time, you could also go with an unmoderated approach using something like www.UserTesting.com. Just bear in mind that while unmoderated tests might feel like a time-saver, if you haven’t run them before there is a lot of risk in creating a script and scenario that your participants understand and can provide usable feedback on. My suggestion is to test your test: run it with just one participant and see whether they get hung up on a task instruction, then make your tweaks and send it off to the rest of your participants. Another tip is to give each participant a post-task survey about their overall satisfaction with the flow. When you wake up the next morning and see that 20 people have gone through your test, it is helpful to have a signal that points you right to the people who had problems. If you have 10 people doing a 15-minute test, that is 2.5 hours of footage, and synthesis adds a multiplier of roughly 1.5, which runs the total up to more than 6 hours of work.
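As a back-of-the-envelope sketch of that math (the 1.5× synthesis multiplier is a rule of thumb, not a fixed constant — adjust it to your own team’s experience):

```python
def review_hours(participants: int, minutes_per_test: float,
                 synthesis_multiplier: float = 1.5) -> float:
    """Estimate total hours to review and synthesize unmoderated tests.

    Total work = raw footage time + synthesis time, where synthesis is
    assumed to take `synthesis_multiplier` times the footage length.
    """
    footage_hours = participants * minutes_per_test / 60
    return footage_hours + footage_hours * synthesis_multiplier

# 10 participants x 15-minute tests -> 2.5 h footage + 3.75 h synthesis
print(review_hours(10, 15))  # 6.25
```

A post-task satisfaction score per participant lets you skip straight to the sessions that actually need review, shrinking the effective `participants` count in this estimate.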

Complex

This is the area of differentiation. The problems that live here, if solved successfully, will make your product special. This area is emergent, however, and there are no clear patterns to select from. It is often unpredictable and requires much more probing and analyzing before jumping into a solution.

An example might be Google Sheets’ Explore feature. I don’t work for Google, but I’d imagine that their driving question might have sounded something like “How might we make every person that uses Google Sheets feel like they can quickly interpret data, regardless of familiarity with spreadsheets?”

This question is rich, and until I saw Google’s answer I wouldn’t have been able to tell you the answer with any level of certainty. Complex problems require a much higher investment in time and resources to get right, but if you succeed, that investment should be paid back in the form of customer delight.

Communication and a keen sense of the business objectives are essential here. For complex problems, you should gather a diverse group of people to help you come up with innovative, creative solutions. There are many approaches you could take here, including a traditional HCD process, but many times I select some flavor of design sprint.

Design sprints, which many see as a recipe for innovation, can prove really valuable with these types of problems. While also useful in some non-complex (complicated) scenarios, this process takes a group of cross-disciplinary participants through five days of work that culminate in a high-fidelity prototype being tested with customers. In the video below, Jake Knapp, author of the book Sprint, gives a 90-second explanation of the sprint process.

Chaotic

With any luck, chaotic scenarios are kept to a minimum, but they do and will happen. This type of problem might start with an email with “Urgent!” in the subject line. In the email, you learn that a critical flow is not working and the customer-service reps are being swamped with complaints. These requests will typically come from “the top,” ordering everyone to stop what they are doing and address the problem right now. This is more than a “fire”; this is “being on fire.” Don’t overthink it… stop, drop, and roll!

When you run into this, you want to act based on what you know to be true today, then sense the results, and respond after you determine whether the patch was successful. Notice I call this a patch. It is not a solution, so you may need to revisit it and come up with a better approach using Obvious, Complicated, or Complex tactics.

If this happens a lot, then it is likely a HIPPO (Highest Paid Person’s Opinion) situation. If you determine that you have a HIPPO in the room, there are a couple of possible reasons:

  • It might be a person who is not fully bought into the design-based process and needs some convincing. This will require actions that build trust, and alliances with people in the organization who can champion your approach. These people may be reactionary, jumping right to solutions like “move the button here” rather than taking a step back to investigate the root cause.
  • You might work at a “visionary” organization that is recreating an industry, spotting an emerging mega-trend before anyone else sees or acts on it. This is an opportunity to completely reshape an industry, and time is of the essence. Think Reed Hastings and Netflix, as they reshaped the media-industry landscape.

Disorder

Sometimes it isn’t clear which of the other four domains is dominant, and people tend to fall back on decision-making techniques that are known and comfortable. This is where I started, which, as mentioned earlier in this article, is not optimal. If you find yourself in this area, you should gather whatever information is required to move the problem into the Obvious, Complicated, or Complex domain.

Now what?

Now that we have placed our list of problems into buckets of complexity, we are able to build a better roadmap, and, more importantly, we can do it while sharing a common vernacular with our stakeholders. This takes us a few steps closer to what developers do when they t-shirt-size development stories: communicating relative effort without giving exact dates, which allows for some flexibility in approach.
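As a hypothetical sketch of that bucketing (the problem names, domain assignments, and sizes below are illustrative, not from any real backlog), tagging each problem with a Cynefin domain maps naturally onto a rough, t-shirt-sized research effort:

```python
# Rough research effort per Cynefin domain (illustrative mapping only).
DOMAIN_TSHIRT_SIZE = {
    "obvious": "S",       # reuse the expected pattern; no dedicated research
    "complicated": "M",   # audit patterns, card sorts, iterative testing
    "complex": "L",       # probe first: design sprints, generative research
    "chaotic": "XS",      # patch now, then re-enter another domain later
}

# A hypothetical backlog tagged by domain.
backlog = [
    ("login flow", "obvious"),
    ("site navigation", "complicated"),
    ("data-insights assistant", "complex"),
]

for problem, domain in backlog:
    print(f"{problem}: {domain} -> size {DOMAIN_TSHIRT_SIZE[domain]}")
```

The point is not the code itself but the shared vocabulary: a stakeholder who hears “this one is Complex, so it’s a large research effort” gets a usable planning signal without a fake delivery date.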

Beyond Cynefin

While Cynefin has been a really helpful lens for looking at design problems and has helped me a lot, it also feels incomplete, because we don’t solve problems in a vacuum. There are people we need to work with: stakeholders, other designers. We also work within a variety of environments that have their own values. If your organization is large enough, there might be various cultures to navigate across groups under the same roof, as well as other factors that might get in the way of success from a design and innovation perspective.

In the next article, I will explore The Stacey Matrix contextualized around situations I’ve encountered as a User Experience designer.

Alex Britez

Designer, Developer, Dad & maker of things that teach stuff. Sr Designer at Microsoft VS Code & MakeCode & Adjunct Instructor @ NYU’s Digital Media for Learning