Problem Solving

Dealing With Complexity – Solving Wicked Problems

Allen Downey, in his book Think Complexity, identifies a shift in the axes of scientific models:

Equation-based → Simulation-based

Analysis → Computation

These new models allow us not only to predict behavior, but also to introduce randomness and give agents more detail than we see in classical approaches like Game Theory.

DARPA and various other government agencies and corporations led the way in the early years of simulation. Slowly it filtered down through the intellectual strata until some K-12 programs started using NetLogo to teach kids about cell structures, the behavior of gas molecules and emergent complexity. The options at our fingertips still aren’t anywhere near as good as they will be in five or so years, but they’re what we have to work with.

Allen goes through several pages of changes in scientific modeling caused by the equation-to-simulation and analysis-to-computation shift; you can read it on pages 16-22 if you’re curious (the book is free in PDF form).

This brings me around to the other point of my post: humans are horrible at working with complexity for a lot of reasons. One of the biggest I’ve seen so far is working memory. There is too much information, and people can’t sort through it quickly enough. And even when they can, they can’t hold enough of it inside of their heads to make the connections they need to understand their situation and plan out possible contingencies.

The average person can hold 5-9 objects in their working memory at a time, which seriously hinders their ability to figure out large complex scenarios with hundreds of thousands of probabilities. Simulations can work around this by giving you real time feedback on changes in variables while incorporating randomness, but for regular analysis finding ways to “visualize” information seems to work well, more on this in a minute.
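Agent-based models make this concrete. Below is a toy, Schelling-style segregation sketch in Python (the grid size, vacancy rate and threshold are invented for illustration, not taken from Downey or any NetLogo curriculum): a few simple local rules produce global clustering that would be hopeless to track in working memory, but that a simulation surfaces immediately.

```python
import random

# Toy Schelling-style model: agents of two types relocate whenever fewer
# than THRESHOLD of their occupied neighbors share their type. Simple
# local rules yield global clustering nobody "planned" -- exactly the
# kind of emergent result a simulation shows without anyone having to
# hold hundreds of interactions in their head.
random.seed(1)
SIZE, THRESHOLD = 20, 0.5

def neighbors(grid, r, c):
    """Return (same, total) occupied-neighbor counts for cell (r, c)."""
    me, same, total = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                nb = grid[(r + dr) % SIZE][(c + dc) % SIZE]  # toroidal wrap
                if nb is not None:
                    total += 1
                    same += (nb == me)
    return same, total

def mean_like_share(grid):
    """Average fraction of like-typed neighbors across all agents."""
    shares = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None:
                same, total = neighbors(grid, r, c)
                if total:
                    shares.append(same / total)
    return sum(shares) / len(shares)

# ~10% empty cells, the rest randomly typed "A" or "B".
grid = [[None if random.random() < 0.1 else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]
before = mean_like_share(grid)

for _ in range(60):  # each sweep, unhappy agents jump to a random empty cell
    moved = 0
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] is None]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and empties:
                same, total = neighbors(grid, r, c)
                if total and same / total < THRESHOLD:
                    er, ec = empties.pop(random.randrange(len(empties)))
                    grid[er][ec], grid[r][c] = grid[r][c], None
                    empties.append((r, c))
                    moved += 1
    if moved == 0:
        break

print(f"like-neighbor share: {before:.2f} -> {mean_like_share(grid):.2f}")
```

Re-running with a different `THRESHOLD` gives exactly the kind of real-time feedback on a variable change described above.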

Nassim Nicholas Taleb’s black swan theory is one of the many articulations of the “Unknown Unknown” categories we have to deal with. They are the side effect of an environment too complex for most people to understand. He used two examples, the 9/11 attacks and the mortgage meltdown:
An example Taleb uses to explain his theory is the events of 11 September 2001. 9/11 was a shock to all common observers. Its ramifications continue to be felt in many ways: increased levels of security; “preventive” strikes or wars by Western governments. The coordinated, successful attack on the World Trade Center and the Pentagon using commercial airliners was virtually unthinkable at the time. However, with the benefit of hindsight, it has come to be seen as a predictable incident in the context of the changes in terrorist tactics.

Common observers didn’t think it was possible, but many experts had already considered such a scenario:

After the 1988 bombing of Pan Am Flight 103 over Lockerbie, Scotland, Rescorla worried about a terrorist attack on the World Trade Center. In 1990, he and a former military colleague wrote a report to the Port Authority of New York and New Jersey, which owns the site, insisting on the need for more security in the parking garage. Their recommendations, which would have been expensive, were ignored, according to James B. Stewart‘s biography of Rescorla, Heart of a Soldier.[7]

After Rescorla’s fears were borne out by the 1993 World Trade Center bombing, he gained greater credibility and authority, which resulted in a change to the culture of Morgan Stanley.[7] He believed the firm should have moved out of the building, and he continued to feel, as did his old American friend from Rhodesia, Dan Hill, that the World Trade Center was still a target for terrorists, and that the next attack could involve a plane crashing into one of the towers.[8] He recommended to his superiors at Morgan Stanley that the company leave Manhattan. Office space and labor costs were lower in New Jersey, and the firm’s employees and equipment would be safer in a proposed four-story building. However, this recommendation was not followed, as the company’s lease at the World Trade Center did not terminate until 2006. At Rescorla’s insistence, all employees, including senior executives, then practiced emergency evacuations every three months.[9]

Feeling that the authorities lost legitimacy after they failed to respond to his 1990 warnings, he concluded that employees of Morgan Stanley, which was the largest tenant in the World Trade Center (occupying 22 floors), could not rely on first responders in an emergency, and needed to empower themselves through surprise fire drills, in which he trained employees to meet in the hallway between stairwells and go down the stairs, two by two, to the 44th floor.[7]

  • March 2001 – Italian intelligence warns of an al Qaeda plot to mount a massive strike in the United States involving aircraft, based on their wiretap of an al Qaeda cell in Milan.
  • July 2001 – Jordanian intelligence told US officials that al-Qaeda was planning an attack on American soil, and Egyptian intelligence warned the CIA that 20 al Qaeda Jihadists were in the United States, and that four of them were receiving flight training.
  • August 2001 – The Israeli Mossad gives the CIA a list of 19 terrorists living in the US and says that they appear to be planning to carry out an attack in the near future.
  • August 2001 – The United Kingdom is warned three times of an imminent al Qaeda attack in the United States, the third specifying multiple airplane hijackings. According to the Sunday Herald, the report is passed on to President Bush a short time later.
  • September 2001 – Egyptian intelligence warns American officials that al Qaeda is in the advanced stages of executing a significant operation against an American target, probably within the US.

Likewise, the mortgage meltdown was technically a black swan, but it was easily predictable if you saw the patterns of ownership, which clearly indicated fraud.

Taleb’s answer to this problem is not to try to predict possible future scenarios, but simply to make yourself more resilient. I don’t disagree with resilience, but I think an expanded approach can be taken here. The flaw that led to the black swans was the inability to make connections between pieces of information. If we don’t know which scenarios are most likely, we could just as easily end up putting too much effort into defense instead of looking for exponential returns on our resources.

How do we know when we’re looking at a very complex problem? Complex systems tend to be made up of diverse agents with interdependent relationships that change over time. So the question and the answers are changing. The behavior, emotions and motivations of the people in the problem are shifting. The connections between them also change. What does that mean?

For that we turn to Rittel and Webber:
Ten Criteria for Wicked Problems

Rittel and Webber characterise wicked problems by the following 10 criteria. (It has been pointed out that some of these criteria are closely related or have a high degree of overlap, and that they should therefore be condensed into four or five more general criteria. I think that this is a mistake, and that we should treat these criteria as 10 heuristic perspectives which will help us better understand the nature of such complex social planning issues.)

1. There is no definite formulation of a wicked problem.

“The information needed to understand the problem depends upon one’s idea for solving it. This is to say: in order to describe a wicked problem in sufficient detail, one has to develop an exhaustive inventory for all the conceivable solutions ahead of time.” [This seemingly incredible criterion is in fact treatable. See below.]
2. Wicked problems have no stopping rules.

In solving a tame problem, “… the problem-solver knows when he has done his job. There are criteria that tell when the solution or a solution has been found”. With wicked problems you never come to a “final”, “complete” or “fully correct” solution – since you have no objective criteria for such. The problem is continually evolving and mutating. You stop when you run out of resources, when a result is subjectively deemed “good enough” or when we feel “we’ve done what we can…”
3. Solutions to wicked problems are not true-or-false, but better or worse.

The criteria for judging the validity of a “solution” to a wicked problem are strongly stakeholder dependent. However, the judgments of different stakeholders …”are likely to differ widely to accord with their group or personal interests, their special value-sets, and their ideological predilections.” Different stakeholders see different “solutions” as simply better or worse.
4. There is no immediate and no ultimate test of a solution to a wicked problem.

“… any solution, after being implemented, will generate waves of consequences over an extended – virtually an unbounded – period of time. Moreover, the next day’s consequences of the solution may yield utterly undesirable repercussions which outweigh the intended advantages or the advantages accomplished hitherto.”
5. Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.

“… every implemented solution is consequential. It leaves “traces” that cannot be undone … And every attempt to reverse a decision or correct for the undesired consequences poses yet another set of wicked problems … .”
6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.

“There are no criteria which enable one to prove that all the solutions to a wicked problem have been identified and considered. It may happen that no solution is found, owing to logical inconsistencies in the ‘picture’ of the problem.”
7. Every wicked problem is essentially unique.

“There are no classes of wicked problems in the sense that the principles of solution can be developed to fit all members of that class.” …Also, …”Part of the art of dealing with wicked problems is the art of not knowing too early which type of solution to apply.” [Note: this is a very important point. See below.]
8. Every wicked problem can be considered to be a symptom of another [wicked] problem.

Also, many internal aspects of a wicked problem can be considered to be symptoms of other internal aspects of the same problem. A good deal of mutual and circular causality is involved, and the problem has many causal levels to consider. Complex judgements are required in order to determine an appropriate level of abstraction needed to define the problem.
9. The causes of a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution.

“There is no rule or procedure to determine the ‘correct’ explanation or combination of [explanations for a wicked problem]. The reason is that in dealing with wicked problems there are several more ways of refuting a hypothesis than there are permissible in the [e.g. physical] sciences.”
10. [With wicked problems,] the planner has no right to be wrong.

In “hard” science, the researcher is allowed to make hypotheses that are later refuted. Indeed, it is just such hypothesis generation that is a primary motive force behind scientific development (Ritchey, 1991). Thus one is not penalised for making hypotheses that turn out to be wrong. “In the world of … wicked problems no such immunity is tolerated. Here the aim is not to find the truth, but to improve some characteristic of the world where people live. Planners are liable for the consequences of the actions they generate …”

How, then, does one tackle wicked problems? Some 20 years after Rittel & Webber wrote their article, Jonathan Rosenhead (1996), of the London School of Economics, presented the following criteria for dealing with complex social planning problems – criteria that were clearly influenced by the ideas presented by Rittel, Webber and Ackoff.
  • Accommodate multiple alternative perspectives rather than prescribe single solutions
  • Function through group interaction and iteration rather than back-office calculations
  • Generate ownership of the problem formulation through stakeholder participation and transparency
  • Facilitate a graphical (visual) representation of the problem space for the systematic group exploration of a solution space
  • Focus on relationships between discrete alternatives rather than continuous variables
  • Concentrate on possibility rather than probability

The morphology grid is somewhat popular for mapping out these types of problems: figure out the “finite states”, the root variables that cause change, and then map them out in a grid format. Mark Proffitt’s Predictive Innovation does a good job of this.
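The mechanical part of a morphological grid is easy to sketch. The axes and states below are invented placeholders, not taken from Proffitt’s method; the point is only that once you pick the root variables and their finite states, enumerating every cell of the grid is trivial:

```python
from itertools import product

# Morphological grid sketch: each axis is a root variable with a finite
# set of states; every combination of states is one cell of the grid.
# These axes are invented for illustration.
axes = {
    "delivery": ["in person", "remote", "automated"],
    "payment":  ["one-time", "subscription", "free + ads"],
    "audience": ["consumers", "small business", "enterprise"],
}

# Cartesian product of all state lists -> one dict per grid cell.
configurations = [dict(zip(axes, combo)) for combo in product(*axes.values())]
print(len(configurations))  # 3 * 3 * 3 = 27 cells in the grid
```

In practice you would then prune combinations that are mutually inconsistent (cross-consistency assessment), which is where the real analytical work lives.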

Likewise, Lt. General Paul Van Riper notes that the best leaders tend to be good at managing complex problems.


The Mathematician’s Trap & How Intelligent People Deceive Themselves

This little story about the great mathematician John Von Neumann has always been one of my favorites. I will tell it the way I first heard it. I have since heard a few variations on the story, leading me to think that there may be a component of Urban Legend to it. But I really don’t care, because I think it’s such a great story that it’s worth retelling.

John Von Neumann was considered by many to be one of the most brilliant minds of the twentieth century. He reportedly had an IQ of 180. He was a pioneer of Game Theory, which was very important during the nuclear arms race. (Because GT assumes that all players act in their own best enlightened self-interest, GT turned out to be a much better model for evolutionary biology than for human behavior.) He was also one of the two people (Alan Turing being the other) credited with being the father of the modern computer.

The story goes that someone once posed to Von Neumann the following problem:

Two trains are 20 miles apart on the same track, heading towards each other at 10 miles per hour on a collision course. At the same time, a bee takes off from the nose of one train at 20 miles per hour, towards the other train. As soon as the bee reaches the other train, it makes an instant U-turn and heads off at 20 miles per hour back towards the first train. It continues to do this until the trains collide, killing the bee.

Back to our friend the bee. We now have an expression for how far the bee flies after n legs:

d_n = 2D × (1/2) × (3^n − 1)/3^n = D(3^n − 1)/3^n

and we need to solve it for how far the bee flies before dying. In actuality, the bee will stop flying (okay, “in actuality” this would never happen) when the distance between the trains is less than the body length of the bee. However, since the summation quickly converges on the solution, we can assume that the bee is a point, ignore the famous paradox, and do the summation up to n = infinity:

d_n = D(3^n − 1)/3^n = D(3^n/3^n − 1/3^n) = D(1 − 1/3^n)

Quick refresher on infinite limits: as we let n get infinitely large, 3^n approaches infinity, and 1/3^n approaches zero. Therefore the limit as n approaches infinity is

d = lim(n→∞) D(1 − 1/3^n) = D(1 − 0) = D = 20 miles!

We have just solved this problem by the infinite series method. Infinite series are very important in Mathematical Analysis and Pre-Calculus, as they form the basis for derivatives, integration and everything that is Calculus and Differential Equations. That’s why all students of Math, Physics and Engineering get to do a ton of infinite series problems before they graduate. The reason I call this problem the “Mathematician’s Trap” is that virtually all mathematicians who see this problem will try to solve it the way we just did.

However, if you were to give the problem to someone who has only had basic Algebra, they might solve it differently. The trains crash at the midway point, which is 10 miles out. Since each train is going 10 mph, this takes one hour. During that same hour, the bee is flying at 20 mph, so the bee flies 20 miles! Wow, that was a lot easier!
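For the skeptical, both routes can be checked numerically. This sketch sums the bee’s legs one at a time (the infinite-series route, truncated once the gap is negligible) and compares the result against the one-line kinematic shortcut:

```python
# Bee-and-trains problem, solved both ways numerically.
D, train_speed, bee_speed = 20.0, 10.0, 20.0

# Infinite-series route: during each leg, the bee and the oncoming train
# close the gap at (bee_speed + train_speed) mph; after the leg, the gap
# between the trains has shrunk to one third of its previous value.
gap, bee_distance = D, 0.0
while gap > 1e-12:
    leg_time = gap / (bee_speed + train_speed)
    bee_distance += bee_speed * leg_time
    gap -= 2 * train_speed * leg_time  # both trains advance during the leg

# Shortcut: the trains collide after D / (2 * train_speed) = 1 hour,
# and the bee flies at 20 mph the whole time.
shortcut = bee_speed * (D / (2 * train_speed))

print(bee_distance, shortcut)  # both routes converge on 20 miles
```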

The moral of the story is that being smarter or better educated can often put you at a disadvantage. When someone is trained to do something a certain way, that action is virtually automatic. It takes great insight to be able to “step outside of the box” and ask if there’s an easier way to do it. (I’m currently working on another post on just this subject which should hopefully be up soon.) Even brilliant mathematicians will fall into the trap, which brings us back to John Von Neumann.

When posed with the above problem (or some variation of it), JVN took all of five to ten seconds to come up with the correct solution. This floored the questioner who said “I’m impressed that you didn’t fall for the Mathematician’s Trap.” After getting a perplexed look from our genius, he asked “How did you solve the problem?”

“By infinite series, of course!”


 Another good one:

Highly intelligent people may turn out to be rather poor thinkers.

They may need as much, or more, training in thinking skills than other people.  This is an almost complete reversal of the notion that highly intelligent people will automatically be good thinkers.

1)    A highly intelligent person can construct a rational and well-argued case for virtually any point of view.  The more coherent this support for a particular point of view, the less the thinker sees any need actually to explore the situation.  Such a person may then become trapped into a particular view simply because he can support it (see Hypothesis Traps).

2)    Verbal fluency is often mistaken for thinking.  An intelligent person learns this and is tempted to substitute one for the other.

3)    The ego, self-image and peer status of a highly intelligent person are too often based on that intelligence.  From this arises the need to be always right and clever.

4)    The critical use of intelligence is always more immediately satisfying than the constructive use.  To prove someone else wrong gives you instant achievement and superiority.  To agree makes you seem superfluous and a sycophant.  To put forward an idea puts you at the mercy of those on whom you depend for evaluation of the idea.  Therefore, too many brilliant minds are trapped into this negative mode (because it is so alluring).

5)    Highly intelligent minds often seem to prefer the certainty of reactive thinking (solving puzzles, sorting data) where a mass of material is placed before them and they are asked to react to it.  This is called the “Everest effect” since the existence of a tough mountain is sufficient reason for the best climbers to react to it.  In projective thinking, the thinker has to create the context, the concepts, and the objectives.  The thinking has to be expansive and speculative.  Through natural inclination or perhaps early training, the highly intelligent mind seems to prefer the reactive type of thinking.  Real life more usually demands the projective type.

6)    The sheer physical quickness of the highly intelligent mind leads it to jump to conclusions from only a few signals.  The slower mind has to wait longer and take in more signals and may reach a more appropriate conclusion.

7)    The highly intelligent mind seems to prefer – or is encouraged – to place a higher value on cleverness than on wisdom.  This may be because cleverness is more demonstrable.  It is also less dependent on experience (which is why physicists and mathematicians often make their “genius” contributions at an early age).

See also:
Problem Solving Psychology

The Dangerous Art of the Right Question

Real questions, useful questions, questions with promising attacks, are always motivated by the specific situation at hand.  They are often about situational anomalies and unusual patterns in data that you cannot explain based on your current mental model of the situation, like Poirot’s letter.  Real questions frame things in a way that creates a restless tension, by highlighting  the potentially important stuff that you don’t know. You cannot frame a painting without knowing its dimensions. You cannot frame a problem without knowing something about it. Frames must contain situational information.

The same dynamic occurs at personal and global levels. Here are terrible personal questions:

  1. How can I be happy?
  2. What career do I want?
  3. How can I lose weight?

Here are examples of corresponding questions that are useful:

  1. Are people with strong friendships happier than loners? (Answer: yes)
  2. What is the top reason people leave jobs? (Answer: they dislike their immediate manager)
  3. What causes food addiction? (Answer: carefully-engineered concoctions of salt, sugar and fat)

Here are terrible global questions:

  1. How can we create peace in the Middle East?
  2. What can we do about global warming?
  3. How can we reform Wall Street?

Here are potentially useful corresponding questions:

  1. Do Israelis and Arabs communicate in different ways? (Answer: yes)
  2. Why are summers getting warmer and wetter, while winters are getting colder and snowier? (Answer: I don’t know; climatologists might)
  3. Is the principle of limited liability a necessary condition for a free market economy? (Answer: I don’t know)

  1. The Poirot Method: This is the basic trail-of-clues method of focusing on an anomaly that your current mental model cannot account for. Since my colleague Dave and I often argue about Poirot vs. Holmes, let me throw the Holmes camp a bone (heh!): the classic Holmes’ question of the “dog that didn’t bark in the night” is an excellent insight question.
  2. The Jack Welch Method: Also known as the “stretch.” You ask ridiculously extreme versions of ordinary formulaic questions. Instead of asking “How do we grow market share 3% in the next year?” You ask, “How do we grow our market 10x in the next 3 months?” The question so clearly strains and breaks the existing mental model that you are forced to think in weirder ways (the question is situation-driven because numbers like 3%, 1 year, 10x and 3 months will need to come from actual knowledge).
  3. The 42 Method: Sometimes the right answer is easier to find than the right question. Entrepreneurs are often in this boat. They don’t know who will use their product or why, but they just know that their product is the answer to some important question somewhere.  They are often wrong, but at least they are productively wrong. If you don’t get the “42” reference, don’t worry about it.



SPACED REPETITION – Learning & Memorization

Far too long for me to properly quote; just take my advice and read it if you plan on learning anything in the next few years. It even quotes the classic “You And Your Research”.
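For the curious, the best-known spaced repetition scheduler is the SM-2 family behind SuperMemo and (in modified form) Anki. The sketch below follows the published SM-2 constants but is a simplification, not a drop-in replacement for any real app’s scheduler:

```python
# Simplified SM-2-style scheduler. Given a 0-5 quality grade for each
# review, it stretches the interval when recall is easy and resets it
# when recall fails. Constants follow the published SM-2 description.
def sm2_review(quality, repetitions, interval, ease):
    """Return updated (repetitions, interval_days, ease) after one review."""
    if quality < 3:                      # failed recall: start the card over
        return 0, 1, ease
    if repetitions == 0:
        interval = 1                     # first successful review: 1 day
    elif repetitions == 1:
        interval = 6                     # second: 6 days
    else:
        interval = round(interval * ease)  # after that, multiply by ease
    # Ease factor drifts with grade quality, floored at 1.3 as in SM-2.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions + 1, interval, ease

# A card recalled successfully four times in a row drifts out to
# intervals of weeks, which is the whole point of spacing.
state = (0, 0, 2.5)                      # (repetitions, interval, ease)
for grade in (5, 5, 4, 5):
    state = sm2_review(grade, *state)
print(state)
```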

Previous Posts On Learning, Memory & Analysis:

Index of Memorization Methods

You And Your Research – Richard Hamming

CARVER Analysis Process

Advanced Analytical Techniques Used By Intelligence Analysts

2 Ways Of Solving Hard Problems

Top 10 Data Mining Mistakes (for Heuristics Analysis)

How Your Memory Works

Neuroplasticity Archetypes

Daniel Tammet – Memorization & Ways Of Knowing

Business Problem Solving

Resourcefulness & Control

Like real world resourcefulness, conversational resourcefulness often means doing things you don’t want to. Chasing down all the implications of what’s said to you can sometimes lead to uncomfortable conclusions. The best word to describe the failure to do so is probably “denial,” though that seems a bit too narrow. A better way to describe the situation would be to say that the unsuccessful founders had the sort of conservatism that comes from weakness. They traversed idea space as gingerly as a very old person traverses the physical world. [1]

The unsuccessful founders weren’t stupid. Intellectually they were as capable as the successful founders of following all the implications of what one said to them. They just weren’t eager to. Link

There are great startup ideas lying around unexploited right under our noses. One reason we don’t see them is a phenomenon I call schlep blindness. Schlep was originally a Yiddish word but has passed into general use in the US. It means a tedious, unpleasant task.

No one likes schleps, but hackers especially dislike them. Most hackers who start startups wish they could do it by just writing some clever software, putting it on a server somewhere, and watching the money roll in—without ever having to talk to users, or negotiate with other companies, or deal with other people’s broken code. Maybe that’s possible, but I haven’t seen it.

One of the many things we do at Y Combinator is teach hackers about the inevitability of schleps. No, you can’t start a startup by just writing code. I remember going through this realization myself. There was a point in 1995 when I was still trying to convince myself I could start a company by just writing code. But I soon learned from experience that schleps are not merely inevitable, but pretty much what business consists of. A company is defined by the schleps it will undertake. And schleps should be dealt with the same way you’d deal with a cold swimming pool: just jump in. Which is not to say you should seek out unpleasant work per se, but that you should never shrink from it if it’s on the path to something great.

The most dangerous thing about our dislike of schleps is that much of it is unconscious. Your unconscious won’t even let you see ideas that involve painful schleps. That’s schlep blindness.

How do you overcome schlep blindness? Frankly, the most valuable antidote to schlep blindness is probably ignorance. Most successful founders would probably say that if they’d known when they were starting their company about the obstacles they’d have to overcome, they might never have started it. Maybe that’s one reason the most successful startups of all so often have young founders.

In practice the founders grow with the problems. But no one seems able to foresee that, not even older, more experienced founders. So the reason younger founders have an advantage is that they make two mistakes that cancel each other out. They don’t know how much they can grow, but they also don’t know how much they’ll need to. Older founders only make the first mistake.

Ignorance can’t solve everything though. Some ideas so obviously entail alarming schleps that anyone can see them. How do you see ideas like that? The trick I recommend is to take yourself out of the picture. Instead of asking “what problem should I solve?” ask “what problem do I wish someone else would solve for me?” If someone who had to process payments before Stripe had tried asking that, Stripe would have been one of the first things they wished for.


What would someone who was the opposite of hapless be like? They’d be relentlessly resourceful. Not merely relentless. That’s not enough to make things go your way except in a few mostly uninteresting domains. In any interesting domain, the difficulties will be novel. Which means you can’t simply plow through them, because you don’t know initially how hard they are; you don’t know whether you’re about to plow through a block of foam or granite. So you have to be resourceful. You have to keep trying new things.

Be relentlessly resourceful.

That sounds right, but is it simply a description of how to be successful in general? I don’t think so. This isn’t the recipe for success in writing or painting, for example. In that kind of work the recipe is more to be actively curious. Resourceful implies the obstacles are external, which they generally are in startups. But in writing and painting they’re mostly internal; the obstacle is your own obtuseness. [2] Link

What Control Is

As a former spec ops guy, pilot, author, CEO, etc. (control-heavy professions), I’ve learned that being in control is NOT about:

  • Controlling the behavior of anybody else.
  • Control over EVERY detail and every situation (the micro environment).
  • Control of everything that’s going on in the world (the macro environment).

If you attempt any of the above, you are a control freak.  You won’t be happy with yourself, people will find you miserable to be around, and you will be unlikely to achieve the results you seek.

Real control, the kind of control that keeps you alive on dangerous missions and gets you out-sized results regardless of how difficult things become, is simple.  It’s control over:

  • Preparation.  Planning.  Skills.  Resources.  Enter every situation ahead of the power curve.
  • Direction.  No plan survives first contact.  Know where you are going.
  • Process.  How you get things done matters.

As you can see, real control is about knowing how to think correctly. Link

Some Suggestions For Building Self Control:

  • Simulate poverty. Sleep on the floor for 2 weeks. Use very light padding if your back is too sensitive.
  • Wear raggedy, old clothing for these 2 weeks.
  • Fast for a day, and cut down on your overall calorie intake for the 2-week period.
  • Stop whatever you are doing, and on a set time, find a simple object like a doorknob and focus only on that object for 5 minutes. If any thoughts come, let them wash over you and restore your focus only on the sight of the object. Increase it up to 30 minutes gradually, so that you can pull yourself out of whatever train of thought you are in and achieve a hard focus.
  • Spend 30 days straight without whining about anything. Reset the clock each time you fuck it up. You should focus only on analyzing and fixing problems, not getting negative emotions involved into them.
  • Don’t ejaculate at all for 14 days and make sure you lift weights. Other people have gone into more detail so I won’t rehash it all again.
  • Do a CARVER matrix on any goals you have and figure out what is most important and what should slide. Knowing that you’re working on hard problems makes it easier to focus.
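A CARVER matrix is just a weighted scorecard: rate each goal 1-5 on Criticality, Accessibility, Recuperability, Vulnerability, Effect and Recognizability, then rank by total. A minimal sketch; the goals and scores below are invented placeholders:

```python
# Minimal CARVER-style scoring: six factors, each rated 1-5, ranked by
# total. "R2" stands in for the second R (Recognizability). The goals
# and their ratings are illustrative only.
FACTORS = ("C", "A", "R", "V", "E", "R2")

goals = {
    "ship the product":  {"C": 5, "A": 4, "R": 3, "V": 4, "E": 5, "R2": 3},
    "redesign the logo": {"C": 1, "A": 5, "R": 5, "V": 5, "E": 1, "R2": 2},
    "fix churn":         {"C": 5, "A": 2, "R": 2, "V": 3, "E": 5, "R2": 4},
}

# Highest total first: these are the goals that deserve your focus.
ranked = sorted(goals, key=lambda g: sum(goals[g][f] for f in FACTORS),
                reverse=True)
for goal in ranked:
    print(f"{sum(goals[goal].values()):2d}  {goal}")
```

A real CARVER analysis would weight the factors for the mission at hand rather than summing them equally; the equal-weight sum keeps the sketch short.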

Rules of Productivity, Solving Problems

How do we get more work done? It is a question that every manager and every passionate worker faces. Yet, for the most part, teams operate on gut instinct and habit. The results are less than optimal.

Topics covered include:

Having a methodology for solving problems, a way of prioritizing problems, a way of figuring out how much cognitive load you are carrying, how much time you can work without losing concentration, and knowing how to focus your mind all affect your ability to solve problems. With a bit of experience and feedback, you should be able to feel when your brain’s performance is going down, then stop and recover.
As time goes on, ideas are very likely to become more valuable and will be judged by their clarity, usefulness and market demand. Make sure you are a good idea creator.


2 Ways Of Working Through Hard Problems

1. Write the problem out and then read it aloud. Sometimes it’s easier to make sense of something if you hear it.

2. Create pictures of what you are thinking about; if you need to be creative, link them with other images you already have (e.g. the Roman Room Method). A lot of people have a hard time picturing things vividly, so you have to work up to being able to really use the technique properly.