A Tale of Two Banks and the “Three Friends Go Out to Lunch” Brain Teaser

I recently had two separate conversations with two different people, both in banking.

One is a VP at a regional bank in California.  The other is the CEO of a bank based in the Middle East, growing through expansion and acquisitions.  Two different worlds, night and day.  Both deal with loans and try to manage risk while trying to grow.  But the similarities end quickly.

After the conversations, I was reminded of the following “Three Friends Go Out to Lunch” problem I read years ago in a brain teaser book.  Enjoy!  My response is here.

Three friends go out to a restaurant and order three lunch specials.  The bill comes to $30 (with tax, no tip).  They decide to leave $10 each.  As they walk out the door, the owner realizes that the bill was calculated incorrectly: it should have been $25 including tax.  He sends the waiter out with five one-dollar bills.  The waiter, feeling slighted at not having received a tip, keeps $2 and gives a dollar bill to each of the three friends.

So each friend paid $9 (originally paid $10 each, but each got a dollar back).  The waiter keeps $2.  Three times $9 is $27, plus the two dollars is $29.  Where is the other dollar?
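If you want to check the books yourself before peeking at my response, here is a minimal ledger sketch in Python; the amounts come straight from the story.

```python
# Track where every dollar of the original $30 ends up.
paid_by_friends = 3 * 10 - 3   # $27: each paid $10, each got $1 back
in_register     = 25           # the corrected bill
kept_by_waiter  = 2

# The $27 the friends paid covers the register AND the waiter's $2.
assert paid_by_friends == in_register + kept_by_waiter   # 27 == 25 + 2

# The riddle adds the waiter's $2 on top of the $27 -- but the $2 is
# already inside the $27, so comparing $29 to $30 mixes two ledgers.
```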


Be on the Lookout for Sparklines!

What are sparklines?

Although it’s not clear who was the first to use them, Edward Tufte gets the credit for introducing the term “sparklines” to describe these bite-sized graphics: small, word-sized charts embedded right in the text they accompany.  I suspect that we’ll all see more of them in the future, now that Microsoft Excel 2010 has them built into its graphing features.

You have probably seen these before.  I see them when I check my stocks at Yahoo Finance.  Tiny charts under the “Intraday” column tell me whether the price is up or down from the previous close, and how it has moved throughout the day.  Since it’s a time chart whose entire x-axis is the trading day, I also get a sense of how much of the trading day is left.  That’s a lot of data packed into a small space.

The three key elements of a sparkline are:

  • data intensity
    (lots of data points instead of just a few, even if the data is not labeled)
  • graphical simplicity
    (no extraneous “chartjunk”, text, or labels)
  • sized to fit in with existing text
    (there is no need for the eye to travel far from the text or other information)

That’s it!  It’s wonderfully simple and refreshing.  There’s a lot you give up, of course: we may want to know the values along the axes, or, if we see sparklines as small multiples, whether the scales are comparable across the multiples.  But we gain so much in the simplicity.
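If you want to experiment with your own before Excel 2010 lands on your desktop, here is a minimal sketch in Python using matplotlib.  The price series is made up for illustration, and the three elements above map directly onto the code: lots of points, no chartjunk, and a figure sized to sit inline with text.

```python
import matplotlib.pyplot as plt

# Made-up intraday price series for illustration.
prices = [31.2, 31.5, 31.1, 30.8, 31.0, 31.4, 31.9, 31.7, 32.1]

fig, ax = plt.subplots(figsize=(1.5, 0.3))   # tiny: sized to fit with text
ax.plot(prices, linewidth=1)                 # data-intense line, no markers
ax.plot(len(prices) - 1, prices[-1], "r.")   # highlight the latest value
ax.axis("off")                               # graphical simplicity: no axes
fig.savefig("sparkline.png", bbox_inches="tight", transparent=True)
```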

Error in Aetna’s Pricing Model

It’s not often that a model gets front-page attention in the Wall Street Journal.  Last Friday, the WSJ placed an article on Aetna’s pricing-model trouble on the front page of the Marketplace section.  According to the article, Aetna had to pull a request to increase prices for California policyholders when it discovered “miscalculations” in the monthly premiums.  It’s not clear from the article whether there were multiple errors, or what their magnitude was.  This story comes a few months after another insurer, Wellpoint, received similar press when external auditors found mathematical errors in its filing.

How can this happen?  You would expect that with something so important, there would be ways to make sure that “model errors” would be caught and fixed BEFORE things get filed and rate increases are announced.  (Approximately 765,000 Wellpoint and Aetna policyholders would have been affected by the “wrong” price increase had the errors not been found.)

I have not worked with either company, but I have seen several kinds of similar mistakes in pricing/cost models.  The fact that many mathematical models contain mistakes is no surprise to those who study how models are built and used.  Ray Panko, whose research focuses on spreadsheet errors, has collected data from several authors showing that 88% of spreadsheets have errors.  Furthermore, Panko has found that about 1% of all spreadsheet formulas are in error.  One percent may seem small… until you consider how “big” one mistake can be.  Can you imagine how many formulas must be in Wellpoint’s or Aetna’s model?
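To see how quickly that 1% compounds, here is a back-of-the-envelope sketch in Python.  It assumes formula errors are independent, which is a simplification, but the direction of the result holds.

```python
# If each formula independently has a 1% chance of being wrong (Panko's
# figure), what are the odds a spreadsheet is completely error-free?
p_error = 0.01

for n_formulas in (10, 100, 1_000, 10_000):
    p_clean = (1 - p_error) ** n_formulas
    print(f"{n_formulas:>6} formulas: {p_clean:.4%} chance of zero errors")

# 10 formulas: ~90%.  100: ~37%.  1,000: ~0.004%.  10,000: effectively zero.
```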

I recall talking to folks at NASA about failure rates for the 10,000 or so components on a Space Shuttle.  A failure rate of one tenth of one percent (0.1%) still means 10 failed components.  Some components have backups or may be non-essential, but many are essential to the survival of the crew and the success of the mission.

Think about your last “big model”.  How many errors do you think it has?  Would you be open to letting others audit it?

How Much Oil is Leaking in the Gulf?

The oil continues to leak, and we are now starting to see it hit the shores.  We have live images from the leak 5,000 feet below the water surface.  But how much oil is actually leaking from the wellhead?

Initial estimates (Day 5) from the US Coast Guard and BP put the figure at 1,000 barrels/day.  Last week, the “official” government group revised the estimate to 20,000 – 40,000 barrels/day.  Less than a week later, the estimate stands at 35,000 – 60,000 barrels/day.

When the late-May estimates came out, there was an interesting quote from Ira Leifer of the University of California, Santa Barbara: “It would be irresponsible and unscientific to claim an upper bound, …it’s safe to say that the total amount is significantly larger.”  He wants to make sure the estimate carries an asterisk because he wants “to stand up for academic integrity.”  In fact, the university has published a whole document explaining how the scientists came up with their estimate.  (from the WSJ article here)

But I suspect that most of us will not care much about the actual method or the details.  Maybe a summary like the one below is helpful, since it shows the “growth” of the estimates over time.  By presenting it this way, I am (on purpose? inadvertently?) suggesting a story and a conclusion.  What do you read from it?

| Report Date | Barrels/Day | Source / Reported by | Method | Link |
|---|---|---|---|---|
| April 24 (Day 5) | 1,000 | USCG, BP, via cbc.ca | Info from ROVs (remotely operated vehicles) and surface oil slick | Link |
| April 28 (Day 9) | 5,000 | NOAA | Satellite pictures | Link |
| May 12 (Day 23) | 70,000 | Steven Wereley, Purdue, for an NPR story | Particle image velocimetry, based on videotape | Link |
| May 12 (Day 23) | 20,000 – 100,000 | Eugene Chang, UC-Berkeley, for an NPR story | “Pencil and paper” based on pipe diameter (from video) | Link |
| May 27 (Day 34) | 12,000 – 19,000 or 12,000 – 25,000 (depending on source) | Flow Rate Technical Group (NOAA, USCG, Minerals Mgmt) | Based on multiple methods (blog author’s guess) | Link 1, Link 2 |
| Jun 10 (Day 52) | 20,000 – 40,000 or 25,000 – 30,000 (depending on source) | Plume Modeling Team of the Flow Rate Technical Group | Revised from earlier, based on additional video from BP | Link |
| Jun 15 (Day 57) | 35,000 – 60,000 | Deepwater Horizon Incident Joint Information Center, reported by CNN | “Based on updated information and scientific assessments” | Link |

So what can we learn from this?  We all (think we) want lots of data.  It’s helpful when it’s summarized in a way that seems to make sense.  But when we are confronted with data we are not used to seeing (how many of us deal in BARRELS of oil, or work with flow rates?), we need some anchor, some comparison, something that helps us make sense of the numbers.
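One way to build that anchor is to convert barrels per day into something more familiar.  Here is a small sketch in Python, using the standard 42 US gallons per oil barrel and a rough 660,000 gallons for an Olympic-size swimming pool (the pool figure is approximate).

```python
# Convert the leak estimates into gallons and Olympic-size pools per day.
GALLONS_PER_BARREL = 42        # standard US oil barrel
POOL_GALLONS = 660_000         # approximate volume of an Olympic pool

for barrels_per_day in (1_000, 35_000, 60_000):
    gallons = barrels_per_day * GALLONS_PER_BARREL
    pools = gallons / POOL_GALLONS
    print(f"{barrels_per_day:>6,} bbl/day = {gallons:>9,} gal/day "
          f"(~{pools:.1f} Olympic pools per day)")
```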

No matter how you count it, this is a lot of oil.  But does it really matter whether it’s 15,000 or 60,000 barrels/day?  If you are part of the cleanup or planning the collection effort, the flow rate may help with that planning.  But you’re also going to want other information, such as how long the oil will flow, how the flow has changed over time, and the resulting “total leakage”.  Even with this last bit of info, you’re more interested in the amount that ends up on the shore or the amount it is actually possible to reclaim.

For most of us, the accuracy of the flow rates does not matter so much.  It’s a lot of oil, and we need some way to get a handle on it.  Most of us will not remember the actual number (or, in this case, the changing range of numbers).

Besides, no one will really know the true amount that has spilled.

Unintended Consequences of an Unmanned Speed Gun

Perhaps you have seen these unmanned speed guns.  Some are temporary, placed around construction zones or dangerous curves.  Some are permanent, part of a sign: a flashing set of numbers showing your speed just under the posted speed limit.  Many work well; they are relatively low-cost reminders of the need to watch our speed.

Except when they don’t.  Near where my brother lives, there is a slightly upward-sloping stretch of a four-lane street that starts at a stoplight and runs near a school zone.  The speed limit is 40 MPH.  Because of the slope, it’s actually difficult to reach the speed limit by the time you reach the speed gun, at least with normal acceleration.  The “watch your speed” zone has now become a “how fast am I going?” zone, as drivers use the convenience of the speed gun to see how fast they can get their cars going.

And so we have an example of unintended consequences, and a worse one than most: the “fix” actually encourages the opposite behavior.

In systems thinking, we describe an “archetype” called “fixes that fail”.  Sometimes “fixes” work for a while, then fail.  Sometimes they fail from the start.  Sometimes they work in some cases but fail in others.

What “fixes” are you working on now in your organization or personal life?  Could they be candidates for unintended consequences or “fixes that fail”?

Just for Fun

Non Sequitur

You give complexity…a bad name

Complexity was the topic of the lead article in the NYT’s Week in Review this Sunday.  The accompanying picture was a 4x4x4 Rubik’s Cube with a somewhat-hidden dollar sign made from the white tiles and a not-so-hidden message about how things are connected.  It’s an apt picture-analogy (IMHO, it’s not perfect… more on that in the comments).  If you’ve ever tried to solve a Rubik’s Cube (or a similar puzzle), you know the frustration of trying to put the pieces in the right places.  Moving ONE piece into the right place is easy.  That, of course, knocks other pieces out of place: the curse of the connected world of complexity!

In the course of the article, David Segal mentions a range of today’s complex problems: the wars in Iraq and Afghanistan, the finance industry’s collapse, our health care system, and the recent oil spill in the Gulf of Mexico.  “Complexity used to be so simple,” he writes several times.  What happened?  Are we doomed to go the way of the Roman Empire, undone by our self-made complexity?

I agree with him that “just about every profession has become complicated in recent decades”.  This alone is not a bad thing.  The reality is that we live in a more networked, interdependent, larger-scale, faster-moving world.  Keeping things artificially simple may work if you can keep your world simple, but most of us cannot will our world to be, or stay, simple.

Many of our business problems are mired in complexity.  Solving one aspect of the problem shifts a burden elsewhere (like the linked tiles on the Rubik’s Cube).  The complexities of regulations and contract terms are such that you need a room full of “subject matter experts” just to see what’s even possible.  Data is never as clean as you’d like.  People change their minds.  Our own organizations sometimes work against us in solving the problems we all want solved.

So what can we do as managers (in a business setting)?  I have been fortunate enough to see some things work.  Although there is no single solution, there are tools and best practices that, with some modification on your end, may help in addressing the complexities of your business.

Are you good at estimating?

We all estimate. Whenever we say, “I’ll be home in about 30 minutes” or “I need about 50 inches of tape”, we are estimating. Some of us even estimate as part of our jobs. Project managers, sales reps, executives, coders… whether we estimate lines of code, weeks of effort, new customers, revenue and profit, we make educated guesses based on our experience, observations and other sources.

But how good are we at estimating?

Here is a little exercise. On a sheet of paper, write down 1 through 10 on the left side of the page. Next to each number, draw two blanks, so that you can provide two answers for each number. Like this:

1. _________ _________
2. _________ _________
3. _________ _________
4. _________ _________

and so on to “10”.

Your job is to provide a “90% certainty” estimate for each question below. You don’t have to get the answer exactly right; just provide a range of numbers–write your “low estimate” in the first blank and your “high estimate” in the second blank on each line.

  1. What was the production cost of “Gone with the Wind”?
  2. How old was Alexander the Great when he died?
  3. Wikipedia lists the Burj Khalifa in Dubai as the tallest building. How tall is it in feet (or meters)?
  4. If you walk at an average speed of 3 miles/hour, how many months would it take to walk the length of Earth’s equator?
  5. How many times can Earth fit inside Jupiter?
  6. How many people signed the US Declaration of Independence?
  7. How many countries are there in South America?
  8. In what year did the world’s population surpass 2 billion people?
  9. How many pairs of legs does a common house centipede (Scutigera coleoptrata) have?
  10. What is the “as the crow flies” distance (in miles or km) between Beijing, China and Amsterdam, Netherlands?

For the answers and the second part of this post, see the comments.  But don’t scroll down or click the link before you take the quiz!
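Once you do have the answers, you can score yourself: if your ranges really are 90% intervals, about 9 of the 10 true values should land inside them.  Here is a small scoring sketch in Python; the ranges and answers below are made-up placeholders, not the real quiz answers.

```python
def calibration_hits(ranges, answers):
    """Count how many true answers fall inside the [low, high] ranges."""
    return sum(low <= ans <= high
               for (low, high), ans in zip(ranges, answers))

# Placeholder data for three questions (NOT the real quiz answers):
my_ranges = [(10, 20), (100, 200), (5, 15)]
true_answers = [15, 250, 8]

print(calibration_hits(my_ranges, true_answers))   # prints 2 (2 of 3 hit)
```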

What is a “good model”?

Chances are, you have worked with models.  You may build and run complex models spanning many years or detailing many “steps” or “lines”.  You may also have used simple models, such as a hand-drawn map that shows you how to get to the park.

A model is simply “some representation of reality”.  You have a real product line and a set of salespeople.  They take real orders from real customers.  If all goes well, you will receive real dollars (or Euros) for the sale.  So you have a spreadsheet for projecting the sales revenue associated with all this.  After a hard day at work, you may go running at a nearby park, and a simple hand-drawn map may suffice to get you there.

Both the “sales projection spreadsheet” and the “hand drawn map to the park” are models.  There are two key features of a model, any model, that can be illustrated through the above examples.

1. A model must be constructed with a more or less specific question in mind.
2. A model is “good” or “bad” only in light of that purpose.

So a hand-scribbled model, not drawn to scale, with some streets left unlabeled, may indeed suffice for maneuvering through the neighborhood and finding the park.  But if you wanted to lay down utility lines and plan some street-ripping construction, you would want a different model, one that shows specific dimensions and perhaps what kind of surface materials you are dealing with.

In much of the modeling and analytics work I do, I get asked, “How accurate is it?” or “How much data is in it?”.  These are valid questions, but the first question to ask is: “What are we trying to solve?”.

Before you build a model, think about what you are trying to do.  Who is the audience?  How will the model (or the results) be used? What kind of questions will people ask?  Then we can go about discussing “level of detail” or “what kind of data”.

Welcome to Your World

This is an exciting time for business and management.

The rules are changing. Access to tools and information has made it easier for you to know more and respond faster.  No longer are you limited by your size, pedigree, title, technology, or other constraints.  Information, tools, and methods that were once available only to a select group of individuals and organizations are becoming more accessible.  You can connect with partners & customers, access and analyze competitive data, and tap into efficient resources in ways that only a few years ago would have seemed futuristic.

The rules are changing, rewritten by smarter customers and new competitors.  The current ride through your industry’s cycle probably feels different from previous cycles.  Your past experience, while still valuable, must be combined with new insights and nimble execution.

How exciting!  As business gets more complex, opportunities to win exist across all levels.  Entire industries & organizations are redefining what it means to build and manage partnerships, taking advantage of economies of scale.  Individuals are charting bold career-propelling paths by riding the wave of complexity.

Exciting!  Or maybe it’s more strange and risky than exciting.

For how can we make decisions when the old rules don’t apply?  How do we know which rules are still “good” and which ones need to be modified?  Can we rely on past experience, on what has worked well before?  How is it possible to understand the relevant interdependencies in this new world?  Data is cheap and vast, but how do we turn volume into insight?  Have the complexities of today’s business surpassed our ability to make sound decisions?

This blog is meant to be a resource for practitioners of business analytics. “Business analytics” can mean a wide range of things; this blog will use the term as it pertains to solving complex business problems (more on “complexity” later).

Armed with the right tools and methods, many organizations and individuals are taming the complexities of their ever-changing worlds. We may be familiar with the Amazons and the Googles, but plenty of other organizations, big and small, are doing amazing things.

We’ll cover stories, talk about tools, and provide some examples. Hopefully we’ll also provide some diversions along the way.

Welcome!