Tag Archives: errors

Feeling Sick or Unlucky

Let’s play doctor.

Let’s say you have a patient who shows signs of a disease that’s tricky to diagnose.  In fact, of the people who show these symptoms, only 1 in 100 actually have the disease.  There is a test, but it successfully detects the disease only 90% of the time.  The test can also fail by indicating a “false positive” (i.e., the results show you have the disease when, in fact, you do not) 9% of the time.

How do you feel about them odds?

Since this case study is appearing on this blog, you are correct in thinking there is a trick.  In the real world, physicians are confronted with this type of odds all the time.  To make matters worse, the percentages are often even murkier, for example, when studies overlap or contradict each other.  The neuroscientist Daniel Levitin, in his book ‘A Field Guide to Lies’, cites a study indicating that 90% of physicians make an error when interpreting a test like this one.
What’s the error?  The odds are that about 9 in 10 positive results are actually false positives.  If the test shows that your patient has the disease, you are nine times more likely to be wrong than right.  This test is, therefore, useless, or worse than useless.
One thing I have learned is that keeping one or two numbers, and the relationship between them, in your head is relatively easy.  When you have to deal with three numbers, even if the math is easy, things get hard really fast.  To do the above math in your head, you have to do a few things.
  1. Track the “patient does not have the disease” part of the equation.  Using the numbers from above, 99 out of 100 do not have the disease, and a 9% false-positive rate means about 9 of them test positive anyway.
  2. Compare that to the “correct positive”: the 1 person in 100 who has the disease and gets a positive result 90% of the time.  Let’s round up and say it’s one person.

Nine false positives to one correct positive.  Feeling lucky?
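If you would rather let a machine do the juggling, here is a quick sketch in Python that works through the same arithmetic.  The rates come straight from the example above (1% prevalence, 90% detection, 9% false positives); the population of 1,000 is mine, chosen just to keep the counts round.

```python
# Back-of-the-envelope check of the diagnosis example above.
population = 1000
prevalence = 0.01            # 1 in 100 have the disease
sensitivity = 0.90           # the test catches 90% of true cases
false_positive_rate = 0.09   # 9% of healthy patients test positive anyway

sick = population * prevalence                    # 10 people
healthy = population - sick                       # 990 people

true_positives = sick * sensitivity               # 9 people
false_positives = healthy * false_positive_rate   # ~89 people

ppv = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"Chance a positive result is real: {ppv:.0%}")   # about 9%
```

A positive result is real only about 9% of the time, which is the “nine times more wrong than right” result from above.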


Why Your Construction Project Estimates are Wrong

According to WSJ’s The Numbers Guy, most infrastructure projects end up with very large overruns.  Surprised?  Probably not.  This is one of those casual conversation topics where many of us can point to a project close to home that has “busted the budget”.  New Jersey’s rail tunnel to New York City, Boston’s Big Dig, the Sydney Opera House, your local sports stadium, the highway project on the other side of town… what you have been suspecting (that these big projects become far more expensive than planned) is correct, at least according to the people who study these things.

So why is this the case?  As with many complex situations, there are several elements at play.  Estimating is always a tricky task, and even experts cannot estimate as well as they think they should.  In fact, expertise often affects your estimates in two (bad) ways.  According to Professors Magne Jørgensen and Dale Griffin, there is a link between a forward-looking perspective and irrational optimism: you are an expert, you are asked to provide an estimate, you feel optimistic about the future, and you give a favorable estimate.  And according to Nassim Nicholas Taleb, author of The Black Swan, expertise gives you an unwarranted sense of certainty in situations where such certainty cannot exist.

There are other factors at play.  For example, the larger the project, the more stakeholders and the more ideas there are likely to be.  The public may want better aesthetics.  Public projects expand as other, smaller projects are folded into the original one.  Also, the longer the project runs, the larger the risk of labor and material cost increases.  Some projects also end up paying for “externality costs” that were not in the original plan.

If competing bids are involved, the winning bid was likely the most optimistic one.  This is the so-called winner’s curse.  Yippee, we won the project… and we’ve also convinced the public, officials (and ourselves!) that our estimates are correct.
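The winner’s curse is easy to see in a simulation.  Here is a minimal Python sketch; the setup (five bidders, a $100MM true cost, estimates scattered ±20% around it) is my own illustration, not anything from the article.

```python
# Minimal winner's-curse simulation: every bidder's estimate is honest and
# unbiased, yet the winning (lowest) bid systematically undershoots.
import random

random.seed(42)

true_cost = 100.0    # hypothetical true cost, in $MM
bidders = 5
noise = 0.20         # estimates scatter +/-20% around the true cost
trials = 100_000

total = 0.0
for _ in range(trials):
    bids = [true_cost * (1 + random.uniform(-noise, noise)) for _ in range(bidders)]
    total += min(bids)   # the most optimistic bid wins

print(f"True cost:           ${true_cost:.1f}MM")
print(f"Average winning bid: ${total / trials:.1f}MM")   # comes in well under
```

No one has to lie for the winner to be wrong: pick the lowest of several honest guesses often enough, and you will underestimate almost every time.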

Finally, there could be misrepresentation and outright lying.  The article acknowledges this, but does not go into research findings.  I suspect, as the article implies, that there is very little systemic, large-scale lying in the industry.  It’s most likely the other factors described above.

Big estimates are big numbers, and we do not deal well with big numbers, experts included.  We remember numbers incorrectly.  For example, we may remember that a project was supposed to be $100MM, but forget that, with the additional approved budget, the project was eventually set at $126MM.  It’s easier to remember something like “$100MM”.  Also, we don’t like to deal with ranges and uncertainty, and this is exacerbated by the large numbers involved.

So, is it wrong to be optimistic?   Are we just lying to ourselves?  Is it possible to have expertise and accuracy?

Error in Aetna’s Pricing Model

It’s not often that a model gets front-page attention at the Wall Street Journal.  Last Friday, WSJ placed an article on Aetna’s pricing model trouble on the front page of the Marketplace section.  According to the article, Aetna had to pull a request to increase prices for California policyholders when it discovered “miscalculations” in the monthly premiums.  It’s not clear from the article whether there were multiple errors, or what their magnitude was.  This story comes a few months after another insurer, Wellpoint, received similar press when external auditors found mathematical errors in its filing.

How can this happen?  You would expect that with something so important, there would be ways to make sure that “model errors” would be caught and fixed BEFORE things get filed and rate increases are announced.  (Approximately 765,000 Wellpoint and Aetna policyholders would have been affected by the “wrong” price increase had the errors not been found.)

I have not worked with either company, but I have seen several kinds of similar mistakes in pricing/cost models.  The fact that many mathematical models have mistakes is not a surprise for those who study how models are built and used.  Ray Panko, whose research focuses on spreadsheet errors, has collected data from several authors showing that 88% of spreadsheets have errors.  Furthermore, Panko has found that 1% of all spreadsheet formulas are in error.  One percent may seem small… until you consider how “big” one mistake can be.  Can you imagine how many formulas must be in Wellpoint’s or Aetna’s model?
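To get a feel for what a 1% formula error rate implies, here is a rough Python calculation.  The 10,000-formula model size is my assumption for illustration; I have no idea how large Wellpoint’s or Aetna’s actual models are.

```python
# Rough arithmetic on Panko's 1% formula error rate, applied to a
# hypothetical pricing model with 10,000 formulas (an assumed size).
formula_count = 10_000
error_rate = 0.01    # 1% of formulas are wrong, per Panko's research

expected_errors = formula_count * error_rate
# Chance that at least one formula is wrong, treating errors as independent:
p_at_least_one = 1 - (1 - error_rate) ** formula_count

print(f"Expected bad formulas: {expected_errors:.0f}")    # 100
print(f"P(at least one error): {p_at_least_one:.6f}")     # effectively 1.0
```

At that size, the question isn’t whether the model has an error, but how many, and how big.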

I recall talking to folks at NASA about failure rates for the 10,000 components they may have on a Space Shuttle.  A failure rate of one tenth of one percent (0.1%) is still 10 components.  Some components have backups or may be non-essential, but many are essential to the survival of the crew and the success of the mission.

Think about your last “big model”.  How many errors do you think it has?  Would you be open to letting others audit it?