Friday, 30 August 2013

Test estimation issues and Fermi's technique

According to the standards there are plenty of test estimation techniques, covering effort, the likely number of bugs, and other activities that can be estimated.

The most common estimation techniques are:

  • Percentage: This is based on a rate defined by a standard or by an expert. Some books mention that testing consumes 30% of the total effort of the project; asking around, I found that the market uses a rate of 35% of the total time for testing activities. This leads to many problems, because depending on the project it either leaves activities out or over-counts them.
  • Analogy: This technique uses information from previous similar projects. It requires a good record of activities from past projects, or research into other companies' experience with similar projects.
  • Expert: This technique requires a meeting with the person responsible for each activity. It is effective, but it takes long, and the estimation is often missed when those responsible want to impress their boss.
These estimations do not solve the problem of falling short or missing deadlines. To make them more accurate, a tester can use test case design techniques (TCDT); there are more than 100 of them.
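As a rough illustration, the percentage technique above is a single multiplication. The 35% rate comes from the text; the 400-hour project is an invented example, not a benchmark:

```python
# Percentage-based test estimation: a fixed rate applied to the total
# project effort. The 35% default is the market rate mentioned above;
# the 400-hour project below is a made-up example.
def percentage_estimate(total_project_hours, test_rate=0.35):
    """Return the testing effort as a fixed share of total project effort."""
    return total_project_hours * test_rate

print(percentage_estimate(400))  # 400 h project -> 140.0 h of testing
```

The weakness is visible in the code itself: the rate is a constant, so nothing about the project's actual risk or scope changes the answer.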

There are two estimation approaches in particular that impressed me and helped me a lot: Six Sigma and Fermi's technique. Six Sigma is very useful and one of the most accurate techniques, but it needs data: the more information you have, the better the result. Fermi's technique, on the other hand, is useful for a first approach. Take a look at this from the NASA site (https://www.grc.nasa.gov/www/k-12/Numbers/Math/Mathematical_Thinking/fermis_piano_tuner.htm):

Fermi's Piano Tuner Problem 

As a lecturer, Enrico Fermi used to challenge his classes with problems that, at first glance, seemed impossible. One such problem was that of estimating the number of piano tuners in Chicago given only the population of the city. When the class returned a blank stare at their esteemed professor, he would proceed along these lines:
  1. From the almanac, we know that Chicago has a population of about 3 million people.
  2. Now, assume that an average family contains four members so that the number of families in Chicago must be about 750,000.
  3. If one in five families owns a piano, there will be 150,000 pianos in Chicago.
  4. If the average piano tuner
    1. serviced four pianos every day of the week for five days,
    2. rested on weekends, and
    3. had a two week vacation during the summer,
  then in one year (52 weeks) he would service 1,000 pianos. 150,000/(4 x 5 x 50) = 150, so that there must be about 150 piano tuners in Chicago.
This method does not guarantee correct results; but it does establish a first estimate which might be off by no more than a factor of 2 or 3--certainly well within a factor of, say, 10. We know, for example, that we should not expect 15 piano tuners, or 1,500 piano tuners. (A factor of 10 error, by the way, is referred to as being 'to within cosmological accuracy.' Cosmologists are a somewhat different breed from physicists, evidently!!!)


This is a technique that can be easily applied for testing purposes.
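The piano-tuner reasoning transfers directly: chain a few rough, explicit assumptions and multiply. Every figure in this sketch is hypothetical and only there to show the shape of the calculation; substitute your own rough numbers:

```python
# A Fermi-style chain of assumptions for a test-execution estimate.
# Every number here is a hypothetical example, not a benchmark.
requirements = 80            # assumed number of requirements to cover
cases_per_requirement = 5    # assumed test cases per requirement
minutes_per_case = 15        # assumed execution time per test case
rerun_factor = 2             # assume each case is executed twice (retests)

total_cases = requirements * cases_per_requirement
total_hours = total_cases * minutes_per_case * rerun_factor / 60

print(total_cases)   # 400 test cases
print(total_hours)   # 200.0 hours of execution
```

As with the piano tuners, the point is not the exact answer but the order of magnitude: if this says 200 hours, the real figure is very unlikely to be 20 or 2,000, and each assumption is written down where it can be challenged.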

We tend to estimate effort, but there are a lot of unexpected factors that can affect that estimation, such as finding more bugs than expected, slower execution than expected, and so on.

In my experience, not many companies estimate how many bugs they will find per test cycle. Testing is a negative activity: it is not executed to prove that something works, but to identify as many anomalies as possible. Keeping that in mind, the number of anomalies identified can determine whether we reach our deadlines.

NASA, for instance, considers 10% of the number of lines of C++ code developed to be the minimum number of bugs to be identified.
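Taking that 10% figure at face value, a first bug estimate is again a single multiplication in the Fermi spirit. The 12,000-line codebase below is invented for illustration:

```python
# Minimum expected bug count from lines of code, using the 10% ratio
# cited above for C++. The 12,000-line codebase is a made-up example.
def minimum_expected_bugs(lines_of_code, bug_ratio=0.10):
    """Fermi-style lower bound on bugs, useful for planning test cycles."""
    return int(lines_of_code * bug_ratio)

print(minimum_expected_bugs(12_000))  # -> 1200 bugs, as a minimum
```

A lower bound like this is a planning input, not a prediction: it tells you roughly how much triage and retest capacity the cycles need to absorb.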

There are many metrics for estimating the number of bugs that take all aspects of the project into account. Which is the most effective technique in your experience?
