TMMi in Latin America since August

TMMi now has an official presence in Argentina and Latin America through QAustral S.A.

HOW CAN WE KEEP TESTERS FROM MAKING MISTAKES OR MISSING BUGS?

Volvo S60 - 2012 Issues

Volvo denies there is a problem with our car. If so, then why a "software upgrade"?

Facebook Testing a Transaction Service Competitive to PayPal for Mobile Purchases

Will you use it?

Any questions about testing or ISTQB?

If you have any questions about testing or testing exams, please send them to me; I will answer as soon as possible.

Tuesday, 12 November 2013

Bstriker Contest - Help us choose the presentation of our BRAIN.


We would like to know your opinion about the two attached videos, each 1:20 long. To compensate you for your time, we will choose one participant to receive an iPad.

It is about a new test tool able to fully automate the entire testing process, identifying all software defects instantly.

Click the title of the article or copy+paste the link:
http://www.youtube.com/watch?v=rrSUQDLq7So&list=PLWYDjbkuhdsKXuGjgCV3qKvGDcs5jMnqs

Tuesday, 22 October 2013

Practical Software Testing Workshop aimed at passing official exams. Duration: 4 hours!

To optimize study time and costs for professionals interested in formal testing training, QAustral offers a 4-hour workshop providing study material and a summary aimed at passing Software Testing certification exams. People who do not intend to sit an exam, but do want quick and effective training on the subject, may also participate.

See the image for more information and participation options.

In addition, each month whoever shares the attached advertisement the most times will earn free participation!



Friday, 30 August 2013

Test estimation issues and Fermi's technique

According to the standards, there are plenty of test estimation techniques that apply to effort estimation, the likely number of bugs, and other activities that might be estimated.

The most common estimation techniques are:

  • Percentage: This is based on a rate defined by some standard or expert. Some books mention that testing consumes 30% of the total effort of the project. Asking around, I have found that the market uses a rate of 35% of the total time for testing activities. This leads to many problems, because depending on the project it may leave out some activities or over-count others.
  • Analogy: This technique uses information from previous similar projects. It requires a good record of activities from previous projects, or research into other companies' experience with similar projects.
  • Expert: This technique requires a meeting with the person responsible for each activity. It is effective, but it takes long, and the estimate usually misses if those responsible want to impress their boss.
These estimation techniques do not solve the problem of falling short of or missing deadlines. To make estimates more accurate, a tester can use test case design techniques; there are more than 100 TCDTs.
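As a minimal sketch of the percentage technique, using the 35% market rate mentioned above (the project size here is purely illustrative):

```python
# Percentage-based test effort estimation: testing is assumed to take
# a fixed share of total project effort (35% per the market rate above).
def estimate_test_effort(total_project_hours: float, rate: float = 0.35) -> float:
    """Return the estimated testing effort in hours."""
    return total_project_hours * rate

# A 1,000-hour project implies roughly 350 hours of testing at the 35% rate.
print(round(estimate_test_effort(1000)))  # 350
```

The one-liner also shows why the technique is fragile: a single flat rate hides which testing activities are actually included in it.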

There are two estimation approaches in particular that impressed me and helped me a lot: Six Sigma and Fermi's technique. Six Sigma is extremely useful and one of the most accurate techniques, but you need information: the more information you have, the better the result. Fermi's technique, however, is useful for a first approach. Take a look at this from the NASA site (https://www.grc.nasa.gov/www/k-12/Numbers/Math/Mathematical_Thinking/fermis_piano_tuner.htm):

Fermi's Piano Tuner Problem 

As a lecturer, Enrico Fermi used to challenge his classes with problems that, at first glance, seemed impossible. One such problem was that of estimating the number of piano tuners in Chicago given only the population of the city. When the class returned a blank stare at their esteemed professor, he would proceed along these lines:
  1. From the almanac, we know that Chicago has a population of about 3 million people.
  2. Now, assume that an average family contains four members so that the number of families in Chicago must be about 750,000.
  3. If one in five families owns a piano, there will be 150,000 pianos in Chicago.
  4. If the average piano tuner
    1. serviced four pianos every day of the week for five days,
    2. rested on weekends, and
    3. had a two week vacation during the summer,
  then in one year (52 weeks) he would service 1,000 pianos. 150,000/(4 x 5 x 50) = 150, so that there must be about 150 piano tuners in Chicago.
This method does not guarantee correct results; but it does establish a first estimate which might be off by no more than a factor of 2 or 3--certainly well within a factor of, say, 10. We know, for example, that we should not expect 15 piano tuners, or 1,500 piano tuners. (A factor of 10 error, by the way, is referred to as being 'to within cosmological accuracy.' Cosmologists are a somewhat different breed from physicists, evidently!!!)
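The chain of assumptions above reduces to a few lines of arithmetic; here it is replayed step by step:

```python
# Fermi's piano-tuner estimate, replayed step by step.
population = 3_000_000            # 1. Chicago's population (almanac)
families = population // 4        # 2. ~four people per family -> 750,000
pianos = families // 5            # 3. one family in five owns a piano -> 150,000
tunings_per_tuner = 4 * 5 * 50    # 4. pianos/day x days/week x working weeks
tuners = pianos // tunings_per_tuner
print(tuners)  # 150
```

Each intermediate line is a stated assumption, which is exactly what makes the final number easy to challenge or refine.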


This technique can easily be applied for testing purposes.

We tend to estimate effort, but there are a lot of unexpected factors that might affect that estimate, such as identifying more bugs than expected, slower execution than expected, and so on.

In my experience, I haven't seen many companies estimate how many bugs they will find per test cycle. Testing is a negative activity: it is not executed to prove that something works, but to identify as many anomalies as possible. Keeping that in mind, the number of anomalies identified can affect whether we reach our deadlines.

NASA, for instance, considers 10% of the number of lines of C++ code developed to be the minimum number of bugs to identify.
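A sketch of how that rule might feed a first bug-count estimate (the 10% rate comes from the paragraph above; the code-base size is an arbitrary example):

```python
# First-approach bug estimate from code size, per the 10%-of-LOC rule above.
def minimum_expected_bugs(lines_of_code: int, defect_rate: float = 0.10) -> int:
    """Return the minimum number of bugs to expect for a code base."""
    return round(lines_of_code * defect_rate)

# A hypothetical 20,000-line C++ code base implies at least 2,000 bugs to find.
print(minimum_expected_bugs(20_000))  # 2000
```

Like Fermi's piano tuners, this is only a first anchor, but it turns "how long will testing take?" into "how long does it take us to find and report roughly this many bugs?".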

There are many metrics to estimate the number of bugs considering all aspects of the project. Which is the most effective technique in your experience?

Monday, 26 August 2013

A super-sophisticated new Android virus

The following article is worth reading:

http://www.technewsdaily.com/18290-super-android-trojan.html

"It seems like every other week, we hear of a new piece of malware that's been wreaking havoc on Android phones. But a new Trojan uncovered by Russian security firm Kaspersky Lab stands head and shoulders above the ever-growing crowd of Android-targeted malware.
What the malware, technically named Backdoor.AndroidOS.Obad.a, does is nothing new: It sends text messages, or SMS, to premium numbers, as the charges accrue on the infected-Android user's account.
In addition, the Trojan gathers personal data stored on the phone and sends it to a remote server controlled by the cyber-criminals who created it.
But how the criminals actually carry out the cyberattack is much more interesting — and more dangerous.  The coders of this malware obviously know what they're doing."

Friday, 23 August 2013

What's the best bug-tracking tool for agile teams?

Keeping in mind that a tool should adapt to the company, not the other way around... how do we evaluate all the tools available on the market? Traditional tools such as Bugzilla, Mantis, and Jira, among others, are not really helpful in speeding up or reducing the time needed to raise a bug properly. A good bug report should include a description, steps to reproduce, environment information, and pictures or video.
Collecting that information takes time. This means that if the test plan is highly effective, it is important to consider how long a tester will take to raise bugs with the tool being used.
This leads to a difficult activity: estimating how many bugs I am going to find, which is needed for a good or accurate estimate. There are plenty of techniques to do so, but that is a discussion for another time.
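As a sketch of why this matters for estimates, here is a Fermi-style calculation of the time a team spends just raising bugs; every number is an illustrative assumption, not a measurement:

```python
# Fermi-style estimate of bug-reporting overhead for one test cycle.
expected_bugs = 300       # assumed number of bugs found in the cycle
minutes_per_report = 15   # description, steps, environment info, screenshots
testers = 3

total_hours = expected_bugs * minutes_per_report / 60
hours_per_tester = total_hours / testers
print(total_hours, hours_per_tester)  # 75.0 25.0
```

With these assumptions, a tool that shaves even five minutes off each report cuts a third of that overhead, which is exactly the point being made here.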

The discussion now is: which bug-tracking tool is most effective at reducing the time to raise a bug? One of the tools I have checked out lately is Bonfire by Atlassian.

https://www.atlassian.com/software/bonfire/whats-new

It's a tool intended to help agile teams raise bugs quickly, with all the necessary information, by providing special features such as video recording and image capture without complementary tools, or raising a bug without leaving the app you are testing.
Providing detailed information in a bug report also helps avoid unnecessary back-and-forth such as:

  • Cannot reproduce the bug
  • It's not clear what the issue is
  • It works for me
  • It's not a bug
  • More information needed

This might look simple, but it directly affects test estimates, and the selected tool bears a large share of the responsibility.

I don't know of many tools that actually try to help the dev or test team work faster; most are focused on collecting information and generating metrics.

What's the best approach? Have you ever missed a deadline because of a too-buggy app?

Tuesday, 20 August 2013

Finally, after years of seeing it coming... FB competes with PayPal

Check this out first ...

This site: http://hothardware.com/News/Facebook-Testing-a-Transaction-Service-Competitive-to-PayPal-for-Mobile-Purchases/  has published this article:
________________________________________________________________________

Facebook Testing a Transaction Service Competitive to PayPal for Mobile Purchases

Look out, PayPal: Here comes Facebook. In what seems like a rather common-sense move for the company to make, Facebook will be launching its own payments service next month, with flash-sales site JackThreads becoming the pilot partner. The service's mechanics will mimic PayPal's, where you can pay for an item online with a Facebook option. To take advantage of the service, a credit card would need to be tied to your Facebook account; there's no mention of debit cards, but that doesn't mean it won't be an option.
Since its launching over ten-years-ago, PayPal has become the dominant online payments vendor, and while its service is reliable overall, there are a lot of downsides to not having real competition. PayPal's fees, in my opinion, are rather high, and Google's Wallet service has done little to change PayPal's domination. Facebook could be PayPal's first real competitor.
Given the fact that Facebook is used almost religiously by so many people, it makes sense to believe that some might want to opt to use it for their payments online - it helps keep things neat. Plus, I'm a personal fan of using a proxy like this, rather than giving out my credit card information to every single merchant. That is the reason I stick to PayPal for virtually every online payment I need to make.
Given some of Facebook's policies though, and its insistence on having you give it as much of your info as possible, I'm not too sure I'll be quick to hand over my credit card information and use it for my online payments. But that said, I do hope that this starts a good war between it and PayPal, because competition is very good for the consumer.
__________________________________________________________________________

If you are a tester, which scenario do you think is most likely to fail in this new software? PayPal has been tested for several years and has a reputation to uphold for its system. Facebook, on the other hand, has had information leaks and security breaches.

Will you use it or recommend it? I won't... yet... even though this news makes me happy, because I have known about it for years and have been sharing it with my students.

Monday, 19 August 2013

Will you do it?

If you had the chance to work on a potentially billion-USD project, would you do it initially for only some amount of shares?



How can we keep testers from making mistakes or missing bugs?

Nowadays testers are more recognized than a few years ago... testing teams are more professional. Many companies invest in training their HR in specific qualifications (e.g. ISTQB, CSTE, etc.), which is always good and welcome because it improves testers' skills and motivates them. It won't, however, improve soft skills such as the ability to handle stressful situations or communication with colleagues, among others.
Teaching a tester new testing techniques doesn't prevent human mistakes. There are hundreds of known techniques that help test an object in many different ways, for example: Assertion Testing, Fuzz Testing, Localization Testing, Fault Injection... We can all easily learn them, but they cannot guarantee we won't make a mistake.
So what many companies do is combine these with CMMI, TMMi, or another process improvement methodology or software development method to reduce the possibility of human mistakes.

But what seems to be in fashion is test automation to prevent testers from missing bugs. For me it is not one activity but a combination of a few that reduces the chance of missing bugs:

* Early test planning. If that is not possible, then minimal test documentation while executing negative testing seems to be effective.
* A good test plan, even identifying testers' soft skills and taking advantage of them.
* Test automation: whatever is repetitive, whatever is critical.
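As a toy illustration of "automate whatever is repetitive": a boundary check that a tester would otherwise repeat by hand every cycle can be scripted once and rerun automatically. The function under test here is a hypothetical stand-in, not a real system:

```python
# Hypothetical system under test: an age validator accepting 0-130.
def is_valid_age(age: int) -> bool:
    return 0 <= age <= 130

# Boundary-value cases that would otherwise be re-checked manually each cycle.
cases = [(-1, False), (0, True), (130, True), (131, False)]

for value, expected in cases:
    assert is_valid_age(value) == expected, f"age {value}: unexpected result"
print("all boundary checks passed")
```

Once scripted, the same boundary cases run identically every cycle, removing one common source of human error: skipping or misreading a repeated manual check.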

There is no magic trick that avoids this, or none that I'm aware of. So the question remains unanswered: what is your best strategy to prevent testers from making mistakes while testing, or from missing bugs?