6 Popular Myths in Test Automation You Must Know

Despite the many benefits test automation has to offer, many software testers still find excuses not to utilize them fully. Faster releases, quicker feedback to the development team, frequent test execution, and increased test coverage are a few of its many advantages. Quite a few myths surround test automation, and this blog will help you identify them and embrace what it has to offer.
The most challenging task for a software tester when it comes to test automation is to understand its limitations and set goals accordingly.
Myths Surrounding Automated Testing

  • Myth #1: It’s better than Manual Testing
  • For those who claim this, you need to understand one thing: automated testing is not testing per se. It is the checking of facts. When we have certain knowledge about a system under test, we encode it as checks in the form of automated tests. The result of such a check helps confirm our understanding of the system.

Testing, however, is a form of investigation that gives us new information about the system under test. Hence we should refrain from favoring one over the other, since both are required to gain quality insight into an application.
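The distinction above can be made concrete with a minimal sketch of an automated check using Python's built-in unittest module. The discount function and its rule are hypothetical examples, not from any real system; the point is that the check can only confirm a fact we already knew to assert.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class TestDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        # The check encodes knowledge we already have: 10% off 200.00 is 180.00.
        self.assertEqual(apply_discount(200.00, 10), 180.00)

# Run the check programmatically and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```

The check passing tells us the system still matches our existing understanding; it cannot, by itself, reveal anything we did not think to ask.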

  • Myth #2: 100% Automated Testing
  • 100% test coverage is impossible to achieve, and the same goes for test automation. While we can increase coverage by using more data and configurations and by covering various operating systems and browsers, achieving 100% is an unrealistic goal.

More tests don’t mean better quality or confidence. What matters is how good your test design is. Focus should be put on the most important areas of functionality rather than chasing full coverage.

  • Myth #3: Quick ROI Every Time
  • When implementing a test automation solution, a framework must be developed to support operations such as test case selection, reporting, and data-driven testing. The framework development should be treated as a project in its own right: it requires skilled developers, and the process is time-consuming.

Even with a fully functional framework, scripting automated checks takes longer initially. Hence, when quick feedback on a new feature is needed, checking it manually is often faster.
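The "data-driven" support mentioned above can be sketched in a few lines: one scripted check runs against a whole table of inputs and expected results. The username rule here is purely illustrative, not from any real framework.

```python
def is_valid_username(name: str) -> bool:
    """Hypothetical rule for illustration: 3-12 alphanumeric characters."""
    return name.isalnum() and 3 <= len(name) <= 12

# Data-driven: one scripted check, many input rows.
CASES = [
    ("bob", True),         # minimum length
    ("ab", False),         # too short
    ("a" * 13, False),     # too long
    ("user name", False),  # contains a space
]

for name, expected in CASES:
    assert is_valid_username(name) == expected, f"case failed: {name!r}"

print("all cases passed")  # -> all cases passed
```

Adding a new scenario is then just a new row in the table, which is where a framework starts paying back the upfront investment.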

  • Myth #4: Automated Checks Have Higher Defect Detection Rate
  • While it is true that vendor-supplied or home-grown test automation solutions are highly capable of performing complex operations, they will never be able to replace a human software tester, who can identify even the most subtle anomalies in an application.

An automated check can verify only what it was programmed to check. Therefore the scripts are only as good as the person who wrote them. If not scripted properly, automated checks can easily overlook major flaws in an application. In short, checking can prove the presence of a defect, but not necessarily its absence.
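A small sketch shows how a poorly scripted check can pass while a real flaw slips through. The "endpoint" below is a stand-in function invented for illustration, not a real API.

```python
def fetch_profile(user_id: int) -> dict:
    # Buggy stand-in service: reports success, but the email field is empty.
    return {"status": 200, "name": "Alice", "email": ""}

def weak_check() -> bool:
    # Checks only what it was told to: the status code. It passes.
    return fetch_profile(42)["status"] == 200

def stronger_check() -> bool:
    # Also inspects the payload, so it catches the missing email.
    resp = fetch_profile(42)
    return resp["status"] == 200 and "@" in resp["email"]

print(weak_check())      # -> True: the flaw slips through
print(stronger_check())  # -> False: the same flaw is caught
```

Both checks ran green or red exactly as scripted; neither noticed anything it was not explicitly told to look for.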

  • Myth #5: Unit Test Automation is All That We Need
  • It should be understood that a unit test can only reveal errors in an individual unit, not failures of the system as a whole. When all the components are tied together into a system, a much larger aspect of testing comes into play. Most organizations keep their automated checks at the system UI layer.

The sheer volatility of the functionalities during development makes the process of scripting automated checks a tedious task. Spending time on automation for a functionality that might change is not advisable and may cause difficulties in the later stages of development.

  • Myth #6: System UI Automation is Everything
  • Relying solely on automated checks, especially at the UI layer, can have numerous negative impacts. During development, the UI goes through many changes in the form of enhanced visual design and usability. If the checks are not updated alongside these changes, they will give a false impression of the state of the application.

Automated checks at the UI layer also execute more slowly than those at the unit and API layers, which slows feedback to the team. Root cause analysis takes longer as well, because a UI-level failure does not reveal the exact location of the bug. It therefore becomes necessary to identify the layers at which an automated check will actually be helpful.
Automated checking is not a one-time effort; it needs constant monitoring and updating. Above all, you need to understand its limitations and set realistic goals to get the most out of your automated checks and, most importantly, your team.

8 Instances Software Bugs Proved To be Too Costly

The world has reached a point where everything depends on a set of codes. From the cars you drive to military vehicles, and from department stores to top-secret military installations, everything runs on computer programs. The integration of software into our day-to-day lives has truly made life easier.
As helpful as software has been, it has also contributed to some of the most bizarre and catastrophic losses suffered by nations and companies worldwide. Most of these occurred due to improper software testing methodologies.
The results were devastating in terms of financial damage, and in some serious cases human lives were lost. This blog brings insight into some of the most outrageous of these events by reliving those moments.
More often than not, software gets it right and gets the job done. But when things start to fall apart, all hell breaks loose.
1. Almost World War III

On the night of September 26, 1983, the Soviet Union’s early-warning system reported a nuclear strike launched by the United States. What could have been worldwide bloodshed was averted thanks to Soviet Air Defence officer Stanislav Petrov. He later told the Washington Post that he “had a funny feeling in my gut” about the authenticity of the warning. Investigations proved that the alarm system was faulty.
2. Faulty Mars Climate Orbiter

NASA is known for many blunders, but none more embarrassing than what happened with its Mars Climate Orbiter. Launched on December 11, 1998, the mission to better understand our planetary neighbor was meant to bring the United States to the forefront of astronomical research.
But what happened was quite different. An error in the ground-based computer software resulted in a $326.7 million loss to the agency. The Orbiter went missing 286 days after its launch. The investigation showed that the ground software produced thrust data in imperial units while the navigation software expected metric units; the resulting miscalculation caused the Orbiter to enter Mars’s atmosphere at the wrong entry point and disintegrate.
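The shape of that unit mix-up can be sketched in a few lines: one component emits an impulse value in imperial pound-force seconds, the consumer assumes metric newton-seconds, and a bare number carries no unit to reveal the mismatch. The function names and the impulse value are illustrative only; the conversion factor is the standard one.

```python
LBF_S_TO_N_S = 4.448222  # 1 pound-force second expressed in newton-seconds

def ground_software_output() -> float:
    # Emits impulse in lbf*s (imperial) -- but no unit travels with the number.
    return 100.0

def navigation_software(impulse_n_s: float) -> float:
    # Expects N*s; a bare float gives it no way to detect the mismatch.
    return impulse_n_s

raw = ground_software_output()
wrong = navigation_software(raw)                  # silently treated as N*s
right = navigation_software(raw * LBF_S_TO_N_S)   # explicit conversion applied

print(round(right / wrong, 3))  # -> 4.448: the silent error is a factor of ~4.45
```

Interface-level tests that pass real values between the two components, rather than testing each in isolation, are exactly the kind of check that catches this class of bug.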
3. Bug Triggered Blackout

A tiny software bug affected 50 million people across eight US states and Canada. What the authorities described as a race condition bug occurred when two separate threads of a single operation used the same element of code. The lack of synchronization caused the threads to tangle and eventually crash the system, taking 256 power plants offline and triggering major disruptions and widespread panic.
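The mechanism described above can be sketched with Python's threading module: two threads perform a read-modify-write on the same counter, and without synchronization some increments can be silently lost. The lock below is the standard fix, and only the locked version has a deterministic result.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n: int) -> None:
    """Increment the shared counter n times, one thread at a time."""
    global counter
    for _ in range(n):
        with lock:        # synchronization: serialize access to the counter
            counter += 1  # the read-modify-write is now effectively atomic

# Two threads updating the same shared state, as in the description above.
threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 200000 with the lock; without it, updates can be lost
```

Remove the `with lock:` line and the final count becomes unpredictable, which is precisely why race conditions are so hard to reproduce in testing.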
4. Glitch in Patriot Missiles

The bugs mentioned so far were responsible for major financial losses, but the software error in the Patriot missile system cost 28 lives and left more than 100 injured. The missiles were designed to protect American barracks from Scud missiles during the Gulf War, but the bug delayed real-time tracking of incoming missiles, leaving the barracks defenceless against the Iraqi attacks. The loss of human life is what makes this one of the most costly software testing mistakes in history.
5. The IRS Debacle

The Internal Revenue Service lost somewhere between $200 million and $300 million in revenue in 2006 while depending on computer software to find potential fraud in returns claiming refunds. The tax collection agency later found that the software was inoperable, but by then it was too late.
6. $440 million in 30 Minutes    

The losses were even higher for Knight Capital Group, when a bug in the company’s trading algorithm decided to buy high and sell low on 150 different stocks. A market-making firm that had an outstanding reputation up until August 2012 managed to hit rock bottom in just 30 minutes; surely that has to be a world record.
By the time the company addressed the issue, the losses were cataclysmic: $440 million, compared to the company’s net income of $296 million in 2011. The company’s stock price also dropped 62 percent in a single day, according to Bloomberg Businessweek.
7. 450 Violent Offenders Given Parole

This embarrassing and dangerous event took place in California, where 450 high-risk prisoners were released to the public. The state had decided to reduce its prison population by 33,000 by releasing non-violent offenders, but instead went on to grant non-revocable paroles to approximately 450 violent felons, a huge misread by the software algorithm. Many of them remain free even today.
8. The AT&T Crisis

On January 15, 1990, around 60,000 AT&T customers were denied the luxury of making long-distance calls. Initially the company believed it was being hacked, until the real culprit was found in the form of a software bug.
The company had updated its software to make call processing faster. Well, be careful what you wish for. The process became faster than expected, and a server sent two messages to the subsequent server, causing the switches to reboot in a loop. By the time the issue was taken care of, AT&T had lost $60 million in long-distance charges for the dropped calls.