
Performance Testing Myths

I had started this blog with every intention of posting more regularly. However, I have not been able to do so for reasons galore. Anyway, here I bust some of the myths that surround performance testing as a practice.

Myth #1

Performance Testing is done to break the system just as functional testing is done to break the code.

Performance testing is not done to break the system. The principal objective of performance testing is to gain insight into how an application will behave once it goes live. Breaking the system is therefore not the objective. However, we do sometimes want to know the point beyond which the system would crash. That is the stress testing aspect of performance testing, not performance testing in its entirety. Performance testing consists of various types of tests, such as load testing, stress testing and endurance testing. Each of these tests is conducted with a specific objective in mind, so not every test is run for every project; they are included as per the requirements of each project.
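To make the distinction concrete, here is a minimal sketch (in Python, with a hypothetical target URL and illustrative user counts, not tied to any specific tool) showing that load, stress and endurance tests can share the same harness and differ only in the workload profile applied, and therefore in the question each one answers.

```python
# Minimal sketch (hypothetical target URL, illustrative numbers) showing that
# load, stress and endurance tests share the same harness and differ only in
# the workload profile applied.
import threading
import time
import urllib.request

TARGET_URL = "http://test-env.example.com/"   # hypothetical PT environment

def virtual_user(stop_at: float, results: list) -> None:
    """One simulated user hitting the target until the test window closes."""
    while time.time() < stop_at:
        start = time.time()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=10).read()
            results.append(time.time() - start)   # response time in seconds
        except Exception:
            results.append(None)                  # record the failure
        time.sleep(1)                             # think time between requests

def run_profile(users: int, duration_s: int) -> list:
    """Spin up `users` virtual users for `duration_s` seconds."""
    results: list = []
    stop_at = time.time() + duration_s
    threads = [threading.Thread(target=virtual_user, args=(stop_at, results))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Same harness, three different objectives (numbers are illustrative only):
# load_test      = run_profile(users=100, duration_s=1800)    # expected peak
# stress_test    = run_profile(users=500, duration_s=1800)    # push past peak
# endurance_test = run_profile(users=100, duration_s=28800)   # 8-hour soak
```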

Myth #2

Performance Testing is all about scripting.

I discussed this in my last post; you can check it out here.

Myth #3

Performance Testing is a line extension of functional testing.

This is a very popular myth among many functional testers, and it has its origin in the evolution of the testing practice as a whole. Initially, all testing was executed manually. Then tools were created to automate the tests, and thus automation testers came into the picture. Extending this thought process to performance testing, it is widely believed that performance testing is just a matter of learning another tool and a few definitions like hits/sec and throughput. Another source of the myth is that most test managers view performance testing through the lens of functional testing. This creates problems when conducting a performance test project, because all the metrics collected end up being defined through that functional lens, and test metrics applicable to functional and automation testing are used to measure performance testing as well.

For example, a dry run of automation scripts is done to catch the very obvious bugs. A dry run of performance test scripts, however, is done to ensure the scripts are robust and will not fail when the actual test execution is performed.
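To illustrate how the metrics differ as well, here is a minimal sketch (Python, with made-up sample data) of the kind of numbers a performance tester reports, hits/sec and throughput, as opposed to the pass/fail counts a functional or automation tester would collect.

```python
# Minimal sketch (made-up sample data) of typical performance test metrics:
# hits/sec and throughput, rather than functional pass/fail counts.
# Each sample: (response_time_seconds, bytes_received) for one completed request.
samples = [(0.21, 5_120), (0.35, 4_096), (0.18, 5_120), (0.40, 8_192)]

test_duration_s = 60.0                     # wall-clock length of the test window

hits_per_sec   = len(samples) / test_duration_s
throughput_bps = sum(size for _, size in samples) / test_duration_s
avg_resp_time  = sum(rt for rt, _ in samples) / len(samples)

print(f"hits/sec:       {hits_per_sec:.2f}")
print(f"throughput:     {throughput_bps:.0f} bytes/sec")
print(f"avg resp. time: {avg_resp_time:.3f} s")
```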

Myth #4

The results that are true for a single server can be simply extrapolated to 2 or more servers in production.

Scenario – the client says: “I have a single server for doing performance testing, but in production there will be 4 servers, so we can simply take the test results and extrapolate them to get the result for the 4 servers.” No performance test engineer would ever back his results against such an extrapolation. The reason: the moment an environment different from the PT environment comes into the picture, the number of variables in the equation increases, and that throws our results out of the window.
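As a toy illustration of why the extrapolation breaks down, here is a sketch (Python, using simple M/M/1 queueing formulas with made-up arrival and service rates) in which a shared resource, say a database, that was comfortable in the single-server test becomes the bottleneck once four servers multiply the load reaching it.

```python
# Toy sketch (made-up arrival/service rates, simple M/M/1 queueing formulas)
# of why single-server results cannot be linearly extrapolated: a shared
# database that was negligible in the 1-server test dominates with 4 servers.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue: 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue is unstable"
    return 1.0 / (service_rate - arrival_rate)

APP_SERVICE_RATE = 10.0   # req/s one app server can process (assumed)
DB_SERVICE_RATE  = 36.0   # req/s the shared database can process (assumed)

# --- PT environment: 1 app server, 8 req/s offered load -------------------
app_rt = mm1_response_time(8.0, APP_SERVICE_RATE)   # 0.500 s
db_rt  = mm1_response_time(8.0, DB_SERVICE_RATE)    # 0.036 s (negligible)
print(f"1 server : app {app_rt:.3f}s + db {db_rt:.3f}s = {app_rt + db_rt:.3f}s")

# --- Production: 4 app servers, 32 req/s total (8 req/s each) -------------
app_rt = mm1_response_time(8.0, APP_SERVICE_RATE)   # still 0.500 s per server
db_rt  = mm1_response_time(32.0, DB_SERVICE_RATE)   # 0.250 s, roughly 7x worse
print(f"4 servers: app {app_rt:.3f}s + db {db_rt:.3f}s = {app_rt + db_rt:.3f}s")
```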

Simply put, even after 100 years of research into weather patterns, we are still unable to figure out whether there will be a drought or a flood next year. You could argue that we can at least predict the weather for the current day. Well, that much we can do with performance testing as well.

Well, that’s all I’ve got. Do drop in other myths that you may have experienced to make this list more comprehensive. Till then, cya 🙂
