Scalability testing is testing that verifies the software remains compatible with newly added features and can adapt to the situation where it has to adjust to changes in the network, set-up, system libraries, and so on.
Scalability testing is a type of non-functional testing. It is not confined to software applications; it can also be applied to hardware testing and database testing. In recent times, however, its widest use has been in the software industry.
Since we confine our discussion to software testing, we shall not cover its application at the hardware or database level.
Scalability testing is a method of analyzing the performance of an application, a module, or the entire software. Performance is measured by the software's ability to scale itself up to a new or scalable environment.
Generally speaking, scalability testing examines how the software reacts in a scalable situation: whether the number of user requests, the interrupts, the interface, the attributes, and all other user-driven parameters can still be managed by the software.
It is important to note that scalability testing is not the same for all types of applications. For example, for a virtual-server application installed on a system, the attributes required for its testing may be CPU usage, disk space, memory consumed, server response time, and so on.
Consider another example: carrying out this test on a web browser application. In this case, the parameters or attributes required are different, such as data usage, the number of requests handled, memory consumed, and CPU cycles used.
So we have seen that different parameters are used for testing different types of applications or software. It is important to note that carrying out this test on a highly complicated application also requires considerable programming experience.
Therefore, it can be concluded that although this testing seems simple, it is not so simple in a practical environment. Developers or testers must have knowledge of both the software and the hardware to carry out the test successfully.
In the software industry there are various myths and confusions about how scalability testing differs from load testing. The confusion exists because both tests appear to do the same thing: test the performance of the software.
In scalability testing, performance is tested with the idea that the performance parameters should be designed in such a way that it becomes easier to make the software scalable.
In load testing, on the other hand, it is checked how the performance parameters behave when the maximum number of requests is made to the software.
Another perspective is that scalability testing also tests the load on the software, but at both the minimum and the maximum load.
So in scalability testing the first phase is essentially a load test; it is then confirmed that the software retains the same properties when it is made scalable or is already deployed in a scalable way.
The two tests have very different visions and very different goals. That is why there should be no confusion between the two types of test, scalability testing and load testing.
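The distinction can also be sketched in code. Below is a minimal, hypothetical Python simulation (the handler and its timings are invented for illustration, not taken from any real system): a load test records response time at the maximum load only, while a scalability test sweeps from minimum to maximum load and checks that response time degrades gracefully.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(load):
    """Simulated request handler: service slows as load grows (hypothetical model)."""
    base = 0.001                          # 1 ms base service time
    simulated = base * (1 + load / 100)   # contention penalty grows with load
    time.sleep(simulated)
    return simulated

def measure(load):
    """Average response time (seconds) for `load` concurrent requests."""
    with ThreadPoolExecutor(max_workers=load) as pool:
        times = list(pool.map(handle_request, [load] * load))
    return sum(times) / len(times)

# Load test: behaviour at the maximum load only.
max_load_rt = measure(100)

# Scalability test: sweep from minimum to maximum load and watch the trend.
sweep = {load: measure(load) for load in (1, 25, 50, 100)}
```

A real scalability test would replace the simulated handler with requests against the deployed system, but the shape of the two tests stays the same.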
Let us consider a scenario to see what is really tested in scalability testing, and how performance can be maintained after the testing.
Consider a website where requests are limited to 100 per second.
The problem is that once the 100 requests/second limit is reached, the website hangs and is unable to handle any more requests. Such software cannot be considered scalable, because it fails as soon as its bandwidth limit is met.
To make this same website scalable, we can make each response a bit slower, so that the number of users served per second can grow beyond 100; each individual response takes longer, but more requests can be accepted.
A user may now request a webpage, stay on the page, and wait a few seconds for the response. At the same time, multiple other users can access the website and likewise wait a few seconds for their next response.
In this way a larger number of users can access the website: the bandwidth stays the same, yet more users are served. We have made the same website, with constant bandwidth, scalable. This is the power of scalability testing.
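The trade-off in this scenario can be put in numbers. The sketch below is a hypothetical capacity model (all figures invented for illustration): with a fixed bandwidth, slowing the per-request response spreads each user's data over more time, so more users fit in the same pipe.

```python
BANDWIDTH = 100.0   # total capacity: units of page data deliverable per second (hypothetical)
PAGE_SIZE = 1.0     # data needed to answer one request (hypothetical unit)

def capacity(response_time):
    """Concurrent users servable when each request is spread over `response_time` seconds."""
    per_user_rate = PAGE_SIZE / response_time   # data rate a single user consumes
    return int(BANDWIDTH / per_user_rate)       # users that fit in the fixed bandwidth

fast = capacity(1.0)   # 1-second responses: 100 concurrent users
slow = capacity(2.0)   # 2-second responses: 200 users with the same bandwidth
```

The model is deliberately simplified, but it shows why accepting a slower response time can double the users served without any new hardware.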
As stated earlier, the testing parameters are not the same for all applications. For some software the critical resource may be the network; for other software it may be the number of inputs the software can handle at a time.
Since software runs on hardware, and the internet is needed everywhere, some attributes that are believed to be extremely important parameters are discussed below. They are:
Response time is the time the system requires to respond to a user request. Simply, it may be defined as the time between the user's request and the application's response.
This is one of the major attributes from which the efficiency of an application can be directly estimated. For example, in the website example already shown, we were given limited bandwidth, but to make the website scalable we increased the response time.
It follows that as the user load on the software increases, the response time grows, or intentionally has to be increased, so that more users can at least access the website.
Now, it is important to note that this is not the case for every type of software. For software that is more complicated and highly secure, other factors, such as CPU cycle stealing, have to be taken into consideration while analyzing the response time.
Response time is not easy to analyze. In some software, different modules are interconnected, so the main concern is not only the interface that has become accessible to the user but also the activation of all the connected modules.
For example, suppose we visit a URL that takes us to a payment gateway. The user has reached the site, but the list of banks may not have loaded yet, and so the user cannot proceed with the payment.
Strictly speaking, then, the pure response time should be the exact time between the URL responding (in the case of a web application or website) and the loading of all the modules required to make the page responsive and provide all the functionality expected.
From the server's point of view, even if all the modules are loaded in the software's interface, the server may not yet be connected to those modules, or may be slower than the loading speed of the webpage, since internet speed varies around the world.
This becomes a problem when the server is located in a remote area with poor data communication. In that case the server response time is added on top, and the equivalent response time may be very large.
It is the responsibility of scalability testing to verify a load-balancing scheme, so that the requests spread out with increased response time (as in our example) keep pace with the server response time.
Scalability testing is carried out to check that the overall response time is not degraded too much even when the software is scaled.
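Following the payment-gateway example, the "pure" response time should cover every module the page depends on, plus any server lag. A minimal sketch, under the assumption that modules load concurrently and the page is usable only once the slowest has loaded (module names and latencies here are hypothetical):

```python
def effective_response_time(page_load, module_loads, server_latency=0.0):
    """End-to-end response time: the page is usable only when the slowest
    dependent module has loaded, and server lag adds on top."""
    return page_load + max(module_loads, default=0.0) + server_latency

# Hypothetical payment page: the bank-list module dominates.
rt = effective_response_time(
    page_load=0.4,                  # seconds until the URL responds
    module_loads=[0.2, 1.5, 0.3],   # e.g. header, bank list, footer (seconds)
    server_latency=0.6,             # a remote server adds its own delay
)
```

The point of the sketch is that measuring only `page_load` would report 0.4 s, while the user actually waits for the full 2.5 s before the page is functional.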
Throughput is generally the output produced by the software per unit time. It differs from application to application. For example, the throughput of a webpage may be the number of requests it can handle per unit time, or per second.
In a server-side application, the same parameter can be measured by the number of queries it can address per unit time. For an image captcha generator application, it can be the number of strings processed in one second.
So throughput is mainly about the number of individual outputs the software produces or requests it handles.
It follows that scalability testing is responsible for ensuring the software maintains its throughput even when it is moved to a scalable environment or more queries are expected of it.
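Whatever the unit of output, throughput reduces to outputs completed per unit time. A small sketch measuring it for a hypothetical handler (the work function is invented; in a real test it would be a request, a query, or a captcha string):

```python
import time

def process(item):
    """Hypothetical unit of work (one request, one query, one captcha string)."""
    return item * 2

def throughput(items):
    """Outputs completed per second over a batch of work."""
    start = time.perf_counter()
    for item in items:
        process(item)
    elapsed = time.perf_counter() - start
    return len(items) / elapsed if elapsed > 0 else float("inf")

rate = throughput(range(100_000))   # outputs per second for this handler
```

A scalability test would record `rate` before and after scaling the environment and flag any significant drop.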
CPU usage is also one of the major attributes in analyzing the performance of the software, and it can be a sole objective to be taken care of. It is important to note that hardware is a limited entity: unlike code and software, hardware cannot be expanded rapidly.
So it is very important to use the system resources, the CPU in our case, efficiently, so that even when more features or users are added to the system, smart CPU usage can still handle the requests carefully.
The way to make CPU utilization efficient is to write code that does the same thing, but efficiently. The best ways to do this are better algorithms, dead-code elimination, code optimization, and so on, which save not only CPU usage but memory as well.
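"The same thing, but efficiently" can be as simple as a better algorithm. The sketch below is a generic illustration (not taken from the text): both functions compute the same sum, but the closed-form version does constant CPU work regardless of n.

```python
def sum_naive(n):
    """O(n): burns one loop iteration of CPU time per term."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_fast(n):
    """O(1): Gauss's closed-form formula, same answer with constant CPU work."""
    return n * (n + 1) // 2

# Same result, radically different CPU cost as n grows.
naive = sum_naive(10_000)
fast = sum_fast(10_000)
```

Under increased load, these algorithmic savings are exactly what lets the same CPU absorb more users.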
Memory, too, is hardware, and so cannot be added at a user's request. Here as well, we should structure the code of the software so that the best memory utilization is achieved, in keeping with the increased performance of the system.
Our sole objective is to optimize the software in such a way that more requests or queries can be handled with the same addressable memory present in the system on which the software is installed.
Some ways to do this are code optimization techniques, memory-saving algorithms, good programming practices, and clearing bugs.
Scalability testing checks whether the software can be made to act with minimum memory utilization while generating maximum output.
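One common memory-saving technique is streaming instead of materializing. This is a generic Python sketch, not specific to any application in the text: a generator processes any number of records while holding only one in memory at a time, whereas a list holds them all at once.

```python
import sys

def records(n):
    """Stream n hypothetical records one at a time (constant memory)."""
    for i in range(n):
        yield {"id": i, "value": i * i}

def total_value_streaming(n):
    """Aggregates over the stream without ever storing all records."""
    return sum(rec["value"] for rec in records(n))

# A list of the same records holds all n dicts in memory at once:
all_at_once = [{"id": i, "value": i * i} for i in range(1_000)]
streamed = total_value_streaming(1_000)

# The generator object itself stays small no matter how large n is.
small = sys.getsizeof(records(10**9))
```

With the streaming version, the same addressable memory can serve a workload of any size, which is precisely the property scalability testing looks for.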
Seen clearly, the entire network is also hardware, and it is the most complicated kind of hardware, one that cannot be changed or modified easily. Therefore there is a need for a test that can optimize the use of the network and increase software efficiency as well.
To achieve this, the software is embedded with different network algorithms, which can sense packet traffic (in the case of web applications or servers) and apply their own logic to use the network efficiently.
As stated, for different types of software (online or offline) the hardware requirements are different, and the tricks required to address those scalability problems also differ.
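One simple example of such traffic-sensing logic is a token-bucket limiter, a generic network-smoothing technique (not claimed by the text as the method used): it lets packets through at a steady rate while absorbing short bursts, so a fixed network link is never overwhelmed.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: admits traffic at a steady `rate`,
    absorbing bursts up to `capacity` (generic network-smoothing sketch)."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Admit one packet if a token is available, else shed it."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
burst = [bucket.allow() for _ in range(20)]   # roughly the first 10 pass, the rest are shed
```

Real systems use more elaborate schemes, but the principle, sensing traffic and shedding or delaying the excess, is the same.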
Scalability testing is a software testing type in which all the attributes of the software responsible for its scalability are tested. For software to be scalable, it must possess modules and functions, as well as APIs, that make it scalable.
Different attributes should be kept in mind when we are concerned about scalability, such as response time, load time, and efficiency.
Scalability of the software becomes necessary when the functionality of the software has to spread over a wider range.
For example, consider software designed for managing a school with a single campus. Suppose the school suddenly wants to expand to three different locations; if the current software can be extended to the three campuses as well, we call the software scalable.
This is only one scope of scalability; there are many others.