A sprint, in Agile software development, is a set period of time during which specific work has to be completed and made ready for review.
The idea of having a time box is that immediately after the sprint, the team can retrospect and adapt to make improvements. This ability to inspect and adapt is the most critical goal of a sprint: it allows the team to inspect the product increment iteratively and also to improve how they work together as a team.
A few things about sprints:
Sprint Planning takes place at the beginning of an iteration or interval, and the team determines how much work it thinks can be accomplished by the end of the interval.
The Daily Scrum takes place every day and is usually a stand-up meeting time-boxed to 10–15 minutes. The goal is for the team to exchange information about ongoing work, identify synergies or impediments, and ideally make a quick plan for the day (in practice it often stretches to 30 minutes).
The team has the Daily Stand-up every day at the same time of day, when all the stakeholders are available.
The Daily Stand-up is not a status meeting but an information-sharing and asking-for-help meeting.
The Sprint Review meeting is where the product implemented in the interval is demonstrated to the Product Owner and possibly other stakeholders, or even anyone who is interested. This meeting is an important source of feedback about the product.
The Sprint Review Meeting is conducted on the last day of the Sprint, before the Demo and the Retrospective.
The Sprint Review Meeting is for stakeholder feedback on the current increment as well as the product as a whole.
The Sprint Review Meeting is facilitated by the Scrum Master.
All members of the team attend the Sprint Review Meeting
The Sprint Review Meeting is time-boxed to two hours per week of Sprint duration.
The Sprint Retrospective meeting is a team meeting at the end of an iteration which implements the agile principle of reflect-and-tune. Usually, the team identifies what went well and should be strengthened, and also what didn't work and should be dropped.
A Retrospective is strictly for the team. Only pigs are allowed, no chickens (in the old Scrum fable: only committed team members, no mere observers).
In the Retrospective, loop back to the action items from previous Retrospectives.
All team members attend every Retrospective
The client expects only two items: what you have to improve, and what you should have done better. (They may not listen even if you have concerns about the development team; if the development team drops the code on the last day, they will say "it is an agile process, you have to do it"; in what universe, I do not understand.)
The Scrum Master serves the Product Owner in several ways, including:
Ensuring that goals, scope, and product domain are understood by everyone on the Scrum Team as well as possible;
Finding techniques for effective Product Backlog management;
Helping the Scrum Team understand the need for clear and concise Product Backlog items;
Ensuring the Product Owner knows how to arrange the Product Backlog to maximize value;
The Scrum Master should address any impediments and work to resolve or mitigate them and the associated concerns.
In the retrospective, identify the areas that could have been handled better. The plan should be: create the list, group and discuss the items, then vote by priority.
The key stakeholder of a Scrum project is the Product Owner. One integral responsibility of the Product Owner is to convey the importance and significance of the Scrum project to the Scrum Team.
The PO is none other than the representative of the client; the client may not be able to attend these small meetings, so a miniature of them has been created in the form of the PO. In some organizations this person is also known as a Business Analyst (BA); he/she is the one who knows the actual intentions of the client. The PO gives a demo of the sprint product to the client after the Sprint.
As I do not know much about the dev team, I am jumping directly to the testing team (I am a tester, the lazy one in the testing team).
A testing team may consist of a team lead, an automation tester, and a manual tester, or the same person may handle the end-to-end process for a user story.
The testing team performs the tasks below in a Sprint.
User stories are short, simple descriptions of a feature told from the perspective of the person who desires the new capability, usually a user or customer of the system.
They typically follow a simple template:
As a <type of user>, I want <some feature> so that <I achieve something>.
Most of the time the PO writes the user story, though people will say anyone can write a user story.
Here "anyone" means the people who represent the client (so a tester, developer, or Scrum Master cannot write the user story; who remains in the Scrum team? The PO. Ha ha ha).
Based on an automation feasibility study, the automation team decides whether to automate a user story or not.
Feasibility is decided based on the following factors:
1. Whether the feature is automatable or not with current technology.
2. Whether it is possible to automate the user story within this sprint or not.
3. Whether the user story contains flakiness or not; if it does, the tester should write manual test cases covering the flaky parts, so that the user story's test cases are split into automation test cases and manual test cases.
Note: flakiness is nothing but the automation not passing consistently every time it runs (even when there is no functionality change).
Splitting the user story into manual and automation cases helps in writing good automation tests; this is the reason the feasibility study should be done before writing the test cases.
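The three factors above can be sketched as a simple split. This is an illustrative sketch only; the flag names and the helper function are my own, not any standard API:

```python
def split_by_feasibility(test_cases):
    """Split a user story's test cases into automation and manual buckets.

    Each case is a dict with flags mirroring the three factors above:
      - "automatable": current tooling can drive this scenario reliably
      - "fits_sprint": the script can be written within this sprint
      - "flaky": the check is known to pass inconsistently
    """
    automation, manual = [], []
    for case in test_cases:
        if case["automatable"] and case["fits_sprint"] and not case["flaky"]:
            automation.append(case["name"])
        else:
            # Anything not automatable, too big for the sprint, or flaky
            # stays a manual test case for this sprint.
            manual.append(case["name"])
    return automation, manual

cases = [
    {"name": "login with valid user", "automatable": True, "fits_sprint": True, "flaky": False},
    {"name": "drag-and-drop widget", "automatable": False, "fits_sprint": True, "flaky": False},
    {"name": "third-party payment popup", "automatable": True, "fits_sprint": True, "flaky": True},
]
print(split_by_feasibility(cases))
```

In a real sprint these flags come out of the feasibility discussion, not code, but writing the split down makes it clear which cases each tester owns.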
Test case preparation is nothing but combining test steps to test the user story. A user story can have any number of test cases, but ideally only 2–4.
Test cases must be written based on the automation feasibility study.
Test case review is nothing but a review of the test steps in your test case, to check whether you have covered the user story or not.
Peer review: done by another tester who hasn't written those test cases but is familiar with the system under test. The reviewer should know the system better than the person who wrote them.
Review by the PO: done by the PO, who has deep knowledge of the requirements and the application.
In automation script development, we (the so-called automation testers) start writing the automation test script for the manual test steps. We should use an assertion statement only when a test step says to verify something; otherwise the automation tester should not add assertions for test steps.
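The rule above can be shown with a tiny sketch. The `OrderPage` class below is a hypothetical stand-in for a real page object; in a Selenium suite these methods would drive the browser:

```python
class OrderPage:
    """Hypothetical page object; stands in for a Selenium-backed one."""
    def __init__(self):
        self.status = None
    def open(self):              # navigation step: no assertion here
        self.status = "open"
    def add_item(self, name):    # action step: no assertion here
        self.item = name
    def submit(self):            # action step: no assertion here
        self.status = "submitted"

def test_submit_order():
    page = OrderPage()
    page.open()                  # Step 1: open the order page
    page.add_item("book")        # Step 2: add an item
    page.submit()                # Step 3: submit the order
    # Only Step 4 says "Verify the order is submitted", so only here we assert.
    assert page.status == "submitted"

test_submit_order()
```

Action steps stay assertion-free so a slow page or intermediate state does not masquerade as a product failure; the verification point alone decides pass or fail.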
When we write the test script, we should make sure the test cases in the test suite are not related to each other; we should avoid writing dependent test cases.
We should make sure we clean up the mess we create; for example, if you create an order, at the end of the script you should remove the order.
When we start writing, we should not assume anything, such as that details will already be present in the application for us to use.
Your tests should always start from scratch, and you should create your own test data, so that your tests will not fail because of data issues.
Never consider your test case failed unless a verification point fails; test step failures must not be counted as test failures, only verification failures should be.
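One way to implement that distinction is to classify exceptions in the runner: an `AssertionError` (a verification point) counts as a failure, while any other exception (a step that could not run) gets a different status. The status names here are my own:

```python
def run_test(test_fn):
    """Classify a test outcome the way the guideline above suggests."""
    try:
        test_fn()
        return "PASSED"
    except AssertionError:
        return "FAILED"      # a verification point failed: a real test failure
    except Exception:
        return "BLOCKED"     # a test step broke before any verification ran

def verification_fails():
    assert 1 == 2            # verification point fails

def step_breaks():
    raise RuntimeError("element not found")   # step failure, not a verdict

print(run_test(lambda: None))        # a passing test
print(run_test(verification_fails))  # a verification failure
print(run_test(step_breaks))         # a broken step
```

Note the `except` order matters: `AssertionError` is a subclass of `Exception`, so it must be caught first.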
Always write method names in a legible way, so that when someone reads a method or test name, they are able to understand it.
In agile, you should be ready with skeleton code and dummy locators; when the development team deploys the code, you should be in a position to replace your dummy locators and execute with minimal changes.
In an actual agile process, the development team should give the locators prior to deployment, or even before development itself.
In a more agile process, the development team should use only unique locators throughout the application, for example a unique id or attributes which give details about the element.
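A skeleton page object written before the feature is deployed might look like this. The locator values marked TODO are dummies to be replaced once the dev team shares the real unique ids; the class, ids, and the `FakeDriver` are all hypothetical:

```python
class LoginPage:
    # Dummy locators: only these constants change when the real page lands.
    USERNAME = ("id", "TODO-username")
    PASSWORD = ("id", "TODO-password")
    SUBMIT   = ("id", "TODO-submit")

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # With a real WebDriver these would be find_element(*locator) calls.
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

class FakeDriver:
    """Records actions so the skeleton can be exercised before deployment."""
    def __init__(self):
        self.actions = []
    def type(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))

driver = FakeDriver()
LoginPage(driver).login("alice", "secret")
print(driver.actions[-1])
```

Because the locators live in one place at the top of the class, swapping the dummies for real ids is the only change needed on deployment day.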
A tester should never push code without executing it multiple times.
I understand your question: when do we need to perform manual testing?
The answer is never; while writing the automation code, you will be checking the details of each element or feature in order to write the verifications, and I hope that itself covers manual testing.
In this activity, a senior or knowledgeable person on the team reviews your automation code.
In most organizations the code is reviewed by seniors, sometimes seniors who do not have good technical knowledge; we cannot do anything about that. For code review guidelines we have created a separate chapter, Selenium Code Review; please do visit and learn.
Cross browser testing means validating an application on various browsers to ensure that it works and that its quality holds.
Sometimes you will find scenarios during testing that occur on one browser but not on another.
Reasons behind cross browser testing :
1. To test layout and UI issues on different browsers and browser versions.
2. To test the core functionalities and performance on them.
3. To test the web application on mobile browsers, and check the layout when the orientation of the device is changed.
4. To ensure all UI elements render properly and scripts execute without any issue.
5. Browser incompatibility with the OS, etc.
Browsers to be considered for cross browser testing (the major ones):
1. Google Chrome
2. Microsoft Edge
3. Internet Explorer
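Running the same test against several browsers can be sketched like this. The driver factories here are stubs of my own; in a real suite each would start a WebDriver (e.g. `webdriver.Chrome()`), which this sketch deliberately avoids so it runs anywhere:

```python
class StubDriver:
    """Stand-in for a WebDriver; a real one would load pages in a browser."""
    def __init__(self, name):
        self.name = name
    def title_of(self, url):
        return f"{url} on {self.name}"

# Map each browser to a factory that creates its driver.
BROWSERS = {
    "chrome": lambda: StubDriver("chrome"),
    "edge":   lambda: StubDriver("edge"),
    "ie":     lambda: StubDriver("ie"),
}

def run_everywhere(check):
    """Run one verification against every configured browser."""
    results = {}
    for name, factory in BROWSERS.items():
        driver = factory()
        try:
            check(driver)
            results[name] = "PASSED"
        except AssertionError:
            results[name] = "FAILED"
    return results

def layout_check(driver):
    # The same verification runs on every browser; only the driver differs.
    assert "example.com" in driver.title_of("example.com")

print(run_everywhere(layout_check))
```

The point of the structure is that the test body never knows which browser it is on; adding a browser means adding one factory, not rewriting tests.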
In cross-platform testing, we run the test script on different operating systems, and sometimes on different versions of an operating system.
In exploratory testing, the tester doesn't refer to any documented test cases; rather, he/she explores the application with the intention of breaking it.
It might involve going deeper into an application, making data combinations that were never thought of, and so on. So in this case, if you find a bug, you don't have a test case to refer to.
You should test only the parts of the application that have been developed; you should not try to test features which are not developed yet.
I enjoy doing exploratory testing; we used to get gifts or cookies if we were able to find good bugs as part of exploratory testing (in my first project, 2012).
Go-Live testing is nothing but client-side testers starting to test the application that we signed off, on the live/production environment.
This is the scary part, where the client escalates against the vendor or testing team if the client finds any bugs in code that was covered in a user story.
The testing and development teams should be available when the client performs the Go-Live testing.
Remember this quote: "No one remembers when you are right, but no one forgets when you are wrong." I used to wear a t-shirt with this quote for my Go-Live testing.
Sometimes the Product Owner will not have enough knowledge of the feature or the product, so they will add little things in between; if we question it, they will say it is an Agile process.
The testing team will not get enough time, unlike the development team, especially if the development team does not give information about the feature, such as locators.
Mostly, developers influence the PO when there is a minor difference in pixels (a 1 px difference). After five or six such times, the difference has become 5 px; now they will question the QA team about why there is a 5 px difference. But no one cared when you said there was a 1 px difference.
Mostly, if you are working as a vendor (most of us do, including me), then your management treats the client as god; you cannot even oppose them.
If you (the tester) and the client representative are from the same country, then your life will be ruined; but if the client representative is from another country, then that attitude will not be there.